Senior Data Engineer

Flex is building a finance super app for business owners — reimagining every aspect of financial workflows and financial services for entrepreneurs. The company has grown revenue 25x+ since publicly launching in September 2023 and is on track to achieve profitability by early 2025. Flex is focused on mid-market businesses ($3M to $100M in revenue) that are largely overlooked by existing fintech solutions and reliant on slow, outdated regional banks. We are targeting a ~$1T revenue opportunity that is largely up for grabs.

Flex is a fully remote company and this role can be performed from anywhere.

The Role

We are looking for engineers who are excited to be part of our early story and help us build a diverse and vibrant company. As a senior data engineer, you will be responsible for building systems that bridge the gap between raw business data and consumable data products that the business can trust and act on. You should have a strong sense of ownership and enjoy taking projects from inception to release. As an early employee, you’ll work with a nimble team of committed and talented engineers and have a large, long-term impact on technical design and engineering culture.

What You’ll Do

    • Collaborate with stakeholders across the company to build aggregated and curated datasets, dashboards, and APIs that get the right data to the right people at the right time.
    • Architect, implement, and manage batch and real-time data pipelines, models, and ETLs to surface new internal and third-party data sources for easy analysis.
    • Identify gaps and invent processes, automated scripts, and tools to efficiently carry out data processing tasks in a scalable fashion.
    • Implement logging, observability, and notifications for workflows.
    • Contribute to advancing goals around data governance policy creation/enforcement, business metadata cataloging, and access control.
    • Own the process and automation for data intake, QA, and data delivery.
    • Be a catalyst for driving best-in-class technology frameworks and tooling while staying engaged with the latest technology trends.

What You Need

    • Advanced experience building, optimizing, and maintaining data pipelines and data sets.
    • Strong experience with relational databases and query authoring, as well as working familiarity with a variety of databases and data warehouses such as Postgres, Snowflake, or similar.
    • Ability to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
    • Strong analytical skills related to working with unstructured datasets.
    • Experience with data pipeline and workflow management tools like Airflow.
    • Familiarity with cloud-based providers like Google Cloud, AWS, or Azure and related data services like BigQuery, Dataproc, Dataflow, etc.
    • Experience with programming languages like Python, C#, Java, Scala, etc.
    • Knowledge of Retool or Tableau.
    • Exposure to stream-processing systems: Beam, Spark Streaming, Kafka Streams, etc.
    • Four or more years of relevant industry experience with a track record of shipping high-quality products and features at scale.