Senior Data Engineer

We are looking for a Senior Data Engineer with 4–6 years of experience to join our team and help us build reliable, scalable, and impactful data applications. You will be building real systems that move data between products, platforms, and people.

You will work with a modern data stack (Snowflake, Airflow, dbt, Apache Beam, and more) and collaborate closely with product, engineering, and business teams. If you love designing clean pipelines, shipping fast, and choosing the right tool for the job, we’d love to hear from you.

The team

This role is part of the Data Science & Engineering team, which also includes analytics engineers and data scientists. The data engineers have a mandate to empower our customers with data by building the systems and infrastructure needed to support it. You will be the second data engineer on the team.

This team impacts the entire company and works with a wide variety of stakeholders. At a high level, we have the following areas of impact:

  • Enabling decision-makers
  • Creating operational efficiencies
  • Driving growth

Our objectives for 2025 are to:

  • Unlock the value of data and AI
  • Improve reliability of data products and platforms
  • Democratize access to data

Our data tech stack

  • Snowflake Data Warehouse
  • Airflow
  • RudderStack
  • Stitch
  • Cube
  • dbt
  • Pub/Sub
  • Apache Beam
  • Postgres
  • Metabase

APIs and data pipelines built and managed by the DE team run on the GCP Kubernetes cluster.

This tech stack is what we have today, but it will evolve in the future as the company and team grow.

Your role & impact

You’ll play a key role in shaping how we design, build, and scale our data infrastructure. This role comes with a lot of autonomy—we're looking for someone who’s eager to ideate, initiate, and lead impactful projects, while also taking ownership of core data domains.

In this role, you will:

  • Design, build, and optimize data pipelines that power APIs and internal data products—ensuring performance, scalability, and data quality.
  • Implement robust batch and real-time ingestion pipelines that capture, transform, and store event data for analytics and operational use.
  • Define and maintain data contracts with producers—clarifying formats, delivery expectations, and quality standards.
  • Collaborate with product, engineering, and analytics teams to turn business requirements into scalable data solutions.
  • Build automation and integration tools that enable self-serve data access for teams like sales, marketing, and product.
  • Champion reliability, observability, and security across all stages of the data lifecycle, embedding best practices for data governance and monitoring.


Your profile

  • 4–6 years of professional experience as a software or data engineer, with a strong focus on building data-intensive systems.
  • Solid experience building and maintaining production-grade data pipelines (batch or streaming).
  • Strong programming skills in Python (or equivalent experience in another language with willingness to learn).
  • Experience integrating third-party tools and services—whether cloud-native solutions or open source software.
  • Comfortable building POCs and MVPs to validate ideas and iterate quickly.
  • Comfortable with containerization and modern infrastructure practices (e.g., Docker, Kubernetes).
  • Experience working with application databases—PostgreSQL preferred.
  • Familiarity with designing APIs and working with real-time or event-driven systems (e.g., Kafka, Pub/Sub) is a plus.
  • Exposure to batch processing, ETL pipelines, or orchestration and transformation tools (like Airflow or dbt) is a bonus, not a requirement.
  • Experience with CI/CD workflows—e.g., Jenkins for continuous integration and Helm for managing deployments.
  • Familiarity with infrastructure-as-code tools such as Terraform.
  • Experience setting up and improving observability and telemetry using tools like GCP Cloud Monitoring, Prometheus, and Grafana.
  • Curiosity or experience in machine learning systems, MLOps, or collaborating with data science teams.
  • A pragmatic mindset and an ability to balance ideal architecture with practical delivery.
  • Excellent communication skills—you can explain data trade-offs to both engineers and non-technical stakeholders.


What’s in it for you 

  • Unlimited paid holidays (a minimum, not a maximum)!
  • Hybrid working policy with 2 days working from the office.
  • Flexibility to work remotely for up to 2 months per year!
  • 1,000 EUR personal development budget.
  • Full coverage of commuting costs.
  • 30% ruling application assistance. 
  • Personal equipment, including laptop and ergonomic setup.
  • Gym membership discount with GoVital or OneFit.
  • Dutch/English classes budget.
  • Variable pension scheme.
  • Diverse international community (46+ nationalities).
  • Pet-friendly office in Rotterdam city centre. 
  • Fun team-building and after-work drinks every Friday.



About HousingAnywhere Group

HousingAnywhere is Europe’s largest mid-term rental platform, covering over 125 cities across Europe and several in the US. With Kamernet and Studapart under its umbrella, HousingAnywhere empowers people to live wherever and however they choose. Through our advanced online platforms, which together attract over 30 million users annually, we connect young professionals and students directly with accommodation providers. Our team of 250 professionals is dedicated to helping tenants find comfort and peace of mind in their rental search, whether they're looking for a home across the globe or just across town.



Our mission

Rent Easy, Live Free.


Our Values

  • Ownership
  • We are Enablers
  • We are Changemakers
  • We are Connectors


If you have further questions, please email Iris at y.kaboorappan@housinganywhere.com

By applying to work at HousingAnywhere, you agree to our Candidate Privacy Policy.
