Data Engineer (Full Remote)
- Job Ref: 8057
- IT - Data (Engineers, Science, etc.)
The Data Engineering team is in charge of the Data Platform, which supports the Analytics needs and data-driven projects of the company. It is built around modern ELT pipelines deployed on AWS and uses Python, Spark, dbt, Redshift, and a wide range of AWS services.
We are looking for a Data Engineer to join our Data Engineering Squad, where you will support, improve, and extend our Data Platform. Your work will empower our Analysts and Data Scientists, enable data-driven Product features, and strengthen our data-centered culture.
More specifically, based on your experience, you will:
- Design, build, and support modern, scalable data pipelines using third-party platforms or internal solutions.
- Collaborate with data scientists, other engineers, and stakeholders to understand what data is required and how best to make it available in our Data Platform.
- Support the daily work of our Data Scientists by ensuring they have easy access to data and tools (development environment, notebook instances, etc.).
- Write and maintain code to orchestrate our ELT workflows.
- Improve the design of our Data Lake and the way data is fed into it.
- Help and train Data Scientists to optimize SQL queries for performance.
- Provide data and infrastructure for building and deploying ML models to production.
- Use best practices around CI/CD, automation, testing, and monitoring of analytics pipelines (inspired by DataOps).
The ideal teammate for us is someone who believes that communication, empathy, inquisitiveness, and open-mindedness are fundamental to succeeding in any endeavor.
It would help if you had some or all of the following:
- Be interested in building a platform that enables our Data Analysts and Data Scientists.
- Be fluent in one or more high-level programming languages (Python, Ruby, Java, Scala, or similar).
- Be comfortable with analytical SQL.
- Be familiar with software development best practices and their applications to Analytics (version control, testing, CI/CD, automation).
- Have experience building modern ELT pipelines, ideally at scale.
- Have experience working with a modern data warehouse (Redshift, Snowflake, BigQuery, or similar).
- Have a self-driven approach to learning new technologies and moving projects forward.
We also consider these as nice to have:
- Experience working with Data Scientists and Analysts.
- Experience in data quality and governance.
- Experience in the Big Data ecosystem (Hadoop, Spark, PrestoDB, etc.).
- Familiarity with infrastructure and automation tools (Terraform, Ansible, or similar).
English is a must. We are a multicultural team providing a service in English, and although we don't care about certificates, we expect you to communicate fluently.
You should feel equally comfortable communicating in long-form writing. We are now a fully remote company, and we firmly believe that being articulate in both spoken and written long-form asynchronous communication is key to working together efficiently.
- Base Compensation Range: (Mid Data Engineer) €45-60k
- Ideal starting date: September 12th or 19th