Senior Data Engineer - Fintech
Madrid Office - Hybrid: 4 days in the office, 1 day working from home

Join Our Data Team at Ebury Madrid Office.
Ebury's strategic growth plan would not be possible without our Data team, and we are seeking a Senior Data Engineer to join our Data Platform Engineering team!
Our data mission is to develop and maintain Ebury's Data Warehouse and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers and Data Analysts work collaboratively to:
- Build ETLs and data pipelines to serve data in our platform
- Provide clean, transformed data ready for analysis and used by our BI tool
- Develop department- and project-specific data models and serve these to teams across the company to drive decision making
- Automate end solutions so we can all spend time on high-value analysis rather than running data extracts

About our technology and data stack:
- Google Cloud Platform as our main cloud provider
- Apache Airflow as orchestration tool
- Docker as PaaS to deliver software in containers
- dbt for data transformation
- Looker and Looker Studio as BI tools
- GitHub as code management tool
- Jira as project management tool
- Synq as a data observability tool

Responsibilities:
- Be mentored by one of our top-performing team members along a 30/60/90 plan designed only for you
- Participate in data modelling reviews and discussions to validate the model's accuracy, completeness, and alignment with business objectives
- Design, develop, deploy and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints)
- Deliver data models hands-on using solid software engineering practices (e.g., version control, testing, CI/CD)
- Manage overall pipeline orchestration using Airflow (hosted in Cloud Composer), as well as execution using GCP-hosted services such as Container Registry, Artifact Registry, Cloud Run, Cloud Functions, and GKE
- Reduce technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs
- Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality
- Help implement data governance policies, including data quality standards, data access control, and data classification
- Identify opportunities to optimise and refine existing processes

Experience and qualifications:
- 3+ years of data/analytics engineering experience building, maintaining and optimising data pipelines and ETL processes in big data environments
- Proficiency in Python, SQL and Airflow
- Knowledge of software engineering practices in data (SDLC, RFC…)
- Staying informed about the latest developments and industry standards in data
- Fluency in English

Even if you don't meet every requirement listed, we encourage you to apply: your skills and experience might be a great fit for this role or future opportunities! We welcome applications from candidates who require a work permit.
For non-EU/EEA nationals, the company may assist with the work permit process, depending on individual circumstances.