Ebury is a hyper-growth FinTech firm, named in 2021 as one of the top 15 European FinTechs to work for by AltFi. We offer a range of products including FX risk management, trade finance, currency accounts, international payments, and API integration.

Position: Data Engineer - Data Platform Engineer
Location: Madrid - 4 days in the office & 1 day working from home

About our company:
Ebury is a FinTech success story, positioned among the fastest-growing international companies in its sector. Headquartered in London, we have more than 1,500 staff covering over 50 nationalities, working across more than 27 offices worldwide and serving more than 45,000 clients every day.

About our team:
Ebury's strategic growth plan would not be possible without our Data team, and we are seeking a Data Engineer to join our Data Engineering team! Our data mission is to develop and maintain Ebury's Data Platform and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers, and Data Analysts work collaboratively to:
- Build ETLs and data pipelines to serve data on our platform.
- Provide clean, transformed data ready for analysis and for use by our BI tools.
- Develop department- and project-specific data models and serve them to teams across the company to drive decision making.
- Automate end solutions so we can all spend time on high-value analysis rather than running data extracts.

About our technology and Data stack:
- Google Cloud Platform as our main cloud provider
- Apache Airflow and dbt Cloud as orchestration tools
- Docker to deliver software in containers
- Cloud Build for CI/CD
- dbt for data modelling and warehousing
- Looker and Looker Studio for business intelligence and dashboarding
- GitHub as our code management tool
- Jira as our project management tool
- Other third-party tools such as Hevodata, MonteCarlo, Synq...

About the role:
As a Data Engineer, you will work closely with the rest of the team to help model and maintain the Data Platform.
Therefore, we are looking for:
- 3+ years of data/analytics engineering experience building, maintaining, and optimising data pipelines and ETL processes in big data environments.
- Proficiency in Python and SQL.
- Knowledge of software engineering practices in data (SDLC, RFC...).
- A habit of staying informed about the latest developments and industry standards in data.
- Fluency in English.

As a plus:
- Experience with the tools in our modern Data stack.
- Knowledge of dimensional modelling and data warehousing concepts.
- Spanish language skills.

Why this offer is for you:
You will:
- Be mentored by one of our top-performing team members through a 30/60/90 plan designed just for you.
- Participate in data modelling reviews and discussions to validate each model's accuracy, completeness, and alignment with business objectives.
- Design, develop, deploy, and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints).