Wy-739 Data Engineer - Fintech

Company:

Ebury


Location:

Madrid

Offer details

Ebury is a hyper-growth FinTech firm, named as one of the top 15 European Fintechs to work for by AltFi.
We offer a range of products including FX risk management, trade finance, currency accounts, international payments, and API integration.
Data Engineer - Fintech Madrid Office - Hybrid: 4 days in the office, 1 day working from home
Join our Technology team at Ebury's Madrid office. Ebury's strategic growth plan would not be possible without our Data team, and we are seeking a Data Engineer to join our Data Engineering team! Our data mission is to develop and maintain Ebury's Data Platform and serve it to the whole company, where Data Scientists, Data Engineers, Analytics Engineers, and Data Analysts work collaboratively to:
- Build ETLs and data pipelines to serve data on our platform.
- Provide clean, transformed data ready for analysis and for use by our BI tool.
- Develop department- and project-specific data models and serve these to teams across the company to drive decision making.
- Automate end solutions so we can all spend time on high-value analysis rather than running data extracts.

Why should you join Ebury? Want to work in a high-growth environment? We are always growing.
Want to build a better world? We believe in inclusion. We stand against discrimination in all its forms and have no tolerance for the intolerance of differences; this openness makes us a modern and successful organisation.
At Ebury, you will find an internal group dedicated to discussing how we can build a more diverse and inclusive workplace for everyone in the Technology team. So if you're excited about this job opportunity but your background doesn't exactly match the requirements in the job description, we strongly encourage you to apply anyway.
You may be just the right candidate for this or other positions we have.
About our technology and data stack:
- Google Cloud Platform as our main cloud provider
- Apache Airflow and dbt Cloud as orchestration tools
- Docker as PaaS to deliver software in containers
- Cloud Build as CI/CD
- dbt for data modelling and warehousing
- Looker and Looker Studio for business intelligence/dashboarding
- GitHub as code management tool
- Jira as project management tool
- Third-party tools such as Hevodata, MonteCarlo, Synq…
What we offer:
- A variety of meaningful and competitive benefits to meet your needs
- Competitive salary
- Continuous professional growth with regular reviews
- Equity process through a performance bonus
- Annual paid time off allowance, as well as local public holidays
- Continued personal development through training and certification
- Being part of a diverse technology team that cares deeply about culture and best practices, and believes in agile principles
- We are open-source friendly, following open-source principles in our internal projects and encouraging contributions to external projects

Responsibilities:
- Be mentored by one of our outstanding team members along a 30/60/90 plan designed just for you.
- Participate in data modelling reviews and discussions to validate the models' accuracy, completeness, and alignment with business objectives.
- Design, develop, deploy, and maintain ELT/ETL data pipelines from a variety of data sources (transactional databases, REST APIs, file-based endpoints).
- Deliver data models hands-on using solid software engineering practices (e.g., version control, testing, CI/CD).
- Manage overall pipeline orchestration using Airflow (hosted in Cloud Composer), as well as execution using GCP-hosted services such as Container Registry, Artifact Registry, Cloud Run, Cloud Functions, and GKE.
- Reduce technical debt by addressing code that is outdated, inefficient, or no longer aligned with best practices or business needs.
- Collaborate with team members to reinforce best practices across the platform, encouraging a shared commitment to quality.
- Help implement data governance policies, including data quality standards, data access control, and data classification.
- Identify opportunities to optimise and refine existing processes.

Experience and qualifications:
- 3+ years of data/analytics engineering experience building, maintaining, and optimising data pipelines and ETL processes in big-data environments.
- Proficiency in Python and SQL.
- Knowledge of software engineering practices in data (SDLC, RFC…).
- Staying informed about the latest developments and industry standards in data.
- Fluency in English.
- Experience with our modern data stack tools is a plus.
- Knowledge of dimensional modelling/data warehousing concepts is a plus.
- Spanish language is a plus.
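To illustrate the kind of ELT/ETL pipeline work described in the responsibilities above, here is a minimal sketch of the extract/transform/load pattern in Python. All names, fields, and data are hypothetical; a real pipeline at Ebury's scale would run under Airflow and load into a warehouse rather than an in-memory list.

```python
# Minimal, self-contained sketch of the extract/transform/load pattern.
# Everything here is illustrative: field names, the JSON payload, and the
# in-memory "table" stand in for a real source system and warehouse.

import json


def extract(raw_json: str) -> list[dict]:
    """Pull records from a source, e.g. the body of a REST API response."""
    return json.loads(raw_json)


def transform(records: list[dict]) -> list[dict]:
    """Clean and normalise: drop incomplete rows, standardise types."""
    cleaned = []
    for r in records:
        if r.get("amount") is None:
            continue  # skip incomplete rows rather than loading bad data
        cleaned.append({
            "trade_id": str(r["trade_id"]),
            "amount": round(float(r["amount"]), 2),
            "currency": r.get("currency", "EUR").upper(),
        })
    return cleaned


def load(rows: list[dict], target: list) -> int:
    """Write rows to the target store; returns the number of rows loaded."""
    target.extend(rows)
    return len(rows)


if __name__ == "__main__":
    payload = ('[{"trade_id": 1, "amount": "100.5", "currency": "gbp"},'
               ' {"trade_id": 2, "amount": null}]')
    table: list[dict] = []
    loaded = load(transform(extract(payload)), table)
    print(loaded, table[0]["currency"])  # 1 GBP
```

Keeping extract, transform, and load as separate pure functions is what makes pipelines like this easy to version-control, unit-test, and wire into an orchestrator as individual tasks.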



Source: Jobleads
