As a Senior Data Engineer within our Data Analytics Team, you will serve as a key point of reference for managing and optimizing our data infrastructure and processes, ensuring an efficient flow of data and information for critical business insights. If you're a highly skilled and motivated Data Engineer who is passionate about data quality and efficiency, we encourage you to apply and join our dynamic Digital Data Team. You'll have the opportunity to lead and make a significant impact on our data processes and analytics capabilities.
Responsibilities:

Data Infrastructure Management: Maintain and enhance our current Data Warehouse infrastructure, including SQL databases, stored procedures, and scripts, to ensure data accuracy and availability.
Databricks Architecture: We are moving our data warehouse to a Data Lake following a Medallion architecture within Databricks. For this purpose, we are looking for a Senior Data Engineer with experience in PySpark, parallel processing, and data architecture who wants to build and structure our data in the best possible way for their Data Analyst colleagues (the team uses Power BI / Microsoft Fabric); a brief illustration of this kind of work follows this list.
Azure Data Factory: Develop, monitor, and optimize data pipelines within ADF to facilitate and improve data extraction, transformation, and loading.
Data Quality Assurance: Implement and oversee data quality processes to ensure that data is accurate, consistent, and reliable for analytics and reporting purposes.
Lead and Mentor: Take a leadership role in guiding and mentoring the rest of the team, ensuring best practices and efficient data engineering processes.
Continuous Improvement: Proactively identify opportunities to improve data processes and efficiency within the team, and implement data-driven solutions.
Data Governance: Enforce data governance practices to maintain data security, compliance, and privacy.
Technical Big Data Engineering Stack: Good knowledge of data engineering tools, scripting languages, and relevant technologies to lead the data engineering aspects of the team and serve as the subject-matter expert who helps us grow the data warehouse and implement new sources in the most optimal way.
Strong Communication: Excellent communication skills to convey complex technical concepts to both technical and non-technical colleagues.
Desire for Continuous Learning: A proactive attitude towards learning and using data to improve existing processes.

MUST HAVES:

Proficient SQL & PySpark Knowledge: Proficiency in SQL and PySpark is a must. You should be capable of designing and maintaining complex queries and transformations, stored procedures, and data structures.
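To give a concrete feel for the Medallion-style work described above, here is a minimal, purely illustrative PySpark sketch of promoting raw bronze records to a cleaned silver Delta table. The paths, table name, and columns (order_id, order_ts, amount) are hypothetical examples, not part of our actual pipeline.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative sketch only: a hypothetical bronze -> silver promotion.
spark = SparkSession.builder.appName("medallion-silver-example").getOrCreate()

# Read raw ingested records from a (hypothetical) bronze layer path.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Basic cleansing: deduplicate, enforce types, and drop invalid rows.
silver = (
    bronze.dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .filter(F.col("order_id").isNotNull())
)

# Write the curated result to the (hypothetical) silver layer path.
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")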