About the role:
Join Everest, our forward-thinking insurance company, as we leverage data-driven solutions to transform the industry. We are looking for a talented Data Engineer Team Lead to enhance our data capabilities and support our growth and modernization strategies. In this vital role, the ideal candidate will provide technical direction and oversight to the team, ensuring the successful execution of data engineering solutions that support our evolving needs in the insurance industry. This includes ensuring alignment with our state-of-the-art data architecture and involves gathering, refining, and handling extensive and intricate data sets to meet the demands of our business operations.

Key Responsibilities:
- Collaborate with cross-functional teams, working closely with internal customers and business analysts to understand data needs and deliver solutions that support data-driven decision-making.
- Lead the technology activities that enable the delivery of meaningful, actionable, data-driven capabilities and solutions in alignment with business strategies.
- Manage the engagement and coordination across assigned delivery teams to ensure high-quality project delivery within scope, timeframe, and budget.
- Collaborate with the Scrum lead(s) to manage the backlog, sprints, user stories, features, and epics throughout the development process.
- Drive the delivery of an outstanding customer/partner experience by ensuring easy access to data and user-friendly self-service capabilities.
- Guarantee the security, integrity, and quality of data while facilitating the agility and performance of a scaled ecosystem.
- Support the development of data engineering solutions and methods that promote sharing, inner-sourcing, and reuse of code in a cloud-first environment that leverages highly automated DevOps tools and capabilities.
- Gather, refine, and handle extensive and complex data sets to fulfill functional and non-functional business needs.
Build and support:
- Services for integrating a variety of data sources into Everest's Data Lake, supporting both batch and real-time data.
- Services for data processing (ETL/ELT) catering to both batch and real-time data engineering requirements.
- Services for orchestrating data pipelines.
- Services for data provisioning that suit different data consumption needs, including microservices, APIs, and data extracts.

Technical Competencies:
- Strong data engineering knowledge and hands-on experience with the Azure cloud platform.
- Profound knowledge of data manipulation scripting languages such as Python, PySpark, and SQL.
- Experience with streaming frameworks, such as Apache Kafka, is preferred.
- Knowledge of industry-wide visualization and analytics tools, such as Power BI, Tableau, and Qlik.
- Understanding of data warehousing and data modeling techniques.
- Experience identifying and implementing efficiency and optimization improvements in code.
- Experience developing strong, reusable tests for implemented solutions.
- Experience orchestrating data process workflows.