Dynatrace

Innovate faster, operate more efficiently, and drive better business outcomes with observability, AI, automation, and application security in one platform. Dynatrace exists to make software work perfectly. Our platform combines broad and deep observability and continuous runtime application security with advanced AIOps to provide answers and intelligent automation from data. This enables innovators to modernize and automate cloud operations, deliver software faster and more securely, and ensure flawless digital experiences.

Job Description

At Dynatrace, Information Systems Engineering manages and transforms data into information for decision-makers. This includes the assessment, design, acquisition, and/or implementation of tools, stores, and pipelines for turning data into information.

We are seeking a Lead Data Engineer to provide key technical direction for, and hands-on contribution to, a small team of data engineers supporting our Business Intelligence function. A core part of the role is directing and helping to implement transformative pipelines of business data into our Snowflake environment. The ideal candidate will have demonstrable experience with Snowflake, as well as Snowpark and Spark using Python. We are interested in candidates who can demonstrate technical leadership of small teams of data engineers, including mentoring and upskilling more junior members of the team.

Key responsibilities:
- Lead the design, implementation, and maintenance of scalable data pipelines in the Snowflake ecosystem, including third-party vendor tools such as AWS and Fivetran
- Act as a key contributor to the Data Engineering strategy to ensure efficient data management for operations and enterprise analytics
- Serve as the key technical expert for business stakeholder engagement on business data initiatives
- Collaborate with colleagues on the Data Modeling, BI, and Data Governance teams on platform initiatives
- Provide the technical interface to data engineering vendors
- Ensure data engineering standards align with industry best practices for data governance, data quality, and data security
- Evaluate and recommend new data technologies and tools to improve data engineering processes and outcomes

Qualifications:
- Significant experience in a hands-on data engineering role, particularly with business operations data
- Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience
- Experience managing stakeholder engagement, collaborating across teams, and working on multiple simultaneous projects
- Extensive experience acquiring data from REST APIs
- Strong background in Python/Spark programming, with the ability to write efficient, maintainable, and scalable data pipeline code
- Solid understanding of data warehousing, data lakes, MPP data platforms, and data processing frameworks
- Strong understanding of database technologies, including SQL and NoSQL databases