At Dynatrace, Information Systems Engineering manages and transforms data into information for decision-makers. This includes assessment, design, acquisition, and/or implementation of tools, stores, and pipelines for turning data into information.
We are seeking a Lead Data Engineer who will provide key technical direction for and hands-on effort with a small team of data engineers supporting our Business Intelligence function.
A core part of the role is directing, and helping to implement, pipelines that transform business data into our Snowflake environment. The ideal candidate will have experience and demonstrable skill with Snowflake, Snowpark, and Spark using Python.
We are interested in candidates who can demonstrate technical leadership of small teams of data engineers, including mentoring and upskilling more junior team members.
Key responsibilities:
- Lead the design, implementation, and maintenance of scalable data pipelines in the Snowflake ecosystem, including third-party vendor tools such as AWS, Fivetran, etc.
- Serve as a key contributor to the Data Engineering strategy, ensuring efficient data management for operations and enterprise analytics
- Act as the key technical expert for business stakeholder engagement on business data initiatives
- Collaborate with colleagues on the Data Modeling, BI, and Data Governance teams on platform initiatives
- Provide the technical interface to data engineering vendors
- Ensure data engineering standards align with industry best practices for data governance, data quality, and data security
- Evaluate and recommend new data technologies and tools to improve data engineering processes and outcomes

Qualifications:
- Significant experience in a hands-on data engineering role, especially with business operations data
- Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience
- Experience managing stakeholder engagement, collaborating across teams, and working on multiple simultaneous projects
- Hands-on experience implementing robust, scalable data pipelines
- Extensive experience acquiring data from REST APIs
- Strong background in Python/Spark programming, with the ability to write efficient, maintainable, and scalable data pipeline code
- Solid understanding of data warehousing, data lakes, MPP data platforms, and data processing frameworks
- Strong understanding of database technologies, including SQL and NoSQL databases
- Experience with CI/CD pipelines and DevOps practices for data engineering
- Excellent problem-solving and analytical skills
- Snowflake certification or another relevant data engineering certification is a plus
Dynatrace exists to make software work perfectly. Our platform combines broad and deep observability and continuous runtime application security with advanced AIOps to provide answers and intelligent automation from data. This enables innovators to modernize and automate cloud operations, deliver software faster and more securely, and ensure flawless digital experiences.