Edelman is a voice synonymous with trust, reimagining a future where the currency of communication is action.
Our culture thrives on three promises: boldness is possibility, empathy is progress, and curiosity is momentum. At Edelman, we understand diversity, equity, inclusion and belonging (DEIB) transform our colleagues, our company, our clients, and our communities. We are in relentless pursuit of an equitable and inspiring workplace that is respectful of all, reflects and represents the world in which we live, and fosters trust, collaboration and belonging.

We are currently seeking a Data Engineer with 3-5 years' experience.
The ideal candidate will be able to work independently within an Agile environment and will have experience working with cloud infrastructure, leveraging tools such as Apache Airflow, Databricks, and Snowflake.
Familiarity with real-time data processing and AI implementation is advantageous.

Why You'll Love Working with Us:

At Edelman, we believe in fostering a collaborative and open environment where every team member's voice is valued.
Our data engineering team thrives on building robust, scalable, and efficient data systems to power insightful decision-making.

We are at an exciting point in our journey, focusing on designing and implementing modern data pipelines, optimizing data workflows, and enabling seamless integration of data across platforms. You'll work with best-in-class tools and practices for data ingestion, transformation, storage and analysis, ensuring high data quality, performance, and reliability. Our data stack leverages technologies like ETL/ELT pipelines, distributed computing frameworks, data lakes, and data warehouses to process and analyze data efficiently at scale. Additionally, we are exploring the use of Generative AI techniques to support tasks like data enrichment and automated reporting, enhancing the insights we deliver to stakeholders.

This role provides a unique opportunity to work on projects involving batch processing, streaming data pipelines, and automation of data workflows, with occasional opportunities to collaborate on AI-driven solutions. If you're passionate about designing scalable systems, building reliable data infrastructure, and solving real-world data challenges, you'll thrive here.
We empower our engineers to explore new tools and approaches while delivering meaningful, high-quality solutions in a supportive, forward-thinking environment.

Responsibilities:

- Design, build, and maintain scalable and robust data pipelines to support analytics and machine learning models, ensuring high data quality and reliability for both batch and real-time use cases.
- Design, maintain, and optimize data models and data structures in tooling such as Snowflake and Databricks.
- Leverage Databricks and cloud-native solutions for big data processing, ensuring efficient management of Spark jobs and seamless integration with other data services.