At Kantox we are looking for a Data Engineer to join our Data team to design, improve and enhance our current data architecture. Kantox is a growing organisation with growing data needs, so a scalable, reliable and efficient data architecture and well-orchestrated data pipelines are fundamental to our business.

This is an exciting full-time job with great challenges: you will help build and automate our streaming data architecture and help us grow our Data team. You will be joining a vibrant and passionate group of software engineers who hail from all parts of the world.

The Kantox Engineering Manifesto

Kantox is a team sport. Our engineering culture is devoid of egos, yet we take great pride in our work. We believe in constructively challenging each other, pushing our knowledge, code, and processes to the absolute limit. Our processes are built around continual self-improvement, continuous code integration and deployment.

About the team

You will collaborate closely with the data science, business intelligence, and cloud engineering teams to develop and manage data products across multiple domains. Your efforts will ensure the delivery of data that supports internal decision-making and provides clients with scalable and reliable visibility.

Your future role at Kantox

As a Data Engineer, you will play a pivotal role in enhancing and optimising our data systems to support our Data Lake/Data Mesh infrastructure and meet our data processing requirements. Your work will enable the creation of high-performance, scalable data products company-wide, while ensuring proper data governance.
You will also leverage and improve our existing AWS-based data platform, utilising a tech stack that includes Dagster for managing data assets, Delta Lake for data storage, Kafka/RabbitMQ for streaming, dbt/DuckDB for transformations, and Terraform, Kubernetes, and ArgoCD for infrastructure management.

This is a unique opportunity to join a rapidly growing company in a dynamic phase, offering a stimulating role with significant growth potential.

What you will do

- Collaborate with a talented and diverse team to develop and enhance our data platform, enabling efficient data collection, processing, accessibility, usability, and monitoring.
- Automate the self-service platform and create tools for generating data products.
- Build highly scalable and reliable data assets using Python, Scala, and Spark/PySpark.
- Integrate new and existing data sources into our ETL and reverse ETL pipelines.
- Design a data strategy that leverages the appropriate tools for data governance, cataloguing, and discovery, while ensuring secure access.
- Ensure data quality and reliability by implementing standardised monitoring, CI/CD, and testing practices.
- Continuously explore and adopt new technologies and best practices to enhance our data infrastructure.
- Work within the Tech Team, adhering to agile methodologies.