Data Engineer (part-time)

About Us: Avea is a Swiss longevity startup with the vision to become the leading player in the evidence-based longevity supplement industry.
We offer science-based longevity supplements to target multiple hallmarks of ageing, allowing individuals to improve their well-being by slowing down ageing itself.
Our mission is to optimise people's healthspan and lifespan, helping them keep or regain their vitality to live as healthily as possible for as long as possible.

The Role: As a Data Engineer, you will be responsible for ensuring that we as an organisation can leverage data effectively.
In essence, you bridge the gap between raw data and actionable insights, a crucial role in data-driven decision-making.
This is initially a part-time role that can convert into a full-time opportunity.
Depending on your location, we offer a B2B or EOR contract.
This is a fully remote role.

Duties and Responsibilities:

Data Quality and Consolidation:
- Data Quality: Ensure the accuracy, completeness, and consistency of data across various systems.
- Data Flow Management: Oversee the flow of data from extraction through transformation and loading (ETL) into the appropriate data stores.
- Data Consolidation: Integrate data from different sources, such as Snowflake and Google Cloud, to create unified, accurate datasets.

ETL Process Development (Python):
- Develop Python Scripts for ETL: Write and maintain Python scripts to extract, transform, and load data from various sources (a minimal sketch of such a script appears at the end of this posting).
- Automate ETL Pipelines: Automate the data transformation and loading processes to ensure real-time or scheduled data availability.
- Data Transformation: Convert transactional data into relational models (e.g., customer and order tables) to make it usable for analytics.

Data Extraction:
- Connect to Data Sources: Build and manage Python connections to platforms like Snowflake and Google Cloud.
- Data Download and Loading: Extract raw data, clean it, and load it into the relevant environments for processing.

Data Transformation:
- Relational Data Modeling: Transform extracted data into a structured, relational format (e.g., customers, orders, order details, shipments, inventory).
- Preprocessing for Analysis: Ensure that data is ready to use without further pivoting or transformation by end users.

Independence and Scalability:
- Data Pipeline Independence: Work towards making the data pipelines and transformation processes more independent and scalable.
- Maintain and Optimize: Regularly maintain and optimize data processes to ensure efficiency.

Requirements:

Experience & Qualifications:
- Degree in Computer Science, Data Engineering, or a related field (or equivalent work experience).
- 2-5+ years of experience in data engineering, ETL processes, or a similar role.
- Experience working with AWS/Snowflake or Google Cloud/PostgreSQL.
- Experience writing Python scripts to connect to databases or cloud environments (e.g., Snowflake, Google Cloud).
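For a concrete flavour of the day-to-day work, below is a minimal sketch of the kind of ETL script described above. It is an illustration only, not our actual pipeline: it assumes the snowflake-connector-python package with its pandas extras installed, and the connection details, warehouse, and table names (RAW_ORDERS, ANALYTICS.SHOP.ORDERS) are hypothetical.

    import os

    import pandas as pd
    import snowflake.connector
    from snowflake.connector.pandas_tools import write_pandas

    # Hypothetical connection details; in practice these come from environment
    # variables or a secrets manager, never from source code.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",  # hypothetical warehouse
        database="RAW",            # hypothetical source database
        schema="SHOP",             # hypothetical source schema
    )

    # Extract: pull raw transactional rows (RAW_ORDERS is a hypothetical table).
    cur = conn.cursor()
    cur.execute("SELECT order_id, customer_id, order_ts, amount FROM raw_orders")
    raw = cur.fetch_pandas_all()  # Snowflake uppercases unquoted identifiers

    # Transform: clean the rows and reshape them into a relational orders model,
    # so end users can query it without further pivoting.
    orders = (
        raw.dropna(subset=["ORDER_ID", "CUSTOMER_ID"])
           .assign(ORDER_TS=lambda df: pd.to_datetime(df["ORDER_TS"]))
           .groupby(["ORDER_ID", "CUSTOMER_ID", "ORDER_TS"], as_index=False)["AMOUNT"]
           .sum()
    )

    # Load: write the modelled table to a hypothetical analytics schema.
    write_pandas(conn, orders, table_name="ORDERS",
                 database="ANALYTICS", schema="SHOP", auto_create_table=True)

    conn.close()

The same extract-transform-load shape carries over when the source is Google Cloud rather than Snowflake; only the connection layer changes. In production, a script like this would run on a schedule rather than by hand.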