What we're all about: Do you ever have the urge to do things better than the last time? We do. And it's this urge that drives us every day. Our environment of discovery and innovation means we're able to build deep, valuable relationships with our clients and create real change for them and their industries. It's what got us here – and it's what will make our future. At Quantexa, you'll experience autonomy and support in equal measure, allowing you to shape a career that matches your ambitions. 41% of our colleagues come from an ethnic or religious minority background. We speak over 20 languages across our 47 nationalities, creating a sense of belonging for all.
Role Overview: Founded in 2016 by a small team, Quantexa was built with a vision of enabling better decision-making through better data-driven intelligence. Seven years, twelve locations and 700+ employees later, we recently gained "Unicorn" status with our Series E funding round.
Our Analytics teams build, deploy and maintain a wide range of AI models that underpin our platform, with specific expertise in emerging methods for graph-based and NLP models. Our MLOps team is tasked with automating and maximizing the efficiency of the build, deployment and maintenance of all model types.
We are looking for a highly skilled MLOps Engineer to join our overseas team, working in parallel with our existing MLOps team. This individual will focus on developing and maintaining the infrastructure and automation pipelines that support our machine learning models, ensuring they can be deployed efficiently into production environments. The role involves close collaboration with data scientists, data engineers, and other MLOps engineers to deliver robust, scalable machine learning pipelines optimized for production.
Responsibilities:
Model Deployment:
- Collaborate with data scientists to ensure smooth deployment of machine learning models into production environments.
- Automate the deployment of machine learning models using CI/CD pipelines and container orchestration tools like Kubernetes and Docker.
- Ensure proper model versioning, governance, and compliance using tools like MLflow, Kubeflow, or DVC.
Pipeline Automation:
- Build and maintain data and model pipelines for training, validation, deployment, and monitoring.
- Develop automated processes for data validation, feature engineering, and model training.
- Integrate pipelines with distributed data processing frameworks (e.g., Spark, Kafka) to ensure efficient data handling for model training and inference.
Monitoring & Maintenance:
- Set up monitoring systems to track model performance and detect issues like model drift, triggering retraining when necessary.
- Work with cloud infrastructure to scale models and ensure high availability in production environments.
- Contribute to the troubleshooting and resolution of issues with models in production.