Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster. As one of the fastest-growing SaaS companies in history, we surpassed $2B in revenue in our last fiscal year with extensive growth potential ahead.
At the heart of Veeva are our values: Do the Right Thing, Customer Success, Employee Success, and Speed. We're not just any public company – we made history in 2021 by becoming a public benefit corporation, legally bound to balance the interests of customers, employees, society, and investors.
As a Work Anywhere company, we support your flexibility to work from home or in the office, so you can thrive in your ideal environment.
Join us in transforming the life sciences industry while making a positive impact on customers, employees, and communities.
The Role
We are an AI team supporting the entire suite of Link data products (e.g., Link Key People). Agility and quality are our operating principles in developing cutting-edge ML models. Our models are trained on data captured by a group of over 2,000 subject-matter experts. They complement the curation pipeline and scale our solutions to different regions, languages, and therapeutic areas. Ultimately, we accelerate clinical trials and equitable care, and we are proud that our work helps patients receive urgent care sooner.
Your role will primarily involve developing LLM-based agents specialized in searching and browsing the web and extracting detailed information about Key Opinion Leaders (KOLs) in the healthcare sector. You will craft an end-to-end human-in-the-loop pipeline to sift through a large array of unstructured medical documents, ranging from academic articles to clinical guidelines and meeting notes from therapeutic committees. These agents will perform semantic search and reasoning to provide precise answers to predefined queries about KOL-related data across languages and disciplines. Leveraging AWS infrastructure, you will build, scale, and optimize agents and pipelines for information extraction and question answering, ensuring they are production-ready and robust. You will focus on building highly scalable, efficient systems while collaborating with Data Engineers on seamless data pipelines and with Data Scientists on model refinement. You will own the entire deployment process, ensuring models are integrated into production environments with minimal latency and high performance.
We invite you to work remotely from anywhere in the UK, Spain, or Portugal. However, you must already reside in one of these countries and hold legal work authorization that does not require employer sponsorship.
If you plan to move to one of these countries or live nearby, we may still consider your application if you are a superb fit for the role. In that case, please include an additional document outlining your current or planned location, your visa status, and the reasons that make you an excellent fit.
What You'll Do
- Develop and manage ML infrastructure and CI/CD pipelines to support multiple data products
- Build fully automated, scalable, cost-effective, and fault-tolerant solutions in AWS to process billions of records
- Provide engineering mentorship and guidance to data scientists
- Develop LLM-based agents capable of performing function calls and using tools such as browsers for enhanced data interaction and retrieval
- Apply Reinforcement Learning from Human Feedback (RLHF) methods such as Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO) to train LLMs on human preferences
- Collaborate with data scientists, data engineers, and product/operations teams

Requirements
- Agile mindset
- Proficiency in ML operationalization, including CI/CD pipelines and workflow/model management with stacks such as Airflow and MLflow
- Proficiency in distributed computing platforms (Ray, Spark) and Kubernetes for inference
- Solid understanding of and experience with deep learning frameworks (e.g., PyTorch, JAX)
- Hands-on experience with in-house training and inference of LLMs
- 3+ years of experience as a Machine Learning Engineer or in a comparable role
- 2+ years of experience in cloud development, ideally on AWS
- Strong analytical skills and data curiosity
- Strong collaboration skills as well as verbal and written communication skills
- Comfortable in start-up environments
- Strong interpersonal skills and a team player
- High energy and ambition

Nice to Have
- Experience in the life/health science industry, notably pharma
- Strong theoretical knowledge of Natural Language Processing, Machine Learning, or Reinforcement Learning
- Experience with NoSQL databases
- Familiarity with architectural choices, particularly for ML systems
- Leadership skills and a solid network to help in hiring and growing the team

Perks & Benefits
- Work anywhere
- Personal development budget (equal to 2% of your salary and paid in addition to it)
- Veeva charitable giving program
- Fitness reimbursement
- Life insurance + pension fund
Veeva's headquarters is located in the San Francisco Bay Area with offices in more than 15 countries around the world.
As an equal opportunity employer, Veeva is committed to fostering a culture of inclusion and growing a diverse workforce. Diversity makes us stronger. It comes in many forms. Gender, race, ethnicity, religion, politics, sexual orientation, age, disability and life experience shape us all into unique individuals. We value people for the individuals they are and the contributions they can bring to our teams.
If you need assistance or accommodation due to a disability or special need when applying for a role or in our recruitment process, please contact us at ******.