About us
RavenPack is the leading big data analytics provider for financial services. Financial professionals rely on RavenPack for its speed and accuracy in analyzing large volumes of unstructured content. RavenPack's products allow clients to enhance returns, reduce risk, and increase efficiency by systematically incorporating the effects of public information into their models and workflows. Our clients include some of the most successful hedge funds, banks, and asset managers in the world.

About the job
We are seeking an experienced and highly motivated Senior Data Engineer to join our dynamic team. As a Senior Data Engineer, you will play a crucial role in designing, developing, and maintaining innovative data solutions that will shape the future of our data ecosystem. You will collaborate with the Data Science and Product teams to understand data requirements and implement the data pipelines and infrastructure that support our data-driven initiatives. Your ability to work across multiple disciplines, from software engineering to database management, will contribute to building robust and scalable solutions.

Responsibilities:
- Design, develop, and maintain innovative data solutions that address industry-specific challenges.
- Develop data pipelines to extract, transform, and load (ETL) structured and unstructured data from various sources.
- Collaborate with the Data Science team to support their data exploration, analysis, and modeling needs.
- Collaborate with the Product team to understand data requirements and translate them into technical solutions.
- Implement data quality checks and validation processes to ensure the accuracy and integrity of data.
- Develop and maintain documentation for data engineering processes.
- Manage project tasks and ensure successful project delivery.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- 3+ years of experience in data engineering or software engineering projects.
- Advanced Python programming skills for developing data processing pipelines, libraries, and applications.
- SQL proficiency and experience with relational databases (experience with NoSQL databases is a plus).
- Experience with cloud platforms such as AWS.
- Experience with scripting languages such as Bash for automation tasks.
- Familiarity with containerization technologies such as Docker.
- Experience with Git.
- Excellent problem-solving skills and the ability to analyze complex data-related issues.
- Attention to detail and a commitment to delivering high-quality solutions.
- Strong communication skills and the ability to collaborate effectively with cross-functional teams, including effective written and verbal communication in English.

Desirable:
- Familiarity with Machine Learning (ML) techniques and frameworks, as well as Large Language Model (LLM) technologies.
- Experience with agent workflows.