NDT Global Barcelona, Barcelona, Spain

Job Description

NDT Global is the leading provider of ultra-high-tech diagnostic inspection solutions, advanced data analysis, and integrity assessment services for ensuring the safety and longevity of energy-sector infrastructure assets.
Recognized as the forerunner in ultrasonic inspection technologies comprising Pulse Echo, Pitch-and-Catch and Phased Array, as well as Acoustic Resonance (ART Scan) methodologies, the company also deploys a range of non-ultrasonic technologies, such as Inertial Measurement Units, with more under development.
NDT Global strategically applies its inspection technologies to detect, diagnose and model various types of threat—circumferential or axial cracks, metal loss, geometry, mapping, and more—across diverse classes of assets.
By providing predictive, decision-ready insights driven by the world's most accurate data, NDT Global enables asset owners to optimize infrastructure health and drive operational efficiencies while reducing risk and minimizing their carbon footprint.
Purpose

The Data Engineer architects, specifies, and implements the access, ingestion, processing, and provisioning of large-scale datasets into the Big Data infrastructure.
The role ensures that architectural, test-driven, and clean-code principles are followed.
This position also covers the design, setup, maintenance, and upgrade of the production environment.
Tasks

Designs, specifies, and writes software code for different projects in Big Data environments.
Mentors, supports, and coaches colleagues, including Data Scientists and Engineers, on Big Data and Data Engineering topics.
Maintains software code and infrastructure in the Big Data environments.
Designs and prepares the Big Data platform for machine learning and data processing in production environments.
Documents the developed solutions and maintains consistency of information.
Takes responsibility for the overall code base and deploys software code into production.
Performs code reviews and ensures the quality of the developed software in production.
Requirements

A bachelor's degree in Computer Science, Engineering, or a related field is required.
A minimum of 6 years of experience in software development is required.
A minimum of 3 years of experience working with Big Data frameworks is required.
Advanced experience in Scala, Apache Spark, Apache Kafka, and distributed storage systems (S3, Ceph); see the illustrative sketch after this list.
Experience with NoSQL Databases (Redis) and SQL Databases (MySQL, Microsoft SQL Server).
Familiarity with cloud platforms (AWS or Azure) and their big data services (e.g., EMR, Databricks).
Experience with Apache NiFi is a plus.
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
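For illustration only, here is a minimal sketch of the kind of pipeline this stack implies: a Spark Structured Streaming job in Scala that consumes a Kafka topic and lands the records on S3 as Parquet. The application name, broker address, topic, bucket, and checkpoint path are hypothetical placeholders, not NDT Global systems.

// Minimal sketch: stream from Kafka to S3 with Spark Structured Streaming.
// All names below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object SensorIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("sensor-ingest") // hypothetical application name
      .getOrCreate()

    // Subscribe to a Kafka topic; broker and topic are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "inspection-runs")
      .load()

    // Kafka delivers key/value as binary; cast the payload to a string.
    val events = raw.select(col("value").cast("string").as("payload"))

    // Append to S3 as Parquet; bucket and checkpoint path are placeholders.
    events.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/ingest/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/ingest/")
      .start()
      .awaitTermination()
  }
}

The checkpoint location is what lets the sink track progress and recover without duplicate writes after a restart, which is the usual design choice for this Kafka-to-object-storage pattern.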
Skillset

Must be proficient in spoken and written English.
Advanced programming skills (functional or object-oriented).
Knowledge of Machine Learning is preferred.
Strong background in big data processing, data pipeline development, and performance optimization.
Excellent problem-solving skills and the ability to troubleshoot complex data issues.
Strong communication skills and the ability to work collaboratively in a team environment.
Benefits

Competitive salary and comprehensive benefits package: health insurance, pension plan, Gym Flex, eye care, and 23 days of annual leave.
Flexible & hybrid working: our flextime model allows you to design your working day to suit your needs.
We think about the future: individual training and professional development opportunities.
Opportunity to work with cutting-edge technologies and make a significant impact on our data strategy.
International, intercultural and young working environment.
Great company culture and office environment.