Wizeline is a global digital services company helping mid-size to Fortune 500 companies build, scale, and deliver high-quality digital products and services. We thrive on solving our customers' challenges through human-centered experiences, digital core modernization, and intelligence everywhere (AI/ML and data). We help them succeed in building digital capabilities that bring technology to the core of their business.
Your Day-to-Day

We are looking for Senior Data Engineers to drive the architectural design, implementation plan, best practices, and testing plans for projects involving terabytes of data, which will serve as the foundation for advanced analytics and machine learning work performed by data scientists on top of that infrastructure.
Here's what you'll be doing in your day-to-day work:
- Decide on the best approach for the needs of each project, understanding that a strict process is not always possible to follow.
- Rapidly sketch and wireframe with collaborative feedback from cross-functional project leads to ensure business goals are met and technical constraints are considered.
- Create information architecture diagrams to document new workflows and hierarchies.
- Design mockups for new products that use existing UX patterns and components and demonstrate interaction details.
- Collaborate closely with the engineering team as designs are implemented to ensure products meet our usability standards before they go out to users.
- Quickly iterate on designs based on user feedback and available user metrics.
- Lead user research to validate your design hypotheses and solutions via interviews, usability tests, and data.
- Communicate frequently with your product/project team to explain design decisions, present user workflows, and build a strong understanding of user needs.
- Contribute to evolving our user-first, agile design methodology with new ideas, best practices, and other learnings from various user experience groups and resources.

Must-have Skills

- Strong general programming skills.
- Solid experience with Python. If not proficient in Python, we expect the candidate to be proficient in other languages and to demonstrate the ability to learn new ones very quickly.
- Experience with Spark.
- Experience working with SQL in advanced scenarios that require heavy optimization.
- 4+ years of experience with large-scale data engineering, with an emphasis on analytics and reporting.
- 2+ years of experience developing on a Hadoop-like ecosystem.
- Experience building scalable, real-time, high-performance data lake solutions in the cloud.
- Proficiency designing and implementing ETL (Extract, Transform, Load) processes that deal with large volumes of data (terabytes requiring distributed processing).
- Experience developing solutions within cloud services (AWS, GCP, or Azure).
- Experience with NoSQL databases such as Apache HBase, MongoDB, or Cassandra.
- Experience with data stream processing technologies, including Kafka, Spark Streaming, etc.
- Advanced English level.