Company Overview: Join a leading international company based in the U.S., renowned for its enterprise VoIP communication, messaging, and video conferencing solutions. As part of our Data Platform Team, you will work on building and optimizing the backbone of our data infrastructure, ensuring efficient, scalable, and reliable data processing pipelines that support mission-critical business applications.
What You'll Do: As a member of the Data Platform Team, you will design, implement, and maintain robust data platforms that handle large-scale data processing and management. You'll collaborate with cross-functional teams to model data pipelines, develop solutions for complex data challenges, and ensure our platform is built to scale efficiently across a wide variety of use cases and systems.
Responsibilities:
Design, implement, and support data platform applications using modern technologies in a dynamic, fast-evolving environment.
Develop and optimize large-scale data pipelines, ensuring high performance, reliability, and scalability.
Collaborate with various teams to model complex data relationships and provide insights to support data-driven decisions.
Ensure data platform architecture and infrastructure remain resilient, efficient, and scalable to meet the company's growing data needs.
Promote a knowledge-sharing environment, mentoring peers and contributing to the team's success.

Technology Stack:
Core Technologies: Java, Scala, ANSI SQL, Apache Spark, Apache Airflow, Apache Hadoop (HDFS, YARN), Apache Hive, Apache Impala, Apache Flume, MongoDB, AWS (Amazon Web Services), Snowflake.

Skills & Requirements:
3+ years of hands-on experience with Java or Scala programming.
Strong grasp of Java concepts (collections, serialization, multi-threading, lambda expressions, JVM architecture, etc.).
Proficiency in ANSI SQL, including query syntax, performance tuning, and knowledge of OLAP vs. OLTP.
Ability to quickly learn new technologies and integrate them into existing infrastructure.
A deep understanding of architecture patterns in distributed systems, especially in data platform environments.

Preferred Qualifications:
Experience working with Hadoop ecosystem components and big data frameworks.
Hands-on experience with Hadoop, Spark, Kafka, or other data streaming technologies.
Familiarity with designing and implementing ETL (Extract, Transform, Load) processes.
Basic knowledge of Python and its use in data engineering tasks.
Experience with data visualization/analysis tools (e.g., Tableau) to drive insights.
Proficiency with Linux and cloud-based infrastructure (especially AWS).
Intermediate or higher proficiency in written and spoken English.

What We Offer:
A collaborative and high-performing professional team.
The opportunity to work with cutting-edge data technologies and solve challenging, large-scale data problems.
A dynamic project environment with ample opportunities for personal growth, professional development, and career advancement.