
Big Data Developer
- IT
- India
- 2025-06-04
- Full Time
Job Title: Big Data Engineer Tech
Primary Skills:
- Hadoop (HDFS, MapReduce, YARN)
- Spark (SQL, DataFrame)
- ETL/ELT (professional experience with Teradata, Ab Initio)
- Python or PySpark
- MongoDB, GCP, BigQuery
- CI/CD, Hive
- Unix, Autosys
Responsibilities:
- Design, develop, and test robust ETL/ELT data pipelines using MapReduce and Spark.
- Process large datasets in multiple file formats such as CSV, JSON, Parquet, and Avro.
- Perform metadata configuration and optimize job performance.
- Analyze and recommend changes to data models (E-R and dimensional models) for improved efficiency.
- Collaborate with cross-functional teams to ensure smooth data workflows and processing.
- Implement best practices in coding, performance tuning, and process automation.
- Lead the team in troubleshooting complex data issues and provide guidance on the best approaches.
- Ensure high-quality delivery of data pipelines through regular performance and scalability checks.
- Conduct code reviews and mentor junior engineers on technical skills and best practices.
- Design and optimize processes for scalable data storage, management, and access.
Eligibility Criteria:
- 10+ years of experience in Big Data engineering with hands-on expertise in Hadoop and Spark.
- Strong understanding and practical experience in designing, coding, and testing ETL/ELT pipelines.
- Proficiency in Spark (SQL and DataFrame) for processing large datasets.
- Experience with data models (E-R and dimensional) and their optimization.
- Strong skills in Unix shell scripting (simple to moderate complexity).
- Familiarity with the Sparkflow framework (preferred).
- Proficiency in SQL, GCP BigQuery, and Python is desirable.
- Strong problem-solving and analytical skills.
- Good communication and collaboration skills for working in a cross-functional team environment.
- Ability to work independently and manage multiple tasks simultaneously.