Job Description

• Develop and manage Hadoop-based data processing frameworks
• Build ETL pipelines using tools like Sqoop and Hive
• Work with Spark, MapReduce, Kafka, and related Big Data technologies
• Optimize data storage and processing for performance and cost efficiency
• Troubleshoot and reso