Data Engineer
Data Engineer with a master’s degree and nearly 3 years of experience building scalable data pipelines using Python, SQL, Hadoop, and cloud platforms (AWS, GCP, Azure). Skilled in big data technologies such as Apache Kafka and Apache Spark, and in designing and implementing efficient ETL/ELT processes. Well-versed in developing real-time data pipelines and integrating data from multiple sources to streamline workflows and improve data accuracy.
Proficient in SQL and NoSQL databases, with expertise in CI/CD, Docker, and Kubernetes for automation and deployment. Strong communicator and team contributor with a proven ability to manage agile projects, improve system reliability, and deliver dependable data solutions.
I am a Data Engineer with almost 3 years of experience building and optimizing scalable data pipelines and working with large datasets. I have expertise in Python, SQL, Apache Spark, Kafka, and Airflow, and have worked on data processing, ETL workflows, and data modeling. My experience spans industries such as insurance and telecommunications, where I have handled real-time and batch data from sources including IoT devices, transactional systems, and APIs.
I am skilled in tuning data pipelines for performance and ensuring data integrity to support analytics, machine learning, and business decision-making.
I hold a Master’s degree in Data Science from Pace University, where I built a strong foundation in data engineering, distributed systems, and advanced data processing. During my studies, I gained expertise in database management, machine learning, and data structures, all of which have been essential in my career, and I worked on projects involving ETL processes, data warehousing, and data pipeline optimization.
This academic background, combined with hands-on experience in tools like SQL, Python, and Apache Spark, has given me the technical skills needed to excel in my current role as a Data Engineer.