Dariel
[Insert Location / Hybrid / Remote]
Full-Time
We are looking for an Intermediate Data Engineer to join our agile data team. In this role, you will build and support secure, scalable, and repeatable data pipelines, enabling the business to derive insights from complex and diverse data sources.
You'll work with large datasets across batch and real-time environments, leveraging modern big data and cloud technologies to deliver innovative, high‑performing solutions.
This position is ideal for a data engineer with solid technical foundations who is ready to step up, contribute to solution design, and work hands-on with cloud, ETL, and big data tools.
Key Responsibilities
Build, enhance, and maintain data pipelines and data integration solutions.
Translate technical and business requirements into efficient, scalable data architectures.
Contribute to the design of data analytics frameworks and end-to-end data solutions.
Develop data feeds from on-premises systems to the AWS Cloud.
Support and troubleshoot production data pipelines (break/fix).
Develop data marts using Talend or similar ETL tools.
Manipulate and transform data using Python, PySpark, or Spark.
Process large datasets using the Hadoop ecosystem, particularly Amazon EMR.
Participate in database development, operations, and optimisation.
Contribute to standards, documentation, and best practices for data solutions.
Ensure alignment with policies, data governance, and disaster recovery requirements.
Participate in research and evaluation of new data technologies and tools.
Support automated testing, deployment, and CI/CD practices for data solutions.
Skills & Experience Required
Bachelor's Degree in Computer Science, Computer Engineering, or related field.
AWS Certification (or working towards one).
4+ years of experience with Big Data technologies.
4+ years of experience building ETL/ELT pipelines.
Practical experience working with AWS (EMR, EC2, S3).
Strong programming skills in Python and familiarity with scripting languages.
Experience with Talend or similar ETL tools.
Strong knowledge of data modelling, data structures, and physical database design.
Experience with distributed systems and large‑scale data processing.
Understanding of SDLC methodologies and agile environments.
Experience with batch and streaming tools (Kafka, Kinesis, etc.) is advantageous.
Strong analytical and problem-solving abilities.
Ability to work collaboratively in agile, cross-functional teams.
Curiosity about new technologies and a passion for data engineering.
A balance of technical depth with practical, delivery‑focused thinking.
Interested in this role?