
Data Engineer

Technology
IBM
Vilnius, Lithuania
€45,600 - €69,600 / year
Posted 1 month ago. Apply by 2026-04-02.
On-site

Job Description

Introduction

In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we bring deep technical and industry expertise to public and private sector clients around the world. You will be part of a team that delivers high-impact solutions and drives adoption of modern data and cloud technologies.

Your Role And Responsibilities

The successful candidate will design, build, and maintain scalable and reliable data pipelines and platforms used across analytics, AI, and business systems. You will collaborate with cross-functional teams to ensure data is accessible, high-quality, and aligned with business goals.

Responsibilities include:

Designing, developing, and optimizing data processing systems, including ETL/ELT pipelines and data orchestration workflows (a minimal PySpark sketch follows this list)

Building and maintaining data pipelines for batch and real-time (streaming) use cases

Working with data scientists, software engineers, and business stakeholders to deliver high‑quality, production‑ready data solutions

Implementing and enforcing data quality, validation, and governance practices

Ensuring compliance with data security standards, access controls, and regulatory requirements

Monitoring data platform performance and implementing improvements to ensure reliability, scalability, and cost effectiveness

Contributing to standardization, automation, and best practices across data engineering teams
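
As a concrete illustration of the pipeline work above, here is a minimal sketch of a batch ETL step in PySpark with a simple data-quality gate. The file paths, column names, and the 95% validity threshold are hypothetical examples, not a prescribed implementation.

# Minimal batch ETL sketch: read raw CSV, validate, write curated Parquet.
# All paths, column names, and thresholds below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: load a raw batch of orders (hypothetical path).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, then keep only rows that pass basic validation.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
)
valid = orders.filter(F.col("amount").isNotNull() & (F.col("amount") > 0))

# Data-quality gate: fail the job if too many rows were rejected.
total, kept = orders.count(), valid.count()
if total > 0 and kept / total < 0.95:  # hypothetical 95% threshold
    raise ValueError(f"Data quality check failed: {kept}/{total} rows valid")

# Load: write the curated batch, partitioned by order date.
(
    valid.withColumn("order_date", F.to_date("order_ts"))
         .write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("/data/curated/orders")
)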

Preferred Education

Bachelor's Degree

Required Technical And Professional Expertise

Strong Python skills for data processing, pipeline development, and automation

Hands‑on experience with Apache Spark / PySpark for large‑scale distributed data processing

Experience with Databricks and cloud platforms (AWS or Azure), including Delta Lake and related data management tools

Proven experience designing, developing, and maintaining scalable ETL/ELT pipelines and data platform components

Familiarity with building both batch and real-time (streaming) data workflows, preferably with technologies like Kafka, Event Hubs, or Kinesis (see the streaming sketch after this list)

Experience with DevOps practices and Infrastructure as Code (Terraform preferred)

Understanding of data modeling, data warehousing concepts, and modern data architectures (e.g., Lakehouse)
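
To make the streaming and Delta Lake requirements concrete, here is a hedged sketch of a PySpark Structured Streaming job that reads JSON events from a Kafka topic and appends them to a Delta table. The broker address, topic name, schema, and paths are hypothetical, and the job assumes a cluster with the Kafka and Delta Lake connectors available (for example, a Databricks runtime).

# Sketch: consume a Kafka topic with Structured Streaming, append to Delta.
# Broker, topic, schema, and paths are hypothetical; assumes the Kafka and
# Delta Lake connectors are installed (e.g., on a Databricks cluster).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw Kafka records and parse the JSON value payload.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
         .option("subscribe", "events")                     # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
         .select("e.*")
)

# Append the parsed stream to a Delta table; the checkpoint directory lets
# the job recover and avoid reprocessing records after a restart.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/chk/events")  # hypothetical path
          .outputMode("append")
          .start("/delta/events")
)
query.awaitTermination()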

Preferred Technical And Professional Experience

Experience building or integrating with LLM‑powered or AI‑driven solutions

Familiarity with FastAPI and Pydantic for service development (a brief service sketch follows this list)

Certification in AWS, Azure, or Databricks

Knowledge of CI/CD pipelines for data workloads
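
Since FastAPI and Pydantic are named as preferred skills, below is a minimal sketch of how a small data service might expose a validated ingestion endpoint. The model fields, route, and response are illustrative assumptions, not an actual IBM API.

# Minimal FastAPI service sketch with Pydantic request validation.
# The Order model and /orders route are hypothetical examples.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()

class Order(BaseModel):
    order_id: str
    amount: float = Field(gt=0, description="Order amount must be positive")

@app.post("/orders")
def ingest_order(order: Order) -> dict:
    # By the time this runs, Pydantic has validated the types and the
    # amount > 0 constraint; a real service would hand the record to a
    # pipeline or message queue rather than just echoing it back.
    return {"status": "accepted", "order_id": order.order_id}

Run locally with "uvicorn app:app" (assuming the file is named app.py) and POST JSON to /orders; invalid payloads are rejected with a 422 response before the handler runs.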

Monthly salary for this position ranges from 3,800 EUR gross to 5,800 EUR gross. The final offer will depend on qualifications, professional experience, and competencies.

