
Data Engineer

Technology
Confidential
Riyadh, Saudi Arabia · Posted 1 month ago · Open until 22/05/2026
Full-time · On-site

Job description

Overview

Build and maintain scalable data pipelines, ETL processes, and data platforms to enable analytics and data‑driven products.

Key responsibilities

  • Design, develop, and maintain ETL/ELT pipelines for batch and real‑time data ingestion.
  • Build and optimize data models, schemas, and data warehouses/lakes (Redshift, Snowflake, BigQuery, Azure Synapse).
  • Implement data transformation, cleansing, and validation logic to ensure accuracy and reliability.
  • Integrate diverse data sources (databases, APIs, streaming, logs) and maintain connectors.
  • Collaborate with data scientists, analysts, and product teams to deliver datasets and APIs for analytics and ML.
  • Ensure data quality, lineage, and governance; implement monitoring, alerting, and observability for pipelines.
  • Optimize performance and cost of storage and processing; manage partitioning, indexing, and resource usage.
  • Maintain infrastructure-as-code for data platforms (Terraform, CloudFormation) and automate deployments.
  • Document data schemas, pipelines, runbooks, and support incident response for data issues.
  • Participate in architecture reviews, capacity planning, and tech‑debt remediation.

Requirements

  • 3+ years’ experience in data engineering, ETL, or a related software engineering role.
  • Proficiency with SQL and working knowledge of relational and NoSQL databases.
  • Experience with cloud platforms and data services (AWS/Azure/GCP).
  • Familiarity with ETL/ELT tools and frameworks (Airflow, dbt, Spark, Kafka, Flink).
  • Strong programming skills in Python, Scala, or Java.
  • Understanding of data modeling, warehousing concepts, and performance tuning.
  • Experience with data governance, testing, and monitoring tools.
  • Excellent problem‑solving, collaboration, and communication skills.

Preferred

  • Experience with Snowflake, BigQuery, or Redshift; dbt and Apache Spark expertise.
  • Knowledge of CI/CD for data pipelines and infrastructure-as-code.
  • Familiarity with GDPR, data privacy, and security best practices.
Job Type: Full-time

Pay: SAR 24.56 – SAR 45.20 per hour

Expected hours: 40 per week

Work Location: In person

