
DevOps/Python Engineer

Technology
VBeyond
Austin, United States · Posted 1 month ago · Open until 4/16/2026

Job description

  • Design, implement, and maintain highly available and scalable data pipelines leveraging tools such as Druid, Databricks, dbt, and Amazon Redshift
  • Manage and optimize distributed data systems for real-time, batch, and analytical workloads
  • Develop custom scripts and applications using programming languages (Python, Scala, or Java) to enhance data workflows and automation
  • Implement automation for deployment, monitoring, and alerting of data workflows
  • Collaborate with data engineering, analytics, and platform teams to deliver reliable and performant data services
  • Monitor data quality, reliability, and cost efficiency across platforms
  • Build and enforce data governance, lineage, and observability practices
  • Work with cloud platforms (AWS/Azure/GCP) to provision and maintain data infrastructure
  • Apply CI/CD and Infrastructure-as-Code (IaC) principles to data workflows

Required Skills & Experience:
  • 5+ years of experience in DataOps, Data Engineering, DevOps Engineering, or related roles
  • Strong hands-on experience with Druid, Databricks, dbt, and Redshift (experience with Snowflake, BigQuery, or similar is a plus)
  • Solid understanding of distributed systems architecture and data infrastructure at scale
  • Proficiency in SQL and strong programming skills in at least one language (Python, Scala, or Java)
  • Experience with orchestration tools (Airflow, Dagster, Prefect, etc.)
  • Familiarity with cloud-native services on AWS, Azure, or GCP
  • Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.)
