Databricks Engineer
Element Technologies

Job description
**Role Summary**

**Key Responsibilities**
Data Engineering & Lakehouse Development
Build scalable ETL/ELT pipelines in Databricks using PySpark, Spark SQL, Delta Live Tables, and workflows.
Engineer curated datasets across bronze/silver/gold layers for claims, pricing, provider, RCM, and member data.
Implement Delta Lake best practices including ACID transactions, schema evolution, CDC, and optimized storage formats.
Automate ingestion/transformation of large datasets from claims systems, provider files, call center platforms, and EHR feeds.
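As an illustration of the bronze/silver/gold pattern named above, here is a minimal Spark SQL sketch of a bronze-to-silver promotion step; the table and column names (`bronze.claims_raw`, `silver.claims`, `claim_id`, etc.) are hypothetical, not part of this posting:

```sql
-- Hypothetical bronze -> silver step for claims data (illustrative names only)
CREATE OR REPLACE TABLE silver.claims
USING DELTA AS
SELECT
  claim_id,
  member_id,
  CAST(service_date AS DATE)       AS service_date,  -- normalize types
  UPPER(TRIM(claim_status))        AS claim_status   -- standardize codes
FROM bronze.claims_raw
WHERE claim_id IS NOT NULL;                          -- basic quality gate
```

In practice this step would run inside a Delta Live Tables pipeline or a scheduled workflow, with expectations enforcing the quality gate.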
**Data Quality & Governance**
Enforce PHI‑compliant design patterns using Unity Catalog, governance guardrails, and cluster policies.
Implement pipeline monitoring, logging, and Spark performance optimization.
**Platform & Collaboration**
Support cluster optimization, table indexing (Z‑ORDER), and cost‑efficient lakehouse operations.
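The Z-ORDER table maintenance mentioned above is typically issued as a Databricks SQL command that compacts small files and co-locates rows on a frequently filtered column; a minimal sketch, with hypothetical table and column names:

```sql
-- Compact files and cluster data by a common filter column
-- (silver.claims and member_id are illustrative names)
OPTIMIZE silver.claims
ZORDER BY (member_id);
```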
Participate in Agile ceremonies and ensure timely delivery of engineering tasks.
**Required Skills & Experience**

**Technical Skills**
Strong Spark performance tuning experience.
Experience engineering data for claims, provider, and membership domains.
Strong understanding of healthcare data models and adjudication flows.
**Experience & Education**
Bachelor’s degree (4‑year).
**Nice‑to‑Have Skills**
Experience with DLT, CI/CD, and MLflow‑integrated pipelines.
Exposure to actuarial or PI forecasting workflows.
Interested in this role?