Back End Developer (Python)
Decskill
Job Description
Location: Remote (Portugal)
Employment: Full-time / Contract
About Loop Future
Loop Future combines technology, innovation, and sustainability to create digital solutions that shape the future.
By integrating software engineering, cloud solutions, Salesforce expertise, and intelligent sourcing, Loop Future connects people, systems, and industries — driving efficiency, growth, and global impact.
We are headquartered in Portugal, with offices in Lisbon, Porto, Coimbra, Leiria, and Braga, supported by our international offices in the UK, Switzerland, and India.
The company brings together a diverse and talented team committed to delivering cutting-edge digital experiences across domains such as TV & Media, Satellite & Space, Consultancy, and Retail.
About the Role
We are looking for a Senior Data Engineer with strong expertise in Databricks and Python to design, evolve, and govern a common reusable Python library that serves as a foundation for batch and streaming pipelines across the Medallion Architecture (Bronze, Silver, Gold).
This is a highly technical, standards-driven role suited for professionals with strong software engineering maturity who enjoy defining best practices, enforcing consistency, and building scalable frameworks adopted by multiple data engineering teams.
Core Responsibilities
Design, implement, and maintain a shared Python library for Databricks supporting batch and streaming workloads.
Develop reusable PySpark modules, abstractions, and base classes aligned with Medallion architecture principles.
Define and enforce software engineering best practices, including coding standards, documentation, testing strategies, and versioning.
Establish and maintain CI/CD pipelines using GitLab and Databricks Asset Bundles (DABs).
Ensure controlled releases, backward compatibility, and smooth adoption of shared components across teams.
Integrate logging, monitoring, and data quality controls using observability tools.
Collaborate within Scrum teams, actively participating in Agile ceremonies and incremental delivery.
Work closely with DataOps to ensure production stability, observability, and reliability.
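To give candidates a concrete sense of the "reusable base classes aligned with Medallion architecture" mentioned above, here is a minimal, library-agnostic sketch. All names (`MedallionStage`, `TableRef`, `BronzeToSilver`) are illustrative assumptions, not part of any existing library, and plain Python lists stand in for Spark DataFrames so the idea stays self-contained:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class TableRef:
    """Illustrative reference to a table in one Medallion layer."""
    layer: str  # "bronze" | "silver" | "gold"
    name: str

    @property
    def path(self) -> str:
        return f"{self.layer}.{self.name}"


class MedallionStage(ABC):
    """Base class for one hop in a Medallion pipeline (e.g. bronze -> silver).

    Shared concerns (logging, data-quality checks, versioned behaviour) live
    here once, so every consuming team inherits them instead of
    re-implementing them.
    """

    def __init__(self, source: TableRef, target: TableRef):
        self.source = source
        self.target = target

    @abstractmethod
    def transform(self, rows: list[dict]) -> list[dict]:
        """Layer-specific logic supplied by each concrete stage."""

    def run(self, rows: list[dict]) -> list[dict]:
        out = self.transform(rows)
        # A stand-in for the library's data-quality gate.
        if out is None:
            raise ValueError(f"{self.target.path}: transform returned nothing")
        return out


class BronzeToSilver(MedallionStage):
    """Example cleanup stage: drop rows missing an id, normalise keys."""

    def transform(self, rows: list[dict]) -> list[dict]:
        return [{k.lower(): v for k, v in r.items()} for r in rows if r.get("id")]


stage = BronzeToSilver(TableRef("bronze", "events"), TableRef("silver", "events"))
print(stage.run([{"id": 1, "Name": "a"}, {"Name": "b"}]))
# → [{'id': 1, 'name': 'a'}]
```

In the real role the `transform` contract would take and return PySpark DataFrames, but the design point is the same: concrete stages override one method, while cross-cutting standards stay in the shared base class.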
Requirements
Minimum 10 years of professional experience in Python development.
At least 5 years of hands-on experience with Azure Databricks and PySpark in production environments.
Strong experience designing Python libraries, frameworks, or shared components.
Solid knowledge of OOP, design patterns, unit and integration testing, and CI/CD pipelines.
Experience with code quality tools (linting, formatting, static analysis).
Strong understanding of batch and streaming data processing.
Experience with Medallion Architecture and data lifecycle best practices.
Familiarity with Airflow, Terraform, and Azure ADLS Gen2.
Professional working proficiency in English.
Certifications in Databricks or Azure (nice to have).
Interested in this position?