
Data Analytics Engineer

Technology
Implicity
Paris, France · €52,000 - €57,000 / year · Posted 1 month ago · Apply by 02/05/2026
Full-time · Hybrid

Job description

About us

💙 Implicity is a digital MedTech company that brings outstanding innovations to cardiologists, thanks to Big Data and Artificial Intelligence.

Thanks to our leading cardiac remote monitoring platform, it's far easier to manage data and predict patient issues, so that cardiologists can deliver the best care at the best time.

To put it simply, when you join Implicity, you'll contribute to saving lives with us 💓🩺

Dr Arnaud Rosier (cardiologist and AI researcher) and David Perlmutter (engineer and entrepreneur) co-founded Implicity in 2016.

Ten years on, this French start-up / scale-up 🐓 is a real game changer in the healthcare market, literally shaping the future of cardiology.

250+ hospitals and medical centers already use our solutions, covering 100,000+ patients.

👩🏻👨🏿👱🏻 At Implicity, you will find top experts in data science, engineering, clinical, regulatory, IT, sales, customer success, and more, all working together.

This amazing team has already made Implicity a clear European leader, and we will very soon do the same in the US market.

In a nutshell, thanks to Implicity:

🏆 Patients get far better care

🏆 Doctors' lives are far easier: they can focus on prevention and treatment rather than admin and data burden

🏆 Healthcare payers (Social Security in France) eventually pay a far lower price (preventing/monitoring instead of treating/hospitalizing)

The role can start as soon as you are available!

Want more info?

Our website: https://www.implicity.com/about-us/

Our team: https://www.welcometothejungle.com/fr/companies/implicity/team-1

Our other open positions: https://www.implicity.com/careers/

Job and recruitment context

Joining the Data Platform team and reporting to the Lead Data Engineer, your mission is to make data easier to access across the company and to help the platform scale.

The Data Analytics Engineer sits at the crossroads of software engineering and data analysis, ensuring our Lakehouse is the "single source of truth" for our Data Scientists and Business Stakeholders.

🤝 Reporting Structure

Direct Report: Damien (Data Engineer Manager)

Collaboration: You will be part of the Analytics section of the Data Platform team, collaborating within a tech team of 8 people.

🥇 Your Profile and Mindset

Experience Profile

Master’s degree in Computer Science, Data Engineering, or a related quantitative field.

3+ years of experience in a Data Engineering or Analytics Engineering role.

Strong interest in the healthcare sector and the challenges of medical data normalization (HL7, FHIR).

Experience working with relational and analytics databases (PostgreSQL, Athena, BigQuery, or equivalent).

Fluent in English and French.

🧠 Your missions

Data Modeling: Design and implement robust dimensional models (Star Schema) within our Apache Iceberg Lakehouse to support business reporting.

Transformation Pipelines: Build and optimize modular SQL/Python transformations using dbt and Dagster, ensuring code reusability and maintainability.

Data Governance & Lineage: Implement and maintain DataHub to provide end-to-end lineage, a searchable data catalog, and clear ownership of data assets.

Semantic Layer: Develop and manage our semantic layer via Cube.dev to provide unified data definitions across the company.

Data Quality & Traceability: Implement automated tests and monitoring to guarantee data integrity and observability from S3 to Metabase.

Performance Optimization: Fine-tune AWS Athena queries and Iceberg table layouts (partitioning, compaction) to ensure fast and cost-effective analytics.

Documentation: Maintain up-to-date documentation of data models and definitions in DataHub.

Run & Support: Investigate and resolve data quality incidents to ensure the reliability of our analytical platform.
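To make the modeling and data-quality missions above concrete, here is a minimal, self-contained sketch using Python's stdlib sqlite3. Every table and column name is invented for illustration and is not Implicity's actual schema; the real stack uses dbt over Athena/Iceberg, not SQLite.

```python
import sqlite3

# Hypothetical sketch: all names below (raw_transmissions, dim_patient,
# fact_reading, etc.) are invented for illustration.
con = sqlite3.connect(":memory:")

# Raw, denormalized landing data, as it might sit in the lakehouse.
con.execute(
    "CREATE TABLE raw_transmissions "
    "(patient_id TEXT, patient_name TEXT, device TEXT, bpm REAL)"
)
con.executemany(
    "INSERT INTO raw_transmissions VALUES (?, ?, ?, ?)",
    [("p1", "Alice", "pacemaker", 62.0),
     ("p1", "Alice", "pacemaker", 64.5),
     ("p2", "Bob", "icd", 71.0)],
)

# Star schema: one dimension row per patient, one fact row per reading.
con.execute(
    "CREATE TABLE dim_patient AS "
    "SELECT DISTINCT patient_id, patient_name, device FROM raw_transmissions"
)
con.execute(
    "CREATE TABLE fact_reading AS "
    "SELECT patient_id, bpm FROM raw_transmissions"
)

# Automated quality check: every fact row must resolve to a dimension row.
orphans = con.execute(
    """SELECT COUNT(*)
       FROM fact_reading f
       LEFT JOIN dim_patient d USING (patient_id)
       WHERE d.patient_id IS NULL"""
).fetchone()[0]
assert orphans == 0, f"{orphans} fact rows have no matching patient dimension"
```

In dbt, the two `CREATE TABLE ... AS SELECT` statements would become models and the orphan check a relationship test, but the shape of the work is the same.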

To succeed in your mission, you will be part of a skilled and collaborative team working with modern data technologies. We encourage continuous learning and innovation, with dedicated time for experimentation and skill development.

At Implicity, you will have a weekly meeting with your manager, to help you succeed in your mission, and continuously improve your skills.

Each team works with quarterly OKRs, so that your targets are crystal clear, fair, and honest.

The annual appraisal is a shared moment of exchange, focused on your development.

💻 Our Analytics Technical Stack

The following is our current stack; we don't expect you to be an expert in every single tool.

Lakehouse: AWS S3, Parquet, Apache Iceberg

Query Engine: AWS Athena

Transformation & Orchestration: dbt, Dagster

Governance & Lineage: DataHub

Semantic & BI: Cube.dev, Metabase

Infrastructure: AWS, Kubernetes (EKS), Terraform, Docker, GitLab CI/CD

🌟 Hard skills and Soft skills

Technical Expertise

SQL Mastery: You write "clean" SQL, understand window functions and indexing strategies, and know how to optimize complex queries.

Data Architecture: Deep understanding of Lakehouse architectures and dimensional modeling.

ELT: Deep hands-on experience designing and scaling ELT pipelines, with expert-level mastery of dbt to transform raw data into high-quality analytical assets.

Software Engineering Mindset: You write production-grade Python code, applying software best practices to data.

Governance Advocate: Strong commitment to metadata management, lineage, and documentation.

AI-Assisted: Ability to leverage tools like Cursor or Claude to accelerate delivery while maintaining high code quality.
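As a concrete instance of the "SQL mastery" expectation, here is a small window-function query, runnable with Python's stdlib sqlite3 (window functions require SQLite ≥ 3.25). The table and column names are invented for illustration.

```python
import sqlite3

# Illustrative data: names (readings, patient_id, day, bpm) are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (patient_id TEXT, day INTEGER, bpm REAL)")
con.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("p1", 1, 60), ("p1", 2, 66), ("p1", 3, 63), ("p2", 1, 70)],
)

# Latest reading per patient via ROW_NUMBER() — a common "clean SQL"
# alternative to correlated subqueries or self-joins.
rows = con.execute("""
    SELECT patient_id, day, bpm
    FROM (
        SELECT patient_id, day, bpm,
               ROW_NUMBER() OVER (
                   PARTITION BY patient_id ORDER BY day DESC
               ) AS rn
        FROM readings
    )
    WHERE rn = 1
    ORDER BY patient_id
""").fetchall()
print(rows)  # [('p1', 3, 63.0), ('p2', 1, 70.0)]
```

The same pattern (partitioned `ROW_NUMBER()` for "latest row per entity") carries over directly to Athena/Trino SQL.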

Professional Skills

Collaborative mindset and team player: strong communication skills, a genuine passion for teamwork, and a dedication to elevating your colleagues and fostering collective success.

Self-driven & autonomous: Comfortable working independently while contributing to cross-functional teams.

Curiosity & adaptability: Eager to learn and develop expertise in emerging technologies.

Problem-solving orientation: Proactive approach with strong analytical and prioritization skills.

Quality-focused: Committed to code quality, testing, and documentation.

A Note on Applying: We know the perfect candidate doesn't exist. If you believe you possess the core required experience and strongly align with this mindset, we highly encourage you to apply.

Recruitment process

1st HR Contact with Astrid (Talent Acquisition Manager) – 45 min - G-meet

Job / Manager Interview with Damien (Data Engineer Manager) – 45 min - G-meet

Technical interview with Louise, Stefan & Max (Data team members) – 90 min - On-site

Fit Interview with Louay (CTO) - 1 hour - On-site or Remote

Reference Check & Offer (usually follows within 72 hours 🤞)

Depending on your availability, the recruitment process should last less than 3-4 weeks.

General information

💰 Salary

For this (full-time) position, the base salary is between €52k and €57k depending on your experience

Eligible for stock options (BSPCE) according to the company's existing rules

👍 Benefits

Health care plan: Alan (50% employer)

Meal vouchers: €9 (50% employer)

Transport: 50% of your pass OR sustainable mobility pass

📍 Remote work & Location

3 days per week (progressively)

Location: 29 rue du Louvre, 75002, PARIS

Keywords
SQL Mastery, Data Modeling, dbt, Dagster, DataHub, Cube.dev, Metabase, Python, Apache Iceberg, AWS Athena, PostgreSQL, HL7, FHIR, Dimensional Modeling, ELT, Terraform, Data Analytics Engineer, MedTech, Big Data, Artificial Intelligence, Cardiac Remote Monitoring, Data Platform Team, Data Engineer Manager, Lakehouse, Data Scientists, Business Stakeholders, Star Schema, AWS S3, Parquet, Kubernetes, EKS, Docker, GitLab CI/CD, Software Engineering, Data Governance
