ABOUT THE COMPANY

We are a global legal technology company that has been building software for the legal industry for over two decades. Our AI-powered cloud platform is used by leading law firms, Fortune 500 corporations, and government agencies worldwide to organise complex data, surface critical insights, and act on them across litigation, investigations, regulatory inquiries, and data breach response.

We're valued at $3.6 billion and invest over $170 million annually in R&D. Over 75% of our business has transitioned to our cloud platform, and we are making substantial investments in data lake technology and distributed systems to support future growth and advanced analytics. Our scale means the data problems here are genuinely hard, and the infrastructure you build will have real consequence.

ABOUT THE ROLE

We're building a specialised team focused on enabling advanced analytics and reporting capabilities across our internal data ecosystem. As an Advanced Data Platform Engineer, you'll design and implement scalable data platforms that integrate modern lakehouse technologies, distributed compute frameworks, and cloud-native services to support diverse analytical use cases at enterprise scale.

The role emphasises technical depth: performance optimisation, governance best practices, and the kind of engineering rigour that keeps vast datasets accessible, secure, and compliant. You'll work closely with internal teams to deliver curated datasets and self-service analytics capabilities, and you'll participate in on-call rotations as part of shared team responsibility.

WHAT YOU'LL WORK ON

Data pipeline and distributed systems design
Design and implement complex data pipelines and distributed systems using Spark and Python, applying clean code principles, modular design, CI/CD, automated testing, and thorough code reviews.

Lakehouse platform development
Develop and maintain lakehouse capabilities with Delta Lake and Apache Iceberg, ensuring reliability, performance, and long-term maintainability at scale.

Analytics workflow enablement
Integrate dbt for SQL transformations running on Spark. Deliver curated datasets and self-service analytics capabilities that empower internal stakeholders to explore data independently.

Data warehousing optimisation
Optimise Databricks and Snowflake environments for performance and scalability. Drive cost optimisation and performance tuning across Spark jobs and cloud-native infrastructure.

Observability and governance
Implement observability and governance frameworks, including data lineage tracking and compliance controls, ensuring data remains secure and auditable.

On-call participation
Participate in on-call rotations as part of shared team responsibility for platform reliability.

WHAT WE LOOK FOR

Python and SQL
Strong programming skills in Python and SQL, the foundation for everything you'll build here.

Apache Spark
Solid experience with Spark for distributed data processing at scale, including performance tuning and optimisation.

Lakehouse architecture
Expertise in Delta Lake and/or Apache Iceberg. You understand the tradeoffs and have used these in production environments.

Analytics tooling
Familiarity with dbt, Databricks, and Snowflake for analytics workflows and SQL transformation pipelines.

Software engineering fundamentals
Solid understanding of software engineering principles: CI/CD, automated testing, clean code, and modular design applied to data systems.

Infrastructure and containerisation
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools in cloud-native environments.

Scalability and cost optimisation
Understanding of performance tuning, scalability strategies, and cost optimisation for large-scale data systems.

Bonus
Exposure to event-driven architectures and advanced analytics platforms. Experience enabling self-service analytics for internal stakeholders. Experience in Java, Scala, or Rust.

THE TEAM

You'll join a global engineering organisation working on a platform used by some of the world's largest legal teams. The culture is diverse, inclusive, and driven by high standards. Engineers here work on genuinely complex technical problems at scale, and are supported with the coaching, development, and tooling to keep growing.

COMPENSATION & BENEFITS

Salary
160,000 – 240,000 PLN per year, plus an annual performance bonus and long-term incentives.

Health coverage
Comprehensive health, dental, and vision plans.

Parental leave
Parental leave available for both primary and secondary caregivers.

Flexible working
Flexible work arrangements with a remote-first model.

Company breaks
Two week-long company-wide breaks per year, plus additional time off.

Training investment
Dedicated training investment programme to support ongoing professional development.