As part of our Data Engineering team, you will not only build scalable data platforms but also directly enable portfolio growth by supporting new funding capabilities, loan sales, and securitization, and by improving cost efficiency through automated, trusted data flows that evolve our accounting processes.

Responsibilities:
- Design and build data solutions that support Sunbit's core business goals, from enabling capital market transactions (loan sales and securitization) to providing reliable insights for reducing the cost of capital.
- Develop advanced data pipelines and analytics to support finance, accounting, and product growth initiatives.
- Create ELT processes and SQL queries to bring data into the data warehouse and other data sources.
- Develop data-driven finance products that accelerate funding capabilities and automate accounting reconciliations.
- Own and evolve data lake pipelines, including maintenance, schema management, and improvements.
- Create new features from scratch, enhance existing features, and optimize existing functionality.
- Collaborate with stakeholders across Finance, Product, Backend Engineering, and Data Science to align technical work with business outcomes.
- Implement new tools and modern development approaches that improve both scalability and business agility.
- Ensure adherence to coding best practices and the development of reusable code.
- Continuously monitor the data platform and recommend enhancements to architecture, performance, and cost efficiency.

Requirements:
- 4 years of experience as a Data Engineer.
- 4 years of Python and SQL experience.
- 4 years of direct experience with SQL (Redshift/Snowflake), data modeling, data warehousing, and building ELT/ETL pipelines (DBT and Airflow preferred).
- 3 years of experience in scalable data architecture, fault-tolerant ETL, and data quality monitoring in the cloud.
- Hands-on experience with cloud environments (AWS preferred) and big data technologies (EMR, EC2, S3, Snowflake, Spark Streaming, Kafka, DBT).
- Strong troubleshooting and debugging skills in large-scale systems.
- Deep understanding of distributed data processing and tools such as Kafka, Spark, and Airflow.
- Experience with design patterns, coding best practices, and data modeling.
- Proficiency with Git and modern source control.
- Basic Linux/Unix system administration skills.
- Experience with AI tools and a strong interest in continuously exploring and applying them in everyday work are highly valued.

This position is open to all candidates.