Systems Engineer
Job description
About ProSource
At ProSource, we build and manage highly technical distributed teams for some of the most innovative companies in the world. We believe in humanizing the outsourcing industry by finding, attracting, and retaining top talent. Our dynamic workspaces promote creativity, collaboration, and well-being. We leverage smart technologies to ensure our clients and employees thrive in a collaborative, high-performing environment.
Role Overview:
We are seeking a Systems Engineer with a strong technical foundation to support our data engineering and business reporting functions. The role blends data development with systems engineering: you will own the end-to-end flow of data, from building resilient ingestion pipelines to crafting the stored procedures, views, and reporting tables that power our business insights. The ideal candidate has strong SQL and Python skills and is ready to apply them within our stack to transform raw source data into high-performance reporting tables.
Key Responsibilities:
- Data Ingestion & Pipeline Development: Build and maintain automated pipelines to move data from diverse sources—including MySQL, SQL Server, and third-party APIs—into our Snowflake environment.
- Advanced SQL Development: Author and optimize complex stored procedures, functions, and triggers to handle sophisticated business logic and multi-stage data transformations.
- Reporting Layer Engineering: Construct and maintain specialized reporting tables and complex views designed to simplify end-user access and support high-performance analytical tools.
- Dimensional Modeling Execution: Implement and update Data Marts using Star Schema methodologies, ensuring the performance of Dimension (Dim) and Fact (Fct) tables.
- Workflow Orchestration: Utilize DataRunner and other orchestration tools to monitor job schedules, implement error-handling, and ensure consistent data freshness.
- Performance Tuning: Assist in tuning queries and procedures across the Microsoft stack and Azure environment, focusing on execution plans and indexing strategies.
- Data Quality & Integrity: Perform rigorous validation and testing (e.g., unit testing for SQL) to ensure data accuracy and consistency before it reaches production.
Qualifications:
- Databases: 3+ years of hands-on experience with Snowflake, Microsoft SQL Server, and MySQL.
- Languages: Strong skills in SQL (T-SQL/Snowflake Scripting) and Python (Pandas/API integration).
- Cloud: Familiarity with Azure data services for storage and automated data movement.
- Tools: Experience with DataRunner or similar industry-standard ETL/orchestration frameworks.
- Advanced Ingestion Patterns: Experience with Incremental vs. Full Loading, Change Data Capture (CDC), and handling semi-structured data (JSON/XML).
- Complex SQL Logic: Mastery of Window Functions (LEAD/LAG, RANK), CTEs, and conditional logic within stored procedures.
- Data Architecture: Practical knowledge of Star Schema design, including the implementation of Slowly Changing Dimensions (SCDs).
- Programmatic Extraction: Ability to write Python scripts for data parsing, cleaning, and interacting with RESTful APIs.
- Idempotency & Error Handling: Ability to build "self-healing" pipelines that can restart after failure without creating duplicate data.
- Version Control: Familiarity with managing SQL and Python code within a version control system (e.g., Git/Azure DevOps).
Schedule:
- Monday to Friday, 9pm to 6am PHT
What's in it for you?
- 💸 Highly competitive salary
- 🏥 HMO coverage for you and your 2 dependents from Day 1
- 💻 Enjoy a fully remote setup with all the tools you need
- 🌱 Full-time role with excellent perks and benefits
Ready to take the next step? Apply now and be part of our team!