Machine Learning Engineer - LLM
Technology
The Adecco Group · 1 month ago · Until 12/4/2026
Job Description
Role: Machine Learning Engineer - LLM Platforms & Assistants
Apply via WhatsApp: 55 61 4646 51
Gross Salary: $75,000 MXN / month
Requirements
Bachelor’s Degree in Computer Science, Engineering, Data Science, or equivalent experience.
Role Summary
- The Machine Learning Engineer will support the transition from custom GPT prototypes to production-grade, action-driven AI assistants.
- This role focuses on designing and implementing modular LLM pipelines using Python, Retrieval-Augmented Generation (RAG), and LangChain, integrating enterprise data systems and emerging tools such as MCP and SnapLogic.
- The engineer will design, build, and operate production-grade Large Language Model (LLM) pipelines in AWS environments, integrating OpenAI models into modular Python services while enabling scalable, secure, and observable AI assistants.
Experience, Responsibilities & Qualifications
- 5+ years of experience in software engineering, machine learning engineering, or applied AI roles.
- Strong Python programming skills.
- Experience implementing RAG pipelines and semantic search architectures.
- Hands-on experience integrating OpenAI LLM APIs into applications.
- Experience deploying systems within AWS environments (S3, Lambda, ECS/EKS, IAM).
Preferred
- Experience with Amazon SageMaker or Amazon Bedrock.
- Familiarity with AWS data services such as Glue, Athena, OpenSearch, or Aurora.
- Experience building ETL/ELT data pipelines supporting ML workflows.
- Knowledge of LLM orchestration frameworks, reasoning workflows, and evaluation/benchmarking techniques.
- Experience integrating SnapLogic pipelines.
- Familiarity with Craxel Black Forest Timeseries DB (training provided).
- Experience with Infrastructure as Code tools such as CDK, CloudFormation, or Terraform.
Responsibilities
- Design and maintain LLM integrations using OpenAI APIs within AWS environments.
- Build and deploy Python-based LLM services on AWS compute platforms such as ECS, EKS, Lambda, or EC2.
- Implement Retrieval-Augmented Generation (RAG) pipelines and semantic search using AWS data and storage services.
- Develop LangChain-based or agentic workflows that support reasoning, tool usage, and modular orchestration.
- Integrate LLM pipelines with enterprise ETL/ELT workflows and data systems, including SnapLogic.
- Deploy and integrate MCP servers and orchestration frameworks for scalable AI assistant workflows.
- Apply AWS security best practices using IAM, KMS, and Secrets Manager to protect data and model interactions.
- Implement monitoring, logging, and observability using CloudWatch and related AWS tools.
- Support the migration of custom GPT prototypes into production-grade AWS-hosted assistants.
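As context for the RAG-related responsibilities above, here is a minimal sketch of the retrieval step: embed a query and candidate passages, rank by cosine similarity, and return the top matches for prompt augmentation. The toy character-count `embed` stands in for a real embedding model call (e.g. an OpenAI embeddings request), and all names here are illustrative, not part of the role's actual stack:

```python
import math

def embed(text):
    # Toy bag-of-letters embedding; a production pipeline would call
    # an embedding model instead and store vectors in a vector index
    # (e.g. OpenSearch, per the preferred AWS services above).
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query and keep the top-k;
    # these passages would then be inserted into the LLM prompt.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Reset your password from the account settings page.",
    "Quarterly revenue grew in the enterprise segment.",
    "Password rules require twelve characters minimum.",
]
top = retrieve("how do I reset my password", docs)
```

The same shape scales up by swapping `embed` for a real model and the in-memory list for a managed vector store; the orchestration frameworks named above (LangChain, MCP) wrap this retrieve-then-generate loop.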
Soft Skills
- Strong analytical skills
- Problem-solving
- Clear communication
- Strong sense of ownership
- Accountability
- Responsibility
- Adaptability
Interested in this role?