
Python Developer - AI Training

Technology
YO IT Consulting
4 weeks ago · Until 2026-05-24
Full time · Fully remote

Job description

**Work Mode:** Remote

**Engagement Type:** Independent Contractor

**Schedule:** Full-Time or Part-Time Contract

**Language Requirement:** Fluent English

**Role Overview**
We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems.

This project focuses specifically on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels.

The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.

**What You’ll Do**
Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness

Conduct fact-checking using trusted public sources and authoritative references

Conduct accuracy testing by executing code and validating outputs using appropriate tools

Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies

Assess code quality, readability, algorithmic soundness, and explanation quality

Ensure model responses align with expected conversational behavior and system guidelines

Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
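As an illustration of the accuracy-testing task described above, here is a minimal sketch of executing model-generated code and validating its outputs. The harness, the `two_sum` function, and the test cases are hypothetical examples, not part of this posting or any specific evaluation tooling.

```python
# Hypothetical evaluation harness: run a model-generated solution in an
# isolated namespace and compare its outputs against reference cases.
candidate_source = """
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []
"""

def evaluate(source: str, cases: list[tuple[tuple, list]]) -> list[bool]:
    """Execute the generated code and check each case's actual vs. expected output."""
    namespace: dict = {}
    exec(source, namespace)          # load the generated function
    solve = namespace["two_sum"]
    return [solve(*args) == expected for args, expected in cases]

cases = [
    (([2, 7, 11, 15], 9), [0, 1]),
    (([3, 2, 4], 6), [1, 2]),
    (([1, 2], 7), []),               # no valid pair: an edge case worth annotating
]
results = evaluate(candidate_source, cases)
print(results)  # [True, True, True]
```

In practice an evaluator would pair results like these with written annotations on correctness, efficiency, and edge-case handling.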

**Who You Are**
You hold a BS, MS, or PhD in Computer Science or a closely related field

You have significant real-world experience in software engineering or related technical roles

You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)

You are able to solve HackerRank or LeetCode Medium and Hard-level problems independently

You have experience contributing to well-known open-source projects, including merged pull requests

You have significant experience using LLMs while coding and understand their strengths and failure modes

You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws

**Nice-to-Have Specialties**
Prior experience with RLHF, model evaluation, or data annotation work

Track record in competitive programming

Experience reviewing code in production environments

Familiarity with multiple programming paradigms or ecosystems

Experience explaining complex technical concepts to non-expert audiences

**What Success Looks Like**
You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions

Your feedback improves the correctness, robustness, and clarity of AI coding outputs

You deliver reproducible evaluation artifacts that strengthen model performance

Keywords
python, training-certification, education-training, training-and-development, mode, external-workforce, independent-contractors, time-and-attendance, artificial-intelligence, assessment-assessment-tools, ai-generated-answers, large-language-model-llm, testing-and-analysis, computer-science, programming-languages, java, cplusplus, javascript, golang, rust, open-source, pull-request, data-annotation, planning-and-design, visual-art-design, product-development-and-design, model-performance
