Turing

Experience building full-stack applications and deploying scalable software using modern languages and tools. Deep understanding of software architecture, design, development, debugging, and code quality/review assessment. Excellent oral and written communication skills for clear, structured evaluation rationales.
Project Overview: As a Software Engineering evaluator, you will create cutting-edge datasets for training, benchmarking, and advancing large language models, collaborating closely with researchers. This includes curating code examples, providing precise solutions, and making corrections — with a primary focus on Python across backend services, data pipelines, and ML infrastructure, alongside JavaScript (including ReactJS), C/C++, Java, Rust, and Go. You will evaluate and refine AI-generated code for efficiency, scalability, and reliability, and work with cross-functional teams to enhance enterprise-level AI-driven coding solutions.
What Does a Typical Day Look Like? Work on AI model training initiatives by curating code examples, building solutions, and correcting code — primarily in Python, with additional work in JavaScript (including ReactJS), C/C++, Java, Rust, and Go. Evaluate and refine AI-generated code to ensure that it is efficient, scalable, and reliable.
Collaborate with cross-functional teams to enhance AI-driven coding solutions against industry performance benchmarks. Build agents and automated verification tools in Python that can verify code quality and identify error patterns. Form hypotheses about stages of the software engineering cycle (prototyping, architecture design, API design, production implementation, launch, experiments, monitoring, operational maintenance) and evaluate model capabilities at each stage.
Design verification mechanisms that can automatically verify a solution to a software engineering task.
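One minimal form such a verification mechanism might take is an execution-based checker: run a candidate solution together with a test script in a subprocess and treat a clean exit as a pass. This is a sketch, not the role's actual tooling; the names `verify_solution` and `sum_squares` and the example task are hypothetical.

```python
import os
import subprocess
import sys
import tempfile

def verify_solution(candidate_src: str, test_src: str, timeout: float = 10.0) -> bool:
    """Run a candidate solution against a test script in a fresh subprocess.

    Returns True if the combined program exits cleanly (exit code 0),
    False on any assertion failure, crash, or timeout.
    """
    program = candidate_src + "\n\n" + test_src
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)  # clean up the temp file either way

# Hypothetical task: "return the sum of squares of a list of ints".
good = "def sum_squares(xs):\n    return sum(x * x for x in xs)\n"
bad = "def sum_squares(xs):\n    return sum(xs) ** 2\n"
tests = "assert sum_squares([1, 2, 3]) == 14\nassert sum_squares([]) == 0\n"

print(verify_solution(good, tests))  # correct solution passes: True
print(verify_solution(bad, tests))   # buggy solution fails: False
```

Running tests in a separate process isolates the verifier from crashes or hangs in the candidate code; a production version would typically add sandboxing and resource limits on top of the timeout.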
Interested in this role?