Turing
Required experience: building and deploying scalable, production-grade software using modern languages and tools; a deep understanding of software architecture, design, development, debugging, and code quality/review assessment; and excellent oral and written communication skills for clear, structured evaluation rationales.
Project Overview: As a Software Engineering evaluator, you will create cutting-edge datasets for training, benchmarking, and advancing large language models, collaborating closely with researchers. This includes curating code examples, providing precise solutions, and making corrections in Python, C/C++, Rust, Go, Java, and JavaScript (including ReactJS) — with particular emphasis on systems-level code, performance-critical applications, and infrastructure. You will evaluate and refine AI-generated code for efficiency, scalability, and reliability, and work with cross-functional teams to enhance enterprise-level AI-driven coding solutions.
What Does a Typical Day Look Like? Work on AI model training initiatives by curating code examples, building solutions, and correcting code in Python, C/C++, Rust, Go, Java, and JavaScript (including ReactJS). Evaluate and refine AI-generated code with an emphasis on systems-level correctness, performance, and reliability.
Collaborate with cross-functional teams to enhance AI-driven coding solutions against industry performance benchmarks. Build agents that can verify the quality of systems-level and infrastructure code and identify recurring error patterns. Form hypotheses about stages of the software engineering lifecycle (prototyping, architecture design, API design, production implementation, launch, experimentation, monitoring, operational maintenance) and evaluate model capabilities at each stage.
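As a minimal sketch of what an error-pattern-checking agent might look like (the rule set, function names, and flagged calls here are illustrative assumptions, not part of the role description), one could start with a pattern scanner over C source that flags calls commonly implicated in buffer overflows:

```python
import re

# Illustrative rule set: C library calls that are frequent sources of
# buffer overflows in systems-level code.
UNSAFE_CALLS = re.compile(r"\b(gets|strcpy|sprintf|strcat)\s*\(")

def find_error_patterns(source: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_call) pairs for each flagged call site."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in UNSAFE_CALLS.finditer(line):
            findings.append((lineno, match.group(1)))
    return findings

c_snippet = 'char buf[8];\nstrcpy(buf, user_input);\nprintf("%s", buf);\n'
print(find_error_patterns(c_snippet))  # → [(2, 'strcpy')]
```

A production agent would of course go beyond regexes (parsing, data-flow analysis, model-assisted review), but even a simple checker like this illustrates the shape of the task: turn known error patterns into automated findings with precise locations.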
Design verification mechanisms that can automatically verify a solution to a software engineering task.
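One minimal sketch of such a verification mechanism, assuming the task is an stdin/stdout programming problem (the function names and the toy task are illustrative, not prescribed by the role): run the candidate solution in a subprocess and compare its output against expected cases.

```python
import subprocess
import sys
import tempfile

def verify_solution(solution_code: str, cases: list[tuple[str, str]]) -> bool:
    """Execute candidate Python code in a subprocess and check that, for each
    (stdin, expected_stdout) case, it exits cleanly and prints the expectation."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code)
        path = f.name
    for stdin_data, expected in cases:
        result = subprocess.run(
            [sys.executable, path],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=5,  # guard against non-terminating candidates
        )
        if result.returncode != 0 or result.stdout.strip() != expected:
            return False
    return True

# Toy task: read an integer and print its square.
candidate = "n = int(input())\nprint(n * n)"
print(verify_solution(candidate, [("3", "9"), ("-4", "16")]))  # → True
```

Real verifiers for systems-level tasks would add sandboxing, resource limits, and richer oracles (unit tests, differential testing, property checks), but the core loop is the same: execute the solution under controlled inputs and compare observed behavior against a specification.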
Interested in this role?