Python Software Engineer - AI Workflows
Technology
Denver, United States · Posted today · Open until 6/14/2026
Contract
Job description
Python Software Engineer — AI Workflows
About The Role
What if your Python expertise could directly shape the infrastructure powering the next generation of AI? We're looking for senior full-stack Python engineers to design and build the data pipelines, annotation tooling, and evaluation systems that leading AI labs depend on to train and improve their models.
This is a fully remote contract role working on real production systems — not toy projects. If you're a sharp engineer who wants to work at the frontier of AI development, this is the role for you.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 20–40 hours/week
Responsibilities
- Design, build, and optimize high-performance Python systems supporting AI data pipelines and evaluation workflows
- Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
- Improve reliability, performance, and safety across production Python codebases
- Integrate AI services and APIs with robust error handling and edge case coverage
- Identify bottlenecks and failure modes in data and system behavior, then implement scalable solutions
- Collaborate with data, research, and engineering teams to support model training and evaluation workflows
- Participate in synchronous design reviews to iterate on system architecture and implementation decisions
Requirements
- 3–5+ years of professional experience writing production-grade Python
- Strong full-stack developer with a solid systems programming background
- You write clean, maintainable code and naturally reach for linters, formatters, and comprehensive test coverage
- Experienced at integrating AI services and APIs with confidence; you anticipate edge cases before they bite
- Clear, direct communicator — both in writing and in technical discussions
- Native or fluent English speaker
- Available to commit 20–40 hours per week
Nice to Have
- Prior experience with data annotation, data quality, or evaluation systems
- Familiarity with AI/ML workflows, model training, or benchmarking pipelines
- Experience with distributed systems or developer tooling
- Background working directly with AI labs or research teams
What We Offer
- Work on cutting-edge AI projects alongside leading research labs — real systems, real impact
- Fully remote and async-friendly — work from wherever you do your best work
- Freelance autonomy with the structure of meaningful, technically challenging projects
- Contribute directly to the infrastructure that shapes how next-generation AI models are built and evaluated
- Potential for ongoing work and contract extension as new projects launch
Keywords
monthsOfExperience: 36
OCaml, Python, Iteration
Interested in this role?