
Agentic Coding Annotator | Contractor | Remote | ~5-Week Engagement
Turing is seeking experienced software practitioners to evaluate and improve datasets for agentic coding models. This is a technically demanding annotation role that calls for genuine engineering judgment, not a basic data-labeling position. You'll work within an agentic coding harness, reviewing model trajectories, verifying solutions, and producing high-quality annotations that directly advance frontier AI capabilities.
Turing is one of the world's fastest-growing AI companies, partnering with leading AI labs to advance frontier model capabilities in coding, reasoning, agentic behavior, and more. We build real-world AI systems that solve mission-critical challenges for top-tier organizations globally.
Depending on your assignment, work will fall into one of two tracks: