This job post expired on February 2, 2026. The position has likely been filled.

AI Red-Teamer — Adversarial Testing | $55.55/hr | Remote (US & Europe) | English & German Required
Mercor is assembling an elite red team to probe AI models with adversarial inputs, surface vulnerabilities, and generate critical safety data. Help us make AI safer by attacking it first, with structure, creativity, and expertise.
Why This Role Exists
We believe the safest AI is the one that has already been attacked by experts. As an AI Red-Teamer, you'll test conversational AI models and agents for jailbreaks, prompt injections, bias exploitation, and multi-turn manipulation. Your work will directly strengthen the AI systems our customers rely on.
Note: This project involves reviewing AI outputs on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional with clear guidelines and wellness resources provided.
What You'll Do
Who You Are
Nice-to-Have Specialties
What Success Looks Like
Why Join Mercor