
AI Red-Teamer — Adversarial Testing | $55.55/hr | Remote (US & Europe) | English & German Required
Mercor is assembling an elite red team to probe AI models with adversarial inputs, surface vulnerabilities, and generate critical safety data. Help us make AI safer by attacking it first — with structure, creativity, and expertise.
Why This Role Exists
We believe the safest AI is the one that's already been attacked by experts. As an AI Red-Teamer, you'll test conversational AI models and agents for jailbreaks, prompt injections, bias exploitation, and multi-turn manipulation. Your work will directly strengthen AI systems for our customers.
Note: This project involves reviewing AI outputs on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional with clear guidelines and wellness resources provided.
What You'll Do
Who You Are
Nice-to-Have Specialties
What Success Looks Like
Why Join Mercor
Upload resume → AI interview → Complete form → Submit application
Upload your best, up-to-date resume in English. Mercor will extract details and fill out your profile automatically. Review and adjust as needed.
SHOCKING FACT: Only ~20% of applicants complete their application! Take the 15-minute AI interview about your experience and you'll have MUCH HIGHER chances of getting hired!
AI Interview Tips: The interview focuses on your resume and work experience. Be ready to discuss specific projects and how you solved challenges.
Takes about 15 minutes | Dramatically improves your chances