This job post expired on February 2, 2026. The position has likely been filled.

AI Red-Teamer — Adversarial Testing | $50.50/hr | Remote (US, Taiwan, Malaysia) | English & Chinese Required
Join Mercor's elite red team to probe AI models with adversarial inputs, uncover vulnerabilities, and generate critical safety data that makes AI systems more robust and trustworthy for our customers.
Why This Role Exists
At Mercor, we believe the safest AI is the one that has already been attacked — by us. We're assembling a specialized red team of human data experts who surface vulnerabilities before they reach production. This text-based work involves reviewing AI outputs on sensitive topics, including bias, misinformation, and harmful behaviors. Participation in higher-sensitivity projects is optional and fully supported with clear guidelines and wellness resources.
What You'll Do
Required Qualifications
Nice-to-Have Specialties
What Success Looks Like
Why Join Mercor
Contract Details
Full-time and part-time contract work are available. The rate is competitive and commensurate with the expertise required and the scope of work.