
AI Red-Teamer — Adversarial Testing at Mercor

Posted 22 hours ago | mercor.com | Contractor | Remote: US/TW/MY | $50.5/hr

AI Red-Teamer — Adversarial Testing | $50.5/hr | Remote (US, Taiwan, Malaysia) | English & Chinese Required

Join Mercor's elite red team to probe AI models with adversarial inputs, uncover vulnerabilities, and generate critical safety data that makes AI systems more robust and trustworthy for our customers.

Why This Role Exists

At Mercor, we believe the safest AI is the one that's already been attacked — by us. We're assembling a specialized red team of human data experts who surface vulnerabilities before they reach production. This text-based work involves reviewing AI outputs on sensitive topics including bias, misinformation, and harmful behaviors. Participation in higher-sensitivity projects is optional and fully supported with clear guidelines and wellness resources.

What You'll Do

  • Red team conversational AI models and agents: execute jailbreaks, prompt injections, misuse cases, bias exploitation, and multi-turn manipulation attacks
  • Generate high-quality human data by annotating failures, classifying vulnerabilities, and flagging systemic risks
  • Apply structured methodologies using taxonomies, benchmarks, and playbooks to ensure consistent testing
  • Document findings reproducibly through detailed reports, datasets, and attack cases that customers can act on

Required Qualifications

  • Native-level fluency in both English and Chinese (Mandarin)
  • Prior red teaming experience in AI adversarial work, cybersecurity, or socio-technical probing
  • Curious, adversarial mindset with an instinct to push systems to their breaking points
  • Structured approach using frameworks and benchmarks, not just random testing
  • Strong communication skills to explain risks clearly to technical and non-technical stakeholders
  • Adaptability to thrive across diverse projects and customer needs

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, or writing backgrounds for unconventional adversarial thinking

What Success Looks Like

  • Uncovering vulnerabilities that automated tests miss
  • Delivering reproducible artifacts that strengthen customer AI systems
  • Expanding evaluation coverage with more scenarios tested and fewer production surprises
  • Building customer trust in AI safety through thorough adversarial probing

Why Join Mercor

  • Build frontier experience in human data-driven AI red teaming and safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy
  • Work with cutting-edge AI models and contribute to the future of AI safety

Contract Details

Full-time or part-time contract work available. Rate is competitive and commensurate with the expertise required and scope of work.

Benture is an independent job board and is not affiliated with or endorsed by Mercor.

Tips for Applying to Mercor Jobs from Benture

Increase your chances of success!
1. Four Simple Steps

Upload resume → AI interview → Complete form → Submit application

2. Perfect Your Resume

Upload your best, up-to-date resume in English. Mercor will extract details and fill out your profile automatically. Review and adjust as needed.

3. Complete = Win

SHOCKING FACT: Only ~20% of applicants complete their application! Take the 15-minute AI interview about your experience and you'll have MUCH HIGHER chances of getting hired!

AI Interview Tips: The interview focuses on your resume and work experience. Be ready to discuss specific projects and how you solved challenges.

Takes about 15 minutes | Dramatically improves your chances
