Benture logo
 ←  next job →

AI Red-Teamer — Adversarial Testing at Mercor

posted 22 hours ago
mercor.com Contractor remote: US & Europe $50.5/hr 41 views

AI Red-Teamer — Adversarial Testing | $50.5/hr | Remote (US & Europe) | English & French Required

Mercor is assembling an elite red team to probe AI models with adversarial inputs, surface vulnerabilities, and generate critical safety data. Help us make AI safer by attacking it first—before adversaries do.

Type: Full-time or Part-time Contract Work
Language Requirement: Native-level fluency in both English and French is required.

Why This Role Exists

We believe the safest AI is one that's already been tested under adversarial conditions. As an AI Red-Teamer, you'll be a human data expert who probes conversational AI models, identifies weaknesses, and creates the datasets that strengthen AI safety for our customers.

Note: This project involves reviewing AI outputs on sensitive topics including bias, misinformation, and harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional with clear guidelines and wellness resources provided.

What You'll Do

  • Red team conversational AI models and agents: execute jailbreaks, prompt injections, misuse cases, bias exploitation, and multi-turn manipulation attacks
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structured methodologies: follow taxonomies, benchmarks, and playbooks to ensure consistent testing
  • Document reproducibly: produce actionable reports, datasets, and attack cases for customers

Who You Are

  • Experienced in red teaming (AI adversarial work, cybersecurity, or socio-technical probing)
  • Naturally curious and adversarial: you instinctively push systems to their breaking points
  • Methodical and structured: you use frameworks and benchmarks, not just random exploits
  • Excellent communicator: you explain risks clearly to both technical and non-technical stakeholders
  • Highly adaptable: you thrive when moving across diverse projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, or writing backgrounds for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities that automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Evaluation coverage expands: more scenarios tested, fewer surprises in production
  • Mercor customers trust their AI safety because you've already probed it like an adversary

Why Join Mercor

  • Build frontier experience in human data-driven AI red teaming and safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy
  • Work with cutting-edge AI models and contribute to the future of AI safety

Contract rate is competitive and commensurate with experience, reflecting the expertise required and scope of work.

Benture is an independent job board and is not affiliated with or employed by Mercor.

Tips for Applying to Mercor Jobs from Benture

Increase your chances of success!
1
Four Simple Steps

Upload resumeAI interviewComplete formSubmit application

2
Perfect Your Resume

Upload your best, up-to-date resume in English. Mercor will extract details and fill out your profile automatically. Review and adjust as needed.

3
Complete = Win

SHOCKING FACT: Only ~20% of applicants complete their application! Take the 15-minute AI interview about your experience and you'll have MUCH HIGHER chances of getting hired!

AI Interview Tips: The interview focuses on your resume and work experience. Be ready to discuss specific projects and how you solved challenges.

Takes about 15 minutes | Dramatically improves your chances

Related Jobs

Benture logo
See All Jobs