
AI Red-Teamer — Adversarial Testing at Mercor

posted 22 hours ago
mercor.com | Contractor | Remote: US & Europe | $50.5/hr

AI Red-Teamer — Adversarial Testing | $50.5/hr | Remote (US & Europe) | English & Italian Required

Join Mercor's elite red team to probe AI models with adversarial inputs, surface vulnerabilities, and generate critical safety data that makes AI systems more robust and trustworthy for our customers.

Why This Role Exists

At Mercor, we believe the safest AI is the one that's already been attacked — by us. We're assembling a specialized red team of human data experts who test conversational AI models to their breaking points, documenting vulnerabilities before they reach production.

Note: This project involves reviewing AI outputs on sensitive topics including bias, misinformation, and harmful behaviors. All work is text-based, participation in higher-sensitivity projects is optional, and topics are clearly communicated beforehand with wellness resources available.

What You'll Do

  • Red team conversational AI models and agents through jailbreaks, prompt injections, misuse cases, bias exploitation, and multi-turn manipulation
  • Generate high-quality human data by annotating failures, classifying vulnerabilities, and flagging systemic risks
  • Apply structured methodologies using taxonomies, benchmarks, and playbooks to ensure consistent testing
  • Document findings reproducibly with detailed reports, datasets, and attack cases that customers can act on

Who You Are

  • Prior red-teaming experience in AI adversarial work, cybersecurity, or socio-technical probing
  • A naturally curious, adversarial mindset: you instinctively push systems to their breaking points
  • A structured approach built on frameworks and benchmarks, not just random exploits
  • Excellent communication skills: you can explain risks clearly to both technical and non-technical stakeholders
  • Adaptability: you thrive on moving across diverse projects and customers
  • Native-level fluency in both English and Italian (required)

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, or writing backgrounds for unconventional adversarial thinking

What Success Looks Like

  • Uncover vulnerabilities that automated tests miss
  • Deliver reproducible artifacts that strengthen customer AI systems
  • Expand evaluation coverage, so more scenarios are tested and fewer surprises reach production
  • Build customer trust in AI safety through thorough adversarial probing

Why Join Mercor

  • Build frontier experience in human data-driven AI red teaming and safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy
  • Work remotely with flexible full-time or part-time arrangements
  • Competitive contract rates aligned with expertise, sensitivity of material, and scope of work

Location: Remote (US & Europe only)

Type: Full-time or Part-time Contract Work

Benture is an independent job board and is not affiliated with or employed by Mercor.

Tips for Applying to Mercor Jobs from Benture

Increase your chances of success!
1. Four Simple Steps

Upload resume → AI interview → Complete form → Submit application

2. Perfect Your Resume

Upload your best, up-to-date resume in English. Mercor will extract details and fill out your profile automatically. Review and adjust as needed.

3. Complete = Win

Shocking fact: only ~20% of applicants complete their application! Take the 15-minute AI interview about your experience and you'll have a much higher chance of getting hired.

AI Interview Tips: The interview focuses on your resume and work experience. Be ready to discuss specific projects and how you solved challenges.

Takes about 15 minutes | Dramatically improves your chances
