This job post has expired on February 02, 2026. It is likely that the position has already been filled.

AI Red-Teamer — Adversarial Testing at Mercor

posted 3 months ago
mercor.com · Contractor · Remote (US & Mexico) · $26/hour

AI Red-Teamer — Adversarial Testing | $26/hr | Remote (US & Mexico) | Bilingual English & Spanish Required

Join Mercor's specialized red team to probe AI systems, uncover vulnerabilities, and generate critical safety data that makes AI more secure and trustworthy for our customers.

About This Role

We believe the safest AI is one that's already been attacked — by us. As an AI Red-Teamer, you'll conduct adversarial testing on conversational AI models and agents, identifying weaknesses before they reach production. This text-based work involves reviewing AI outputs on sensitive topics including bias, misinformation, and harmful behaviors. Participation in higher-sensitivity projects is optional and fully supported with clear guidelines and wellness resources.

What You'll Do

  • Conduct red team testing on AI models: jailbreaks, prompt injections, misuse cases, bias exploitation, and multi-turn manipulation
  • Generate high-quality human data by annotating failures, classifying vulnerabilities, and flagging systemic risks
  • Apply structured frameworks using taxonomies, benchmarks, and playbooks to ensure consistent testing
  • Document findings reproducibly through detailed reports, datasets, and attack cases that drive actionable improvements

Required Qualifications

  • Native-level fluency in both English and Spanish (required)
  • Prior red teaming experience in AI adversarial work, cybersecurity, or socio-technical probing
  • Adversarial mindset with instinct to push systems to breaking points
  • Structured approach using frameworks and benchmarks, not random testing
  • Strong communication skills to explain risks to technical and non-technical audiences
  • Adaptability to move across diverse projects and customer needs

Valuable Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinformation probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, or writing for unconventional adversarial thinking

Impact & Success Metrics

  • Uncover vulnerabilities that automated tests miss
  • Deliver reproducible artifacts that strengthen customer AI systems
  • Expand evaluation coverage with comprehensive scenario testing
  • Build customer trust through thorough adversarial probing

Why Mercor

Build frontier experience in human-data-driven AI red teaming while playing a direct role in making AI systems more robust, safe, and trustworthy. Work flexibly on a full-time or part-time contract basis from anywhere in the US or Mexico.

How to apply for this role
  • Upload your resume — keep it up-to-date and in English. Mercor will auto-fill your profile from it.
  • Complete the AI interview — a 15-minute conversation about your experience. Be ready to discuss specific projects and challenges you've solved.
  • Submit your application — only about 20% of applicants finish all the steps, so completing yours puts you well ahead.
Benture is an independent job board and is not affiliated with Mercor.
