
This job post has expired on February 02, 2026. It is likely that the position has already been filled.


AI Red-Teamer — Adversarial Testing at Mercor

posted 3 months ago
mercor.com · Contractor · Remote (US & Europe) · $55.55/hr

AI Red-Teamer — Adversarial Testing | $55.55/hr | Remote (US & Europe) | English & German Required

Mercor is assembling an elite red team to probe AI models with adversarial inputs, surface vulnerabilities, and generate critical safety data. Help us make AI safer by attacking it first — with structure, creativity, and expertise.

Why This Role Exists

We believe the safest AI is the one that's already been attacked by experts. As an AI Red-Teamer, you'll test conversational AI models and agents for jailbreaks, prompt injections, bias exploitation, and multi-turn manipulation. Your work will directly strengthen AI systems for our customers.

Note: This project involves reviewing AI outputs on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional with clear guidelines and wellness resources provided.

What You'll Do

  • Red team conversational AI models: execute jailbreaks, prompt injections, misuse cases, and multi-turn manipulation attacks
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structured methodologies: follow taxonomies, benchmarks, and playbooks for consistent testing
  • Document reproducibly: produce actionable reports, datasets, and attack cases for customers

Who You Are

  • Prior red teaming experience in AI adversarial work, cybersecurity, or socio-technical probing
  • Curious and adversarial mindset: you instinctively push systems to breaking points
  • Structured approach: you use frameworks and benchmarks, not just random attacks
  • Strong communicator: explain risks clearly to technical and non-technical stakeholders
  • Adaptable: thrive on moving across diverse projects and customers
  • Native-level fluency in both English and German required

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinfo probing, abuse analysis, conversational AI testing
  • Creative probing: psychology, acting, or writing for unconventional adversarial thinking

What Success Looks Like

  • Uncover vulnerabilities that automated tests miss
  • Deliver reproducible artifacts that strengthen customer AI systems
  • Expand evaluation coverage: more scenarios tested, fewer production surprises
  • Build customer trust in AI safety through rigorous adversarial probing

Why Join Mercor

  • Build frontier experience in human data-driven AI red teaming
  • Play a direct role in making AI systems more robust, safe, and trustworthy
  • Flexible full-time or part-time contract work
  • Competitive compensation aligned with expertise and scope

How to Apply for This Role

  • Upload your resume — keep it up-to-date and in English. Mercor will auto-fill your profile from it.
  • Complete the AI interview — a 15-minute conversation about your experience. Be ready to discuss specific projects and challenges you've solved.
  • Submit your application — only about 20% of applicants finish all the steps, so completing yours puts you well ahead.
Benture is an independent job board and is not affiliated with Mercor.
