
AI Quality Analyst (Personalization) - English at Turing

Posted via turing.com · Contractor · Remote

About Turing:
Based in San Francisco, California, Turing is the world’s leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, plus top AI researchers who specialize in coding, reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.

Role Overview:

As an AI Quality Analyst, you will evaluate a new personalization feature for Gemini. You will assess how well the model uses information from your past Gemini conversations, Gmail, Google Search, and YouTube activity to make responses more relevant and helpful. This role requires a unique blend of creativity and analytical rigor. You will actively design prompts from the perspective of your own personal experiences. You will then use your analytical skills to assess the quality of the model's personalized responses, evaluating dimensions like Grounding, Integration, and Helpfulness.


Key Qualifications

  • English Proficiency: Ability to read and write in English with a high degree of comprehension, as English is the focus language for this project.
  • Personal Account Usage: Willingness to use your primary personal Google account (not a testing account) and enable personal data sources for a genuine assessment.
  • Schedule Flexibility: Full-time availability in your local time zone is required; we are staffing a global, 24-hour operations team.
  • Exceptional Analytical Thinking: Demonstrated ability to evaluate nuanced and ambiguous AI responses, specifically assessing personalization quality.
  • Creative Prompt Engineering: Experience in designing creative, multi-turn starting prompts based on personal context to thoroughly test the model's capabilities.
  • Strong Evaluation Acumen: Understanding of personalization concepts, including the ability to identify incorrect personalization, poor inferences, and forced connections.
  • Meticulous Attention to Detail: The ability to review Side-by-Side (SxS) model responses and spot subtle differences in naturalness and "overnarrating".
  • Excellent Written Communication: Superior ability to write clear, concise, and structured rationales for model rankings, explicitly referencing specific turn numbers.
  • Feedback: Ability to provide constructive feedback and detailed annotations.
  • Communication: Excellent communication and collaboration skills.
  • Independence: Self-motivated and able to work independently in a remote setting.
  • Technical Setup: A desktop or laptop setup with a reliable internet connection.


Description:

In this role, you will be part of a dynamic team focused on evaluating the quality of personalized AI interactions. Your day-to-day work will involve:

  • Designing and executing multi-turn conversational prompts (typically 1-5 turns) that require the AI to utilize your personal information and experiences.
  • Evaluating model responses based on your intent from the starting prompt, checking if the personalization was appropriately applied.
  • Analyzing responses for Grounding issues, ensuring claims about you are supported by evidence and not flawed inferences or hallucinations.
  • Assessing Integration quality to ensure personal data is woven naturally into the response without robotic "overnarrating".
  • Rigorously evaluating and stack-ranking two model responses side-by-side (SxS) to determine which is overall more helpful, easy to use, and enjoyable.
  • Writing clear, defensible rationales for your comparisons, explicitly referencing where issues or positive aspects occurred in the conversation.
  • Extracting and verifying "Debug Info" from the model to confirm that chat summaries and data sources were properly utilized.
  • Maintaining strict data hygiene by deleting evaluation conversations to prevent them from polluting your future chat history.


Education & Experience

  • BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field).
  • Experience in data annotation, AI quality evaluation, content moderation, or a related role is strongly preferred.


Offer Details:

  • Commitment Required: At least 4 hours per day and a minimum of 30 hours per week, with 4 hours of overlap with PST. (Two time-commitment options are available: 30 hrs/week or 40 hrs/week.)
  • Engagement type: Contractor
  • Engagement Length: 3 months
  • Rate: The offered rate for this project is $15 per hour.

Evaluation Process:

  • Shortlisted candidates will be sent a Job Interest Form.
  • After the profile review, an assessment will be shared; it must be completed within 24 hours.
  • Based on the assessment outcomes, shortlisted candidates will be contacted to discuss the pre‑onboarding requirements.
