Evaluation Scenario Writer – AI Agent Testing Specialist

Mindrift

Rio de Janeiro, State of Rio de Janeiro, Brazil

2 hours ago

No applications

About

  • This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.
  • At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI.

What we do

The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.

About the Role

We’re looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You’ll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.

Although every project is unique, you might typically

  • Design structured test scenarios based on real-world tasks (see the sketch after this list).
  • Define the golden path and acceptable agent behavior.
  • Annotate task steps, expected outputs, and edge cases.
  • Work with developers to test your scenarios and improve clarity.
  • Review agent outputs and adapt tests accordingly.
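
To make the scenario-design work concrete, below is a minimal sketch of what a structured test scenario might look like, written here as a Python dict that could just as easily be serialized to JSON or YAML. Every field name, action, and threshold is a hypothetical illustration, not Mindrift's actual schema.

```python
import json

# A hypothetical evaluation scenario for an LLM-based booking agent.
# The structure mirrors what a JSON/YAML scenario file might contain:
# a task description, a gold-standard action sequence, expected outputs,
# edge cases to annotate, and a scoring rule.
booking_scenario = {
    "id": "travel-booking-001",
    "task": "Book the cheapest direct flight from GIG to GRU for next Monday",
    "golden_path": [  # gold-standard sequence of agent actions
        {"step": 1, "action": "search_flights", "args": {"origin": "GIG", "dest": "GRU"}},
        {"step": 2, "action": "filter", "args": {"direct_only": True}},
        {"step": 3, "action": "sort", "args": {"by": "price", "order": "asc"}},
        {"step": 4, "action": "select_flight", "args": {"rank": 1}},
        {"step": 5, "action": "confirm_booking", "args": {}},
    ],
    "expected_output": {"booking_confirmed": True, "direct_flight": True},
    "edge_cases": [  # situations the agent must handle gracefully
        "no direct flights available on the requested date",
        "two flights tied for the lowest price",
    ],
    "scoring": {"method": "step_overlap", "pass_threshold": 0.8},
}

if __name__ == "__main__":
    # Serialize the scenario so it can be stored or shared as JSON.
    print(json.dumps(booking_scenario, indent=2))
```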

How to get started

Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone.

Requirements

  • Bachelor's and/or Master’s Degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or other related fields.
  • Background in QA, software testing, data analysis, or NLP annotation.
  • Good understanding of test design principles (e.g., reproducibility, coverage, edge cases).
  • Strong written communication skills in English.
  • Comfortable with structured formats like JSON/YAML for scenario description.
  • Can define expected agent behaviors (gold paths) and scoring logic.
  • Basic experience with Python and JS.
  • Curious and open to working with AI-generated content, agent logs, and prompt-based behavior.
  • Ready to learn new methods, able to switch between tasks and topics quickly, and willing to work at times with challenging, complex guidelines.
  • Our freelance role is fully remote, so you just need a laptop, an internet connection, available time, and enthusiasm to take on a challenge.

Nice to Have

  • Experience in writing manual or automated test cases.
  • Familiarity with LLM capabilities and typical failure modes.
  • Understanding of scoring metrics (precision, recall, coverage, reward functions); a scoring sketch follows this list.
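
As a rough illustration of the scoring metrics mentioned above, here is a minimal sketch that compares an agent's logged actions against a gold path and reports step-level precision and recall. The action names, threshold, and helper function are hypothetical assumptions for illustration, not a real Mindrift API.

```python
# Hypothetical step-level scoring: compare the set of actions an agent
# actually took against the gold-standard actions for the scenario.
def score_agent_run(gold_steps: list[str], agent_steps: list[str]) -> dict:
    gold, agent = set(gold_steps), set(agent_steps)
    matched = gold & agent
    precision = len(matched) / len(agent) if agent else 0.0  # share of agent actions that were expected
    recall = len(matched) / len(gold) if gold else 0.0       # share of the gold path the agent covered
    return {"precision": precision, "recall": recall, "passed": recall >= 0.8}

if __name__ == "__main__":
    gold = ["search_flights", "filter", "sort", "select_flight", "confirm_booking"]
    agent = ["search_flights", "sort", "select_flight", "confirm_booking", "send_email"]
    print(score_agent_run(gold, agent))
    # -> {'precision': 0.8, 'recall': 0.8, 'passed': True}
```

In practice, scoring might also need to respect step order or give partial credit, which this simple set-overlap check ignores.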

Contribute on your own schedule, from anywhere in the world. This opportunity allows you to

  • Get paid for your expertise, with rates that can go up to $15/hour depending on your skills, experience, and project needs.
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments.
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
  • Influence how future AI models understand and communicate in your field of expertise.