Freelance Agent Evaluation Analyst

Posted: December 24, 2025
Status: Open
Location: Vietnam
Occupation: Part-time
Experience level: Mid-level
Job Summary

Mindrift offers a flexible, remote freelance role for candidates currently residing in the specified country, where you’ll design and evaluate scenarios for LLM-based AI agents. Working in collaboration with tech leaders, you’ll create test cases simulating complex human tasks, define gold-standard behaviors, and enhance agent performance evaluation. You set your schedule, contributing to advanced AI projects while earning up to $40/hour based on expertise and project scope. Benefits include skill-based compensation, project flexibility, and the chance to influence the future of Generative AI.

Qualified applicants will have a Bachelor’s or Master’s degree in a relevant technical field, strong software testing and analytical skills, and proficiency with structured formats (JSON/YAML). Essential skills include Python/JavaScript basics, English fluency, and experience with QA, data analysis, or NLP annotation. Familiarity with LLM limitations, automated test writing, and scoring metrics is a plus. Submit your resume in English and indicate your level of English proficiency.

Only candidates located in the required country are eligible. Mindrift values curiosity and offers projects that enhance your AI expertise, portfolio, and career advancement, all while working from anywhere and at your preferred pace.

Highlight

This opportunity is only for candidates currently residing in the specified country. Your location may affect eligibility and rates. Please submit your resume in English and indicate your level of English.

At Mindrift, innovation meets opportunity. We believe in using the power of collective human intelligence to ethically shape the future of AI. 

What we do

The Mindrift platform connects specialists with AI projects from major tech innovators. Our mission is to unlock the potential of Generative AI by tapping into real-world expertise from across the globe.

About the Role

We’re looking for someone who can design realistic and structured evaluation scenarios for LLM-based agents. You’ll create test cases that simulate human-performed tasks and define gold-standard behavior to compare agent actions against. You’ll work to ensure each scenario is clearly defined, well-scored, and easy to execute and reuse. You’ll need a sharp analytical mindset, attention to detail, and an interest in how AI agents make decisions.

Although every project is unique, you might typically:

  • Create structured test cases that simulate complex human workflows.
  • Define gold-standard behavior and scoring logic to evaluate agent actions (see the sketch after this list).
  • Analyze agent logs, failure modes, and decision paths.
  • Work with code repositories and test frameworks to validate your scenarios.
  • Iterate on prompts, instructions, and test cases to improve clarity and difficulty.
  • Ensure that scenarios are production-ready, easy to run, and reusable.
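
To make this concrete, here is a minimal sketch in Python, assuming a scenario is captured as a structured record (mirroring the JSON/YAML formats mentioned below) together with a toy scoring rule. Every field name, step label, and the score_run helper are illustrative assumptions, not an actual Mindrift format:

# Minimal sketch of a structured evaluation scenario for an LLM-based agent.
# All field names and the scoring rule are illustrative assumptions.
scenario = {
    "task_id": "book_flight_001",
    "description": (
        "Agent must find and book the cheapest direct flight "
        "from Hanoi to Singapore on a given date."
    ),
    "inputs": {"origin": "HAN", "destination": "SIN", "date": "2025-03-14"},
    # Expected ordered steps (the gold-standard behavior).
    "gold_path": [
        "search_flights",
        "filter_direct_only",
        "sort_by_price",
        "select_cheapest",
        "confirm_booking",
    ],
    # Weights for blending step coverage with end-state correctness.
    "scoring": {"step_weight": 0.8, "goal_weight": 0.2},
}


def score_run(agent_steps, reached_goal, scenario):
    """Toy scoring logic: share of gold-path steps the agent executed
    (order ignored for simplicity), blended with end-state correctness."""
    gold = scenario["gold_path"]
    covered = sum(1 for step in gold if step in agent_steps) / len(gold)
    weights = scenario["scoring"]
    return weights["step_weight"] * covered + weights["goal_weight"] * (1.0 if reached_goal else 0.0)


# Example: the agent skipped "filter_direct_only" but still booked correctly.
run = ["search_flights", "sort_by_price", "select_cheapest", "confirm_booking"]
print(round(score_run(run, reached_goal=True, scenario=scenario), 2))  # 0.8 * 0.8 + 0.2 = 0.84

In practice a scenario like this would be stored as JSON or YAML so it can be versioned, rerun, and reused across agent versions; the Python form above is only for readability.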

How to get started

Simply apply to this post, qualify, and get the chance to contribute to projects aligned with your skills, on your own schedule. From creating training prompts to refining model responses, you’ll help shape the future of AI while ensuring technology benefits everyone.

Requirements

  • Bachelor’s and/or Master’s degree in Computer Science, Software Engineering, Data Science / Data Analytics, Artificial Intelligence / Machine Learning, Computational Linguistics / Natural Language Processing (NLP), Information Systems, or other related fields.
  • Background in QA, software testing, data analysis, or NLP annotation.
  • Good understanding of test design principles (e.g., reproducibility, coverage, edge cases).
  • Strong written communication skills in English.
  • Comfortable with structured formats like JSON/YAML for scenario description.
  • Ability to define expected agent behaviors (gold paths) and scoring logic.
  • Basic experience with Python and JavaScript.
  • Curious and open to working with AI-generated content, agent logs, and prompt-based behavior.

Nice to Have

  • Experience in writing manual or automated test cases.
  • Familiarity with LLM capabilities and typical failure modes.
  • Understanding of scoring metrics (precision, recall, coverage, reward functions); a toy example follows this list.
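
To make the metric names concrete, here is a minimal sketch in Python, assuming an agent run and a gold path are each represented as sets of step labels; the step names and the precision_recall helper are hypothetical and only illustrate how precision and recall (coverage) might be computed. Reward functions typically extend this idea by assigning weighted, possibly per-step, scores instead of simple counts:

# Toy precision/recall for comparing an agent run against a gold path.
# Representing runs as sets of step labels is an illustrative assumption;
# real evaluations may also account for ordering or per-step rewards.
def precision_recall(agent_steps, gold_steps):
    """Precision: share of the agent's actions that were expected.
    Recall (coverage): share of expected actions the agent actually took."""
    agent, gold = set(agent_steps), set(gold_steps)
    true_positives = len(agent & gold)
    precision = true_positives / len(agent) if agent else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall


# The agent took one unexpected step and missed one expected step.
p, r = precision_recall(
    ["open_ticket", "check_logs", "restart_service", "escalate_to_human"],
    ["open_ticket", "check_logs", "restart_service", "close_ticket"],
)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.75, recall=0.75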

Contribute on your own schedule, from anywhere in the world. This opportunity allows you to:

  • Get paid for your expertise, with rates that can go up to $40/hour depending on your skills, experience, and project needs.
  • Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments.
  • Participate in an advanced AI project and gain valuable experience to enhance your portfolio.
  • Influence how future AI models understand and communicate in your field of expertise.
About Mindrift
Welcome to Mindrift — a space where innovation meets opportunity. We're a pioneering platform dedicated to advancing the field of artificial intelligence through collaborative online projects. Our focus lies in creating data for generative AI, offering a unique chance for freelancers to contribute to AI development from anywhere, at any time. At Mindrift, we believe in the power of collective intelligence to shape the future of AI. Our platform allows users to dive into a variety of tasks — ranging from creating training prompts for AI models to refining AI responses for greater relevance. Let's build the future of AI together, one task at a time.
Company size: 1,001-5,000
Industry: Information Technology & Services