Micro1, a Scale AI competitor, touts crossing $100M ARR

Dec 4, 2025 | Technology

Micro1’s rapid climb over the past two years has pushed it into a cohort of AI companies scaling at breakneck speed. The three-year-old startup, which helps AI labs recruit and manage human experts for training data, started the year with roughly $7 million in annual recurring revenue (ARR).

Today, it claims to have surpassed $100 million in ARR, founder and CEO Ali Ansari told TechCrunch. That figure is more than double the revenue Micro1 reported in September, when it announced its $35 million Series A at a $500 million valuation.

Ansari, 24, said then that Micro1 works with leading AI labs, including Microsoft, as well as Fortune 100 companies racing to improve large language models through post-training and reinforcement learning. Their demand for top-tier human data has fueled a fast-expanding market that Ansari believes will grow from $10-15 billion today to nearly $100 billion within two years.

Micro1’s rise, and that of larger competitors such as Mercor and Surge, accelerated after OpenAI and Google DeepMind reportedly cut ties with Scale AI following Meta’s $14 billion investment in the vendor and its decision to hire Scale’s CEO.

While Micro1’s ARR is growing fast, according to its founder, it hasn’t yet matched that of its rivals: Mercor has surpassed $450 million, sources told TechCrunch, and Surge reportedly hit $1.2 billion in 2024.

Ansari attributes Micro1’s growth to its ability to recruit and evaluate domain experts quickly. Like Mercor, Micro1 began as an AI recruiter, with a tool called Zara that matched engineering talent with software roles before the company pivoted into the training-data market. Zara now interviews and vets applicants seeking expert roles on the platform.

Beyond supplying expert-level data to leading AI labs, Ansari says two new segments, still barely visible today, are on track to reshape the economics of human data.


The first involves non-AI-native Fortune 1000 enterprises that will begin building AI agents for internal workflows, support operations, finance, and industry-specific tasks.

Developing these agents requires systematic evaluation: testing frontier models, grading their output, choosing winners, fine-tuning them, and continuously validating performance in production. Ansari argues this cycle depends heavily on human experts evaluating …
