Stop benchmarking in the lab: Inclusion Arena shows how LLMs perform in production

Aug 19, 2025 | Technology


Benchmarking models has become essential for enterprises, allowing them to choose models whose performance matches their needs. But not all benchmarks are built the same, and many rely on static datasets or fixed testing environments.

Researchers from Inclusion AI, which is affiliated with Alibaba’s Ant Group, proposed a new model leaderboard and benchmark that focuses on a model’s performance in real-life scenarios. They argue that LLMs need a leaderboard that accounts for how people actually use them and how much people prefer their answers, rather than only the static knowledge the models encode.

In a paper, the researchers laid out the foundation for Inclusion Arena, which ranks models based on user preferences.  

“To address these gaps, we propose Inclusion Arena, a live leaderboard that bridges real-world AI-powered applications with state-of-the-art LLMs and MLLMs. Unlike crowdsourced platforms, our system randomly triggers model battles during multi-turn human-AI dialogues in real-world apps,” the paper said. 
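The battle mechanism the paper describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the probability constant, function names (`generate`, `ask_preference`, `record_battle`), and callback signatures are all assumptions made for the sketch.

```python
import random

# Assumed trigger rate; the paper excerpt does not specify one.
BATTLE_PROBABILITY = 0.1

def handle_turn(prompt, models, generate, ask_preference, record_battle):
    """Serve one dialogue turn, occasionally as a randomly triggered battle.

    generate(model, prompt) -> reply, ask_preference(a, b) -> 0 or 1,
    and record_battle(winner, loser) are placeholders for the host
    application's own logic.
    """
    if random.random() < BATTLE_PROBABILITY and len(models) >= 2:
        # Sample two contenders and answer the same turn with both.
        m_a, m_b = random.sample(models, 2)
        reply_a, reply_b = generate(m_a, prompt), generate(m_b, prompt)
        # The user's preference becomes one pairwise battle result.
        choice = ask_preference(reply_a, reply_b)
        winner, loser = (m_a, m_b) if choice == 0 else (m_b, m_a)
        record_battle(winner, loser)
        return reply_a if choice == 0 else reply_b
    # Ordinary turn: a single model answers.
    return generate(random.choice(models), prompt)
```

The accumulated `(winner, loser)` pairs are what a preference-based leaderboard then ranks over.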


Inclusion Arena stands out among other benchmarks and leaderboards, such as MMLU and OpenLLM, due to its real-world data and its method of ranking models. It employs the Bradley-Terry model, similar to the approach used by Chatbot Arena. 
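The Bradley-Terry model assigns each model a latent strength so that the probability model i beats model j is p_i / (p_i + p_j); strengths are then fit by maximum likelihood from the observed battle outcomes. A minimal sketch of that fit, using the standard minorization-maximization (Zermelo) iteration rather than whatever solver Inclusion Arena actually uses:

```python
from collections import defaultdict

def bradley_terry(battles, iters=200):
    """Fit Bradley-Terry strengths from pairwise battle outcomes.

    battles: list of (winner, loser) name pairs.
    Returns a dict mapping model name -> strength (normalized to sum to 1).
    """
    wins = defaultdict(int)    # total wins per model
    games = defaultdict(int)   # battles per unordered pair
    models = set()
    for w, l in battles:
        wins[w] += 1
        games[frozenset((w, l))] += 1
        models.update((w, l))

    p = {m: 1.0 for m in models}
    for _ in range(iters):
        new_p = {}
        for i in models:
            # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
            denom = sum(
                games[frozenset((i, j))] / (p[i] + p[j])
                for j in models if j != i
            )
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        p = {m: v / total for m, v in new_p.items()}
    return p
```

For example, a model that wins 7 of 10 battles against a single opponent converges to a strength of 0.7 versus 0.3; sorting models by strength yields the leaderboard order.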

Inclusion Arena works by integrating the benchmark into AI applications to gather datasets and conduct human evaluations. The researchers admit that “the number of initially integrated AI-powered applications is limited, but we aim to build an open alliance to expand the ecosystem.”

By now, most people are …
