Mixture-of-Recursions delivers 2x faster inference: here's how to implement it

Jul 22, 2025 | Technology


Researchers at KAIST AI and Mila have introduced a new Transformer architecture that makes large language models (LLMs) more memory- and compute-efficient. The architecture, called Mixture-of-Recursions (MoR), significantly improves model accuracy and delivers higher throughput compared with vanilla transformers, even when constrained by the same parameter count and compute budget.

The scaling challenges of LLMs

The impressive capabilities of today’s LLMs are directly tied to their ever-increasing size. But as these models scale, their memory footprints and computational requirements often become untenable, making both training and deployment challenging for organizations outside of hyperscale data centers. This has led to a search for more efficient designs.

Efforts to improve LLM efficiency have focused mainly on two methods: parameter sharing and adaptive computation. Parameter-sharing techniques reduce the total number of unique parameters by reusing weights across different parts of the model, thereby reducing overall computational complexity. For example, “layer tying” reuses a model’s weights across several layers. Adaptive computation methods adjust models so that they use only as much inference compute as each input needs. For example, “early exiting” dynamically allocates compute by allowing the model to stop processing “simpler” tokens early in the network.
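
To make these two ideas concrete, here is a minimal PyTorch sketch (illustrative only; the class and parameter names are invented for this example, and this is not code from the paper). One shared encoder block is reused at every depth (layer tying), and a small halting head provides a crude early-exit check (adaptive computation):

```python
import torch
import torch.nn as nn

class TiedEarlyExitEncoder(nn.Module):
    """Toy encoder illustrating both efficiency ideas (hypothetical example):
    - parameter sharing: one block's weights are reused at every depth (layer tying)
    - adaptive computation: processing stops early once a halting score is confident
    """
    def __init__(self, d_model=256, n_heads=4, max_depth=6, halt_threshold=0.99):
        super().__init__()
        # A single layer whose weights are reused at every depth,
        # instead of `max_depth` independently parameterized layers.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.halt_head = nn.Linear(d_model, 1)  # per-token halting score
        self.max_depth = max_depth
        self.halt_threshold = halt_threshold

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        for _ in range(self.max_depth):
            x = self.shared_block(x)  # same weights applied on every pass
            halt_prob = torch.sigmoid(self.halt_head(x))  # (batch, seq_len, 1)
            # Crude sequence-level exit: stop once every token looks "done".
            if bool((halt_prob > self.halt_threshold).all()):
                break
        return x
```

Real early-exit schemes typically exit per token and are trained with auxiliary losses; the sequence-level check above is only meant to show where the exit decision plugs in.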

However, an architecture that effectively unifies both parameter efficiency and adaptive computation has remained elusive.

How Mixture-of-Recursions works

Mixture-of-Recursions is a framework that combines parameter sharing with adaptive computation to tackle the high computational demands of LLMs. It builds on the concept of Recursive Transformers …
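
The gist of that combination can be sketched in the same vein. In the hypothetical code below, a lightweight router scores each token and assigns it a recursion depth; a single shared block is then applied repeatedly, and a token stops receiving updates once its assigned depth is reached. This is an assumption-laden illustration of the idea, not the authors' implementation, and every name in it is invented for this example:

```python
import torch
import torch.nn as nn

class MoRSketch(nn.Module):
    """Minimal sketch of the Mixture-of-Recursions idea (illustrative only):
    a router assigns each token a recursion depth, and one shared block is
    applied repeatedly, with exited tokens keeping their last representation.
    """
    def __init__(self, d_model=256, n_heads=4, max_recursions=3):
        super().__init__()
        # Parameter sharing: one block serves every recursion step.
        self.shared_block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # Adaptive computation: scores for each possible recursion depth.
        self.router = nn.Linear(d_model, max_recursions)
        self.max_recursions = max_recursions

    def forward(self, x):
        # Router picks a recursion depth in 1..max_recursions per token.
        depth = self.router(x).argmax(dim=-1) + 1          # (batch, seq_len)
        out = x
        for step in range(1, self.max_recursions + 1):
            refined = self.shared_block(out)
            # Only tokens routed to at least `step` recursions take the update;
            # the rest keep their current ("exited") representation.
            active = (depth >= step).unsqueeze(-1)          # (batch, seq_len, 1)
            out = torch.where(active, refined, out)
        return out
```

A production version would also skip attention and key-value-cache work for exited tokens, which this masked-update sketch does not capture, and training would require a differentiable routing scheme; the argmax here is inference-style only.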
