Liquid AI brings LLMs to edge devices like smartphones with new ‘Hyena Edge’ model

Apr 25, 2025 | Technology


Liquid AI, the Boston-based foundation model startup spun out of the Massachusetts Institute of Technology (MIT), is seeking to move the tech industry beyond its reliance on the Transformer architecture underpinning most popular large language models (LLMs) such as OpenAI’s GPT series and Google’s Gemini family.

Yesterday, the company announced “Hyena Edge,” a new convolution-based, multi-hybrid model designed for smartphones and other edge devices, ahead of the International Conference on Learning Representations (ICLR) 2025.

The conference, one of the premier events for machine learning research, is taking place this year in Singapore.

New convolution-based model promises faster, more memory-efficient AI at the edge

Hyena Edge is engineered to outperform strong Transformer baselines on both computational efficiency and language model quality.

In real-world tests on a Samsung Galaxy S24 Ultra smartphone, the model delivered lower latency, smaller memory footprint, and better benchmark results compared to a parameter-matched Transformer++ model.

A new architecture for a new era of edge AI

Unlike most small models designed for mobile deployment — including SmolLM2, the Phi models, and Llama 3.2 1B — Hyena Edge steps away from traditional attention-heavy designs. Instead, it strategically replaces two-thirds of grouped-query attention (GQA) operators with gated convolutions from the Hyena-Y family.
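Liquid AI has not published the Hyena-Y operator itself, so the sketch below is only a rough illustration of what a gated-convolution block of this general kind can look like in PyTorch. The class name, kernel size, and layer layout are illustrative assumptions, not the Hyena Edge implementation.

```python
import torch
import torch.nn as nn


class GatedConvBlock(nn.Module):
    """Illustrative gated short-convolution operator (not Liquid AI's code).

    Stands in for an attention block: a depthwise causal convolution mixes
    information along the sequence, and an elementwise sigmoid gate modulates
    the result, so compute scales linearly with sequence length rather than
    quadratically as in self-attention.
    """

    def __init__(self, d_model: int, kernel_size: int = 4):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)  # value and gate paths
        self.conv = nn.Conv1d(
            d_model, d_model, kernel_size,
            groups=d_model,              # depthwise: one filter per channel
            padding=kernel_size - 1,     # left context only -> causal
        )
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        v, g = self.in_proj(x).chunk(2, dim=-1)
        v = self.conv(v.transpose(1, 2))[..., : x.size(1)]  # trim right overhang
        v = v.transpose(1, 2)
        return self.out_proj(v * torch.sigmoid(g))          # gate the conv output


# Example: batch of 2 sequences of length 128 with a 2048-wide hidden state.
y = GatedConvBlock(d_model=2048)(torch.randn(2, 128, 2048))
print(y.shape)  # torch.Size([2, 128, 2048])
```

The general design rationale is that a convolution's cost grows linearly with sequence length while attention's grows quadratically, which is why trading most attention blocks for operators of this kind can pay off at longer sequences.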

The new architecture is the result of Liquid AI’s Synthesis of Tailored Architectures (STAR) framework, announced in December 2024, which uses evolutionary algorithms to automatically design model backbones.

STAR explores a wide range of operator compositions, rooted in the mathematical theory of linear input-varying systems, to optimize for multiple hardware-specific objectives like latency, memory usage, and quality.
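The published STAR description is high level, so the following Python sketch only conveys the general shape of a multi-objective evolutionary search over per-block operator choices. The genome encoding, the toy scoring proxies, and the Pareto-front selection are all illustrative assumptions, not Liquid AI’s actual procedure.

```python
import random

# Each candidate backbone is encoded as a per-block operator choice,
# e.g. grouped-query attention ("gqa") or a gated convolution ("gated_conv").
OPERATORS = ["gqa", "gated_conv"]


def random_genome(num_blocks: int = 24) -> list[str]:
    return [random.choice(OPERATORS) for _ in range(num_blocks)]


def mutate(genome: list[str]) -> list[str]:
    child = list(genome)
    child[random.randrange(len(child))] = random.choice(OPERATORS)
    return child


def fitness(genome: list[str]) -> tuple[float, float, float]:
    """Toy (latency, memory, quality) proxies; a real search would build the
    candidate, profile it on the target device, and train/evaluate it."""
    n_attn = genome.count("gqa")
    n_conv = genome.count("gated_conv")
    latency = 2.0 * n_attn + 1.0 * n_conv + random.random()
    memory = 1.5 * n_attn + 1.0 * n_conv + random.random()
    quality = 1.0 * n_attn + 0.9 * n_conv + random.random()
    return latency, memory, quality


def dominates(a, b) -> bool:
    # Lower latency/memory is better; higher quality is better.
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and strictly


def evolve(generations: int = 30, pop_size: int = 32) -> list[list[str]]:
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(g, fitness(g)) for g in population]
        # Keep the Pareto front: candidates no other candidate dominates.
        front = [g for g, f in scored
                 if not any(dominates(f2, f) for _, f2 in scored)]
        # Refill the population by mutating survivors.
        population = front + [mutate(random.choice(front))
                              for _ in range(pop_size - len(front))]
    return population


print(evolve()[0])  # one surviving per-block operator layout
```

The key idea the sketch tries to capture is that candidates are kept or discarded based on several hardware-facing objectives at once, rather than a single quality score.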

Benchmarked directly on consumer hardware

To validate Hyena Edge’s real-world readiness, Liquid AI ran tests directly on the Samsung Galaxy S24 Ultra smartphone.

Results show that Hyena Edge achieved up to 30% faster prefill and decode latencies compared to its Transformer++ counterpart, with speed advantages increasing at longer sequence lengths.

Prefill latencies at short sequence lengths also outpaced the Transformer baseline — a critical performance metric for responsive on-device applications.

In terms of memory, Hyena Edge consistently used less RAM during inference across all tested sequence lengths, positioning it as a strong candidate for environments with tight resource constraints.
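For readers unfamiliar with the two latency figures: prefill latency is the time to process the entire prompt in one pass, while decode latency is the per-token time during generation. A rough way to measure both for any open causal LM is sketched below; the Hugging Face repo name is an assumed placeholder (one of the small baselines mentioned above, not Hyena Edge), and numbers from a laptop say nothing about on-device results.

```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model: a small open baseline, not Hyena Edge.
MODEL = "HuggingFaceTB/SmolLM2-1.7B"

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

inputs = tok("Edge devices need efficient language models because",
             return_tensors="pt")

with torch.no_grad():
    # Prefill: one forward pass over the whole prompt, filling the KV cache.
    t0 = time.perf_counter()
    out = model(**inputs, use_cache=True)
    prefill_ms = (time.perf_counter() - t0) * 1000

    # Decode: generate tokens one at a time, reusing the cache each step.
    past, next_id = out.past_key_values, out.logits[:, -1:].argmax(-1)
    steps = 32
    t0 = time.perf_counter()
    for _ in range(steps):
        out = model(input_ids=next_id, past_key_values=past, use_cache=True)
        past, next_id = out.past_key_values, out.logits[:, -1:].argmax(-1)
    decode_ms = (time.perf_counter() - t0) * 1000 / steps

print(f"prefill: {prefill_ms:.1f} ms  |  decode: {decode_ms:.1f} ms/token")
```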

Outperforming Transformers on language benchmarks

Hyena Edge was trained on 100 billion tokens and evaluated across standard benchmarks for sma …
