Nous Research drops Hermes 4 AI models that outperform ChatGPT without content restrictions

by | Aug 28, 2025 | Technology


Nous Research, a secretive artificial intelligence startup that has emerged as a leading voice in the open-source AI movement, quietly released Hermes 4 on Monday. The company claims the family of large language models can match the performance of leading proprietary systems while offering unprecedented user control and minimal content restrictions.

The release represents a significant escalation in the battle between open-source AI advocates and major technology companies over who should control access to advanced artificial intelligence capabilities. Unlike models from OpenAI, Google, or Anthropic, Hermes 4 is designed to respond to nearly any request without the safety guardrails that have become standard in commercial AI systems.

[Embedded post from Nous Research (@NousResearch) on X, August 26, 2025, announcing Hermes 4: https://t.co/E5EW9hBurb]

“Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities,” Nous Research announced on X (formerly Twitter). “Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned while maintaining state of the art level math, coding, and reasoning performance for open weight models.”

How Hermes 4’s ‘hybrid reasoning’ mode outperforms ChatGPT and Claude on math benchmarks

Hermes 4 introduces what Nous Research calls “hybrid reasoning,” allowing users to toggle between fast responses and deeper, step-by-step thinking processes. When activated, the models generate their internal reasoning within special tags before providing a final answer — similar to OpenAI’s o1 reasoning models but with full transparency into the AI’s thought process.
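Hermes 4's reasoning traces are reportedly emitted inside delimiter tags before the final answer. Assuming the commonly used `<think>...</think>` convention (an assumption here, not confirmed by this article), a minimal sketch of separating the visible reasoning trace from the final reply might look like:

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a hybrid-reasoning response into (reasoning, answer).

    Assumes the model wraps its internal reasoning in <think>...</think>
    tags ahead of the final answer; if no tags are present, the whole
    response is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    if not match:
        return "", response.strip()
    reasoning = match.group(1).strip()
    answer = response[match.end():].strip()
    return reasoning, answer

# Hypothetical response in the hybrid-reasoning format:
raw = "<think>2 + 2: add the units digits, giving 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(reasoning)  # the transparent chain of thought
print(answer)     # the final reply
```

When reasoning mode is toggled off, the model would skip the tagged trace entirely and the function simply returns the response as the answer.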

