OpenAI is once again making GPT-4o, the large language model (LLM) that powered ChatGPT before last week's launch of GPT-5, a default option for all paying users: those on the ChatGPT Plus ($20 per month), Pro ($200 per month), Team ($30 per month), Enterprise, or Edu tiers. Subscribers no longer need to toggle on a "show legacy models" setting to access it.
In addition, paying ChatGPT subscribers get a new "Show additional models" setting, on by default, that restores access to GPT-4.1, o3, and o4-mini; the latter two are reasoning-focused LLMs.
OpenAI CEO and co-founder Sam Altman announced the change on X just minutes ago, pledging that if the company ever removes GPT-4o in the future, it will give “plenty of notice.”
Updates to ChatGPT:
You can now choose between "Auto", "Fast", and "Thinking" for GPT-5. Most users will want Auto, but the additional control will be useful for some people.
Rate limits are now 3,000 messages/week with GPT-5 Thinking, and then extra capacity on GPT-5 Thinking…
— Sam Altman (@sama) August 13, 2025
The models appear in the model picker menu at the top of the ChatGPT session screen on the web, in the mobile apps, and in other clients.
The reversal follows a turbulent first week for GPT-5, which rolled out August 7 in four variants — regular, mini, nano, and pro — with optional “thinking” modes on several of these for longer, more reasoning-intensive tasks.
As VentureBeat previously reported, GPT-5’s debut was met with mixed reviews and infrastructure hiccups, including a broken “autoswitcher” that routed prompts incorrectly, inconsistent performance compared to GPT-4o, and user frustration over the sudden removal of older models.
Altman’s latest update adds new controls to the ChatGPT interface: users can now choose between “Auto,” “Fast,” and “Thinking” modes for GPT-5.
The “Thinking” mode — with a 196,000-token context window — now carries a 3,000 messages-per-week cap for paying subscribers, after which they can continue using the lighter “GPT-5 Thinking mini” mode. Altman noted the limits could change depending on usage trends.
One exception: GPT-4.5 remains exclusive to Pro users due to its high GPU cost.
Altman also hinted at another change on the horizon: a personality tweak for GPT-5 intended to feel “warmer” than the current default, but less polarizing than GPT-4o’s tone.
The company is exploring per-user customization as a long-term solution — a move that could address the strong emotional attachments some users have formed with specific models.
For now, the changes should help placate users who felt frustrated by the sudden shift to GPT-5 and the deprecation of OpenAI's older LLMs, though they could also continue to fuel the intense emotional fixations some users developed with these models.