OpenAI brings GPT-4.1 and 4.1 mini to ChatGPT — what enterprises should know

May 14, 2025 | Technology


OpenAI is rolling out GPT-4.1, its new non-reasoning large language model (LLM) that balances high performance with lower cost, to users of ChatGPT. The company is beginning with its paying subscribers on ChatGPT Plus, Pro, and Team, with Enterprise and Education user access expected in the coming weeks.

It’s also adding GPT-4.1 mini, which replaces GPT-4o mini as the default for all ChatGPT users, including those on the free tier. The “mini” version is a smaller model with fewer parameters, making it less powerful, but it maintains similar safety standards.

Both models are available via the “more models” dropdown in the top corner of the chat window within ChatGPT, giving users the flexibility to choose between GPT-4.1, GPT-4.1 mini, and reasoning models such as o3, o4-mini, and o4-mini-high.

Initially intended for use only by third-party software and AI developers through OpenAI’s application programming interface (API), GPT-4.1 was added to ChatGPT following strong user feedback.

OpenAI post training research lead Michelle Pokrass confirmed on X the shift was driven by demand, writing: “we were initially planning on keeping this model api only but you all wanted it in chatgpt 🙂 happy coding!”

OpenAI Chief Product Officer Kevin Weil posted on X saying: “We built it for developers, so it’s very good at coding and instruction following—give it a try!”

An enterprise-focused model

GPT-4.1 was designed from the ground up for enterprise-grade practicality.

Launched in April 2025 alongside GPT-4.1 mini and nano, this model family prioritized developer needs and production use cases.

GPT-4.1 delivers a 21.4-point improvement over GPT-4o on the SWE-bench Verified software engineering benchmark, and a 10.5-point gain on instruction-following tasks in Scale’s MultiChallenge benchmark. It also reduces verbosity by 50% compared to other models, a trait enterprise users praised during early testing.

Context, speed, and model access

GPT-4.1 supports the standard context windows for ChatGPT: 8,000 tokens for free users, 32,000 tokens for Plus users, and 128,000 tokens for Pro users.

According to developer Angel Bogado posting on X, these limits match those used by earlier ChatGPT models, though plans are underway to increase context size further.

While the API versions of GPT-4.1 can process up to one million tokens, this expanded capacity is not yet available in ChatGPT, though future support has been hinted at.

This extended context capability allows API users to feed entire codebases or large legal and financial documents into the model—useful for reviewing multi-document contracts or analyzing large log files.
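For teams working near that one-million-token API limit, a common pattern is to split oversized inputs into chunks that each fit the context window before sending them to the model. Below is a minimal sketch of that idea, assuming a rough four-characters-per-token heuristic for English text; the `chunk_for_context` helper and its parameters are hypothetical illustrations, not part of OpenAI's SDK.

```python
# Rough average for English text; real token counts require a tokenizer.
CHARS_PER_TOKEN = 4

def chunk_for_context(text: str, max_tokens: int = 1_000_000,
                      reserve_tokens: int = 8_000) -> list[str]:
    """Split `text` into pieces that fit within `max_tokens`,
    reserving headroom for the prompt and the model's reply."""
    budget_chars = (max_tokens - reserve_tokens) * CHARS_PER_TOKEN
    return [text[start:start + budget_chars]
            for start in range(0, len(text), budget_chars)]

# A 10-million-character "codebase" fits in three ~1M-token windows
# under this heuristic.
doc = "x" * 10_000_000
pieces = chunk_for_context(doc)
print(len(pieces))  # → 3
```

Each chunk would then be passed to the model in a separate request; for precise budgeting, an actual tokenizer should replace the character-count heuristic.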

