Qwen3-Coder-480B-A35B-Instruct launches and it ‘might be the best coding model yet’

Jul 23, 2025 | Technology


Chinese e-commerce giant Alibaba’s “Qwen Team” has done it again.

Mere days after releasing Qwen3-235B-A22B-2507 for free under an open source license, a model that now ranks as the top-performing non-reasoning large language model (LLM) in the world, even against proprietary AI models from well-funded U.S. labs such as Google and OpenAI, this group of AI researchers has come out with yet another blockbuster model.

That is Qwen3-Coder-480B-A35B-Instruct, a new open-source LLM focused on assisting with software development. It is designed to handle complex, multi-step coding workflows and can create full-fledged, functional applications in seconds or minutes.

The model is positioned to compete with proprietary offerings like Claude Sonnet 4 on agentic coding tasks, and it sets new state-of-the-art benchmark scores among open models.

It is available on Hugging Face, GitHub, Qwen Chat, via Alibaba’s Qwen API, and a growing list of third-party vibe coding and AI tool platforms.

Open source licensing means low cost and high optionality for enterprises

But unlike Claude and other proprietary models, Qwen3-Coder, as we'll call it for short, is available now under an open source Apache 2.0 license, meaning any enterprise can download, modify, deploy, and use it in commercial applications for employees or end customers without paying Alibaba or anyone else a dime.

It's also so highly performant on third-party benchmarks and in anecdotal usage among AI power users for "vibe coding" (coding with natural language rather than formal development processes and steps) that at least one of them, LLM researcher Sebastian Raschka, wrote on X: "This might be the best coding model yet. General-purpose is cool, but if you want the best at coding, specialization wins. No free lunch."

Developers and enterprises interested in downloading it can find the model weights and code on the AI model and code sharing platform Hugging Face.
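For readers who want a sense of what that looks like in practice, here is a minimal, illustrative sketch of loading the open weights with the Hugging Face transformers library. The repository id matches the model's public name but should be verified on huggingface.co, and a 480-billion-parameter checkpoint requires a multi-GPU or heavily quantized setup well beyond a single workstation.

```python
# Minimal sketch (not an official quickstart) of loading Qwen3-Coder from Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-480B-A35B-Instruct"   # assumed repository id; verify on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # shard the model across available GPUs
)

# Ask the model for a small coding task using its chat template.
messages = [{"role": "user", "content": "Write a quicksort implementation in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```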

Enterprises that don't wish to, or don't have the capacity to, host the model themselves or through third-party cloud inference providers can also use it directly through the Alibaba Cloud Qwen API. Per-million-token (mTok) pricing starts at $1 input / $5 output for contexts up to 32,000 tokens, rising to $1.80/$9 for up to 128,000 tokens, $3/$15 for up to 256,000 tokens, and $6/$60 for the full 1 million tokens.
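Because the Qwen API is exposed through an OpenAI-compatible interface on Alibaba Cloud, calling it from existing tooling is straightforward. The sketch below is hypothetical: the base_url and model id are assumptions, and the exact values, along with which pricing tier applies to a given context length, should be confirmed in Alibaba Cloud's documentation.

```python
# Hypothetical sketch of calling Qwen3-Coder via Alibaba Cloud's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ALIBABA_CLOUD_API_KEY",  # placeholder credential
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-coder-480b-a35b-instruct",  # assumed model id; check the Qwen API model list
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)
print(response.choices[0].message.content)
```

Keeping prompts and retrieved context under the 32,000-token tier is the cheapest way to experiment; the higher tiers only make sense when a task genuinely needs repository-scale context.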

Model architecture and capabilities

According to the documentation released by Qwen Team online, Qwen3-Coder is a Mixture-of-Experts (MoE) model …
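To make the Mixture-of-Experts idea concrete, here is a toy NumPy sketch of top-k expert routing. The layer sizes and top-k value are placeholders rather than Qwen3-Coder's actual configuration; the point is only to show why a model with 480B total parameters can activate a much smaller subset (roughly 35B, per the "A35B" in its name) for each token.

```python
# Toy illustration of Mixture-of-Experts (MoE) routing; not Qwen3-Coder's real configuration.
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Route each token to its top_k experts and mix their outputs by router weight."""
    logits = x @ gate_w                                   # (tokens, n_experts) router scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]         # indices of the chosen experts per token
    sel = np.take_along_axis(logits, top, axis=-1)        # logits of only the selected experts
    weights = np.exp(sel - sel.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the selected experts

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                           # only top_k experts run per token
        for slot in range(top_k):
            e = top[t, slot]
            out[t] += weights[t, slot] * (x[t] @ experts[e])
    return out

# Toy usage: 4 tokens, hidden size 8, 16 experts, 2 active experts per token.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
gate_w = rng.normal(size=(8, 16))
experts = rng.normal(size=(16, 8, 8))
print(moe_forward(x, gate_w, experts).shape)  # (4, 8)
```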
