Chinese AI startup Zhupai, aka z.ai, is back this week with an eye-popping new frontier large language model: GLM-5.

The latest in z.ai’s ongoing and continually impressive GLM series, it retains an open source MIT License, perfect for enterprise deployment, and, among several notable achievements, posts a record-low hallucination rate on the independent Artificial Analysis Intelligence Index v4.0. With a score of -1 on the AA-Omniscience Index, a massive 35-point improvement over its predecessor, GLM-5 now leads the entire AI industry, including U.S. competitors like Google, OpenAI, and Anthropic, in knowledge reliability by knowing when to abstain rather than fabricate information.

Beyond its reasoning prowess, GLM-5 is built for high-utility knowledge work. It features native “Agent Mode” capabilities that turn raw prompts or source materials directly into professional office documents, including ready-to-use .docx, .pdf, and .xlsx files. Whether generating detailed financial reports, high school sponsorship proposals, or complex spreadsheets, GLM-5 delivers results in real-world formats that integrate directly into enterprise workflows.

It is also disruptively priced at roughly $0.80 per million input tokens and $2.56 per million output tokens, approximately 6x cheaper than proprietary competitors like Claude Opus 4.6, making state-of-the-art agentic engineering more cost-effective than ever before.

Here’s what else enterprise decision makers should know about the model and its training.

Technology: scaling for agentic efficiency

At the heart of GLM-5 is a massive leap in raw parameters. The model scales from the 355B parameters of GLM-4.5 to a staggering 744B parameters, with 40B active per token in its Mixture-of-Experts (MoE) architecture. That growth is supported by an increase in pre-training data to 28.5T tokens.

To address training inefficiencies at this scale, z.ai developed “slime,” a novel asynchronous reinforcement …
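At the listed rates, the per-run economics are easy to sanity-check. A minimal sketch follows; only the $0.80 and $2.56 per-million-token rates come from the announcement, while the token counts are hypothetical workload assumptions:

```python
# Hedged sketch: estimate the cost of a hypothetical agentic workload at
# GLM-5's listed rates ($0.80 per 1M input tokens, $2.56 per 1M output tokens).
# The token counts below are illustrative, not measured figures.

INPUT_RATE = 0.80 / 1_000_000   # dollars per input token
OUTPUT_RATE = 2.56 / 1_000_000  # dollars per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one model call at the listed GLM-5 rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a long agentic session consuming 2M input and 500K output tokens.
cost = run_cost(2_000_000, 500_000)
print(f"${cost:.2f}")  # 2M x $0.80/M + 0.5M x $2.56/M = $1.60 + $1.28 = $2.88
```

At a claimed ~6x markup, the same workload on a proprietary frontier model would land in the high teens of dollars, which is the gap the pricing argument rests on.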
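The “40B active of 744B total” figure reflects standard Mixture-of-Experts sparsity: a router scores every expert for each token, but only the top-k actually execute. A toy sketch of that routing step follows; the expert count, k, and scores are made up for illustration and are not GLM-5’s actual configuration:

```python
import math

def top_k_experts(router_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights.

    In a sparse MoE layer, only these k experts run for the current token, so
    the active parameter count is roughly (k / num_experts) of the layer total.
    """
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(router_logits[i]) for i in chosen]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(chosen, exps)]

# Hypothetical router scores for 8 experts; route this token to the top 2.
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.9]
routes = top_k_experts(logits, k=2)
print(routes)  # experts 1 and 4 carry this token, with normalized weights
```

At GLM-5’s reported scale, activating 40B of 744B parameters means only about 5% of the network fires per token, which is why a model this large can keep inference costs closer to those of a 40B dense model.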