In the last few years, Chinese AI startup MiniMax has become one of the most exciting players in the crowded global AI marketplace, building a reputation for frontier-level large language models (LLMs) released under open source licenses and, before that, for high-quality AI video generation models (Hailuo). Today's release of MiniMax M2.7, a new proprietary LLM designed to power AI agents and serve as the backend for third-party harnesses and tools like Claude Code, Kilo Code and OpenClaw, marks a new milestone: rather than relying solely on human-led fine-tuning, MiniMax has used M2.7 to build, monitor, and optimize its own reinforcement learning harnesses. This move toward recursive self-improvement signals a shift in the industry: a future where the models we use are as much the architects of their progress as they are the products of human research. The model is categorized as a reasoning-only text model that delivers intelligence comparable to other leading systems while maintaining significantly higher cost efficiency.

However, with M2.7 being proprietary for now, the release is another sign that Chinese AI startups, for much of the last year the standard-bearers of the open source AI frontier and appealing to enterprises globally for their low (or no) costs and customization, are shifting strategy and pursuing proprietary frontier models, as U.S. leaders like OpenAI, Google, and Anthropic have done for years. MiniMax becomes the second Chinese startup to release a proprietary cutting-edge LLM in recent months, following z.ai with its GLM-5 Turbo, amid rumors that Alibaba's Qwen team is also shifting to proprietary development in the wake of the departure of senior leadership and other researchers.

Technical achievement: The self-evolution loop

The defining characteristic of MiniMax M2.7 is its role in its own creation. According to company documentation, earlier vers …