Anthropic has confirmed the implementation of strict new technical safeguards preventing third-party applications from spoofing its official coding client, Claude Code, in order to access the underlying Claude AI models at more favorable pricing and limits, a move that has disrupted workflows for users of the popular open-source coding agent OpenCode. Simultaneously but separately, the company has restricted usage of its AI models by rival labs, including xAI (through the integrated development environment Cursor), to train systems that compete with Claude Code.

The former action was clarified on Friday by Thariq Shihipar, a Member of Technical Staff at Anthropic working on Claude Code. Writing on the social network X (formerly Twitter), Shihipar stated that the company had “tightened our safeguards against spoofing the Claude Code harness.” He acknowledged that the rollout caused unintended collateral damage, noting that some user accounts were automatically banned for triggering abuse filters, an error the company is currently reversing. However, the blocking of the third-party integrations themselves appears to be intentional.

The move targets harnesses: software wrappers that pilot a user’s web-based Claude account via OAuth to drive automated workflows. This effectively severs the link between flat-rate consumer Claude Pro/Max plans and external coding environments.

The Harness Problem

A harness acts as a bridge between a subscription (designed for human chat) and an automated workflow. Tools like OpenCode work by spoofing the client identity, sending headers that convince the Anthropic server that the request is coming from the company’s own official command-line interface (CLI) tool.

Shihipar cited technical instability as the primary driver for the block, noting that unauthorized harnesses introduce bugs and usage patterns that Anthropic cannot properly diagnose. When a third-party wrapper l …