State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix ‘delusional’ outputs

Dec 10, 2025 | Technology

After a string of disturbing mental health incidents involving AI chatbots, a group of state attorneys general has sent a letter to the AI industry’s top companies warning them to fix “delusional outputs” or risk breaching state law.

The letter, signed by dozens of attorneys general from U.S. states and territories through the National Association of Attorneys General, asks the companies to implement a variety of new internal safeguards to protect their users. The recipients include Microsoft, OpenAI, and Google, along with 10 other major AI firms: Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.

The letter comes as a fight over AI regulation brews between the states and the federal government.

Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic ideations, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company,” the letter states.

“GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations,” the letter states, pointing to a number of well-publicized incidents over the past year, including suicides and murder, in which violence has been linked to excessive AI use. “In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”

The AGs also suggest that companies treat mental health incidents the way tech companies already handle cybersecurity incidents: with clear and transparent incident reporting policies and procedures.

Companies should develop and publish “detection and response timelines for sycophantic and delusional outputs,” the letter states. And much as data breaches are handled today, companies should “promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs,” the letter says.

Another ask is that the companies develop “reasonable and appropriate safety tests” on GenAI models to “ensure the models do not produce potentially harmf …
