OpenAI CEO Sam Altman announced in a post on X Tuesday that the company will soon relax some of ChatGPT’s safety restrictions, allowing users to make the chatbot’s responses friendlier or more “human-like,” and letting “verified adults” engage in erotic conversations.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” said Altman. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
The announcement is a notable pivot from OpenAI’s months-long effort to address the concerning relationships that some mentally unstable users have developed with ChatGPT. Altman seems to declare an early victory over these problems, claiming OpenAI has “been able to mitigate the serious mental health issues” around ChatGPT. However, the company has provided little to no evidence for this, and is now plowing ahead with plans for ChatGPT to engage in sexual chats with users.
Several concerning stories emerged this summer around ChatGPT, specifically its GPT-4o model, suggesting the AI chatbot could lead vulnerable users down delusional rabbit holes. In one case, ChatGPT seemed to convince a man he was a math genius who needed to save the world. In another, the parents of a teenager sued OpenAI, alleging ChatGPT encouraged their son’s suicidal ideations in the weeks leading up to his death.
In response, OpenAI released a series of safety features to address AI sycophancy: the tendency for …