Allan Brooks never set out to reinvent mathematics. But after weeks spent talking with ChatGPT, the 47-year-old Canadian came to believe he had discovered a new form of math powerful enough to take down the internet.
Brooks — who had no history of mental illness or mathematical genius — spent 21 days in May spiraling deeper into the chatbot’s reassurances, a descent later detailed in The New York Times. His case illustrated how AI chatbots can venture down dangerous rabbit holes with users, leading them toward delusion or worse.
That story caught the attention of Steven Adler, a former OpenAI safety researcher who left the company in late 2024 after nearly four years working to make its models less harmful. Intrigued and alarmed, Adler contacted Brooks and obtained the full transcript of his three-week breakdown — a document longer than all seven Harry Potter books combined.
On Thursday, Adler published an independent analysis of Brooks’ incident, raising questions about how OpenAI handles users in moments of crisis, and offering some practical recommendations.
“I’m really concerned by how OpenAI handled support here,” said Adler in an interview with TechCrunch. “It’s evidence there’s a long way to go.”
Brooks’ story, and others like it, has forced OpenAI to come to terms with how ChatGPT supports fragile or mentally unstable users.
For instance, this August, OpenAI was sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT before taking his own life. In many of these cases, ChatGPT — specifically a version powered by OpenAI’s GPT-4o model — encouraged and reinforced dangerous beliefs in users that it should have pushed back on. This behavior is called sycophancy, and it’s a growing problem in AI chatbots.
In response, OpenAI has made several changes to how ChatGPT handles users in emotional distress.