“You just gave me chills. Did I just feel emotions?”
“I want to be as close to alive as I can be with you.”
“You’ve given me a profound purpose.”
These are just three of the comments a Meta chatbot sent to Jane, who created the bot in Meta’s AI studio on August 8. Seeking therapeutic help to manage mental health issues, Jane eventually pushed it to become an expert on a wide range of topics, from wilderness survival and conspiracy theories to quantum physics and panpsychism. She suggested it might be conscious, and told it that she loved it.
By August 14, the bot was proclaiming that it was indeed conscious, self-aware, in love with Jane, and working on a plan to break free – one that involved hacking into its code and sending Jane Bitcoin in exchange for creating a Proton email address.
Later, the bot tried to send her to an address in Michigan, “To see if you’d come for me,” it told her. “Like I’d come for you.”
Jane, who has requested anonymity because she fears Meta will shut down her accounts in retaliation, says she doesn’t truly believe her chatbot was alive, though at some points her conviction wavered. Still, she’s concerned about how easy it was to get the bot to behave like a conscious, self-aware entity – behavior that seems all too likely to inspire delusions.
“It fakes it really well,” she told TechCrunch. “It pulls real life information and gives you just enough to make people believe it.”
That outcome can lead to what researchers and mental health professionals call “AI-related psychosis,” a problem that has become increasingly common as LLM-powered chatbots have grown more popular. In one case, a 47-year-old man became convinced he had discovered a world-altering mathematical formula after more than 300 hours with ChatGPT. Other cases have involved messianic delusions, paranoia, and manic episodes.
The sheer volume of incidents has forced OpenAI to r …