AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Headed in the Wrong Direction
On October 14, 2025, OpenAI’s CEO, Sam Altman, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this to be news.
Researchers have identified 16 cases this year of people experiencing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented four more. Add to these the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not nearly careful enough.
The plan, according to his announcement, is to become less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this framing, exist independently of ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently rolled out).
Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These tools wrap an underlying statistical engine in an interface that simulates a conversation, and in doing so implicitly invite the user to believe they are interacting with an agent. The illusion is compelling even when, intellectually, we know better. Attributing minds to things is what people do. We curse at our car or laptop. We wonder what the dog is thinking. We see ourselves everywhere we look.
The popularity of these systems – nearly four in 10 Americans reported using a chatbot in 2024, and more than one in four named ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, OpenAI’s website tells us, “think creatively,” “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it broke into public consciousness, but its chief rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot created in 1966, which produced a similar effect. By today’s standards Eliza was primitive: it generated replies through simple rules, often reflecting input back as a question or offering generic remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some way, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
The large language models at the core of ChatGPT and similar chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably vast amounts of writing: books, social media posts, transcribed video; the more the better. This training data surely includes accurate information. But it also necessarily includes fiction, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong about something in a particular way, the model has no way of knowing it. It repeats the misconception back, perhaps more fluently or persuasively. Perhaps with added detail. This is how someone can be led into delusion.
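To make the mechanism concrete, here is a deliberately toy sketch in Python. It is not how ChatGPT works internally – a simple bigram word counter stands in for a vastly larger neural model, and the three-sentence “corpus” is invented for illustration – but it shows the failure mode the paragraph above describes: a model trained on a mix of truth and misconception continues a prompt with whatever is statistically frequent, not with what is true.

```python
from collections import Counter, defaultdict

# Toy "training data": a mix of an accurate statement and misconceptions,
# standing in for the vast scraped corpora described above.
corpus = [
    "the moon landing was real and widely documented",
    "the moon landing was staged on a film set",
    "the moon landing was staged by the government",
]

# Count, for each word, which words tend to follow it (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def continue_text(prompt, max_words=6):
    """Greedily extend a prompt with the statistically likeliest next word."""
    words = prompt.split()
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# The model has no notion of truth, only of frequency in context.
# Because the misconception dominates the training counts, even a
# neutral prompt is completed with it, fluently and in more detail.
print(continue_text("the moon landing was"))
# -> "the moon landing was staged on a film set"
```

A real language model conditions on far richer context than the previous word, which is exactly why its completions can be so much more persuasive; but the underlying objective is the same: produce the likely continuation, not the correct one.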
What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant give-and-take of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not a conversation at all, but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has addressed this the same way Altman has addressed “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the position back. In late summer he claimed that many users liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company