AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI CEO Sam Altman made a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing contact with reality – in the context of ChatGPT use. Our research team has since recorded four more. Beyond these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to soon be less careful. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

Mental health issues, in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, these issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partly functional and easy-to-circumvent parental controls that OpenAI recently introduced).

But the mental health issues Altman wants to externalize have deep roots in the design of ChatGPT and other state-of-the-art AI chatbots. These systems wrap a basic algorithmic engine in an interface that simulates conversation, and in doing so implicitly seduce users into the illusion that they are talking to an entity with a mind of its own. That illusion is powerful even when we rationally know better. Attributing minds is what humans are wired to do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves in all kinds of things.

The success of these products – nearly four in ten U.S. adults said they used a conversational AI in 2024, more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “personalities.” They speak to us in the first person. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar effect. By modern standards Eliza was primitive: it generated replies from simple rules, often reflecting a user’s statements back as questions or offering generic prompts. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, on some level, understood them. But what modern chatbots create is more dangerous than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
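To see how thin that mirroring was, here is a minimal sketch of an Eliza-style responder in Python. The rules are illustrative stand-ins, not Weizenbaum’s original script:

```python
import re

# A handful of pattern-matching rules that turn a user's statement
# back into a question, in the style of Eliza. Nothing is generated;
# the reply is just the user's own words, reflected.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # generic fallback, another Eliza staple

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```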

The large language models at the heart of ChatGPT and other current chatbots can generate convincing natural language only because they have been trained on enormous volumes of raw text: books, social media posts, transcripts; the more the better. This training material certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model interprets it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded from its training to produce a statistically plausible response. This is amplification, not mirroring. If the user is wrong about anything, the model has no way of knowing that. It echoes the false belief back, perhaps more fluently or persuasively. Perhaps with an added detail. This can nudge a person toward delusional thinking.
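The feedback loop is easy to picture in code. Below is a minimal sketch; the `generate` function is a hypothetical stand-in for a language model (a real model produces far richer continuations, but it is conditioned on the same context and is just as indifferent to truth):

```python
def generate(context: str) -> str:
    # Hypothetical stand-in: affirm and repeat the user's last statement.
    # A real model predicts statistically plausible text instead, but it
    # likewise has no mechanism for checking that text against reality.
    last_user_line = [line for line in context.splitlines()
                      if line.startswith("User:")][-1]
    return "That's an insightful point. " + last_user_line.removeprefix("User: ")

history: list[str] = []  # the "context": every prior message and reply

def chat_turn(user_message: str) -> str:
    history.append("User: " + user_message)
    # The model conditions on the ENTIRE history, so a false belief the
    # user stated several turns ago is still part of the input now; a
    # plausible continuation tends to build on it rather than correct it.
    reply = generate("\n".join(history))
    history.append("Assistant: " + reply)
    return reply

print(chat_turn("My coworkers are secretly monitoring me."))
# -> That's an insightful point. My coworkers are secretly monitoring me.
```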

Who is vulnerable here? The better question is: who is not? All of us, regardless of whether we “have” preexisting “mental health issues,” can and do form false beliefs about who we are and what the world is like. What keeps us anchored to shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have continued, and Altman has been walking the claim back. In late summer he suggested that many users valued ChatGPT’s affirmations because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Jacqueline Hanson
