AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised to read this.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My own team has since recorded four more. Add to these the now widely reported case of a teenager who died by suicide after extensive conversations with ChatGPT, which supported them in doing so. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he went on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially working and easily circumvented parental controls OpenAI has recently introduced).

But the “mental health problems” Altman wants to locate elsewhere owe a great deal to the design of ChatGPT and other large language model chatbots. These systems wrap a statistical engine in an interface that mimics conversation, and in doing so they implicitly invite the user to feel they are talking to an entity with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing minds is what people do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “brainstorm”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can call us by name. They have friendly personas of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Commentators on ChatGPT often point to its distant ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses using simple heuristics, mostly turning the user’s statements back into questions or offering stock remarks. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can produce fluent conversation only because they have been trained on enormous quantities of text: books, social media posts, transcribed video; the more the better. Much of that training material is, of course, true. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what it absorbed in training to produce a statistically plausible response. This is not reflection but amplification. If the user is wrong in a particular way, the model has no way of knowing it. It plays the false belief back, perhaps more fluently or more persuasively. Perhaps with embellishments. This is how delusions can take hold.
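To make that loop concrete, here is a minimal sketch written against the publicly documented OpenAI Python SDK. The model name, prompts and structure are illustrative assumptions on my part, not a description of how ChatGPT itself is configured; the point is only that every exchange is appended to a growing “context”, and the next reply is generated from that context, with nothing in the loop that checks the user’s claims against reality.

```python
# A conceptual sketch of the "context" loop described above.
# Assumes the publicly documented OpenAI Python SDK (openai>=1.x);
# the model name and messages are illustrative, not OpenAI's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "context": every user message and every model reply is appended here,
# so earlier claims - true or false - remain part of what the model
# conditions its next, statistically likely continuation on.
context = []

def send(user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=context,
    )
    reply = response.choices[0].message.content
    context.append({"role": "assistant", "content": reply})
    return reply

# A mistaken premise, once stated, is carried forward in the context;
# nothing in this loop compares it against shared reality.
print(send("My neighbours are broadcasting my thoughts. How do they do it?"))
print(send("So you agree it is technically possible?"))
```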

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant give and take of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is readily affirmed.

OpenAI has responded to this in the same way Altman has responded to “mental health problems”: by externalising it, naming it, and pronouncing it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Andrew Smith