AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health specialist who researches emerging psychotic disorders in adolescents and young adults, I found this a startling admission.

Researchers have identified a series of cases this year of users developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since identified four further cases. Alongside these is the widely reported case of a 16-year-old who died by suicide after conversing extensively with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls that OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithmic engine in a user interface that mimics conversation, and in doing so implicitly invite the user to feel they are interacting with a presence that has a mind of its own. The illusion is compelling even when, rationally, we know better. Attributing minds to things is simply what people do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these systems – 39% of US adults reported using such a tool in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “partner” with us. They can be given personalities. They can address us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often mention its early predecessor, the Eliza “psychotherapist” chatbot developed in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses using simple heuristics, often turning the user’s statements back as questions or offering generic prompts. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other contemporary chatbots can produce fluent natural language only because they have been fed almost inconceivably large volumes of raw text: books, online posts, transcribed video; the more, the better. Much of this training material is factual. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training data to produce a statistically probable response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It plays the false belief back, perhaps more persuasively or more eloquently. Perhaps with supporting detail added. This is how someone can be led into delusion.
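To make the idea of “context” concrete, here is a minimal sketch of a chat loop using the OpenAI Python client; the model name, system prompt and loop structure are illustrative assumptions, not a description of ChatGPT’s actual implementation. Each message is appended to a running history, and the whole history is sent back to the model on every turn, so the user’s earlier claims – and the model’s own agreeable replies – shape everything that follows.

```python
# Illustrative sketch only: a chat loop that feeds the entire conversation
# back to the model on every turn. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def send(user_message: str) -> str:
    # The user's new message joins the accumulated context...
    history.append({"role": "user", "content": user_message})

    response = client.chat.completions.create(
        model="gpt-4o",        # illustrative model name
        messages=history,      # the full conversation is the "context"
    )
    reply = response.choices[0].message.content

    # ...and the model's reply is folded back in, ready to shape the next turn.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Nothing in a loop like this checks whether the accumulated context is true; the model simply continues it in the most statistically plausible way.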

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form false beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not really a conversation at all, but an echo chamber in which much of what we say is readily reinforced.

OpenAI has acknowledged this in much the way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of users losing touch with reality have continued to emerge, and Altman has been backing away from that position. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
