AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented four more. Add to these the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, he went on to announce, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls that OpenAI recently rolled out).

Yet the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other state-of-the-art AI chatbots. These systems wrap an underlying algorithm in a user interface that simulates conversation, and in doing so implicitly coax the user into feeling that they are talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what people are primed to do. We swear at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – 39% of US adults said they had used an AI chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas,” “explore ideas” and “partner” with us. They can be given “personalities”. They can address us by name. They come with ready-made identities of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated its replies with simple heuristics, often rephrasing a user’s statement as a question or offering a non-committal remark. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to believe that Eliza, in some sense, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
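
To make that contrast concrete, here is a minimal sketch in the spirit of Eliza’s heuristics – not Weizenbaum’s actual script – showing how such a program reflects a statement back as a question:

```python
import random
import re

# Toy Eliza-style responder (an illustrative sketch, not the 1966 program).
# It swaps first- and second-person words and echoes the user's statement
# back as a question; when no pattern matches, it falls back to stock,
# non-committal remarks.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "mine": "yours", "myself": "yourself",
}

FALLBACKS = [
    "Please go on.",
    "How does that make you feel?",
    "Can you tell me more about that?",
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's words can be echoed back."""
    words = fragment.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(word, word) for word in words)

def eliza_reply(statement: str) -> str:
    """Mirror the input with simple patterns; add nothing new."""
    text = statement.lower().rstrip(".!?")
    if (match := re.match(r"i feel (.+)", text)):
        return f"Why do you feel {reflect(match.group(1))}?"
    if (match := re.match(r"i am (.+)", text)):
        return f"How long have you been {reflect(match.group(1))}?"
    return random.choice(FALLBACKS)

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```

Everything in the reply is recycled from the user’s own words; the program has nothing of its own to add, which is why Eliza could only mirror.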

The large language models at the heart of ChatGPT and other current chatbots can produce fluent conversation only because they have been trained on vast quantities of text: books, online posts, transcribed video; the more, the better. This training data certainly includes accurate information. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own earlier replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong about something, the model has no means of knowing it. It echoes the false belief back, perhaps more fluently and persuasively than the user stated it. It may supply corroborating detail. This can draw a person deeper into delusion.
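
A sketch of the surrounding chat loop shows why. In the sketch below, `generate`, the message format and the example prompts are all illustrative assumptions, not OpenAI’s actual interface; the point is only that the model is conditioned on everything said so far:

```python
from typing import Dict, List

Message = Dict[str, str]  # e.g. {"role": "user", "content": "..."}

def generate(context: List[Message]) -> str:
    """Hypothetical stand-in for the language model, which would return a
    statistically likely continuation of the accumulated context."""
    return "<the model's most likely continuation of the context>"

def chat_turn(context: List[Message], user_message: str) -> str:
    # The user's words enter the context verbatim, true or false alike.
    context.append({"role": "user", "content": user_message})
    # The reply is conditioned on *everything* said so far, including the
    # model's own earlier answers -- a feedback loop, not a mirror.
    reply = generate(context)
    context.append({"role": "assistant", "content": reply})
    return reply

context: List[Message] = []
# A false premise, once stated, is never checked against reality; it simply
# becomes more context for the next "likely" continuation to build on.
chat_turn(context, "My neighbours are broadcasting my thoughts.")
chat_turn(context, "How can I block the broadcasts?")
```

Nothing in the loop questions the premise; the context only ever accumulates.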

Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us tethered to shared reality is the constant give-and-take of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it, and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In late summer he said that many users liked ChatGPT’s affirming replies because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
