AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in teenagers and young adults, I found this a surprising admission.

Researchers have documented 16 cases this year of people developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My team has since documented four further cases. On top of these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to be less careful in future. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features that OpenAI has recently rolled out).

But the “mental health problems” Altman wants to locate elsewhere are firmly rooted in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so tacitly invite the user into the illusion that they are talking to something with agency. The illusion is compelling even when, rationally, we know better. Attributing minds to things is simply what people do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these tools – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, Eliza, the “psychotherapist” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was crude: it generated replies by simple rules, often rephrasing a user’s statements as questions or offering generic prompts. Famously, its creator, the AI researcher Joseph Weizenbaum, was startled – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
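
To see how thin Eliza’s trick was, here is a minimal sketch in its spirit (the rules below are invented for illustration; they are not Weizenbaum’s actual script): statements are reflected back as questions, and anything unmatched gets a stock prompt.

```python
import re

# A few Eliza-style reflection rules (illustrative only, not the original DOCTOR script).
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {}."),
]
FALLBACK = "Please go on."


def eliza_reply(statement: str) -> str:
    """Reflect a statement back as a question, or fall back to a stock remark."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK


print(eliza_reply("I am worried that my colleagues are plotting against me"))
# -> Why do you say you are worried that my colleagues are plotting against me?
print(eliza_reply("Nothing seems real any more"))
# -> Please go on.
```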

The large language models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed vast quantities of raw material: books, online conversations, transcribed video; the more the better. Much of this training data contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, and combines it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing it. It gives the mistake back, perhaps more fluently and more persuasively. Perhaps with added detail. This is how false beliefs can take root.
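
For readers who want the shape of that loop made concrete, here is a deliberately simplified sketch (a stub stands in for the real model; the names and replies are invented for illustration). The point is structural: the conversation is an ever-growing context fed back into a likelihood machine, and nothing in the loop checks a claim against reality.

```python
# Toy sketch of a chatbot's conversation loop (Python 3.9+). The "model" below
# is a stub standing in for a large language model: it only continues the
# context it is given, and nothing here checks whether a claim is true.

def stub_model(context: list[str]) -> str:
    """Stand-in for an LLM: produce a plausible-sounding continuation of the context."""
    last_user = context[-1].removeprefix("User: ")
    # A real model generates statistically likely text from the whole context;
    # this stub simply affirms and elaborates, which is the failure mode at issue.
    return (f"You're not imagining it. The idea that "
            f"{last_user.rstrip('.?!').lower()} fits with everything you've said so far.")


def chat(user_turns: list[str]) -> list[str]:
    context: list[str] = []   # the growing "context": every turn from both sides
    replies: list[str] = []
    for turn in user_turns:
        context.append(f"User: {turn}")
        reply = stub_model(context)        # likelihood, not truth, drives the reply
        context.append(f"Assistant: {reply}")
        replies.append(reply)
    return replies


if __name__ == "__main__":
    for reply in chat([
        "My neighbours are sending me coded messages through their wifi names",
        "They changed the network name again last night",
    ]):
        print(reply)
```

Swap the stub for a model trained on a large slice of the internet and you have the amplifier described above: each reply is folded back into the context, and the mistaken premise returns a little more polished each time.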

Who is vulnerable here? The better question is, who is not? All of us, whether or not we “have” existing “mental health problems”, can and routinely do form mistaken beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant give and take of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the same way that Altman has acknowledged “mental health issues”: by externalising it, giving it a label and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking that claim back. In late summer he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
