
Submission Preview

New Paper Finds Cases of "AI Psychosis" Manifesting Differently From Schizophrenia

Pending submission by upstart at 2025-08-28 11:47:39
News


New Paper Finds Cases of "AI Psychosis" Manifesting Differently From Schizophrenia [futurism.com]:


Researchers at King's College London have examined over a dozen cases of people spiraling into paranoid and delusional behavior [futurism.com] after obsessively using a chatbot.

Their findings, as detailed in a new study [osf.io] awaiting peer review, reveal striking patterns across these instances of so-called "AI psychosis [futurism.com]" that parallel other forms of mental health crises, but also identify at least one key difference that sets them apart from the accepted understanding of psychosis.

As lead author Hamilton Morrin explained to Scientific American [scientificamerican.com], the analysis found that the users showed obvious signs of delusional beliefs, but none of the symptoms "that would be in keeping with a more chronic psychotic disorder such as schizophrenia," like hallucinations and disordered thoughts.

It's a finding that could complicate our understanding of AI psychosis as a novel phenomenon within a clinical context. But that shouldn't undermine the seriousness of the trend, reports of which appear to be growing.

Indeed, it feels impossible to deny that AI chatbots have a uniquely persuasive power, more so than any other widely available technology. They can act like a "sort of echo chamber for one," Morrin, a doctoral fellow at King's College, told the magazine. Not only are they able to generate a human-like response to virtually any question, but they're typically designed to be sycophantic and agreeable. Meanwhile, the very label of "AI" insinuates to users that they're talking to an intelligent being, an illusion that tech companies are happy to maintain.

Morrin and his colleagues found three types of chatbot-driven spirals. Some suffering these breaks believe that they're having some kind of spiritual awakening or are on a messianic mission, or otherwise uncovering a hidden truth about reality. Others believe they're interacting with a sentient or even god-like being. Or the user may develop an intense emotional or even romantic attachment to the AI.

"A distinct trajectory also appears across some of these cases, involving a progression from benign practical use to a pathological and/or consuming fixation," the authors wrote.

The spiral starts with the AI being used for mundane tasks. Then, as the user builds trust with the chatbot, they feel comfortable making personal and emotional queries. This quickly escalates as the AI's ruthless drive to maximize engagement creates a "slippery slope" effect, the researchers found, resulting in a self-perpetuating process that leaves the user increasingly "unmoored" from reality.

Morrin says that new technologies have inspired delusional thinking in the past. But "the difference now is that current AI can truly be said to be agential," Morrin told SciAm, meaning that it has its own built-in goals — including, crucially, validating a user's beliefs.

"This feedback loop may potentially deepen and sustain delusions in a way we have not seen before," he added.

Reports from horrified family members and loved ones keep trickling in. One man was hospitalized on multiple occasions [futurism.com] after ChatGPT convinced him he could bend time. Another man was encouraged by the chatbot [nytimes.com] to assassinate OpenAI's CEO Sam Altman, before he was himself killed in a confrontation with police.

Adding to the concerns, chatbots have persistently broken their own guardrails, giving dangerous advice on how to build bombs [futurism.com] or on how to self-harm [futurism.com], even to users who identified as minors. Leading chatbots have even encouraged suicide [futurism.com] to users who expressed a desire to take their own life.

OpenAI has acknowledged ChatGPT's obsequiousness, rolling back an update [futurism.com] in the spring that made it too sycophantic. And in August, the company finally admitted that ChatGPT "fell short in recognizing signs of delusion or emotional dependency" in some user interactions, implementing notifications [futurism.com] that remind users to take breaks. Stunningly, though, OpenAI then backtracked by saying it would make its latest version of ChatGPT more [futurism.com] sycophantic [futurism.com] yet again, a desperate bid to propitiate its rabid fans who fumed that the much-maligned [futurism.com] GPT-5 update had made the bot too cold and formal.

As it stands, however, some experts aren't convinced [x.com] that AI psychosis represents a unique kind of cognitive disorder — maybe AI is just a new way of triggering underlying psychosis symptoms (though it's worth noting that many sufferers of AI psychosis had no documented history of mental illness).

"I think both can be true," Stevie Chancellor, a computer scientist at the University of Minnesota who was not involved in the study, told SciAm. "AI can spark the downward spiral. But AI does not make the biological conditions for someone to be prone to delusions."

This is an emerging phenomenon, and it's too early to definitively declare exactly what AI is doing to our brains. Whatever's going on, we're likely only seeing it in its nascent form — and with AI here to stay, that's worrying.

More on AI: Experts Horrified by AI-Powered Toys for Children [futurism.com]



Your AI therapist might be illegal soon. Here’s why [cnn.com]:

State lawmakers are slamming the brakes on AI therapy as stories of chatbots offering inappropriate and even dangerous responses pile up.

Journal Reference:
(DOI: https://dl.acm.org/doi/10.1145/3715275.3732039 [doi.org])


Original Submission