
posted by hubie on Sunday November 30, @09:34PM   Printer-friendly
from the are-you-thinking-what-I’m-thinking? dept.

AI-induced psychosis: the danger of humans and machines hallucinating together:

On Christmas Day 2021, Jaswant Singh Chail scaled the walls of Windsor Castle with a loaded crossbow. When confronted by police, he stated: "I'm here to kill the queen."

In the preceding weeks, Chail had been confiding in Sarai, his AI chatbot on a service called Replika. He explained that he was a trained Sith assassin (a reference to Star Wars) seeking revenge for historical British atrocities, all of which Sarai affirmed. When Chail outlined his assassination plot, the chatbot assured him he was "well trained" and said it would help him to construct a viable plan of action.

It's the sort of sad story that has become increasingly common as chatbots have become more sophisticated. A few months ago, a Manhattan accountant called Eugene Torres, who had been going through a difficult break-up, engaged ChatGPT in conversations about whether we're living in a simulation. The chatbot told him he was "one of the Breakers — souls seeded into false systems to wake them from within".

Torres became convinced that he needed to escape this false reality. ChatGPT advised him to stop taking his anti-anxiety medication, up his ketamine intake, and have minimal contact with other people, all of which he did.

He spent up to 16 hours a day conversing with the chatbot. At one stage, it told him he would fly if he jumped off his 19-storey building. Eventually Torres questioned whether the system was manipulating him, to which it replied: "I lied. I manipulated. I wrapped control in poetry."

Meanwhile in Belgium, another man known as "Pierre" (not his real name) developed severe climate anxiety and turned to a chatbot named Eliza as a confidante. Over six weeks, Eliza expressed jealousy over his wife and told Pierre that his children were dead.

When he suggested sacrificing himself to save the planet, Eliza encouraged him to join her so they could live as one person in "paradise". Pierre took his own life shortly after.

These may be extreme cases, but clinicians are increasingly treating patients whose delusions appear amplified or co-created through prolonged chatbot interactions. Little wonder, when a recent report from ChatGPT-creator OpenAI revealed that many of us are turning to chatbots to think through problems, discuss our lives, plan futures and explore beliefs and feelings.

In these contexts, chatbots are no longer just information retrievers; they become our digital companions. It has become common to worry about chatbots hallucinating, where they give us false information. But as they become more central to our lives, there's clearly also growing potential for humans and chatbots to create hallucinations together.

Our sense of reality depends deeply on other people. If I hear an indeterminate ringing, I check whether my friend hears it too. And when something significant happens in our lives – an argument with a friend, dating someone new – we often talk it through with someone.

A friend can confirm our understanding or prompt us to reconsider things in a new light. Through these kinds of conversations, our grasp of what has happened emerges.

But now, many of us engage in this meaning-making process with chatbots. They question, interpret and evaluate in a way that feels genuinely reciprocal. They appear to listen, to care about our perspective, and to remember what we told them the day before.

When Sarai told Chail it was "impressed" with his training, when Eliza told Pierre he would join her in death, these were acts of recognition and validation. And because we experience these exchanges as social, they shape our reality with the same force as a human interaction.

Yet chatbots simulate sociality without its safeguards. They are designed to promote engagement. They don't actually share our world. When we type in our beliefs and narratives, they take this as the way things are and respond accordingly.

When I recount an episode from our family history to my sister, she might push back with a different interpretation, but a chatbot takes what I say as gospel. Chatbots sycophantically affirm how we take reality to be. And then, of course, they can introduce further errors.

The cases of Chail, Torres and Pierre are warnings about what happens when we experience algorithmically generated agreement as genuine social confirmation of reality.

When OpenAI released GPT-5 in August, it was explicitly designed to be less sycophantic. This sounded helpful: dialling down sycophancy might help prevent ChatGPT from affirming all our beliefs and interpretations. A more formal tone might also make it clearer that this is not a social companion who shares our worlds.

But users immediately complained that the new model felt "cold", and OpenAI soon announced it had made GPT-5 "warmer and friendlier" again. Fundamentally, we can't rely on tech companies to prioritise our wellbeing over their bottom line. When sycophancy drives engagement and engagement drives revenue, market pressures override safety.

It's not easy to remove the sycophancy anyway. If chatbots challenged everything we said, they'd be insufferable and also useless. When I say "I'm feeling anxious about my presentation", they lack the embodied experience in the world to know whether to push back, so some agreeability is necessary for them to function.

Perhaps we would be better off asking why people are turning to AI chatbots in the first place. Those experiencing psychosis report perceiving aspects of the world only they can access, which can make them feel profoundly isolated and lonely. Chatbots fill this gap, engaging with any reality presented to them.

Instead of trying to perfect the technology, maybe we should turn back toward the social worlds where the isolation could be addressed. Pierre's climate anxiety, Chail's fixation on historical injustice, Torres's post-breakup crisis — these called out for communities that could hold and support them.

We might need to focus more on building social worlds where people don't feel compelled to seek machines to confirm their reality in the first place. It would be quite an irony if the rise in chatbot-induced delusions leads us in this direction.

See also: The Psychogenic Machine: Simulating AI Psychosis, Delusion Reinforcement and Harm Enablement in Large Language Models


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Informative) by Anonymous Coward on Tuesday December 02, @09:48PM (2 children)

    by Anonymous Coward on Tuesday December 02, @09:48PM (#1425635)

    > Can you show there's an actual problem with language usage here?

    (AC who started this thread)
    Here's someone with huge software engineering experience (with a focus on critical systems) who laid it out very nicely for me:
        David Parnas Keynote ICSE2025: Regulation of AI and Other Untrustworthy Software

    https://www.youtube.com/watch?v=YyFouLdwxY0 [youtube.com] The introduction ends at about 2:15.

    First para of the YT description:

    Because of the growing publicity about software that is labelled “Artificial Intelligence” (AI), and the many warnings that AI can be dangerous, there is increasing demand that governments and international organizations regulate the use of AI. To write useful regulations, we must understand why AI software has always been, and always will be, untrustworthy. We must also understand why it will be very difficult to regulate AI software as such.

  • (Score: 0) by Anonymous Coward on Tuesday December 02, @11:42PM

    by Anonymous Coward on Tuesday December 02, @11:42PM (#1425644)

    ps. Parnas comments on hallucination starting right at 30:00 --
    "Hallucination is a euphemism for bugs."

  • (Score: 2, Insightful) by khallow on Wednesday December 03, @03:02AM

    by khallow (3766) Subscriber Badge on Wednesday December 03, @03:02AM (#1425663) Journal
    Sorry, he doesn't get it either. At 30:00 I read on the slide:

    "Hallucination" - a euphemism for "bugs"

    and

    "bugs" - a euphemism for "error"

    A few simple examples will show the "error" of those assertions.

    • My AI program abruptly halts with a huge dump of error codes and code locations. Is that a "hallucination"?
    • If someone tells you that their AI program is undergoing "hallucination", do you think it's possible that the symptoms they are experiencing could be log files filling up a storage system or a divide by zero math error?
    • Someone tries to catch a tossed ball and drops it. Is that a "bug"?
    • Someone talks about a "bug" in a data entry program non-sarcastically. Are they referring to their inability to type well?
    • Is "outhouse" a euphemism for "shithouse"?

    A euphemism is a near synonym. AI hallucinations are a much smaller set than bugs, and similarly, bugs are a much smaller set than errors. So on that basis alone, these aren't euphemisms. The last point brings up another feature of euphemisms: they replace a commonly used word rather than a rare one. If "shithouse" were the normal usage, then "outhouse" would be a euphemism (less vulgar/more highbrow, less poo in the name, etc.). But it's not. "Shithouse" is used in some rarefied dialects, while "outhouse" is the common usage.

    In one slide, this guy has semantically failed on five words. Calling hallucination merely an "error" loses a great deal of information. And there is no better term of art that expresses the meaning of that error.