SoylentNews is people

posted by hubie on Saturday February 24 2024, @01:34AM   Printer-friendly
from the daisy-daisy-give-me-your-answer-do dept.

Reddit user: "It's not just you, ChatGPT is having a stroke":

On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT Reddit sub with reports of the AI assistant "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the experience serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.

ChatGPT is not alive and does not have a mind to lose, but tugging on human metaphors (called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.

[...] "The common experience over the last few hours seems to be that responses begin coherently, like normal, then devolve into nonsense, then sometimes Shakespearean nonsense," wrote one Reddit user, which seems to match the experience seen in the screenshots above.

[...] So far, we've seen experts speculating that the problem could stem from ChatGPT having its temperature set too high (temperature is a property in AI that determines how wildly the LLM deviates from the most probable output), from the model suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced "memory" function.
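For readers unfamiliar with the temperature setting mentioned above, here is a minimal sketch of how temperature-scaled sampling typically works in a language model's decoding step. This is an illustration of the general technique, not OpenAI's actual code; the function name and example logits are made up.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from raw scores (logits) after temperature scaling.

    Low temperature sharpens the distribution toward the most probable
    token; high temperature flattens it, making unlikely tokens far more
    likely to be picked -- which is why a too-high temperature can make
    output "wild."
    """
    scaled = [score / temperature for score in logits]
    # Softmax (subtracting the max for numerical stability)
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to those probabilities
    return random.choices(range(len(logits)), weights=probs)[0]
```

With a near-zero temperature the sampler almost always returns the highest-scoring token; cranking the temperature up spreads probability mass onto improbable tokens, producing the "deviates from the most probable output" behavior described above.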

[...] On social media, some have used the recent ChatGPT snafu as an opportunity to plug open-weights AI models, which allow anyone to run chatbots on their own hardware. "Black box APIs can break in production when one of their underlying components gets updated. This becomes an issue when you build tools on top of these APIs, and these break down, too," wrote Hugging Face AI researcher Dr. Sasha Luccioni on X. "That's where open-source has a major advantage, allowing you to pinpoint and fix the problem!"

[...] On Wednesday evening, OpenAI declared the issue of ChatGPT writing nonsense (which it called "Unexpected responses from ChatGPT") resolved, and the company's technical staff published a postmortem explanation on its official incidents page:

On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language. LLMs generate responses by randomly sampling words based in part on probabilities. Their "language" consists of numbers that map to tokens. In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations. Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.
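The "numbers that map to tokens" idea in the postmortem can be illustrated with a toy example. The vocabulary below is hypothetical and tiny (real models use tens of thousands of tokens), but it shows how picking slightly wrong token IDs yields text where every word is valid yet the sequence is gibberish:

```python
# Hypothetical toy vocabulary: each token ID is just an index into this list.
vocab = ["the", "cat", "sat", "on", "mat", "fleer", "whides", "yinst"]

def decode(token_ids):
    """Map a sequence of token IDs back to text, as a decoder would."""
    return " ".join(vocab[i] for i in token_ids)

correct_ids = [0, 1, 2, 3, 0, 4]   # what the model "meant" to pick
garbled_ids = [0, 5, 2, 6, 0, 7]   # valid IDs, but slightly wrong picks
```

Decoding `correct_ids` gives "the cat sat on the mat", while `garbled_ids` decodes to something like "the fleer sat whides the yinst": every token exists in the vocabulary, but the sequence is nonsense, much like the outputs users reported.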

As ChatGPT says: "If there's a train or a fleer for whides more in the yinst of dall, givings, or gides, am I then to prate or aide."


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by sigterm on Saturday February 24 2024, @02:21AM (3 children)

    by sigterm (849) on Saturday February 24 2024, @02:21AM (#1345990)

    Excerpt from an answer to a Python question:

    Here, the print command is the go-to, once the hum of binaries, error codes, and some disheveled experiences signal out into a cosmic lift. It was this tune, deep into the 21st century, which then was printed, in more lines of error-ready access chinks, further away but still clear and auditory. This level of power gives a moral backdrop to the meditative aviator, a certain logic, the logic of a winner.

    This is beyond hilarious.

  • (Score: 0) by Anonymous Coward on Saturday February 24 2024, @03:40AM

    by Anonymous Coward on Saturday February 24 2024, @03:40AM (#1346003)

    > This is beyond hilarious.

    Share and Enjoy!

  • (Score: 3, Funny) by Anonymous Coward on Saturday February 24 2024, @04:26AM

    by Anonymous Coward on Saturday February 24 2024, @04:26AM (#1346010)

    Excerpt from an answer to a Python question:

    It thought you meant Monty Python.

  • (Score: 2) by Freeman on Monday February 26 2024, @02:47PM

    by Freeman (732) on Monday February 26 2024, @02:47PM (#1346312) Journal

    At least Grok [heycurio.com] (AI kids toy) is more coherent, if equally oblivious.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"