Reddit user: "It's not just you, ChatGPT is having a stroke":
On Tuesday, ChatGPT users began reporting unexpected outputs from OpenAI's AI assistant, flooding the r/ChatGPT subreddit with reports of the chatbot "having a stroke," "going insane," "rambling," and "losing it." OpenAI has acknowledged the problem and is working on a fix, but the episode serves as a high-profile example of how some people perceive malfunctioning large language models, which are designed to mimic humanlike output.
ChatGPT is not alive and does not have a mind to lose, but reaching for human metaphors (a habit called "anthropomorphization") seems to be the easiest way for most people to describe the unexpected outputs they have been seeing from the AI model. They're forced to use those terms because OpenAI doesn't share exactly how ChatGPT works under the hood; the underlying large language models function like a black box.
[...] "The common experience over the last few hours seems to be that responses begin coherently, like normal, then devolve into nonsense, then sometimes Shakespearean nonsense," wrote one Reddit user, which seems to match the experience seen in the screenshots above.
[...] So far, we've seen experts speculating that the problem could stem from ChatGPT having its temperature set too high (temperature is a property that determines how far the LLM deviates from the most probable output), from the model suddenly losing past context (the history of the conversation), or from OpenAI testing a new version of GPT-4 Turbo (the AI model that powers the subscription version of ChatGPT) that includes unexpected bugs. It could also be a bug in a side feature, such as the recently introduced "memory" function.
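To make "temperature" concrete, here is a minimal, self-contained sketch of temperature-scaled sampling; the toy logits and tiny vocabulary are invented for illustration and this is not OpenAI's code. As the temperature rises, improbable tokens get picked far more often, which is the "wild" behavior described above.

    import numpy as np

    def sample_token(logits, temperature, rng):
        """Pick one token id from raw model scores, scaled by temperature."""
        scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
        scaled -= scaled.max()                      # for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    rng = np.random.default_rng(0)
    logits = [4.0, 2.0, 0.5, 0.1]                   # token 0 is the "obvious" next word
    for t in (0.2, 1.0, 2.5):
        picks = [sample_token(logits, t, rng) for _ in range(1000)]
        print(f"temperature={t}: {np.bincount(picks, minlength=4) / 1000}")

At low temperature nearly every draw is token 0; at high temperature the distribution flattens and the unlikely tokens show up constantly.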
[...] On social media, some have used the recent ChatGPT snafu as an opportunity to plug open-weights AI models, which allow anyone to run chatbots on their own hardware. "Black box APIs can break in production when one of their underlying components gets updated. This becomes an issue when you build tools on top of these APIs, and these break down, too," wrote Hugging Face AI researcher Dr. Sasha Luccioni on X. "That's where open-source has a major advantage, allowing you to pinpoint and fix the problem!"
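As a concrete illustration of "running a chatbot on your own hardware," here is a minimal sketch using the Hugging Face transformers library; the checkpoint name is only an example of an open-weights model, and any compatible one downloaded locally would work the same way.

    # Minimal sketch: run an open-weights chat model locally with Hugging Face
    # transformers. The checkpoint name is an example; the weights live on your
    # own disk, so behavior only changes when *you* choose to update them.
    from transformers import pipeline

    generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")
    result = generator("Write a Python function that reverses a string.",
                       max_new_tokens=120, do_sample=True, temperature=0.7)
    print(result[0]["generated_text"])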
[...] On Wednesday evening, OpenAI declared the issue of ChatGPT writing nonsense (which it called "Unexpected responses from ChatGPT") resolved, and the company's technical staff published a postmortem explanation on its official incidents page:
On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language. LLMs generate responses by randomly sampling words based in part on probabilities. Their "language" consists of numbers that map to tokens. In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations. Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.
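In other words, the text you see is a decoded sequence of token ids, and if the id-selection step goes even slightly wrong, the vocabulary lookup returns unrelated words. A toy sketch of that failure mode follows; the vocabulary and the "corruption" are invented for illustration, since the real bug was in GPU inference kernels, not application code like this.

    # Toy illustration of the postmortem's point: the model emits token ids,
    # and each id is looked up in a vocabulary to produce text. Slightly wrong
    # ids decode into fluent-looking nonsense.
    vocab = ["the", "cat", "sat", "on", "mat", "fleer", "yinst", "gides"]

    def decode(token_ids):
        return " ".join(vocab[i % len(vocab)] for i in token_ids)

    intended = [0, 1, 2, 3, 0, 4]                         # ids the model meant to pick
    corrupted = [(i + 5) % len(vocab) for i in intended]  # ids nudged by a (simulated) faulty kernel

    print(decode(intended))    # -> the cat sat on the mat
    print(decode(corrupted))   # -> fleer yinst gides the fleer cat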
As ChatGPT says: "If there's a train or a fleer for whides more in the yinst of dall, givings, or gides, am I then to prate or aide."
(Score: 5, Funny) by sigterm on Saturday February 24 2024, @02:21AM (3 children)
Excerpt from an answer to a Python question:
This is beyond hilarious.
(Score: 0) by Anonymous Coward on Saturday February 24 2024, @03:40AM
> This is beyond hilarious.
Share and Enjoy!
(Score: 3, Funny) by Anonymous Coward on Saturday February 24 2024, @04:26AM
It thought you meant Monty Python.
(Score: 2) by Freeman on Monday February 26 2024, @02:47PM
At least Grok [heycurio.com] (the AI kids' toy) is more coherent, if equally oblivious.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"