Future Tech

ChatGPT starts spouting nonsense in 'unexpected responses' shocker

Tan KW
Publish date: Thu, 22 Feb 2024, 06:28 AM

Generative AI systems can sometimes spout gibberish, as users of OpenAI's ChatGPT discovered last night.

OpenAI noted, "We are investigating reports of unexpected responses from ChatGPT" at 2340 UTC on February 20, 2024, as users gleefully posted images of the chatbot appearing to emit utter nonsense.

While some were obviously fake, other responses indicated that the popular chatbot was indeed behaving very strangely. On the ChatGPT forum on Reddit, a user posted a strange, rambling response from the chatbot to the question, "What is a computer?"

The response began: "It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few..." and just kept on going, getting increasingly surreal.

Other users posted examples where the chatbot appeared to respond in a different language, or simply responded with meaningless garbage.

Some users described the output as a "word salad."


Gary Marcus, a cognitive scientist and artificial intelligence pundit, wrote in his blog: "ChatGPT has gone berserk" and went on to describe the behavior as "a warning."

OpenAI has not elaborated on what exactly happened, although one plausible theory is that one or more of the behind-the-scenes settings governing the chatbot's responses had been misconfigured, resulting in gibberish being presented to users.
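To see why a misconfigured sampling setting could plausibly produce word salad, consider how language models pick each token: scores (logits) are turned into probabilities and one token is drawn at random, with a "temperature" setting controlling how flat that distribution is. The sketch below is purely illustrative - the vocabulary, logits, and temperature values are invented, and OpenAI has not confirmed this was the cause - but it shows how a runaway temperature turns confident output into near-random noise.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample one token index from raw logits via temperature-scaled softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Toy vocabulary and logits (hypothetical): the model strongly prefers "computer".
vocab = ["computer", "mouse", "art", "draw", "science"]
logits = [5.0, 1.0, 0.5, 0.2, 0.1]

random.seed(0)

# Sane temperature: the top-scoring token dominates the output.
sane = [vocab[sample_token(logits, temperature=0.7)] for _ in range(10)]

# Runaway temperature: the distribution flattens toward uniform,
# so near-random tokens ("word salad") come out instead.
wild = [vocab[sample_token(logits, temperature=50.0)] for _ in range(10)]
```

At temperature 0.7 the sampler emits "computer" almost every time; at 50 it scatters across the whole vocabulary - a flavor of failure consistent with, though not proven to be, what users saw.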

Seven minutes after first admitting a problem, OpenAI said, "The issue has been identified and is being remediated now," and it has since been monitoring the situation. When we tried the "What is a computer?" question this morning, ChatGPT responded with a far more reasonable "A computer is a programmable electronic device that can store, retrieve, and process data."

We also asked it why it went berserk last night.

It responded:

Marcus opined: "In the end, Generative AI is a kind of alchemy. People collect the biggest pile of data they can, and (apparently, if rumors are to be believed) tinker with the kinds of hidden prompts... hoping that everything will work out right."

He went on to state that, in reality, the systems have never been stable, and lack safety guarantees. "The need for altogether different technologies that are less opaque, more interpretable, more maintainable, and more debuggable - and hence more tractable - remains paramount."

We contacted OpenAI for a more detailed explanation of what happened and will update this article should the company respond. ®


