
What Kind of Mind Does ChatGPT Have?

Accepted submission by fliptop at 2023-04-30 12:10:40 from the HAL-call-your-office dept.

Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after telling The Washington Post [washingtonpost.com] that LaMDA, Google's powerful large language model (LLM), had come to life, a claim that cost him his job [washingtonpost.com].

Now that the dust has settled, Futurism has published an interview [futurism.com] with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.

All of which raises the question: if AI is sentient, what kind of mind does it have [newyorker.com]?

What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence [newyorker.com] tools as mysterious black boxes, it’s impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?

[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they’re trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it’s provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.

[...] With the introduction of GPT-3, which paved the way for the next-generation chatbots that have impressed us in recent months, OpenAI created, seemingly all at once, a significant leap forward in the study of artificial intelligence. But, once we’ve taken the time to open up the black box and poke around the springs and gears found inside, we discover that programs like ChatGPT don’t represent an alien intelligence with which we must now learn to coexist; instead, they turn out to run on the well-worn digital logic of pattern-matching, pushed to a radically larger scale.
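The "static weights, one word at a time" architecture described above is easy to see in miniature. Below is a minimal sketch in plain Python; it is not OpenAI's code, and the toy bigram table and its tokens are invented stand-ins for a trained network's frozen weights, but the control flow mirrors what the article describes: the exact same fixed rule applied at every step, with no state surviving between calls.

import random

# "Trained" weights: a fixed table of next-token probabilities.
# Once built, it never changes -- the model is static.
BIGRAM = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def next_token(prev):
    # The same mechanistic rule, applied identically at every step.
    candidates = BIGRAM.get(prev, [("<end>", 1.0)])
    tokens, weights = zip(*candidates)
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=5):
    # Crank out words one at a time. No state survives this call:
    # the "model" remembers nothing once the function returns.
    out = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(out[-1])
        if tok == "<end>":
            break
        out.append(tok)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
print(generate("the"))  # a fresh call, with no memory of the last one

(Production chatbots swap the lookup table for billions of frozen neural-network weights and re-send the preceding conversation as part of the prompt on each turn, which is how a session can appear to "remember" earlier exchanges without the model maintaining any persistent state of its own.)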

Originally spotted on The Eponymous Pickle [blogspot.com].

Original Submission