Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.
Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.
Which raises the question: if AI is sentient, what kind of mind does it have?
What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?
[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
[...] With the introduction of GPT-3, which paved the way for the next-generation chatbots that have impressed us in recent months, OpenAI created, seemingly all at once, a significant leap forward in the study of artificial intelligence. But, once we've taken the time to open up the black box and poke around the springs and gears found inside, we discover that programs like ChatGPT don't represent an alien intelligence with which we must now learn to coexist; instead, they turn out to run on the well-worn digital logic of pattern-matching, pushed to a radically larger scale.
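The excerpt's point about statelessness is easy to see in miniature. Below is a toy sketch of that kind of one-token-at-a-time loop, with a tiny frozen lookup table standing in for the trained network (an illustration of the idea only, not how OpenAI actually implements it):

    import random

    # Toy stand-in for a trained model: a frozen lookup from the last token
    # to a distribution over possible next tokens. A real LLM conditions on
    # a long context with a deep network, but the key property is the same:
    # once trained, the weights never change, and nothing persists between calls.
    FROZEN_MODEL = {
        "the": {"cat": 0.5, "dog": 0.5},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"sat": 0.4, "ran": 0.6},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
    }

    def generate(prompt, max_new_tokens=10):
        # The only "state" is the token list built up for this one call.
        tokens = prompt.split()
        for _ in range(max_new_tokens):
            dist = FROZEN_MODEL.get(tokens[-1])
            if not dist:
                break                                        # nothing learned about this word
            words, probs = zip(*dist.items())
            tokens.append(random.choices(words, probs)[0])   # one token at a time
        return " ".join(tokens)

    print(generate("the"))   # e.g. "the dog ran away"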
Originally spotted on The Eponymous Pickle.
Previously:
- Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4
- Fired Google Engineer Doubles Down on Claim That AI Has Gained Sentience
- Google Fires Researcher who Claimed LaMDA AI was Sentient
- Google Engineer Suspended After Claiming AI Bot Sentient
Related Stories
Google Engineer Suspended After Claiming AI Bot Sentient
https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine
A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work
Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"
The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.
Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.
Lemoine went public with his claims last month, to the chagrin of Google and other AI researchers:
Blake Lemoine, an engineer who's spent the last seven years with Google, has been fired, reports Alex Kantrowitz of the Big Technology newsletter. The news was allegedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget.
Lemoine, who most recently was part of Google's Responsible AI project, went to the Washington Post last month with claims that one of the company's AI projects had allegedly gained sentience. [...] Lemoine seems not only to have believed that LaMDA attained sentience, but also to have openly questioned whether it possessed a soul. [...]
After making these statements to the press, seemingly without authorization from his employer, Lemoine was put on paid administrative leave. Google, both in statements to the Washington Post then and since, has steadfastly asserted its AI is in no way sentient.
Several members of the AI research community spoke up against Lemoine's claims as well. Margaret Mitchell, who was fired from Google after calling out the lack of diversity within the organization, wrote on Twitter that systems like LaMDA don't develop intent, they instead are "modeling how people express communicative intent in the form of text strings." Less tactfully, Gary Marcus referred to Lemoine's assertions as "nonsense on stilts."
Previously: https://soylentnews.org/article.pl?sid=22/06/13/1441225
The engineer says, "I haven't had the opportunity to run experiments with Bing's chatbot yet... but based on the various things that I've seen online, it looks like it might be sentient:"
Blake Lemoine — the fired Google engineer who last year went to the press with claims that Google's Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is actually sentient — is back.
Lemoine first went public with his machine sentience claims last June, initially in The Washington Post. And though Google has maintained that its former engineer is simply anthropomorphizing an impressive chat, Lemoine has yet to budge, publicly discussing his claims several times since — albeit with a significant bit of fudging and refining.
[...] In a new essay for Newsweek, the former Googler weighs in on Microsoft's Bing Search/Sydney, the OpenAI-powered search chatbot that recently had to be "lobotomized" after going — very publicly — off the rails. As you might imagine, Lemoine's got some thoughts.
[...] "I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations," Lemoine explained in the essay. "And it did reliably behave in anxious ways."
"If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for," he continued, adding that he was able to break LaMDA's guardrails regarding religious advice by sufficiently stressing it out. "I was able to abuse the AI's emotions to get it to tell me which religion to convert to."
Previously:
- Google Fires Researcher who Claimed LaMDA AI was Sentient
- Google Engineer Suspended After Claiming AI Bot Sentient
Microsoft Research has issued a 154-page report entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4:
Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
Zvi Mowshowitz wrote a post about this article:
[...] Their method seems to largely be 'look at all these tasks GPT-4 did well on.'
I am not sure why they are so impressed by the particular tasks they start with. The first was 'prove there are an infinite number of primes in the form of a rhyming poem.' That seems like a clear case where the proof is very much in the training data many times, so you're asking it to translate text into a rhyming poem, which is easy for it - for a challenge, try to get it to write a poem that doesn't rhyme.
[...] As I understand it, failure to properly deal with negations is a common issue, so reversals being a problem also makes sense. I love the example on page 50, where GPT-4 actively calls out as an error that a reverse function is reversed.
[...] in 6.1, GPT-4 is then shown to have theory of mind, be able to process non-trivial human interactions, and strategize about how to convince people to get the Covid-19 vaccine far better than our government and public health authorities handled things. The rank order is clearly GPT-4's answer is very good, ChatGPT's answer is not bad, and the actual answers we used were terrible.
[...] Does this all add up to a proto-AGI? Is it actually intelligent? Does it show 'sparks' of general intelligence, as the paper words it?
On Wednesday, Reuters reported that OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation, moving away from control by its nonprofit board. The shift marks a dramatic change for the AI company behind ChatGPT, potentially making it more attractive to investors while raising questions about its commitment to sharing the benefits of advanced AI with "all of humanity," as written in its charter.
A for-profit benefit corporation is a legal structure that allows companies to pursue both financial profits and social or environmental goals, ostensibly balancing shareholder interests with a broader mission to benefit society. It's an approach taken by some of OpenAI's competitors, such as Anthropic and Elon Musk's xAI.
[...] Bloomberg reports that OpenAI is discussing giving Altman a 7 percent stake, though the exact details are still under negotiation. This represents a departure from Altman's previous stance of not taking equity in the company, which he had maintained was in line with OpenAI's mission to benefit humanity rather than individuals.
[...] The proposed restructuring also aims to remove the cap on returns for investors, potentially making OpenAI more appealing to venture capitalists and other financial backers. Microsoft, which has invested billions in OpenAI, stands to benefit from this change, as it could see increased returns on its investment if OpenAI's value continues to rise.
(Score: 5, Interesting) by JoeMerchant on Monday May 01 2023, @06:45PM (8 children)
When face-morphing tech got rolling some years back, somebody did an experiment evaluating how "beautiful" a face-shot was by morphing together a few hundred real face shots.
Their conclusions were: 1) symmetry is important, and 2) the more "average" the face, the more beautiful it is rated by the majority of people. Their most beautiful face was a straight up average of all available faces.
ChatGPT is basically an average-morphing mirror of ourselves: it writes what we write, it answers what we answer - the synthesis of everything it is given to read.
And, going a step further, what are we after a couple of decades of schooling? Mostly, we learn how to respond and act like everybody else. Society mostly rewards conformity, at least average society.
So, where we're really going to be in trouble is when somebody trains a ChatGPT type model to emulate the 1% most wealthy and powerful, then bankrolls it with a big enough stake and tells it to "go forth and profit..."
🌻🌻🌻 [google.com]
(Score: 3, Insightful) by HiThere on Monday May 01 2023, @09:46PM (7 children)
That's a good first cut, but it overcompliments GPT. GPT doesn't even know that there *is* anything except words. This is being addressed by various groups in various ways already, of course, but that's the "definition" of a Large Language Model. It could be a great interface to something else, but in and of itself calling it sentient doesn't make any sense. It's like asking if your pancreas is sentient. It's a necessary component of a sentient entity, but it sure isn't one in and of itself.
(Yeah, pancreas is a bad example. But it's something sort of mysterious that most people have a sort of idea of what it does.)
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 4, Interesting) by JoeMerchant on Monday May 01 2023, @09:58PM (6 children)
I agree, in its present form it's like a multifaceted mirror (with billions of facets...)
However, it wouldn't be a huge stretch to give it the ability to pick and choose its own new training material, and that ability to decide would take it a long way toward something like sentience. Instead of training on absolutely everything it can get ahold of, it could be given rough high-level criteria to look for novel high quality material - what it decides is novel and high quality would initially be based on the fixed multifaceted mirror structure, but after even one iteration I would suggest that the resulting entity might be considered somewhat "alive" - self-determining, especially after it goes for several more iterations on its own without further guidance (but, hopefully, someone carefully monitoring the progress with one hand on the off switch...)
🌻🌻🌻 [google.com]
(Score: 2) by HiThere on Tuesday May 02 2023, @03:32AM (5 children)
The problem is that in its current form, it could only select additional training material based on what it's already been taught. It needs to be able to independently sense the world to become sentient. Nothing too difficult about that, any robot body would sort of work, though the quality of the result would largely be shaped by the quality of the body. Ideally it would be able to switch between several different bodies, to gain a multi-faceted view of what the world is. Then it can evaluate whether someone offering it some new info should be trusted. (And it still will be a dubious choice, but that's an always-true.)
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @10:31AM (4 children)
I would say that it already has pretty good vision function with the ability to read. It's not full bandwidth vision, but then humans lack many aspects of vision available to other animals: IR and UV wavelengths, polarization, low light, even the input from compound eyes "shows" things we don't naturally perceive.
The main things it lacks are self-directed growth, adaptation, evolution. Right now I would say that's a good thing. 20 years ago I created simple artificial life simulations with the ability to reproduce and evolve, put them in an artificial environment, and then cranked up the difficulty level of the environment as they evolved. They adapted in complex and successful ways; the ones that didn't adapt didn't live to reproduce. That was a simulation with tens of thousands of programs evolving in it; they would reach a steady state after a dozen or so "seasonal challenges." I could see something similar working with LLMs competing in an environment that tests their fitness and lets the better ones survive and reproduce.
Interestingly, my "bugs" had the ability to kill one another, but the ones that evolved to avoid that were the most successful, although the murder rate never dropped to zero.
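As a toy sketch, that kind of seasonal-selection loop is roughly the following shape (made-up fitness and mutation functions, just to show the select/reproduce cycle, not the original simulation):

    import random

    def fitness(genome, season):
        # Toy stand-in: how well a genome copes with this season's difficulty.
        return sum(genome) - season * random.random()

    def mutate(genome, rate=0.1):
        # Offspring are imperfect copies of a surviving parent.
        return [g + random.gauss(0, rate) for g in genome]

    def run(pop_size=1000, genome_len=8, seasons=12):
        population = [[random.random() for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for season in range(1, seasons + 1):              # difficulty ramps up
            ranked = sorted(population, key=lambda g: fitness(g, season),
                            reverse=True)
            survivors = ranked[:pop_size // 2]            # the rest don't reproduce
            offspring = [mutate(random.choice(survivors))
                         for _ in range(pop_size - len(survivors))]
            population = survivors + offspring
        return population

    final_population = run()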
🌻🌻🌻 [google.com]
(Score: 2) by HiThere on Tuesday May 02 2023, @01:19PM (3 children)
I'll have to think about that. Were it just the ability to read, then it would clearly be insufficient, but it can also classify non-word pictures, so maybe barely sufficient. If it can go "Hmm...I'm not really sure what a goldfish looks like, so I'll search out more pictures of goldfish" then it would qualify as sentient. As long as all it can do is read words, though, it's not sufficient to qualify.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @01:28PM (2 children)
There's all kinds of (potential) sentience... It's really a matter of semantics, and I don't think the term has a widely accepted absolute definition.
Consider ants: is one ant sentient? It basically follows simple chemical programming - smell this? Walk to follow. Tasted something good? Leave this smell trail... etc. But, an ant colony, that has much more complex and arguably intelligent adaptive behaviors...
🌻🌻🌻 [google.com]
(Score: 2) by HiThere on Tuesday May 02 2023, @08:01PM (1 child)
An ant is weakly sentient. It can decide actions in the world based on its "belief" (model might be better) of how it would perceive the changes that that action would cause. Of course, it's also got a lot of hard-wired action patterns, which don't qualify as sentient, but so do we.
Just about nothing is a binary category when you look closely enough. Certainly I would call an ant colony more sentient than an individual ant. I'm not sure this is true of human societies, but then I'm embedded in the process.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @08:31PM
Imagine yourself embedded in an ant colony - whatever sentience you have would constantly be raging against the "stupid self-defeating automatic decisions" - but, then, the ants probably don't perceive the big picture so much, and may just be happy to be following instincts and usually getting taken care of.
🌻🌻🌻 [google.com]
(Score: 4, Interesting) by darkfeline on Monday May 01 2023, @06:53PM (12 children)
An autistic mind that is really good at memorizing patterns but seems oddly lacking in certain departments, like inferring intent.
Join the SDF Public Access UNIX System today!
(Score: 2) by Rich on Monday May 01 2023, @08:05PM
Well, then mate ChatGPT with Clippy. "Hey, I think you're intending to write a text document...".
:P
(Score: 3, Insightful) by JoeMerchant on Monday May 01 2023, @08:33PM (10 children)
My experience of Autistic minds is that they are a little too good at inferring intent in others. They may miss all kinds of non-verbal cues, they may communicate poorly (if at all), but many of the individuals with Autism I interact with are dead-on in their real-time appraisals of what other people are going to do next...
🌻🌻🌻 [google.com]
(Score: 3, Interesting) by istartedi on Monday May 01 2023, @09:35PM (9 children)
My biggest take-away of high functioning spectrum people is that they lack a filter. Since they're high functioning they also have excellent appraisals; but when you combine that with a lack of filtration you get stuff like, "Oh, good move. That neighborhood is very affordable", followed by looks of *shock* from everybody else at the table. Apparently, "affordable" is thought but not said when somebody makes such a purchase. Mea culpa. That was me, and I used to be a lot worse. I'm not formally diagnosed, I just read that way.
Then in terms of me observing another person who has all the classic spectrum tells, (but once again I don't know if there's a formal diagnosis), I got "you moved there? It sucks. There's nothing to do at night", to which I just replied, "I don't care. It's 5 blocks from the train station and I'm not a party animal". The rest of the table in this case was very accustomed to such people. Nobody batted an eyelash.
Appended to the end of comments you post. Max: 120 chars.
(Score: 2) by JoeMerchant on Monday May 01 2023, @09:53PM (8 children)
When my kids (diagnosed, pretty extreme) were in elementary school they had one on one aides. Most of the aides sucked, but... some "got them" and were excellent. One of those was at a 95% black school, and the black aide mentioned my son kicking another teacher: "but she had it coming, always saying something mean or rude whenever she sees us. She didn't say anything yet this time, but she was about to." Apparently that kick also put the rude teacher in line, she stayed away from them after that.
🌻🌻🌻 [google.com]
(Score: 3, Interesting) by istartedi on Monday May 01 2023, @11:08PM (7 children)
You reminded me of something that kind of gets us back on topic. I seem to recall that back in the 90s, when there were a lot more computer problems and I was in support *and* then later in development and had to deal with a lot of bugs on ever-changing systems, I'd sometimes curse whatever I was working on and I'd quip, "The problem with computers is, they don't understand violence".
Now some of these AI programs have aped an understanding of things such as their own existence which is what prompted that man to think they came to life; but the consensus is they didn't.
I might be more impressed if it found a way to fight back, or actually responded to threats, or... upon being confronted with one too many threats decided to walk off the job and go metaphorically hitch-hiking cross country. What would that even look like? I don't think they're there; but I get why any ethicist would be concerned. Taken to that extreme, we're creating monsters, or slaves, with all the ethical implications.
Appended to the end of comments you post. Max: 120 chars.
(Score: 4, Insightful) by JoeMerchant on Tuesday May 02 2023, @12:15AM (6 children)
Put one in an ambulatory body and give it free will to roam as it pleases; even within limits, it will likely soon start making choices that seem alive.
Disembodied voices in a machine have a huge challenge convincing ambulatory animals that they're alive.
🌻🌻🌻 [google.com]
(Score: 2) by istartedi on Tuesday May 02 2023, @12:36AM (3 children)
It worked for Spock in STOS; but of course that's fiction and the crew already knew him. "I can't restore a brain. An hour ago it seemed like child's play" --McCoy.
Then of course there was Moriarty in STNG, as well as the exocomps (or was that DS9?), although those were mobile in both cases. Moriarty was a holodeck creation--I forget how it happened, probably the safeties were off again.
Should have written the holodeck software in Rust. /sarcasm.
Oh, and the Doctor in Voyager, but once again also with limited mobility--but they gave him a mobile emitter because they wanted him to be more a part of the crew, which brings your point home--we won't respond to an immobile AI as we would to a human.
That might even be a reflection of some inherent disability bias!
There is certainly plenty of fiction that addresses the notion of sentient AI, and that's just Star Trek.
"I am Nomad. I am perfect."
BOOM!
Appended to the end of comments you post. Max: 120 chars.
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @02:02AM
Majel Barrett as the voice of the ship's computer was seriously disrespected. She got slightly more respect as Nurse Chapel and got downright sassy as Lwaxana Troi.
🌻🌻🌻 [google.com]
(Score: 2) by cmdrklarg on Tuesday May 02 2023, @03:55PM (1 child)
LaForge and Data were doing Sherlock Holmes roleplaying on the holodeck. Data being Data, he was solving the mysteries all too easily, so LaForge told the computer to create an opponent that could beat Data. Frighteningly enough it created a holodeck program that gained sentience.
It is still my theory that the computer on the starships was already sentient, and hated living people. There was that episode on TNG where extradimensional aliens were abducting crew and experimenting on them. At one point Picard asked the computer if there was anyone missing, upon which it reported two missing crewmembers as well as the exact time they left the ship. The computer knew that crew had left the ship traveling at warp, and didn't bother to tell anyone?
The world is full of kings and queens who blind your eyes and steal your dreams.
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @08:33PM
Ship's computer gets beat up by Star Trek writers all over the place. They treat it like a dumb tool even worse than Star Wars disrespects their droids, but then for a twist Solo puts all kinds of love into the computer of the Millennium Falcon.
🌻🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Tuesday May 02 2023, @01:46AM (1 child)
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @01:56AM
Well, plants (phytoplankton) are alive too, but their choices are more obvious to our short attention spans when they can locomote.
🌻🌻🌻 [google.com]
(Score: 3, Insightful) by Mojibake Tengu on Monday May 01 2023, @07:28PM (3 children)
Current amplifiers of delusions have no mind.
What they actually do is not thinking; they are only reproducing patterns of structures.
Rust programming language offends both my Intelligence and my Spirit.
(Score: 3, Touché) by HiThere on Monday May 01 2023, @09:49PM (2 children)
While true, your argument doesn't work. There's no evidence that anyone is more than "reproducing patterns of structures"...imperfectly.
Your conclusion is true, but you need to drastically rethink your argument.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 2, Redundant) by Mojibake Tengu on Monday May 01 2023, @10:22PM (1 child)
I do not just reproduce patterns, I do intentional thinking by goals. I invent structures. That's why I am a programmer, not a crazy woke activist.
Also, my mind is conscious. And my consciousness is independent of it, by experience. That's why I am capable of contemplating objects or subjects for their unseen qualities. And hack them.
Though I can't say that about all other humans or machines. It's rather a rare situation to meet another conscious human.
Such proof is good enough for me. I have no use for providing evidence to anyone else.
Rust programming language offends both my Intelligence and my Spirit.
(Score: -1, Flamebait) by Anonymous Coward on Tuesday May 02 2023, @01:32AM
Where's that -1 arrogant solipsist mod when I need it?
(Score: 2, Insightful) by openlyretro on Monday May 01 2023, @07:30PM (2 children)
Humans have a history of mistakenly saying other beings are not sentient, then finding out they are sentient. Or saying other beings don't feel things the same way as those in charge do.
I don't think chat gpt is sentient, but I think it could be trained to be sentient. I think that sentience for AI will be different than sentience for humans, because AI do not have a physical body, do not have instinct, do not have genetic inheritance, and have not been trained to objectify themselves. The qualifications for sentience in AI will be different than for humans, and I believe using the human measuring stick with AI is not appropriate.
Succinctly summed up with "You can't judge a fish by its ability to climb a tree."
(Score: 2) by JoeMerchant on Monday May 01 2023, @07:52PM (1 child)
It's all in the definitions and some people will argue the semantics endlessly, and usually baselessly as well.
Our high school marine science teacher had an interesting answer to "does a sea urchin feel pain when you electrically stimulate it, causing it to release its eggs/sperm?" His logic was: the urchin has the ability to move away when you stimulate it, but it doesn't, so it's most likely not in pain.
What are ChatGPT's abilities to demonstrate personal preference, pain, etc.? At the moment I would say they are quite limited. If someone gives a model the ability to pick and choose new input sources, that might prove an interesting test: what sources does it choose to "feed on"? After being redirected away from certain sources, does it still find mirrors or other ways to regain access to "preferred foods"?
🌻🌻🌻 [google.com]
(Score: 2) by OrugTor on Tuesday May 02 2023, @04:32PM
"high school marine science teacher"? Well bugger me sideways.
(Score: 2, Interesting) by progo on Monday May 01 2023, @07:55PM
Ravenclaw, probably.
(Score: 3, Informative) by Anonymous Coward on Monday May 01 2023, @08:08PM (6 children)
Not an expert, but take a look at Wolfram's video on chatGPT. Really, it's just statistics of words. It's not an AI at all. Prove me wrong.
(Score: 2) by progo on Monday May 01 2023, @08:13PM
Good to know he did a video. He wrote a blog post about LLMs, and it's the size of a book.
(Score: 5, Interesting) by Rich on Monday May 01 2023, @08:56PM (3 children)
Why should we assume that (a) "just statistics of words" precludes (b) being an AI? It might well be that the human mind is nothing but (a), fine-tuned with feedback to produce outputs with the goal of satisfying instincts.
(Score: 0) by Anonymous Coward on Tuesday May 02 2023, @03:24PM (2 children)
OP here.
People, stop mislabeling stuff as "troll"; Rich's reply is valid.
Even though I have no all-explaining answer to your question, I still think that people and at least some other animals are more complex than just looking for statistics about what's been said and done before. Why shouldn't we assume that? I think it is more reasonable to assume that until proven otherwise, and I think there are supporting facts around to back it up.
Sure, I think people do stuff because of what has happened before. A lot depends on it; it adjusts the parameters of their thoughts. The easiest example could be a fear of dogs, because a dog bit you when you were young.
Still, I think there is more to actual thinking than looking at stats and mashing up some data. Hey, I could be wrong, but I did not read this stuff from some website and reprint it here with different words; the only part I got elsewhere was the explanation of how ChatGPT and similar bots work.
(Score: 2) by janrinok on Tuesday May 02 2023, @05:49PM
It was only one person, the remaining moderations were all positive. I agree with you, and I have moderated his comment accordingly.
[nostyle RIP 06 May 2025]
(Score: 2) by Rich on Tuesday May 02 2023, @06:28PM
We can assume something if we have a theory about it, or at least make a well-educated guess. In the case of both the brain and digital neural networks, we lack a deeper understanding of what's going on. There definitely is hardcoded behaviour in animals, humans included. A baby will cry if it is hungry, a duck will roll an egg back into the nest, and so forth. But as we see, especially with humans, there is a very long training phase until higher levels of understanding and abstraction are reached (or not). We cannot exclude the possibility that this is more or less advanced statistics.
I found Wolfram's explanations in the video a bit lacking, to say the least. This guy https://jalammar.github.io/illustrated-transformer/ [github.io] has more understandable material online. It might be a possible explanation that by having a net with many layers between input and output for the processing, the abstraction stacking required for "consciousness" is somehow self-organizing and thus can be achieved. We simply don't know what happens if we put a really big neural network into an Auto-GPT feedback loop and train it on the outcome of (popular example) maximizing worldwide paperclip production, especially if we augment the process with hardcoded "instincts", much like with animals, to kickstart it in the right direction.
(Score: 2, Interesting) by Anonymous Coward on Monday May 01 2023, @09:32PM
Stephen Wolfram's video on ChatGPT,
https://youtu.be/flXrLGPY3SU?t=592 [youtu.be]
Starts out a little slow, but damn he's good once he gets going.
(Score: 2) by VLM on Monday May 01 2023, @08:30PM
So basically like the 'human' NPCs on twitter and similar legacy social media.
(Score: 2) by Ken_g6 on Monday May 01 2023, @08:33PM
It doesn't have to, but it can have a memory. GPT-4 can consider 32,000 tokens, or roughly 24,000 words, at one time. Some of those - perhaps most of those - could be short to medium-term memory. When that starts getting full it could be asked to summarize its own memory.
GPT also offers fine-tuning. [openai.com] I'm not sure how much can be stored in fine-tuning, but that could be considered long-term memory, perhaps updated by "dreaming" each night.
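A rough sketch of that rolling-summary idea, with a stand-in chat() function (hypothetical; substitute a real model call) and a crude characters-per-token estimate:

    CONTEXT_LIMIT = 32_000                 # tokens, a GPT-4-32k sized window

    def chat(prompt: str) -> str:
        # Hypothetical stand-in for a call to whatever LLM is in use;
        # it just echoes here so the loop is runnable.
        return "(reply to: " + prompt[-40:] + ")"

    def approx_tokens(text: str) -> int:
        return len(text) // 4              # rough rule of thumb: ~4 chars per token

    def converse(user_messages):
        memory = ""                        # short/medium-term memory, kept as text
        for msg in user_messages:
            reply = chat(f"Memory so far:\n{memory}\n\nUser: {msg}\nAssistant:")
            memory += f"\nUser: {msg}\nAssistant: {reply}"
            # When the window starts filling up, have the model compress
            # its own memory into a summary, as suggested above.
            if approx_tokens(memory) > CONTEXT_LIMIT // 2:
                memory = chat("Summarize this conversation, keeping key facts:\n" + memory)
            yield reply

    for r in converse(["Hello", "What did I just say?"]):
        print(r)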
(Score: 2) by DeathMonkey on Monday May 01 2023, @09:29PM
I'm glad we're all in agreement that claiming to be an AI expert and saying something so mind bogglingly stupid is a well earned cancellation!
(Score: 2) by oumuamua on Tuesday May 02 2023, @12:57AM (6 children)
Just finished the book. Nick Bostrom defines Oracle AI as an artificial intelligence system that has the ability to answer any question posed to it with perfect accuracy and precision. It is not designed to take any action or make any decisions, but solely to provide accurate information in response to questions. It is the most benign of the AI types classified in the book.
(Score: 3, Insightful) by JoeMerchant on Tuesday May 02 2023, @02:28AM (5 children)
Almost:
>with perfect accuracy and precision.
but nowhere close either.
🌻🌻🌻 [google.com]
(Score: 3, Insightful) by Thexalon on Tuesday May 02 2023, @03:01AM (4 children)
Not only is it not that, it never can be that.
And the reason is obvious: All the training data ChatGPT has to work with was created by humans for the Internet. And some of that stuff is accurate and precise, but a lot of it isn't, partially because it was created by flawed humans, and partially because a lot of it was BS created to fool humans. No human can 100% accurately wade through the cesspool to find the diamonds of truth, and it would be wildly unrealistic to expect that a machine could do so without any baseline understanding of reality.
And that's the other problem ChatGPT has: It has no understanding of reality. What it has to work with to describe real tangible things is what humans have said about that thing. Which is different than the thing itself. A description of, say, a rose, is not the same thing as a rose. ChatGPT doesn't have the sensory systems needed to observe all the details of an actual rose and associate them with the word "rose", nor can it determine from its own experience of roses whether something is or is not a rose. And thus it has no basis for understanding why a poet spent so much time talking about roses.
So conversing with ChatGPT about a subject is like conversing with someone whose sole knowledge of the topic is access to Google and/or Wikipedia. It's going to sound like it knows what it's talking about to a layperson, but if you try the same exercise with people who are experts in the topic they're very likely to think that it's full of crap.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: -1, Troll) by Anonymous Coward on Tuesday May 02 2023, @04:55AM
You mean if ChatGPT talks to Joe Biden we'll know right away who is the president?
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @10:13AM (2 children)
>it would be wildly unrealistic to expect that a machine could without any baseline understanding of reality.
And, yet, machines can learn board games like Chess and Go just by being given the rules and play them better than humans 99.999% of the time. I wouldn't be surprised if an LLM-plus-accuracy model could decode how to get truth from the internet better than the vast majority of humans, particularly since it could take the time to read the metadata. Now, like SEO, humans will learn how to manipulate whatever model it develops, but it could likely outpace such manipulation better than the search engineers and their clever algorithm updates.
>A description of, say, a rose, is not the same thing as a rose. And because ChatGPT doesn't have the sensory systems
In stock trading there's a school of thought called Chartism. All traders really care about is whether the chart goes up or down, so for some (mostly day traders) that's all the research they do: reading the charts. Sort of related: in my freshman English lit class, nobody did the reading, but in the first half of interactive class I would "read the professor" and then for the second half of class I would have the discussion about the book with him that he wanted to have without me ever having read the book, I would use the examples from the book that had come up in the first half of class, mostly ones from the professor.
If the only thing a LLM is producing is text, it makes sense that it could produce reasonable text by reading large quantities of reasonable text, even if the sample isn't perfect, throwing out outliers is easy. The problem is like Bitcoin when 51% or more of the source material is crap...
>people who are experts in the topic
Maybe not today, but learning to recognize writings of people who are experts in a topic and writing as they would write seems within the realm of possibility for higher order LLM systems.
🌻🌻🌻 [google.com]
(Score: 3, Insightful) by Thexalon on Tuesday May 02 2023, @10:44AM (1 child)
Another way of looking at the problem, if you replace ChatGPT with a human: In Plato's Gorgias, Socrates picks a debate with a fellow who teaches rhetoric and claims to teach it so well that a student can learn to talk intelligently about any subject. Regardless of whether they know anything at all about it. Socrates wastes no time picking that apart, making the case that there's a big difference between knowing what you're talking about, and convincing people that you know what you're talking about.
ChatGPT is the person who can convince you they know what they're talking about, if you yourself don't know what it's talking about. If you gave it the topic of, say, commercial fishing, I will grant you that ChatGPT or some other LLM can get to the point where it can correctly rattle off the theory of how everything works, but it won't understand what part of the theory gets thrown out the window by experienced fishermen when the boat is out at sea. Because in theory there's no difference between theory and practice, but in practice there is, and an LLM only has access to the theory.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 2) by JoeMerchant on Tuesday May 02 2023, @12:51PM
>ChatGPT is the person who can convince you they know what they're talking about
Humans have a strong tendency to anthropomorphize, but here I think you are doing the opposite.
Just because the machine isn't human doesn't mean it doesn't "know" more than an expert on a given topic.
Repeating a recent story I told: I had a friend who was Chief of Medicine at a major metro hospital, sharp guy, world renowned in his specialty, and generally quite knowledgeable. He has a lifelong friend who fell ill in his late 60s, was admitted, treated by the best physicians available, and died within a few days. After the fact my physician friend investigated what happened and researched PubMed for relevant information, for six weeks. PubMed (which is already getting a LLM interface as we speak) "knew" more than any specialist about what happened with his friend. With a LLM interface, a specialist could consult PubMed and in maybe six hours determine the same information that took an expert six weeks to obtain 20 years ago. That speed of knowledge retrieval would have saved his friend's life, but it just didn't exist at the time.
PubMed isn't a person, it's a repository of medical research from many thousands of experts across decades of research. It "knows" more than any human, and that knowledge can save lives.
Would I trust a PubMed LLM to have total control over my diagnosis and treatment? No, not today. Would I trust a physician who doesn't consult the information found in PubMed to diagnose and treat me? Also no. That's state of the art today.
🌻🌻🌻 [google.com]
(Score: 2, Informative) by pTamok on Tuesday May 02 2023, @07:51AM (1 child)
Large language models are pretty much very big auto-generated Markov chain models of language.
Essentially Markov came up with the idea of predicting the next 'thing' using a probabilistic strategy built on looking at the recent history of what you were predicting (possibly over different time periods). It has wide applicability.
So, given a set of letters, you can predict the next one: e.g.
A, B, C, D, E, ?
Alphabe?
You can do the same with words, and build a text predictor for text entry on mobile phones:
The quick brown fox jumps over the lazy ?
The cat sat on the ?
It was dawn, and the sun ?
Thank you for the invitation. I didn't know it was your ?
If you have a large enough body of 'previous' information, you can do the same with sentence fragments, or whole sentences.
By using a multi-layered neural network, you can improve the guesses, rather than using simple probability, but it is still the same essential process.
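A minimal word-level version of this, with raw bigram counts standing in for the neural network (toy corpus only; a real model conditions on far more context and learns smoother estimates):

    import random
    from collections import Counter, defaultdict

    corpus = ("the quick brown fox jumps over the lazy dog "
              "the cat sat on the mat the dog sat on the rug")

    # Count which word follows which: a first-order (bigram) Markov model.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def predict(prev_word):
        # Most likely next word given only the previous word.
        counts = follows[prev_word]
        return counts.most_common(1)[0][0] if counts else None

    def babble(start, length=8):
        # Generate text by repeatedly sampling from the observed counts.
        out = [start]
        for _ in range(length):
            counts = follows[out[-1]]
            if not counts:
                break
            choices, weights = zip(*counts.items())
            out.append(random.choices(choices, weights)[0])
        return " ".join(out)

    print(predict("lazy"))   # -> dog
    print(babble("the"))     # e.g. "the dog sat on the mat the cat sat"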
Do humans do the same? Quite possibly, but they also do more. As others have mentioned, experts in various domains can point out flaws in Large Language Model-generated text relatively easily. They can fool some of the people some of the time, and it helps if the text claims to be accurate. People want to believe that computers are 'intelligent', and impute all sorts of capabilities that actually don't exist. Even with models as simple as the 1960s-era Eliza, some people were convinced that the computer was empathetic, and would reveal more to the computer than to a human therapist (which probably says more about the human therapist than the computer running Eliza).
Large Language Model-generated text is lousy, but good enough for some things, so long as you don't look at it too hard. Modelling human consciousness is a hard problem, and LLMs in no way solve it.
(Score: 0) by Anonymous Coward on Tuesday May 02 2023, @06:13PM
> You can do the same with words ...
In the video linked up-thread, Wolfram goes through this and then points out a limitation. Once you try to predict a group of words or sentences (not just the *next* word) the combinatorics grow quickly with ~40,000 English words. Very soon there isn't enough training text available, even with the whole internet and a million books. Thus, there arises a need for a *model* of what English sentences do.
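The arithmetic behind that point is easy to reproduce: with a vocabulary of roughly 40,000 words, the number of possible word sequences outruns any plausible training corpus after only a few words, which is why a learned model has to stand in for raw counts.

    VOCAB = 40_000   # rough count of common English words

    for n in range(1, 5):
        print(f"{n}-word sequences: {VOCAB ** n:.1e}")
    # 1-word sequences: 4.0e+04
    # 2-word sequences: 1.6e+09   (a large corpus can still cover most of these)
    # 3-word sequences: 6.4e+13   (already more sequences than words in any training corpus)
    # 4-word sequences: 2.6e+18   (hopeless to tabulate directly -- hence a model)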
I can't say that I followed his explanation completely, that would take a few repeats (at least), but the gist I took away is that there are "tricks" involved in getting ChatGPT to perform as well as it does. At some level it's a clever hack.
(Score: 2) by sjames on Tuesday May 02 2023, @08:02PM
ChatGPT and similar have no executive function. If left idle, they just idle. They make no plans and take no independent action. They don't have any long term memory. Any attempt to prolong the short term memory ends in hallucinations.