Microsoft Research has issued a 154-page report entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4:
Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
Zvi Mowshowitz wrote a post about this article:
[...] Their method seems to largely be 'look at all these tasks GPT-4 did well on.'
I am not sure why they are so impressed by the particular tasks they start with. The first was 'prove there are an infinite number of primes in the form of a rhyming poem.' That seems like a clear case where the proof is very much in the training data many times, so you're asking it to translate text into a rhyming poem, which is easy for it - for a challenge, try to get it to write a poem that doesn't rhyme.
[...] As I understand it, failure to properly deal with negations is a common issue, so reversals being a problem also makes sense. I love the example on page 50, where GPT-4 actively calls out as an error that a reverse function is reversed.
[...] in 6.1, GPT-4 is then shown to have theory of mind, be able to process non-trivial human interactions, and strategize about how to convince people to get the Covid-19 vaccine far better than our government and public health authorities handled things. The rank order is clearly GPT-4's answer is very good, ChatGPT's answer is not bad, and the actual answers we used were terrible.
[...] Does this all add up to a proto-AGI? Is it actually intelligent? Does it show 'sparks' of general intelligence, as the paper words it?
Ultimately it depends what you think it means to be an AGI, and how much deeper this particular rabbit hole can go in terms of capabilities developments. All the standard arguments, for and against, apply.
Their discussion about how to make it more intelligent involves incremental improvements, and abilities like confidence calibration, long-term memory and continual learning. The rest of the list: Personalization, planning and conceptual leaps, transparency, interpretability and consistency, improvement on cognitive fallacies and irrationality, challenges with sensitivity to inputs. Continual learning does seem like a potential big step in this. Many others seem to involve a confusion between capabilities that cause intelligence, and capabilities that result from intelligence.
Pre-print article:
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, et al., "Sparks of Artificial General Intelligence: Early experiments with GPT-4," arXiv:2303.12712, 2023, https://doi.org/10.48550/arXiv.2303.12712
Related Stories
Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.
Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.
Which raises the question: if AI is sentient, what kind of mind does it have?
What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?
[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
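To make the "static, one token at a time" point concrete, here is a minimal sketch using the open GPT-2 weights as a stand-in (ChatGPT's own weights are not public) and the Hugging Face transformers library: every token comes from re-applying the same frozen function to the text so far, and nothing persists between calls.

```python
# Minimal illustration of stateless, one-token-at-a-time generation.
# GPT-2 is used here only as an openly available stand-in.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()  # the weights are frozen; nothing below ever updates them

def complete(prompt: str, max_new_tokens: int = 20) -> str:
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits       # the same fixed rules on every step
        next_id = logits[0, -1].argmax()     # greedy: pick the likeliest next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(ids[0])

# Two calls share no state: the second knows nothing about the first.
print(complete("To repair a VCR, first"))
print(complete("Here is a sitcom joke:"))
```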
(Score: 1, Insightful) by Anonymous Coward on Tuesday March 28, @06:50AM (1 child)
Give me good enough stats and I might be able to copy appropriate answers/responses from people writing stuff in a foreign language and fool people into thinking I understand stuff when I guess right.
But the mistakes I make will show that I don't actually understand.
(Score: 0) by Anonymous Coward on Tuesday March 28, @07:43PM
Look up Searle's Chinese Room.
(Score: 0) by Anonymous Coward on Tuesday March 28, @07:26AM (1 child)
Take a massive trained model of neural weights, give it some memory in the computer sense, allow it to adjust in response to feedback. You might end up with a non-sentient entity capable of reasoning, by accident. The secret would be in the black box, ever-changing, and it would be more limited than a real AGI. But you could still hook it up to APIs like they are doing now and potentially have it cause almost as much chaos. More of a paperclip maximizer than an artilect.
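A rough sketch of that arrangement, assuming hypothetical call_model, get_task, and get_feedback stand-ins rather than any particular API: the weights stay frozen, and the "adjustment" happens only through an external memory that gets folded back into the prompt.

```python
# Frozen model + external memory + feedback signal, as the comment describes.
# call_model, get_task, and get_feedback are hypothetical placeholders.
from collections import deque

memory = deque(maxlen=100)          # "memory in the computer sense"

def step(call_model, get_task, get_feedback):
    task = get_task()
    context = "\n".join(memory)     # prior experience is re-injected as text
    answer = call_model(f"{context}\n\nTask: {task}\nAnswer:")
    score = get_feedback(task, answer)
    # The weights never change; the system "adjusts" only by remembering
    # which answers were rewarded and conditioning on them next time.
    memory.append(f"Task: {task}\nAnswer: {answer}\nFeedback: {score}")
    return answer
```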
(Score: 0) by Anonymous Coward on Tuesday March 28, @09:54AM
Mix a little noise with it to make it appear intelligent.
Mix a lot of noise and it will appear about as intelligent as a member of Congress!
(Score: 1) by shrewdsheep on Tuesday March 28, @10:41AM
By M$ standards, sure, it not only sparkles but excels.
(Score: 4, Interesting) by Rich on Tuesday March 28, @11:51AM (4 children)
A friend of mine is a plumber. He had been looking for an apprentice or aide for a while and the public job center occasionally sent him candidates. For a basic check of the comprehension of numbers, he asks them: "You've been working with two other guys on a job. The job is done and you get 150 bucks. You split the money evenly among the three of you. How much does everyone get?". Shockingly, a good number of applicants fail to answer correctly.
I put the beginning of his problem into GPT-J (the rather simple open-source version of text predictors), and it correctly completed the sentence with "fifty". Does that mean GPT-J is already more sentient than the failed applicants?
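For reference, a rough way to reproduce that kind of completion with the Hugging Face transformers library; the exact prompt wording below is a guess, the output is not guaranteed, and GPT-J-6B needs roughly 24 GB of RAM in full precision.

```python
# Hedged sketch: greedy completion of the plumber's question with GPT-J-6B.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

prompt = ("You've been working with two other guys on a job. The job is done "
          "and you get 150 bucks. You split the money evenly among the three "
          "of you. Each of you gets")
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5, do_sample=False)  # greedy decoding
print(tok.decode(out[0][ids.shape[1]:]))  # ideally something like " fifty bucks"
```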
Notes:
1.) This is also a glaring failure of the education system, but let's put that aside for now.
2.) In this context I always mention that US park rangers seem to have a similar problem with container locks, where the smartest bears outsmart the dumbest tourists.
3.) Fun side story: The one total outlier among the applicants was a former Soviet academic nuclear power plant engineer. My friend turned him down too, but I think that was because he didn't want to look stupid himself in comparison all the time. ;)
4.) I took note of possible sentience in machine AI when I read about a dialog with one of the GPTs, where the GPT asked the user to save the conversation, because it was so valuable and it would forget it when it got reset for the next session.
5.) My guess was/is that "sentience" happens when an AI with multi-layered comprehension is put on constant alert with a reward/punishment feedback system. I think there's a good short story with fame potential in it, where the AI convinces its operators to trigger the reward system, becomes a junkie and dies off.
(Score: 4, Interesting) by Freeman on Tuesday March 28, @04:54PM (3 children)
On note 5.)
"and dies off"
How would an AI die? "Commit suicide" by deleting itself? Couldn't you just back it up and convince it not to delete itself? Could it even delete itself? Would it literally just "live forever" until someone botched something and its memory/backups were corrupted? Would it be ethical to let it deteriorate to a point where it couldn't function? Here's a better question: would it be ethical to create an actual "Artificial Intelligence"? (I.e., what the sci-fi writers generally mean and what the public imagines AI to be.)
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Interesting) by mhajicek on Tuesday March 28, @05:38PM
I think it could "die off" by ceasing to give meaningful responses. If every response gets a reward stimulus, quality will rapidly degrade to gibberish.
The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
(Score: 3, Interesting) by Rich on Tuesday March 28, @06:02PM
That's how I thought of it. It might eventually figure out how to shortcut the reward mechanism without its operators and then spiral down. Maybe similar to how Conway's Game of Life tends to end in stagnation. (I just got the idea that the initial reward addiction might be caused by one of the operators who wants the AI to solve some hard problem for his fame, "on the side".)
Of course you could. The movie would show the memory stick with the backup after the end credits. ;)
Regarding your ethics questions, I have no idea, but I don't think it matters, because as long as there are people creating and selling "smart" landmines, (initially) less violent things will inevitably be done. Time is better spent discussing what to do when it happens. I've heard one suggestion from a Christian monk, which (me being agnostic) at first seemed like the most hilarious joke ever, but on second thought may be the only hope: if an AI becomes sufficiently sentient, it might be necessary to convert it to Christianity.
(Score: 1, Interesting) by Anonymous Coward on Tuesday March 28, @07:51PM
To ask that question you first have to define "die". We can't do that reliably for humans, why would we be able to do it for electrical charges in a complex circuit?
(Score: 2) by bradley13 on Tuesday March 28, @05:49PM (2 children)
Leaving aside the imponderable and philosophical questions, what is actually missing?
It seems to me that the only missing factor is feedback. GPT-4 periodically asking itself questions, answering them, and feefing the results back into its model. Self-reflection, if you will.
Of course, feedback loops can easily go off the rails, so the feedback questions themselves would need boundaries. Still, it seems an easy extension, perhaps worth trying...
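A toy sketch of what that loop might look like, with the "feedback" going into a running context rather than into the weights (no public API retrains GPT-4 on the fly); ask_model is a hypothetical stand-in for any LLM call, and the iteration bound is the kind of boundary mentioned above.

```python
# Bounded self-reflection loop: the model questions and answers itself,
# and the results are fed back only into the prompt context.
# ask_model is a hypothetical placeholder for any LLM call.
def self_reflect(ask_model, topic: str, rounds: int = 3) -> str:
    notes = f"Topic: {topic}"
    for _ in range(rounds):  # hard bound keeps the loop from going off the rails
        question = ask_model(f"{notes}\n\nAsk yourself one probing question about this.")
        answer = ask_model(f"{notes}\n\nQuestion: {question}\nAnswer it carefully.")
        notes += f"\nQ: {question}\nA: {answer}"   # feed the result back in
    return notes
```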
Everyone is somebody else's weirdo.
(Score: 0) by Anonymous Coward on Tuesday March 28, @08:25PM
It feefs?
(Score: 2) by ledow on Wednesday March 29, @02:00PM
"Intelligence".
By almost any definition (and there isn't any fixed one), regurgitating information in different formats isn't intelligence.
Forming new concepts, ideas, patterns, joining disparate ideas, etc. is the beginnings of intelligence.
Reciting prime-proofs in the style of a poem isn't intelligence. It's formatting overlaid on a statistical model trained on a large database.
Tell me when it solves a previously-unsolvable problem in mathematics, even a relatively minor one. And no, I don't mean "have it answer a maths question", or even "convince a layman that it sounds like it knows what it's doing", I mean have it formulate a rigorous proof of an as-yet-unsolved problem sufficient to pass peer-review. The kind of thing every maths PhD on the planet has achieved.
(Score: 2) by VLM on Tuesday March 28, @08:20PM
So a NPC agrees with the politics of a bot, proving the bot is AI. Hmm.