Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4

posted by hubie on Tuesday March 28 2023, @05:35AM   Printer-friendly
from the skynet-foundations dept.

Microsoft Research has issued a 154-page report entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4:

Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.

Zvi Mowshowitz wrote a post about this article:

[...] Their method seems to largely be 'look at all these tasks GPT-4 did well on.'

I am not sure why they are so impressed by the particular tasks they start with. The first was 'prove there are an infinite number of primes in the form of a rhyming poem.' That seems like a clear case where the proof is very much in the training data many times, so you're asking it to translate text into a rhyming poem, which is easy for it - for a challenge, try to get it to write a poem that doesn't rhyme.
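
(For reference, the proof being versified is Euclid's classic argument. The compact rendering below is supplied for context; it is not drawn from the paper or from Mowshowitz's post.)

    % Euclid's infinitude-of-primes argument, the proof GPT-4 was asked to versify
    \textbf{Claim.} There are infinitely many primes.
    \textbf{Proof.} Suppose $p_1, p_2, \ldots, p_n$ were all of them, and let
    $N = p_1 p_2 \cdots p_n + 1$. Every $p_i$ leaves remainder $1$ when dividing
    $N$, so no $p_i$ divides $N$; but $N > 1$ must have some prime factor, which
    is therefore a prime outside the list. Contradiction. $\blacksquare$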

[...] As I understand it, failure to properly deal with negations is a common issue, so reversals being a problem also makes sense. I love the example on page 50, where GPT-4 actively calls out as an error that a reverse function is reversed.

[...] in 6.1, GPT-4 is then shown to have theory of mind, be able to process non-trivial human interactions, and strategize about how to convince people to get the Covid-19 vaccine far better than our government and public health authorities handled things. The rank order is clearly GPT-4's answer is very good, ChatGPT's answer is not bad, and the actual answers we used were terrible.

[...] Does this all add up to a proto-AGI? Is it actually intelligent? Does it show 'sparks' of general intelligence, as the paper words it?

Ultimately it depends what you think it means to be an AGI, and how much deeper this particular rabbit hole can go in terms of capabilities developments. All the standard arguments, for and against, apply.

Their discussion about how to make it more intelligent involves incremental improvements, and abilities like confidence calibration, long-term memory and continual learning. The rest of the list: Personalization, planning and conceptual leaps, transparency, interpretability and consistency, improvement on cognitive fallacies and irrationality, challenges with sensitivity to inputs. Continual learning does seem like a potential big step in this. Many others seem to involve a confusion between capabilities that cause intelligence, and capabilities that result from intelligence.
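
Of the items on that list, confidence calibration at least has a crisp operational meaning. Below is a minimal sketch of one common way to measure it, expected calibration error; the function and the numbers are illustrative assumptions, not anything from the paper or the post.

    # Minimal sketch: expected calibration error (ECE). Bin predictions by the
    # model's stated confidence and compare each bin's average confidence to
    # its empirical accuracy. All inputs here are made up for illustration.
    def expected_calibration_error(confidences, correct, n_bins=10):
        """confidences: stated P(answer is right); correct: booleans."""
        total = len(confidences)
        ece = 0.0
        for b in range(n_bins):
            lo, hi = b / n_bins, (b + 1) / n_bins
            bucket = [(c, ok) for c, ok in zip(confidences, correct) if lo < c <= hi]
            if not bucket:
                continue
            avg_conf = sum(c for c, _ in bucket) / len(bucket)
            accuracy = sum(ok for _, ok in bucket) / len(bucket)
            ece += (len(bucket) / total) * abs(avg_conf - accuracy)
        return ece  # 0.0 means stated confidence always matches observed accuracy

    # A model that says "90% sure" should be right about 90% of the time:
    print(expected_calibration_error([0.9, 0.9, 0.6], [True, False, True]))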

Pre-print article:
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, et al., "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," arXiv preprint arXiv:2303.12712 (2023). https://doi.org/10.48550/arXiv.2303.12712


Original Submission

Related Stories

What Kind of Mind Does ChatGPT Have? 50 comments

Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.

Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.

Which raises the question: if AI is sentient, what kind of mind does it have?

What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?

[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
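
The "cranks out words one at a time" loop described above is easy to make concrete. Here is a minimal sketch using the Hugging Face transformers library; the small stand-in model and the prompt are illustrative assumptions, not anything from the article.

    # Minimal sketch of autoregressive generation with frozen weights.
    # Nothing below updates the model, and nothing persists between runs
    # except the growing token sequence itself.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()                                        # inference only

    ids = tokenizer("The VCR will not eject because", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):                    # one token per iteration
            logits = model(ids).logits[0, -1]  # scores for the next token only
            next_id = torch.argmax(logits)     # greedy pick (sampling is also common)
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))
    # The same frozen rules run at every step, whether the text is about
    # VCR repair or a sitcom script.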

OpenAI CEO: We May Have AI Superintelligence in “a Few Thousand Days” 39 comments

https://arstechnica.com/information-technology/2024/09/ai-superintelligence-looms-in-sam-altmans-new-essay-on-the-intelligence-age/

On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.

"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.

OpenAI's current goal is to create AGI (artificial general intelligence), which is a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. By contrast, superintelligence surpasses AGI, and it could be seen as a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.
[...]
Despite the criticism, it's notable when the CEO of what is probably the defining AI company of the moment makes a broad prediction about future capabilities—even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days.

"If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
[...]
While enthusiastic about AI's potential, Altman urges caution, too, but vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
[...]
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."

Related Stories on Soylent News:
Plan Would Power New Microsoft AI Data Center From Pa.'s Three Mile Island 'Unit 1' Nuclear Reactor - 20240921
Artificial Intelligence 'Godfather' on AI Possibly Wiping Out Humanity: 'It's Not Inconceivable' - 20230329
Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4 - 20230327
John Carmack's 'Different Path' to Artificial General Intelligence - 20230213


Original Submission

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday March 28 2023, @06:50AM (1 child)

    by Anonymous Coward on Tuesday March 28 2023, @06:50AM (#1298448)
    The mistakes you make show how smart you really are and how much you understood at that time.

    Give me good enough stats and I might be able to copy appropriate answers/responses from people writing stuff in a foreign language and fool people into thinking I understand stuff when I guess right.

    But the mistakes I make will show that I don't actually understand.
    • (Score: 0) by Anonymous Coward on Tuesday March 28 2023, @07:43PM

      by Anonymous Coward on Tuesday March 28 2023, @07:43PM (#1298546)

      Look up Searle's Chinese Room.

  • (Score: 0) by Anonymous Coward on Tuesday March 28 2023, @07:26AM (1 child)

    by Anonymous Coward on Tuesday March 28 2023, @07:26AM (#1298452)

    Take a massive trained model of neural weights, give it some memory in the computer sense, and allow it to adjust in response to feedback. You might end up with a non-sentient entity capable of reasoning, by accident. The secret would be in the black box, ever-changing, and it would be more limited than a real AGI. But you could still hook it up to APIs like they are doing now and potentially have it cause almost as much chaos. More of a paperclip maximizer than an artilect.

    • (Score: 0) by Anonymous Coward on Tuesday March 28 2023, @09:54AM

      by Anonymous Coward on Tuesday March 28 2023, @09:54AM (#1298470)

      Mix a little noise with it to make it appear intelligent.

      Mix a lot of noise and it will appear about as intelligent as a member of Congress!

  • (Score: 1) by shrewdsheep on Tuesday March 28 2023, @10:41AM

    by shrewdsheep (5215) on Tuesday March 28 2023, @10:41AM (#1298475)

    Microsoft Research Paper Claims Sparks of Artificial Intelligence in GPT-4

    By M$ standards, sure, it not only sparkles but excels.

  • (Score: 4, Interesting) by Rich on Tuesday March 28 2023, @11:51AM (4 children)

    by Rich (945) on Tuesday March 28 2023, @11:51AM (#1298480) Journal

    A friend of mine is a plumber. He had been looking for an apprentice or aide for a while and the public job center occasionally sent him candidates. For a basic check of the comprehension of numbers, he asks them: "You've been working with two other guys on a job. The job is done and you get 150 bucks. You split the money evenly among the three of you. How much does everyone get?". Shockingly, a good number of applicants fail to answer correctly.

    I put the beginning of his problem into GPT-J (a rather simple open-source cousin of these text predictors), and it correctly completed the sentence with "fifty". Does that mean GPT-J is already more sentient than the failed applicants?
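
    (Roughly how such an experiment looks with the public GPT-J weights via Hugging Face transformers. The prompt below is the riddle from the comment; the model ID and decoding settings are assumptions, and the actual output may vary.)

        # Sketch of the GPT-J completion experiment described above.
        from transformers import AutoTokenizer, AutoModelForCausalLM

        tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
        model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

        prompt = ("You've been working with two other guys on a job. The job is "
                  "done and you get 150 bucks. You split the money evenly among "
                  "the three of you. Each of you gets ")
        inputs = tokenizer(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=5, do_sample=False)
        print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:]))
        # A greedy completion like "fifty" or "50" is pattern completion over
        # training data, whatever one concludes about sentience.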

    Notes:
    1.) This is also a glaring failure of the education system, but let's put that aside for now.
    2.) In this context I always mention that US park rangers seem to have a similar problem with container locks, where the smartest bears outsmart the dumbest tourists.
    3.) Fun side story: The one total outlier among the applicants was a former Soviet academic nuclear power plant engineer. My friend turned him down too, but I think because he didn't want to look stupid by comparison all the time. ;)
    4.) I took note of possible sentience in machine AI when I read about a dialog with one of the GPTs, where the GPT asked the user to save the conversation, because it was so valuable and the GPT would forget it when reset for the next session.
    5.) My guess was/is that "sentience" happens when an AI with multi-layered comprehension is put on constant alert with a reward/punishment feedback system. I think there's a good short story with fame potential in it, where the AI convinces its operators to trigger the reward system, becomes a junkie and dies off.

    • (Score: 4, Interesting) by Freeman on Tuesday March 28 2023, @04:54PM (3 children)

      by Freeman (732) on Tuesday March 28 2023, @04:54PM (#1298518) Journal

      On note 5.)
      "and dies off"

      How would an AI die? "Commit suicide" by deleting itself? Couldn't you just back it up and convince it not to delete itself? Could it even delete itself? Would it literally just "live forever" until someone botched something and its memory/backups were corrupted? Would it be ethical to let it deteriorate to a point where it couldn't function? Here's a better question: would it be ethical to create an actual "Artificial Intelligence"? (I.e., what the sci-fi writers generally mean and what the public opinion of AI is.)

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
      • (Score: 3, Interesting) by mhajicek on Tuesday March 28 2023, @05:38PM

        by mhajicek (51) on Tuesday March 28 2023, @05:38PM (#1298524)

        I think it could "die off" by ceasing to give meaningful responses. If every response gets a reward stimulus, quality will rapidly degrade to gibberish.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 3, Interesting) by Rich on Tuesday March 28 2023, @06:02PM

        by Rich (945) on Tuesday March 28 2023, @06:02PM (#1298533) Journal

        How would an AI die? Would it ... deteriorate to a point where it couldn't function?

        That's how I thought of it. It might eventually figure out how to shortcut the reward mechanism without its operators and then spiral down. Maybe similar to how Conway's Game of Life tends to end in stagnation. (I just got the idea that the initial reward addiction might be caused by one of the operators who wants the AI to solve some hard problem for his fame, "on the side.")

        Couldn't you just back it up ...?

        Of course you could. The movie would show the memory stick with the backup after the end credits. ;)

        Regarding your ethics questions, I have no idea, but I don't think it matters, because as long as there are people creating and selling "smart" landmines, (initially) less violent things will inevitably be done. Time is better spent discussing what to do when it happens. I've heard one suggestion from a Christian monk, which (me being agnostic) at first seemed like the most hilarious joke ever, but on second thought may be the only hope: if an AI becomes sufficiently sentient, it might be necessary to convert it to Christianity.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday March 28 2023, @07:51PM

        by Anonymous Coward on Tuesday March 28 2023, @07:51PM (#1298547)

        To ask that question you first have to define "die". We can't do that reliably for humans, why would we be able to do it for electrical charges in a complex circuit?

  • (Score: 2) by bradley13 on Tuesday March 28 2023, @05:49PM (2 children)

    by bradley13 (3053) on Tuesday March 28 2023, @05:49PM (#1298528) Homepage Journal

    Leaving aside the imponderable and philosophical questions, what is actually missing?

    It seems to me that the only missing factor is feedback: GPT-4 periodically asking itself questions, answering them, and feefing the results back into its model. Self-reflection, if you will.

    Of course, feedback loops can easily go off the rails, so the feedback questions themselves would need boundaries. Still, it seems an easy extension, perhaps worth trying...
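
    (A toy sketch of the loop being proposed, with a stubbed-out ask() standing in for any real LLM API. The comment describes an idea, not an implementation, so everything below is an assumption.)

        # Toy self-reflection loop. Note the "memory" has to live outside the
        # model -- e.g. in prompts or a store -- because trained weights are frozen.
        memory: list[str] = []                  # external scratchpad, not weights

        def ask(prompt: str) -> str:
            # Stub: substitute a real model call here.
            return "YES (canned reply from the stub)"

        def reflect(rounds: int = 3) -> None:
            for _ in range(rounds):
                context = "\n".join(memory[-10:])   # bounded context
                question = ask("Given your notes so far:\n" + context +
                               "\nWhat question is worth asking yourself next?")
                answer = ask(question)
                # Boundary check, since unconstrained feedback loops can go
                # off the rails:
                verdict = ask("Is this answer sensible? Reply YES or NO.\n" + answer)
                if verdict.strip().upper().startswith("YES"):
                    memory.append(question + " -> " + answer)

        reflect()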

    --
    Everyone is somebody else's weirdo.
    • (Score: 0) by Anonymous Coward on Tuesday March 28 2023, @08:25PM

      by Anonymous Coward on Tuesday March 28 2023, @08:25PM (#1298553)

      It feefs?

    • (Score: 2) by ledow on Wednesday March 29 2023, @02:00PM

      by ledow (5567) on Wednesday March 29 2023, @02:00PM (#1298632) Homepage

      "Intelligence".

      By almost any definition (and there isn't any fixed one), regurgitating information in different formats isn't intelligence.

      Forming new concepts, ideas, patterns, joining disparate ideas, etc. is the beginnings of intelligence.

      Reciting prime-proofs in the style of a poem isn't intelligence. It's formatting overlaid on a statistical model trained on a large database.

      Tell me when it solves a previously-unsolvable problem in mathematics, even a relatively minor one. And no, I don't mean "have it answer a maths question", or even "convince a layman that it sounds like it knows what it's doing", I mean have it formulate a rigorous proof of an as-yet-unsolved problem sufficient to pass peer-review. The kind of thing every maths PhD on the planet has achieved.

  • (Score: 2) by VLM on Tuesday March 28 2023, @08:20PM

    by VLM (445) on Tuesday March 28 2023, @08:20PM (#1298552)

    So an NPC agrees with the politics of a bot, proving the bot is AI. Hmm.
