Microsoft Research Paper Claims Sparks of Artificial General Intelligence in GPT-4

Accepted submission by guest reader at 2023-03-26 07:01:51 from the skynet-foundations dept.
Software

Microsoft Research has released a 154-page report entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4 [arxiv.org]:

Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.

Zvi Mowshowitz has written a post about the paper [substack.com]:

[...]Their method seems to largely be ‘look at all these tasks GPT-4 did well on.’

I am not sure why they are so impressed by the particular tasks they start with. The first was ‘prove there are an infinite number of primes in the form of a rhyming poem.’ That seems like a clear case where the proof is very much in the training data many times, so you’re asking it to translate text into a rhyming poem, which is easy for it - for a challenge, try to get it to write a poem that doesn’t rhyme.
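For reference, the underlying argument the prompt asks GPT-4 to versify is Euclid's classic proof, which takes only a couple of lines (an editorial sketch, not the paper's poem): suppose p_1, ..., p_n were all the primes and let N = p_1 p_2 ... p_n + 1; no p_i divides N, since each leaves remainder 1, so N has a prime factor outside the list, and therefore no finite list of primes is complete.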

[...]As I understand it, failure to properly deal with negations is a common issue, so reversals being a problem also makes sense. I love the example on page 50, where GPT-4 actively calls out as an error that a reverse function is reversed.
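The page-50 example itself isn't reproduced here, but the flavor of the trap is easy to sketch (a hypothetical illustration, not the paper's actual code): a function whose name promises one thing while the body deliberately does the opposite, which a literal-minded reviewer flags as a bug rather than recognizing as intentional.

def reverse_items(items):
    # Despite the name, this deliberately returns the items in their
    # original order: the mismatch between name and behavior is the trap.
    return list(items)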

[...]in 6.1, GPT-4 is then shown to have theory of mind, be able to process non-trivial human interactions, and strategize about how to convince people to get the Covid-19 vaccine far better than our government and public health authorities handled things. The rank order is clearly GPT-4’s answer is very good, ChatGPT’s answer is not bad, and the actual answers we used were terrible.

[...]Does this all add up to a proto-AGI? Is it actually intelligent? Does it show ‘sparks’ of general intelligence, as the paper words it?

Ultimately it depends what you think it means to be an AGI, and how much deeper this particular rabbit hole can go in terms of capabilities developments. All the standard arguments, for and against, apply.

Their discussion about how to make it more intelligent involves incremental improvements, and abilities like confidence calibration, long-term memory and continual learning. The rest of the list: Personalization, planning and conceptual leaps, transparency, interpretability and consistency, improvement on cognitive fallacies and irrationality, challenges with sensitivity to inputs. Continual learning does seem like a potential big step in this. Many others seem to involve a confusion between capabilities that cause intelligence, and capabilities that result from intelligence.
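To pin down one of those terms: "confidence calibration" means the model's stated confidence should match how often it is actually right. A minimal sketch of expected calibration error, one standard way to measure that gap (the function name and binning scheme below are illustrative, not from the paper):

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by stated confidence, then compare the average
    confidence in each bin to the empirical accuracy in that bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# A model that reports 90% confidence but is right only 60% of the time
# shows a calibration gap of about 0.3.
print(expected_calibration_error([0.9] * 5, [1, 1, 1, 0, 0]))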

Pre-print article:
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang, "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," 2023, arXiv:2303.12712, https://doi.org/10.48550/arXiv.2303.12712 [doi.org]


Original Submission