
posted by hubie on Wednesday September 06 2023, @07:08AM

With hopes and fears about this technology running wild, it's time to agree on what it can and can't do:

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI's large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you'd find in an IQ test. "I was really shocked by its ability to solve these problems," he says. "It completely upended everything I would have predicted."

[...] Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3's ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. "Analogy is central to human reasoning," says Webb. "We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate."

What Webb's research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. [...]

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4's apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there's a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren't convinced one bit.

"There are several critical issues with current evaluation techniques for large language models," says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. "It creates the illusion that they have greater capabilities than what truly exists."

That's why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way they are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

"People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI," says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. "The issue throughout has been what it means when you test a machine like this. It doesn't mean the same thing that it means for a human."

[...] "There is a long history of developing methods to test the human mind," says Laura Weidinger, a senior research scientist at Google DeepMind. "With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that's not true: human psychology tests rely on many assumptions that may not hold for large language models."

Webb is aware of the issues he waded into. "I share the sense that these are difficult questions," he says. He notes that despite scoring better than undergrads on certain tests, GPT-3 produced absurd results on others. For example, it failed a version of an analogical reasoning test about physical objects that developmental psychologists sometimes give to kids.

[...] A lot of these tests—questions and answers—are online, says Webb: "Many of them are almost certainly in GPT-3's and GPT-4's training data, so I think we really can't conclude much of anything."

[...] The performance of large language models is brittle. Among people, it is safe to assume that someone who scores well on a test would also do well on a similar test. That's not the case with large language models: a small tweak to a test can drop an A grade to an F.

"In general, AI evaluation has not been done in such a way as to allow us to actually understand what capabilities these models have," says Lucy Cheke, a psychologist at the University of Cambridge, UK. "It's perfectly reasonable to test how well a system does at a particular task, but it's not useful to take that task and make claims about general abilities."

[...] "The assumption that cognitive or academic tests designed for humans serve as accurate measures of LLM capability stems from a tendency to anthropomorphize models and align their evaluation with human standards," says Shapira. "This assumption is misguided."

[...] The trouble is that nobody knows exactly how large language models work. Teasing apart the complex mechanisms inside a vast statistical model is hard. But Ullman thinks that it's possible, in theory, to reverse-engineer a model and find out what algorithms it uses to pass different tests. "I could more easily see myself being convinced if someone developed a technique for figuring out what these things have actually learned," he says.

"I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests."


Original Submission

 
This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by looorg on Wednesday September 06 2023, @11:09AM (2 children)

    by looorg (578) on Wednesday September 06 2023, @11:09AM (#1323366)

    > "I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests."

    If they built the model with the goal of mimicking human writing, among other skills, wouldn't it be appropriate to give it tests that in some regard measure such things in humans? It's kind of pointless, since we know they are not human and won't turn into one no matter how well they score, but it still makes some sense: we are seeing whether it can pass for human or not.

    One would assume, then, that if you train the model on taking specific tests it will eventually become good at taking those tests, sort of like humans, only a lot faster. People who take a lot of aptitude tests of any kind tend to eventually figure out how they work and get better at them. Multiply that by huge amounts of repetition and data and you have the LLM. It might not understand why it gets better (it definitely doesn't), but the results at least improve, or should, unless the data it is being fed is bad. Loop forever.

    It's a bit nonsensical to claim that we do not know how the various LLMs work. I'm sure the creators actually know. Then some of us at least know the general principles of it all and the theories of how it should work. Then I guess there is that large, very very large, group of people who think it's magic or that a new life form has been created, that we have created human-rivaling intelligence like some kind of bad sci-fi movie. Which is not true.

    Eliza could string along and drag out a conversation too, if you will, but I would not rate it as intelligent. In that regard the fundamental problem appears to be that some see artificial intelligence as actual intelligence. Just because it can string words together from large samples of data doesn't imply actual intelligence. At best it is faking it until it makes it. It is mimicking humans at specific tasks. For every good output there is a massive amount of pointless and bad ones. We normally just don't see them, unless we trick the model and then hilarity ensues. We all fondly remember Tay the little Nazi-bot and so forth.

    So in some regard we have a model built to trick or mimic us, and it's currently doing a job rated somewhere between abysmal and fair. I have yet to see any of those really good papers that blow me away. Unless one was so good I actually thought it was written by a human and it has already fooled me. But I don't think that has happened as of yet. I'm sure they'll come eventually.

    Perhaps that is the fundamental problem. Some people are just reading way too much into this. The AI overlords are not taking over, yet.

  • (Score: 0) by Anonymous Coward on Wednesday September 06 2023, @07:11PM

    by Anonymous Coward on Wednesday September 06 2023, @07:11PM (#1323488)

    May I try this as a summary to your long post?

    George Box, "All models are wrong, some models are useful."

    Until shown otherwise, I'm lumping these LLMs into the same category as every other sort of model.

  • (Score: 2) by stormreaver on Thursday September 07 2023, @11:54AM

    by stormreaver (5101) on Thursday September 07 2023, @11:54AM (#1323577)

    LLMs are like cheating students who are praised for graduating with honors. When you give test takers all the questions and answers in advance, you expect them to regurgitate the correct answers to those questions.