posted by hubie on Wednesday September 06 2023, @07:08AM

With hopes and fears about this technology running wild, it's time to agree on what it can and can't do:

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI's large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you'd find in an IQ test. "I was really shocked by its ability to solve these problems," he says. "It completely upended everything I would have predicted."
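
For readers who have not seen next-word prediction up close, here is a minimal sketch of the bare mechanism the article is describing. It uses the small, openly released GPT-2 model via the Hugging Face transformers library as a stand-in (GPT-3 itself is not downloadable); the model choice and prompt are illustrative assumptions, not details from the article.

# Toy illustration of "trained only to predict the next word":
# greedy next-token prediction with GPT-2 as a small stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The opposite of hot is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

next_token_id = logits[0, -1].argmax()       # most probable next token
print(prompt + tokenizer.decode(next_token_id.item()))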

[...] Last month Webb and his colleagues published an article in Nature, in which they describe GPT-3's ability to pass a variety of tests devised to assess the use of analogy to solve problems (known as analogical reasoning). On some of those tests GPT-3 scored better than a group of undergrads. "Analogy is central to human reasoning," says Webb. "We think of it as being one of the major things that any kind of machine intelligence would need to demonstrate."

What Webb's research highlights is only the latest in a long string of remarkable tricks pulled off by large language models. [...]

And multiple researchers claim to have shown that large language models can pass tests designed to identify certain cognitive abilities in humans, from chain-of-thought reasoning (working through a problem step by step) to theory of mind (guessing what other people are thinking).
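
Chain-of-thought prompting, as usually demonstrated, amounts to asking the same question twice: once directly, and once with a nudge to lay out intermediate steps before answering. The sketch below illustrates that contrast; the ask() helper is hypothetical and stands in for whichever model API you happen to have access to.

# Sketch of direct prompting vs. chain-of-thought prompting.
# `ask` is a hypothetical helper: send a prompt to some LLM, get text back.
def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to whatever LLM you have access to")

question = ("A juggler has 16 balls. Half are golf balls, and half of the "
            "golf balls are blue. How many blue golf balls are there?")

# Direct prompting: just ask for the answer.
direct_answer = ask(question)

# Chain-of-thought prompting: same question, but the model is nudged
# to work through the problem step by step before answering.
cot_answer = ask(question + "\nLet's think step by step.")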

These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. Geoffrey Hinton has called out GPT-4's apparent ability to string together thoughts as one reason he is now scared of the technology he helped create.

But there's a problem: there is little agreement on what those results really mean. Some people are dazzled by what they see as glimmers of human-like intelligence; others aren't convinced one bit.

"There are several critical issues with current evaluation techniques for large language models," says Natalie Shapira, a computer scientist at Bar-Ilan University in Ramat Gan, Israel. "It creates the illusion that they have greater capabilities than what truly exists."

That's why a growing number of researchers—computer scientists, cognitive scientists, neuroscientists, linguists—want to overhaul the way these models are assessed, calling for more rigorous and exhaustive evaluation. Some think that the practice of scoring machines on human tests is wrongheaded, period, and should be ditched.

"People have been giving human intelligence tests—IQ tests and so on—to machines since the very beginning of AI," says Melanie Mitchell, an artificial-intelligence researcher at the Santa Fe Institute in New Mexico. "The issue throughout has been what it means when you test a machine like this. It doesn't mean the same thing that it means for a human."

[...] "There is a long history of developing methods to test the human mind," says Laura Weidinger, a senior research scientist at Google DeepMind. "With large language models producing text that seems so human-like, it is tempting to assume that human psychology tests will be useful for evaluating them. But that's not true: human psychology tests rely on many assumptions that may not hold for large language models."

Webb is aware of the issues he waded into. "I share the sense that these are difficult questions," he says. He notes that despite scoring better than undergrads on certain tests, GPT-3 produced absurd results on others. For example, it failed a version of an analogical reasoning test about physical objects that developmental psychologists sometimes give to kids.

[...] A lot of these tests—questions and answers—are online, says Webb: "Many of them are almost certainly in GPT-3's and GPT-4's training data, so I think we really can't conclude much of anything."
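
One rough way researchers probe for this kind of contamination is to look for long n-gram overlaps between test items and the training corpus. The sketch below illustrates the idea under the large assumption that you can inspect the training data at all, which outside researchers cannot for GPT-3 or GPT-4; the 13-token window and whitespace tokenization are illustrative choices, not a standard.

# Rough contamination check: does any long n-gram from a test item
# also appear verbatim in a training document?
def ngrams(text, n=13):
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_contaminated(test_item, training_docs, n=13):
    item_grams = ngrams(test_item, n)
    return any(item_grams & ngrams(doc, n) for doc in training_docs)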

[...] The performance of large language models is brittle. Among people, it is safe to assume that someone who scores well on a test would also do well on a similar test. That's not the case with large language models: a small tweak to a test can drop an A grade to an F.
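
A concrete way to expose that brittleness is to score a model on a test, then on lightly perturbed versions of the same items, and compare the two numbers. The sketch below is a hedged illustration of that idea; model_answers and the rephrase() perturbation are placeholders, and real perturbation suites reword questions or reorder answer options in many more ways.

# Score a model on original and perturbed test items and report the gap.
def accuracy(model_answers, items):
    correct = sum(1 for item in items
                  if model_answers(item["question"]) == item["answer"])
    return correct / len(items)

def rephrase(item):
    # Hypothetical perturbation: same content, superficially different wording.
    return {"question": "In other words: " + item["question"],
            "answer": item["answer"]}

def brittleness_report(model_answers, items):
    original = accuracy(model_answers, items)
    perturbed = accuracy(model_answers, [rephrase(i) for i in items])
    return {"original": original, "perturbed": perturbed,
            "drop": original - perturbed}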

"In general, AI evaluation has not been done in such a way as to allow us to actually understand what capabilities these models have," says Lucy Cheke, a psychologist at the University of Cambridge, UK. "It's perfectly reasonable to test how well a system does at a particular task, but it's not useful to take that task and make claims about general abilities."

[...] "The assumption that cognitive or academic tests designed for humans serve as accurate measures of LLM capability stems from a tendency to anthropomorphize models and align their evaluation with human standards," says Shapira. "This assumption is misguided."

[...] The trouble is that nobody knows exactly how large language models work. Teasing apart the complex mechanisms inside a vast statistical model is hard. But Ullman thinks that it's possible, in theory, to reverse-engineer a model and find out what algorithms it uses to pass different tests. "I could more easily see myself being convinced if someone developed a technique for figuring out what these things have actually learned," he says.
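
One family of techniques that gestures in this direction is probing: training a simple classifier on a model's internal activations to ask whether some concept is linearly decodable from them. The sketch below is not Ullman's proposal, just an illustration of the general approach, using GPT-2 via transformers plus scikit-learn; the example sentences, labels, and layer choice are placeholders.

# Linear probe: can a concept be read off a model's hidden activations?
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

def sentence_vector(text, layer=6):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]   # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)                # average over tokens

# Tiny illustrative dataset: label 1 = physically typical, 0 = odd.
texts = ["The cup is on the table.", "The table balances on top of the cup.",
         "The book rests on the shelf.", "The shelf rests on the book."]
labels = [1, 0, 1, 0]

features = torch.stack([sentence_vector(t) for t in texts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(features, labels)
# If a probe like this generalises to held-out sentences, the concept is
# at least linearly decodable from that layer's activations.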

"I think that the fundamental problem is that we keep focusing on test results rather than how you pass the tests."


  • (Score: 5, Insightful) by Thexalon on Wednesday September 06 2023, @10:46AM (9 children)


    LLMs, generally trained on text they find on the web, can produce text that fulfills 2 requirements simultaneously:
    1. Fool many people into thinking that they know what they're talking about.
    2. May or may not be right about anything, just like the web.

    This makes LLMs the software equivalent of bullshit artists, trained to say a bunch of stuff with no regard for its truth or lack thereof. We could use them to replace politicians, C-suite executives, and news pundits, and I doubt anyone would really notice.

    --
    "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
  • (Score: 4, Insightful) by VLM on Wednesday September 06 2023, @11:58AM (5 children)


    trained on text they find on the web

    The problem with the web as a source is that online content wildly over-represents chronically online / mentally ill people, propaganda in both human and bot form, marketing, clickbait, pr0n and other addictions, and virtue signaling. The "useful stuff", in general, is not online in 2023.

    If you train a bot on tumblr and reddit, the best case outcome of the training is as useless as tumblr and reddit addicts, which isn't very much.

    • (Score: 2) by JoeMerchant on Wednesday September 06 2023, @02:03PM (4 children)


      As with all ML development, the trick is in the curation of the training (and testing) data sets.

      This is where the humans can actually improve the ML output dramatically.

      First thing that comes to mind would be: use of peer reviewed science papers as input data sources. Not only restricting to reputable journals' accepted papers, but also weighting those papers based on their number of references, etc. It's not a perfect system, but it beats comment moderation scores. Next up: translating that peer reviewed literature into something that people outside the niche fields can comprehend.

      --
      🌻🌻 [google.com]
      • (Score: 4, Insightful) by VLM on Wednesday September 06 2023, @06:32PM (2 children)


        Good points but even peer reviewed science has its issues.

        Nobody funds and publishes negative results, so there's a weird positivity bias.

        Speaking of funding, consider how research results tend to correlate incredibly strongly with funding source, so there's a patronage issue. According to research, diseases are never curable by diet, only via patentable very expensive pills, for example. Oddly enough research funded by vegetarian organizations never discovers the most healthy human diet is omnivore or carnivore.

        Usually the academics are a decade or two behind the cutting edge, so as a general rule you could get turn-of-the-century programming advice from a model, but only humans could provide advice on post-Y2K topics. This, by the way, is likely to be the long-term niche of AI: if the answer from 2015 is good enough, use an AI. The problem is that's a commodity whose worth will approach zero, so if you want to make any money in the economy, you need the answer from 2023, which is only available in human format at current human prices.

        • (Score: 2) by JoeMerchant on Wednesday September 06 2023, @08:29PM


          I have been happy living 5-10 years "behind the curve" with computer tech since about 2010. At this stage I am good at 40+ years behind the cutting edge of automotive tech too, although I won't turn down a good carburetor replacement with EFI and electronic ignition....

          There is a whole world of plumbers and electricians who are doing things very similarly to how they were done 50+ years ago, with little tweaks.

          Yes, academia is horribly flawed, skewed, and corrupted, but with all those warts and boils, it's still a more valuable source of information than most others... as long as you know when to pay attention to the competing sources (as you point out: when academia stubbornly examines only the profitable alternatives). I forget the name of the movie, but there are a few documentaries out there about the ketogenic diet as a treatment for epilepsy... a classic example of medicine ignoring treatments that lack profit centers.

          --
          🌻🌻 [google.com]
        • (Score: 1, Insightful) by Anonymous Coward on Wednesday September 06 2023, @10:47PM


          > Nobody funds and publishes negative results, so there's a weird positivity bias.

          It's also horrifically self-praising and narcissistic, hidden behind a veneer of objectivity.

      • (Score: 1) by khallow on Wednesday September 06 2023, @09:34PM


        Not only restricting to reputable journals' accepted papers, but also weighting those papers based on their number of references, etc.

        What happens when the reputable journal is itself disputable? Or when you run into a web of referencing abuse (bad papers citing each other to boost their citation counts)?

        It's not a perfect system, but it beats comment moderation scores.

        Who will peer review my 33k posts?

  • (Score: 0) by Anonymous Coward on Wednesday September 06 2023, @03:06PM (2 children)


    We could use them to replace ... news pundits and I doubt anyone would really notice.

    I dunno, it was really obvious when Microsoft replaced their travel journalists with LLMs because it started recommending visitors to Canada's capital should go to the Ottawa Food Bank with an empty stomach [soylentnews.org].

    • (Score: 0) by Anonymous Coward on Wednesday September 06 2023, @04:12PM

      Did they get lots more hits though? That might be all some "news" sites care about... Click bait and all that.
    • (Score: 2) by Thexalon on Wednesday September 06 2023, @05:12PM


      I didn't say "reporters", I said "pundits", i.e. the people that alternate between writing op-eds and being talking heads on TV and get to spout whatever speculative nonsense they like. So not a robotic travel reporter, but a robotic equivalent of, say, David Brooks.

      --
      "Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin