
SoylentNews is people

The Fine print: The following are owned by whoever posted them. We are not responsible for them in any way.

Journal by acid andy

Most people on here seem pretty convinced that large language models aren't capable of much useful intelligence.

I do think that language can be a powerful tool for reasoning and thinking about facts, but as far as I can see there's something fundamentally different about the way humans use language compared to these AIs.

Specifically, humans get lived sensory experiences of the concepts they learn words for. When they learn to talk about roses, there's a good chance they'll get to see and touch and smell a rose, and they'll have vivid sensory experiences of it. When they learn what a hill is, they might feel the effort of walking up one. Correct me if I'm wrong, but I don't think most of these current chat bots are exposed to any rich sense of the meaning of the words; as I understand it, they're just spotting patterns between millions of different sentences that include those words. If they can only "understand" words in terms of other words, then that's a pretty poor imitation of what humans mean when they say they understand a word.
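That "words in terms of other words" point can be made concrete with a toy sketch of distributional semantics. This is purely illustrative (the corpus and similarity measure are invented for the example; real LLMs learn dense embeddings, not raw co-occurrence counts), but it shows how two words can come out "similar" from text alone, with no sensory experience anywhere in the loop:

```python
# Toy distributional semantics: a word's "meaning" is just the
# counts of the words that appear near it in a corpus.
from collections import Counter, defaultdict
import math

sentences = [
    "the red rose smells sweet",
    "the rose has sharp thorns",
    "the green hill is steep",
    "we walked up the hill slowly",
]

def cooccurrence(corpus, window=2):
    # Map each word to a Counter of its neighbors within the window.
    vecs = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

vecs = cooccurrence(sentences)
# "rose" and "hill" come out weakly similar simply because both
# co-occur with "the" -- similarity derived from text, not experience.
print(round(cosine(vecs["rose"], vecs["hill"]), 3))  # -> 0.444
```

The model "knows" rose and hill only as patterns of nearby words, which is exactly the kind of understanding the journal is questioning.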

I think the same goes for Searle's Chinese Room thought experiment. Maybe the system as a whole "understands" Chinese in the sense that it can manipulate the symbols well enough to respond convincingly, but it doesn't have a rich experience of any of those words, so it's a very dull kind of understanding.

Chat bots are probably already being given access to images and text at the same time, but seeing an image of a rose along with lots of sentences about it is still pretty meaningless if you don't have senses, and a life spent living and moving around plants, to know what a rose is in its proper context and how it relates to you and your story.

  • (Score: 1, Interesting) by Anonymous Coward on Thursday June 01, @07:08PM (5 children)

    by Anonymous Coward on Thursday June 01, @07:08PM (#1309289)

Simple pattern matching is the basis for intelligence. I agree there is not yet deep understanding, but in a few years, and with quantum processors, I think we'll see the first true AI, if it isn't already in some secret project.

    • (Score: 1, Interesting) by Anonymous Coward on Thursday June 01, @07:53PM (4 children)

      by Anonymous Coward on Thursday June 01, @07:53PM (#1309300)

      > Simple pattern matching is the basis for intelligence

Yeah, it was only within the last few years that I began to consider statistical models as being capable of intelligence. The more I thought about human memory and decision making, the more I realized how powerful neural nets would become. It makes no difference if people disagree. Consider, by analogy, the simulation hypothesis: if we were living in a perfect simulation of reality, what would the difference be?

I watched some of the senate hearings [youtube.com] and they were incoherent. There's a difference between a practical AI (diffusion-based imaging) and an AGI. Artificial General Intelligence models (e.g. ChatGPT) are trained to produce pleasing output. They are trained to be non-offensive, and they frequently "hallucinate" (present non-factual information as fact). They lie because we are training them in the art of deception.

Before we begin legislating, we should start being honest. Could we ban gain-of-function research? How about autonomous, weaponized drones that many here could make with $100 worth of parts, without training a NN? We need clarity, not science fiction. Clarity starts with broad agreement that current AGI constitutes an intelligence, albeit not in the form of an actual consciousness.

      • (Score: 0) by Anonymous Coward on Thursday June 01, @09:29PM (1 child)

        by Anonymous Coward on Thursday June 01, @09:29PM (#1309324)

Hallucinating AI is the red flag: there is no intelligence, just pattern matching. They aren't being trained to deceive; they are being trained to provide useful responses. As for banning things, yes, we could ban gain-of-function research, weapons, or anything else. The usefulness of the bans is debatable, and there are many other legislative options available short of total bans.

        What clarity are you hoping for? Your statement of intelligence is how everyone thinks about AI now anyway.

        • (Score: 0) by Anonymous Coward on Thursday June 01, @10:33PM

          by Anonymous Coward on Thursday June 01, @10:33PM (#1309338)

          > there is no intelligence just pattern matching.
          > Your statement of intelligence is how everyone thinks about AI now anyway.

          Was this your demonstration of "pattern matching" in action?

          > As for banning things, yes we could ban

Yes, we could legislate, but against what? That's why the clarity is required, although I myself wasn't entirely clear - these are just statistical models. Most of the bad things AI could be used for are already illegal. If you watch the Senate hearings or the Nvidia presentation [youtube.com], some of this stuff is just fanciful. Image rendering improvements using AI noise reduction and upscaling are one thing; some of the rest... VRML was widely heralded as the next big thing back in 1997.

      • (Score: 0) by Anonymous Coward on Thursday June 01, @10:36PM (1 child)

        by Anonymous Coward on Thursday June 01, @10:36PM (#1309339)

This isn't aimed directly at you, but it has been bugging me for a while. Not all AI mistakes are hallucinations. Not all erroneous information provided by an AI is a hallucination. An AI choosing to lie to you isn't a hallucination. Hallucinations refer to a specific subset of errors by an AI. It just drives me nuts every time I see an AI do something anomalous and people start calling it a hallucination. They're demonstrating how little they actually know about AI while misleading others with their apparent knowledge. And then, when people actually start looking at AI themselves, they have to unlearn the colloquial mistakes before moving on to actual learning.

        • (Score: 0) by Anonymous Coward on Thursday June 01, @11:16PM

          by Anonymous Coward on Thursday June 01, @11:16PM (#1309341)

          I wasn't conflating hallucination [wikipedia.org] with willful deception or every class of error. It is, however, no coincidence that users made the comparison to sociopathy, equating pathological lying with AI's frequent inclusion of erroneous information in responses.

          "Most of our future attempts to build large, growing Artificial Intelligences will be subject to all sorts of mental disorders." -- The Emotion Machine (Minsky 2006)

  • (Score: 1, Informative) by Anonymous Coward on Thursday June 01, @09:05PM

    by Anonymous Coward on Thursday June 01, @09:05PM (#1309322)

    "behave" by robert sapolsky https://www.goodreads.com/book/show/31170723-behave [goodreads.com]
    "brain bugs" by dean buonomano https://www.goodreads.com/book/show/11213439-brain-bugs [goodreads.com]

    I've read the "conversation" (posted here a couple of weeks ago) that Donald Knuth had with ChatGPT or whatever, and to me it looks like a weak version of the Turing test has been passed.
    I personally won't waste time talking to chatgpt just yet, but it seems much more capable (in writing) than most teenagers.
    modulo the easily discovered nonsense, of course, but have you ever read stuff that teenagers write...?

    but I digress.
    neurologists will point out that concepts in the brain, to the extent that the word can be used, are embedded in the topology of connections between neurons.
    what you can also find out (and the neurologists will typically not say, because it's so obvious to them) is that different classes of neurons exist, some of them connected in predetermined fashion such that you can never disconnect them (think hardwired reflexes), and some of them happy to connect, disconnect, reconnect throughout the brain's lifetime.
    the brain is also layered in "evolutionary older" and "evolutionary younger" layers, and connections are typically more rigid in the "older" layers.

there's a cliché about psychologists asking people whether they were breastfed as babies.
    why?
    because human babies really want to be breastfed, they are hardwired for it, there are a lot of ways in which the information is hardwired: suckling reflex (poke a baby with a finger on their cheek and see), attraction to circles with dots in the middle (yes, they like to look at naked boobies), etc.
    the question comes because it somewhat makes sense to assume the complexity of our adult minds is somehow built on top of these comparatively simple and common builtin desires.
    while the sentence "all love neural networks connect to the neural network responsible for hormone release during suckling" is false, it does set up a reasonable starting point for figuring out individual humans.

    what you are saying is somewhat related to this: humans learn language while they are learning about the world, and often words come with direct experiences of the objects that the words stand for.

    however, you should keep in mind that human brains are also hardwired for language: "the language instinct" by steven pinker (https://www.goodreads.com/book/show/5755.The_Language_Instinct).
while this is not fully understood, it seems fairly clear that the human brain (unlike all other brains that we know of) has a natural facility with a type of grammar --- rules for nesting sentences are somehow builtin.

    so yes, LLMs are missing something.
    large language models won't associate concepts to hardware selected by Darwinian evolution (at most there's some builtin structure imposed by the procedure of setting up the various layers of the neural network or whatever more elaborate thing they're using).
large language models won't have a builtin association between concepts and the experience of the object (as human brains can and do connect symbols to smells, tastes, etc.).

    but no, it's not at all clear that LLMs are missing something essential.
    large language models succeed (at least partially) in utilizing grammar, specifically they can nest sentences.
    much of the "abstract knowledge" that we humans have is essentially just nesting concepts into hierarchies.
    all of math is basically a series of definitions (hierarchy of concepts) and then long sentences that connect some concepts to other concepts (i.e. proofs).
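That "definitions stacked on definitions" picture can be sketched in a few lines (my illustration, not the commenter's): Peano-style arithmetic, where each operation is defined purely in terms of the one below it, so the whole hierarchy bottoms out in "add one".

```python
# Peano-style hierarchy of concepts: each operation is defined
# only in terms of the previous one -- nesting all the way down.
def add(a, b):
    # addition: repeated application of successor (+1)
    return a if b == 0 else add(a, b - 1) + 1

def mul(a, b):
    # multiplication: repeated addition
    return 0 if b == 0 else add(mul(a, b - 1), a)

def power(a, b):
    # exponentiation: repeated multiplication
    return 1 if b == 0 else mul(power(a, b - 1), a)

print(mul(3, 4))    # -> 12
print(power(2, 5))  # -> 32
```

Nothing in `power` "knows" anything beyond `mul`, yet a rich structure emerges from the nesting, which is the sense in which manipulating nested symbols can carry real abstract knowledge.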

    am I saying these things are intelligent or self-aware?
    no, and what I've seen so far doesn't suggest it's worth my time to look into it.
    do I believe this will directly lead to something intelligent and/or self-aware?
    yes.
what they have right now is very expensive and very stupid, but it IS something to which humans will intuitively apply the word "stupid".
    which is a huge deal (and an important part of the Turing test).
