

posted by Fnord666 on Tuesday May 29 2018, @01:33AM   Printer-friendly
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
  • (Score: 5, Informative) by bradley13 on Tuesday May 29 2018, @06:02AM (23 children)

    by bradley13 (3053) on Tuesday May 29 2018, @06:02AM (#685455) Homepage Journal

    I've been working in the area of AI since around 1985. The fact is, there is no "I" in AI. With symbolic AI, we have some clever algorithms, developed by people, that do exactly what they are told to do. With neural nets (and, to a lesser extent, genetic algorithms and the like), we have a way of throwing piles of computing power at a problem and getting a half-assed solution that sometimes works, only we don't know why it works or why it sometimes doesn't.

    What none of these solutions bring to the table is any sort of intelligence, as people understand the term. There is only a piece of code, working on demand, taking inputs and generating outputs. The code has no "understanding" of its task, it takes no initiative, never does anything new or surprising.

    The main change from 1985 to today is the massive amount of computing power we can throw at problems. This hasn't helped much in symbolic AI, where we are still proceeding in tiny baby steps. In the area of neural nets, we also haven't made much progress, but the computing power means that we can build much bigger networks and throw more (and more complex) data at them. It isn't much of an oversimplification to say that today's neural nets are still the same perceptrons developed in 1957.
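    For readers who haven't seen one, the 1957-style perceptron mentioned above fits in a few lines. This is a minimal, hypothetical sketch (not any particular library's API): a weighted sum, a threshold, and Rosenblatt's error-driven update rule.

```python
# Minimal Rosenblatt-style perceptron: a weighted sum, a threshold,
# and an error-driven weight update.

def predict(weights, bias, x):
    # Fire (1) if the weighted sum plus bias exceeds zero, else 0.
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Rosenblatt's rule: nudge weights toward the target output.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a linearly separable function a perceptron can handle.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

    Modern deep networks stack millions of such units and train them with gradient descent rather than this rule, but the basic unit really is that simple.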

    --
    Everyone is somebody else's weirdo.
  • (Score: 2) by NotSanguine on Tuesday May 29 2018, @08:44AM (17 children)

    You are quite correct.

    What the shills call "A.I." isn't what a lay person thinks of as artificial intelligence.

    The shills are talking about expert systems [wikipedia.org], which can actually be quite useful, especially with (as you pointed out) the large amounts of computational resources we can throw at these systems.
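    To make the distinction concrete: stripped of the marketing, a classic expert system is a set of hand-written if-then rules plus an inference loop. The sketch below is a toy illustration (the rule and fact names are invented, not from any real system):

```python
# Toy forward-chaining rule engine of the kind classic expert systems
# were built around: hand-authored rules, no learning, no understanding.

RULES = [
    # (premises that must all be known facts, conclusion to add)
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"})))
# → ['cough', 'fever', 'flu_suspected', 'refer_to_doctor', 'short_of_breath']
```

    Everything such a system "knows" was typed in by a person; it derives exactly what its rules entail and nothing else, which is the gap between this and generalized intelligence.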

    To a person who isn't shilling for vendors of expert systems, "A.I." is generally interpreted as a generalized intelligence [wikipedia.org] that can perform human-like mental activities.

    In some respects those fields are related, but not in most of the important ways.

    Regardless of technological advances, such a generalized intelligence isn't anywhere on the horizon. The complex interplay of mental and physical stimuli and feedback in living systems is crucial to a generalized intelligence, and we don't have the knowledge, the scientific principles or, frankly, a clue as to how to create such systems. As such, generalized AI is (optimistically) centuries, if not millennia (tens of millennia?), away, if it is possible at all.

    So if anyone tells you that current "A.I." is anything more than expert systems, they are either woefully uninformed or have some ulterior motive (probably to get you, and/or others, to buy something).

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @09:03AM (11 children)

      by Anonymous Coward on Tuesday May 29 2018, @09:03AM (#685500)

      As such, generalized AI is (optimistically) centuries, if not millenia (tens of millenia?) away, if possible at all.

      You have zero data to make any estimate about how far or close we are from hard AI.

      Note that "tens of millennia" already describes all of human history. Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

      • (Score: 3, Funny) by NotSanguine on Tuesday May 29 2018, @09:10AM (8 children)

        Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

        Yes. Next question.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
        • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @01:04PM

          by Anonymous Coward on Tuesday May 29 2018, @01:04PM (#685556)

          Well, it depends on POV.

          Looking back, now that we know that space exists (or, more to the point, that the "sky" ends after a while as you go up), that it is reachable with a finite amount of energy, and that the forces of nature can be harnessed to provide that energy, we can easily predict that humans would eventually figure it out.

          However, from the perspective of someone living in one of those respective ages, there were countless reasons for doubt.

          We could make a wager today about something that doesn't exist yet and may or may not be possible in 5000 years' time, and someone 5000 years from now (if there is anyone) could judge our bet; but we cannot predict it with certainty, because we are still in the process of finding out.

        • (Score: 1) by khallow on Tuesday May 29 2018, @10:22PM (6 children)

          by khallow (3766) Subscriber Badge on Tuesday May 29 2018, @10:22PM (#685955) Journal

          Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space?

          Yes. Next question.

          No. Next bald assertion.

          • (Score: 3, Touché) by NotSanguine on Tuesday May 29 2018, @10:42PM (5 children)

            Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space?

                    Yes. Next question.

            No. Next bald assertion.

            Not really. 5,000 years ago, I *could* have predicted space travel, impressionist art, vajazzling [wikipedia.org], self-immolation as a form of yoga, or any number of other things.

            It seems unlikely that I would have done so, but I certainly could have done so.

            Words have specific meanings. Perhaps you, and the poster whose blathering I initially replied to, might remember that next time.

            What's more, such blathering adds nothing to the discussion.

            --
            No, no, you're not thinking; you're just being logical. --Niels Bohr
            • (Score: 1) by khallow on Wednesday May 30 2018, @01:20AM (4 children)

              by khallow (3766) Subscriber Badge on Wednesday May 30 2018, @01:20AM (#686027) Journal

              Not really. 5,000 years ago, I *could* have predicted space travel, impressionist art, vajazzling, self-immolation as a form of yoga, or any number of other things.

              It seems unlikely that I would have done so, but I certainly could have done so.

              You wouldn't have, but you could have. Such is the difference between reality and what is mathematically possible.

              • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:32PM (3 children)

                by Gaaark (41) on Wednesday May 30 2018, @02:32PM (#686279) Journal

                No no...step back: science fiction writers do this all the time: make predictions.

                Someone MAY have written a story about an all-powerful 'god-like' being that is really truly fucked (lol), whereas others might have written a story about aliens visiting (some hieroglyphs suggest such a thing).

                It IS possible and quite realistic someone COULD have 'predicted' all SORTS of things.

                --
                --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
                • (Score: 1) by khallow on Thursday May 31 2018, @04:39AM

                  by khallow (3766) Subscriber Badge on Thursday May 31 2018, @04:39AM (#686592) Journal
                  You say that now...
                • (Score: 1) by khallow on Thursday May 31 2018, @12:32PM (1 child)

                  by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:32PM (#686693) Journal
                  I'll note also that science fiction didn't exist in any form until around three centuries ago. The people who "do it all the time" didn't exist 5000 years ago.
                  • (Score: 2) by Gaaark on Friday June 01 2018, @12:24AM

                    by Gaaark (41) on Friday June 01 2018, @12:24AM (#687000) Journal

                    Science fiction had its beginnings in the time when the line between myth and fact was blurred. Written in the 2nd century AD by the Hellenized Syrian satirist Lucian, A True Story contains many themes and tropes that are characteristic of modern science fiction, including travel to other worlds, extraterrestrial lifeforms, interplanetary warfare, and artificial life. Some consider it the first science fiction novel.
                    --- https://en.m.wikipedia.org/wiki/Science_fiction [wikipedia.org]

                    But past that, the writing would have to be in stone, I guess, or kept in an urn-like vessel: kind of like the Dead Sea Scrolls.

                    *Cough* That is unless you see the Bible as a sort of science fiction story *cough*

                    --
                    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
      • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @09:52AM (1 child)

        by Anonymous Coward on Tuesday May 29 2018, @09:52AM (#685510)

        Seriously, could you have predicted from looking at civilization from merely 5000 years ago that we're already sending people into space? Or even from civilization 2000 years ago? What about 500 years ago? 200 years ago?

        Yes. All of them.

        What I would seriously have had difficulty being able to predict 100 years ago is that we'd still be reliant on the petrol-driven automobile.

        • (Score: 1, Insightful) by Anonymous Coward on Tuesday May 29 2018, @09:58AM

          by Anonymous Coward on Tuesday May 29 2018, @09:58AM (#685512)

          Or that we'd still be murdering each other in the name of God.

    • (Score: 3, Insightful) by bzipitidoo on Tuesday May 29 2018, @11:22AM (4 children)

      by bzipitidoo (4388) on Tuesday May 29 2018, @11:22AM (#685534) Journal

      > Regardless of technological advances, such a generalized intelligence isn't anywhere on the horizon.

      I wouldn't be so sure of that. We could still stumble onto ways to do it by accident. We're now throwing around an awful lot of computing power, far more capacity than we ourselves have in many areas. We haven't been able to match our creations in raw mathematical calculation since the 1950s, or even the 1940s, and now computers have surpassed us at mental games such as chess and Go, and also at poker and arcade games.

      > we don't have the knowledge, the scientific principles or, frankly, a clue, as to how to create such systems.

      Our ignorance isn't quite that bad. A very basic problem is figuring out what intelligence is: what do we mean by it? Our IQ tests measure personal computational ability, intuition and knowledge as a sort of proxy for general intelligence. Our veneration of chess comes from the same thinking, and success in creating machines that surpass us at chess showed that ability at chess does not translate into general intelligence, as some had wishfully hoped. Clearly, intelligence is more than phenomenal memory capacity and skill at specific mental tasks; we understand that better now, and are guessing at it less.

      Soon, if not already, our computers will be able to tackle basic philosophy, and there are a lot of uncomfortable questions there. We hide a lot from ourselves. We like to think the age-old questions of "why are we here?" and "what is the meaning of life?" are unanswerable, because we don't like some of the answers. For instance, what if nihilism, the view that life has no meaning, is correct?

      Then there's Good and Evil. We like to think we're good, but what does that mean, and are we as nice as we like to think we are? The way life works involves a lot of frankly evil activities, such as predation and parasitism, that are absolutely necessary for the functioning of the ecology. All animals, including us, have to eat other living things to survive. Plants have evolved strategies for that environment, and many now depend upon the very animals that eat them. Even plants can't be considered innocent, in that they fight each other for sunlight and other resources. We do a lot of rationalizing and excusing over it, but you can count on intelligent computers not being impressed by that.

      On occasion, entire nations explore directions that lead to a lot of death, all the more tragic in that so often those paths didn't need to be explored: we already knew how it would turn out, and the results merely confirmed that some simplistic and evil thinking was stupid and wrong.

      • (Score: 3, Interesting) by NotSanguine on Tuesday May 29 2018, @03:47PM (3 children)

        You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.

        What's more, those systems are tightly coupled with mechanical, chemical, electrical and biological processes, many of which include components which aren't even part of the brain or nervous system. And many of those, we don't understand at all. A human (or a dolphin or a dog) intelligence is a synthesis of all of those systems and processes. They've been integrated and optimized via millions of years of evolution.

        We have some understanding of many of these systems and processes. However, our understanding as to how they integrate to create generalized intelligence and the difficult to define concept of "consciousness" is, at best, minimal.

        Sure, there could be a breakthrough tomorrow that lays it all bare for us. However, given the state of our science and technology, that's unlikely in the extreme.

        We have an enormously better understanding (via general relativity) of how space-time can be stretched, contracted, bent, folded, spindled and mutilated, than we do about how generalized intelligence works.

        In fact, our understanding of general relativity gives us a pretty clear idea [wikipedia.org] of how we might achieve practical interstellar travel on reasonable human time scales. That said, there are significant areas of science that would require development, as well as technology and engineering obstacles (not least a source of energy concentrated enough to create the required conditions) which, it's pretty clear, are centuries, if not millennia, beyond us.

        If I apply your reasoning to the Alcubierre drive, we could be off with the kids to Gliese 581 [wikipedia.org] for summer holiday, rather than Mallorca, Disney World or Kuala Lumpur in 25-30 years, assuming there's some sort of breakthrough. Which is absurd on its face.

        Given that our understanding of the processes involved in generating a conscious intelligence is minuscule by comparison with our understanding of space-time, the idea that we could create such a thing any time soon is even more absurd. The science that needs to be developed, and the technology and engineering challenges involved in creating a generalized intelligence, are orders of magnitude greater than those of creating some form of "warp drive."

        Don't believe me? Go look at the *actual* science, research and engineering going on around both topics.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr
        • (Score: 1) by khallow on Thursday May 31 2018, @12:37PM (2 children)

          by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:37PM (#686694) Journal

          You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources, sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision) which have complex and sophisticated systems for gathering, organizing and interpreting that stimuli.

          Or maybe it is a strictly computational problem. Certainly, integrating a wide variety of input stimuli is such a computational problem.


          My view is that we have ample hardware resources for building genuine AI, we just haven't done it yet.

  • (Score: 0) by Anonymous Coward on Tuesday May 29 2018, @08:49AM

    by Anonymous Coward on Tuesday May 29 2018, @08:49AM (#685497)

    and getting a half-assed solution that sometimes works, only we don't know why it works or why it sometimes doesn't.

    Sounds like what we get from most humans, too.

  • (Score: 3, Touché) by Wootery on Wednesday May 30 2018, @10:44AM

    by Wootery (2341) on Wednesday May 30 2018, @10:44AM (#686212)

    The code has no "understanding" of its task, it takes no initiative, never does anything new or surprising.

    Well, the brain is deterministic too...

    Whatever 'understanding' might mean, it clearly doesn't mean 'nondeterministic'.

  • (Score: 2) by Gaaark on Wednesday May 30 2018, @02:50PM (1 child)

    by Gaaark (41) on Wednesday May 30 2018, @02:50PM (#686287) Journal

    Thanks for the confirmation:
    kind of what I gathered. More computational power does not mean more 'intelligence'. Until we figure out the holy grail of AI, it will all just be programmers telling computers what 'intelligence' is.

    On the other hand, when the holy grail IS found, Dog help us all... here's hoping our new AI overlords will "use their knowledge for good... instead of evil" and know the difference between good and evil, lol.

    --
    --- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
    • (Score: 2) by Wootery on Friday June 01 2018, @09:02AM

      by Wootery (2341) on Friday June 01 2018, @09:02AM (#687158)

      You're assuming there's a discrete 'Holy Grail'. I think we can be confident that isn't how intelligence works. It isn't Boolean. At no point in our evolutionary history did we suddenly 'become intelligent'. Instead we grew more intelligent incrementally, the way everything works in evolution.