
SoylentNews is people

posted by Fnord666 on Tuesday May 29 2018, @01:33AM   Printer-friendly
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
  • (Score: 3, Insightful) by bzipitidoo on Tuesday May 29 2018, @11:22AM (4 children)

    by bzipitidoo (4388) on Tuesday May 29 2018, @11:22AM (#685534) Journal

    > Regardless of technological advances, such a generalized intelligence isn't anywhere on the horizon.

    I wouldn't be so sure of that. We could still accidentally stumble onto ways to do it. We're now throwing around an awful lot of computing power, far more capacity than we ourselves have in many areas. We can no longer match our creations in raw mathematical calculation; we haven't been able to since the 1950s, or even the 1940s. And now computers have surpassed us at mental games such as chess and Go, as well as poker and arcade games.

    > we don't have the knowledge, the scientific principles or, frankly, a clue, as to how to create such systems.

    Our ignorance isn't quite that bad. A very basic problem is figuring out what intelligence is: what do we mean by it? Our IQ tests measure our personal computational and intuitive abilities and knowledge as a sort of proxy for general intelligence. Our veneration of chess comes from the same thinking, and our success in creating machines that surpass us at chess showed that ability at chess does not translate into general intelligence, as some had wishfully hoped. Clearly, intelligence is more than phenomenal memory capacity and skill at specific types of mental tasks; we know that better now, and aren't guessing as much at that one.

    Soon, if not already, our computers will be able to tackle basic philosophy, and there are a lot of uncomfortable questions there. We hide a lot from ourselves. We like to think the age-old "why are we here?" and "what is the meaning of life?" questions are unanswerable, because we don't like some of the answers. For instance, what if nihilism, the idea that life has no meaning, is correct?

    Then there's Good and Evil. We like to think we're good, but what does that mean, and are we as nice as we like to think we are? The way life works involves a lot of frankly evil activities, such as predation and parasitism, that are absolutely necessary for the functioning of the ecology. All animals, including us, have to eat other living things to survive. Plants have evolved strategies for that environment, and many now depend upon the very animals that eat them. Even plants can't be considered innocent, in that they fight each other for sunlight and other resources. We do a lot of rationalizing and excusing over it, but you can count on intelligent computers not being impressed by that.

    On occasion, entire nations explore directions that lead to a lot of death, and it's all the more tragic in that so often those paths didn't need to be explored. We already knew how it would turn out, and the results merely confirm that some simplistic and evil thinking was stupid and wrong.

  • (Score: 3, Interesting) by NotSanguine on Tuesday May 29 2018, @03:47PM (3 children)

    You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.

    What's more, those systems are tightly coupled with mechanical, chemical, electrical and biological processes, many of which include components which aren't even part of the brain or nervous system. And many of those, we don't understand at all. A human (or a dolphin or a dog) intelligence is a synthesis of all of those systems and processes. They've been integrated and optimized via millions of years of evolution.

    We have some understanding of many of these systems and processes. However, our understanding as to how they integrate to create generalized intelligence and the difficult to define concept of "consciousness" is, at best, minimal.

    Sure, there could be a breakthrough tomorrow that lays it all bare for us. However, given the state of our science and technology, that's unlikely in the extreme.

    We have an enormously better understanding (via general relativity) of how space-time can be stretched, contracted, bent, folded, spindled and mutilated, than we do about how generalized intelligence works.

    In fact, our understanding of general relativity gives us a pretty clear idea [wikipedia.org] of how we might achieve practical interstellar travel in reasonable human-scale time spans. That said, there are significant areas of science which would require development, as well as technology/engineering obstacles (not least of which is a source of energy concentrated enough to create the required conditions) which, it's pretty clear, are centuries, if not millennia, beyond us.

    If I apply your reasoning to the Alcubierre drive, we could be off with the kids to Gliese 581 [wikipedia.org] for summer holiday, rather than Mallorca, Disney World or Kuala Lumpur in 25-30 years, assuming there's some sort of breakthrough. Which is absurd on its face.

    Given that our understanding of the processes involved in generating a conscious intelligence is minuscule by comparison to our understanding of space-time, the idea that we could create such a thing any time soon is even more absurd. The science that needs to be developed, as well as the technology/engineering challenges involved in creating a generalized intelligence, are orders of magnitude greater than those of creating some form of "warp drive."

    Don't believe me? Go look at the *actual* science, research and engineering going on around both topics.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 1) by khallow on Thursday May 31 2018, @12:37PM (2 children)

      by khallow (3766) Subscriber Badge on Thursday May 31 2018, @12:37PM (#686694) Journal

      > You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources, sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision) which have complex and sophisticated systems for gathering, organizing and interpreting that stimuli.

      Or maybe it is a strictly computational problem. Certainly, integrating a wide variety of input stimuli is such a computational problem.

      My view is that we have ample hardware resources for building genuine AI; we just haven't done it yet.