posted by Fnord666 on Tuesday May 29 2018, @01:33AM
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by NotSanguine on Tuesday May 29 2018, @03:47PM (3 children)

    You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.

    What's more, those systems are tightly coupled with mechanical, chemical, electrical and biological processes, many of which include components that aren't even part of the brain or nervous system. And many of those we don't understand at all. A human (or a dolphin or a dog) intelligence is a synthesis of all of those systems and processes, integrated and optimized via millions of years of evolution.

    We have some understanding of many of these systems and processes. However, our understanding of how they integrate to create generalized intelligence and the difficult-to-define concept of "consciousness" is, at best, minimal.

    Sure, there could be a breakthrough tomorrow that lays it all bare for us. However, given the state of our science and technology, that's unlikely in the extreme.

    We have an enormously better understanding (via general relativity) of how space-time can be stretched, contracted, bent, folded, spindled and mutilated than we do of how generalized intelligence works.

    In fact, our understanding of general relativity gives us a pretty clear idea [wikipedia.org] of how we might achieve practical interstellar travel on reasonable human-scale time spans. That said, there are significant areas of science which would require development, as well as technology/engineering obstacles (not least of which is a source of energy concentrated enough to create the required conditions) which, it's pretty clear, are centuries, if not millennia, beyond us.

    If I apply your reasoning to the Alcubierre drive, then in 25-30 years, assuming there's some sort of breakthrough, we could be off with the kids to Gliese 581 [wikipedia.org] for summer holiday, rather than to Mallorca, Disney World or Kuala Lumpur. Which is absurd on its face.

    Given that our understanding of the processes involved in generating a conscious intelligence is minuscule compared to our understanding of space-time, the idea that we could create such a thing any time soon is even more absurd. The science that needs to be developed, as well as the technology/engineering challenges involved in creating a generalized intelligence, is orders of magnitude greater than for creating some form of "warp drive."

    Don't believe me? Go look at the *actual* science, research and engineering going on around both topics.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 1) by khallow on Thursday May 31 2018, @12:37PM (2 children)

    > You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.

    Or maybe it is strictly a computational problem. Certainly, integrating a wide variety of input stimuli is itself a computational problem.
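
    To make that concrete, here is a minimal, purely illustrative Python sketch of treating multi-sensory integration as computation via "late fusion": each sense gets its own encoder into a shared feature space, and a single linear readout combines them. Every encoder, dimension and name below is invented for illustration; a real system would use learned networks rather than random projections.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical per-modality encoders: fixed random projections here,
        # standing in for what would be learned networks in a real system.
        def encode(signal, proj):
            return np.tanh(proj @ signal)  # map into a shared feature space

        vision_proj  = rng.normal(size=(16, 64))  # 64-dim "retina"  -> 16 features
        hearing_proj = rng.normal(size=(16, 32))  # 32-dim "cochlea" -> 16 features
        touch_proj   = rng.normal(size=(16, 8))   #  8-dim "skin"    -> 16 features

        # One observation from each sense.
        vision, hearing, touch = rng.normal(size=64), rng.normal(size=32), rng.normal(size=8)

        # Late fusion: concatenate per-sense features, then one linear readout
        # over the fused vector picks between two possible "actions".
        fused = np.concatenate([
            encode(vision, vision_proj),
            encode(hearing, hearing_proj),
            encode(touch, touch_proj),
        ])
        readout = rng.normal(size=(2, fused.size))
        print("chosen action:", int(np.argmax(readout @ fused)))

    The point of the sketch is only that "integrating stimuli" reduces to ordinary linear algebra once each sense is encoded numerically; whether that is all there is to generalized intelligence is exactly what's in dispute here.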


    My view is that we have ample hardware resources for building genuine AI; we just haven't done it yet.