The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.
While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.
But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.
In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.
[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.
What is your take on this? Do you think AI (as currently defined), can solve any of the problems, man-made and otherwise, of this world?
(Score: 3, Interesting) by NotSanguine on Tuesday May 29 2018, @03:47PM (3 children)
You're looking at this as strictly a computational problem. It is not. Generalized intelligence is a synthesis of computational resources and sensing mechanisms that integrate a wide variety of input stimuli (smell, taste, touch, hearing, vision), backed by complex and sophisticated systems for gathering, organizing and interpreting those stimuli.
What's more, those systems are tightly coupled with mechanical, chemical, electrical and biological processes, many of which include components which aren't even part of the brain or nervous system. And many of those, we don't understand at all. A human (or a dolphin or a dog) intelligence is a synthesis of all of those systems and processes. They've been integrated and optimized via millions of years of evolution.
We have some understanding of many of these systems and processes. However, our understanding as to how they integrate to create generalized intelligence and the difficult to define concept of "consciousness" is, at best, minimal.
Sure, there could be a breakthrough tomorrow that lays it all bare for us. However, given the state of our science and technology, that's unlikely in the extreme.
We have an enormously better understanding (via general relativity) of how space-time can be stretched, contracted, bent, folded, spindled and mutilated, than we do about how generalized intelligence works.
In fact, our understanding of general relativity gives us a pretty clear idea [wikipedia.org] about how we might achieve practical interstellar travel in reasonable human-scale time spans. That said, there are significant areas of science which would require development, as well as technology/engineering obstacles (not least of which is a source of energy concentrated enough to create the required conditions) which, it's pretty clear, are centuries, if not millennia beyond us.
If I apply your reasoning to the Alcubierre drive, we could be off with the kids to Gliese 581 [wikipedia.org] for summer holiday, rather than Mallorca, Disney World or Kuala Lumpur in 25-30 years, assuming there's some sort of breakthrough. Which is absurd on its face.
Given that our understanding of the processes involved in generating a conscious intelligence is minuscule by comparison to our understanding of space-time, the idea that we could create such a thing any time soon is even more absurd. The science that needs to be developed, as well as the technology/engineering challenges required to create a generalized intelligence, are orders of magnitude greater than creating some form of "warp drive."
Don't believe me? Go look at the *actual* science, research and engineering going on around both topics.
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 1) by khallow on Thursday May 31 2018, @12:37PM (2 children)
Or maybe it is a strictly computational problem. Certainly, integrating a wide variety of input stimuli is such a computational problem.
My view is that we have ample hardware resources for building genuine AI, we just haven't done it yet.
(Score: 2) by NotSanguine on Thursday May 31 2018, @08:06PM (1 child)
Given your response, it appears that you aren't very clear on what exactly generalized AI encompasses and what the challenges in developing such systems are.
As such, you'll forgive me if I take the opinions of those actually working in the field rather than yours, Khallow.
As I said:
Here are some places to start (just the first few links from a search for "how far away is generalized AI"). I'm not going to curate the literature for you. Perhaps you could design, build and perfect a generalized AI to do it for you -- I won't hold my breath.
https://en.wikipedia.org/wiki/Artificial_general_intelligence [wikipedia.org]
http://aiindex.org/ [aiindex.org]
https://www.technologyreview.com/s/609611/progress-in-ai-isnt-as-impressive-as-you-might-think/ [technologyreview.com]
https://techcrunch.com/2016/12/14/why-we-are-still-light-years-away-from-full-artificial-intelligence/ [techcrunch.com]
https://www.huffingtonpost.com/entry/human-level-ai-how-far-are-we_us_59ecc013e4b092f9f241931e [huffingtonpost.com]
No, no, you're not thinking; you're just being logical. --Niels Bohr
(Score: 0, Troll) by khallow on Friday June 01 2018, @04:08AM
What is there to be clear about?
If they haven't developed generalized AI, then why should I start there? Even keeping in mind the argument from ignorance fallacy, it still remains that these are opinions by people who haven't done the deed (and admit to not doing anything close to the deed). So why should their opinion on how far away we are from such things be taken seriously?