The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.
While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.
But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.
In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.
[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.
What is your take on this? Do you think AI (as currently defined), can solve any of the problems, man-made and otherwise, of this world?
(Score: 2) by Wootery on Tuesday May 29 2018, @05:34PM (5 children)
That doesn't sound like 'understanding', it sounds more like online machine learning. [wikipedia.org]
Sounds like a rather arbitrary threshold to set, and again, it seems to me that even the most primitive online machine learning algorithms already fit the bill, although they don't necessarily use the approach that you describe where there are discrete stages.
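To make the comparison concrete, here is a minimal sketch of what "even the most primitive online machine learning algorithms" look like: a perceptron that updates its weights one example at a time from a stream, with no discrete training stage. The OR-function data and learning rate are illustrative assumptions, not anything from the thread.

```python
# A minimal online-learning sketch: a perceptron updated one example at a
# time from a stream, rather than trained on a fixed batch up front.

def predict(weights, bias, x):
    """Return the perceptron's binary prediction for input vector x."""
    activation = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if activation >= 0 else 0

def update(weights, bias, x, label, lr=0.1):
    """Incorporate a single new example (x, label) into the model."""
    error = label - predict(weights, bias, x)
    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    bias = bias + lr * error
    return weights, bias

# Learn the logical OR function from a stream of examples (illustrative).
weights, bias = [0.0, 0.0], 0.0
stream = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)] * 10
for x, label in stream:
    weights, bias = update(weights, bias, x, label)

print([predict(weights, bias, x) for x, _ in stream[:4]])  # → [0, 1, 1, 1]
```

The point being: the model keeps improving as new data arrives, yet there is no explicit multi-stage "understanding" process anywhere in it.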
(Score: 2) by cubancigar11 on Wednesday May 30 2018, @04:18AM (4 children)
Not arbitrary at all! This is how we measure intelligence in fellow human beings! We don't think a plumber is intelligent because he can turn a screw; we think of him as intelligent when he figures out a problem on his own and solves it to the expectation that was set before the problem was even discovered.
(Score: 2) by Wootery on Wednesday May 30 2018, @09:03AM (3 children)
Not really - we look at how good people are at solving problems. I don't see the need for any explicit model involving distinct stages and 'shifting goalposts'.
(Score: 2) by cubancigar11 on Wednesday May 30 2018, @09:36AM (2 children)
That is because we have an implicit understanding of what it means to be people. As much as you might dislike it, we have come to understand AI as being more than just how good it is at solving a particular problem. Otherwise a shell script could be better at solving a particular problem than a human, yet it is not called AI.
(Score: 2) by Wootery on Wednesday May 30 2018, @10:29AM (1 child)
So you agree intelligence tests have no 3-stage model?
You've not phrased that very clearly, but I think your point is that extremely simple programs can be more effective at solving certain problems than even very intelligent people.
This is certainly true. When it comes to memorising numbers, multiplying numbers, etc, even very humble computers easily outperform the most intelligent humans. (This is what Daniel Dennett calls competence without comprehension.)
When we say 'AI', we tend to mean either
A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.
None of this tells us what 'understanding' means. The best definition I can think of is - as I mentioned elsewhere in this thread - the ability to reason about the abstract concepts related to a problem-domain. (Of course, no current software system is capable of doing this.)
That's not all that precise, but it seems like a good starting point. I don't see that it would be reasonable to define understanding to only ever apply to humans. That's just cheating.
(Score: 2) by cubancigar11 on Wednesday May 30 2018, @01:11PM
Huh? How did you get that feeling? I am critiquing your following statement:
by pointing out that how good someone is at solving a problem is not at all related to how we measure intelligence. Intelligence is about how novel your solution is, not at all about how good that solution is.
Only if by 'we' you mean people working on AI. For me, 'we' means the general populace, and they don't care about machine learning.
No, a pocket calculator doesn't qualify as AI because it doesn't have any way to recognize a problem and come up with a novel solution; in colloquial terms, it doesn't "understand" the problem, or, to put it in a testable way, it doesn't follow the model I am suggesting.
I think I realize where the disagreement is originating: I am talking about why a layman doesn't consider something to be AI, and you are talking about what has already been done in the name of AI.