
posted by Fnord666 on Tuesday May 29 2018, @01:33AM
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
  • (Score: 2) by Wootery on Tuesday May 29 2018, @05:34PM (5 children)

    by Wootery (2341) on Tuesday May 29 2018, @05:34PM (#685738)

    I think a problem can be well defined as 'understood' when AI can change itself according to shifting goalposts without any external intervention.

    That doesn't sound like 'understanding'; it sounds more like online machine learning. [wikipedia.org]

    for any of those to be called Real(TM) AI, they will need (A) a mechanism to know that the goalpost has shifted, (B) a way to find that goalpost, and (C) the ability to surpass it before it is shifted again.

    Sounds like a rather arbitrary threshold to set, and again, it seems to me that even the most primitive online machine learning algorithms already fit the bill, although they don't necessarily use the discrete stages you describe.
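
    For illustration, a minimal sketch (a hypothetical toy, not taken from the article or the Wikipedia page): a one-parameter online learner tracked against a "goalpost" that silently shifts mid-stream. Plain per-sample SGD adapts to the shift without any explicit detect/find/surpass stages.

        # Toy online learner vs. a shifting goalpost (hypothetical example).
        import random

        w = 0.0      # the learner's single parameter
        lr = 0.1     # learning rate

        target = 5.0                      # the current "goalpost"
        for step in range(2000):
            if step == 1000:
                target = -3.0             # goalpost shifts; the learner is never told

            x = random.uniform(-1.0, 1.0)
            y = target * x                # ground truth under the current goal
            pred = w * x
            w -= lr * (pred - y) * x      # plain SGD on squared error, one sample at a time

        print(round(w, 2))                # ~ -3.0: adapted with no external intervention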

  • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @04:18AM (4 children)

    by cubancigar11 (330) on Wednesday May 30 2018, @04:18AM (#686090) Homepage Journal

    Not arbitrary at all! This is how we measure intelligence in fellow human beings! We don't think that a plumber is intelligent because he can turn a screw; we think of him as intelligent when he figures out a problem on his own and solves it to the expectation that was set before the problem was even discovered.

    • (Score: 2) by Wootery on Wednesday May 30 2018, @09:03AM (3 children)

      by Wootery (2341) on Wednesday May 30 2018, @09:03AM (#686178)

      This is how we measure intelligence in fellow human beings!

      Not really - we look at how good people are at solving problems. I don't see the need for any explicit model involving distinct stages and 'shifting goalposts'.

      • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @09:36AM (2 children)

        by cubancigar11 (330) on Wednesday May 30 2018, @09:36AM (#686193) Homepage Journal

        That is because we have an implicit understanding of what it means to be people. As much as you might dislike it, we have come to understand AI as being more than just how good it is at solving a particular problem. Otherwise a shell script can be better at solving particular problems than a human, and yet it is not called AI.

        • (Score: 2) by Wootery on Wednesday May 30 2018, @10:29AM (1 child)

          by Wootery (2341) on Wednesday May 30 2018, @10:29AM (#686206)

          So you agree intelligence tests have no 3-stage model?

          Otherwise a shell script can be better at solving particular problems than a human, and yet it is not called AI.

          You've not phrased that very clearly, but I think your point is that extremely simple programs can be more effective at solving certain problems than even very intelligent people.

          This is certainly true. When it comes to memorising numbers, multiplying numbers, etc, even very humble computers easily outperform the most intelligent humans. (This is what Daniel Dennett calls competence without comprehension.)
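
          As a throwaway illustration (a hypothetical toy, not anything of Dennett's): the snippet below multiplies two enormous integers exactly and instantly, a feat no human can match, yet nothing in it "comprehends" multiplication.

              # Superhuman arithmetic, zero comprehension (toy example).
              a = 31415926535897932384626433832795028841971693993751
              b = 27182818284590452353602874713526624977572470936999
              print(a * b)  # exact product, computed instantly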

          When we say 'AI', we tend to mean either

          1. Software that attempts to solve a problem that previously no software system has solved
          2. Software that uses machine-learning in at least some way
          3. 'General artificial intelligence', which roughly speaking tends to mean it has the generality of intelligence that a human has (rather fuzzy, and we'll ignore for now that it will presumably be better than any human at multiplying numbers)

          A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

          None of this tells us what 'understanding' means. The best definition I can think of is - as I mentioned elsewhere in this thread - the ability to reason about the abstract concepts related to a problem-domain. (Of course, no current software system is capable of doing this.)

          That's not all that precise, but it seems like a good starting point. I don't see that it would be reasonable to define understanding to only ever apply to humans. That's just cheating.

          • (Score: 2) by cubancigar11 on Wednesday May 30 2018, @01:11PM

            by cubancigar11 (330) on Wednesday May 30 2018, @01:11PM (#686239) Homepage Journal

            So you agree intelligence tests have no 3-stage model?

            Huh? How did you get that feeling? I am critiquing your following statement:

            Not really - we look at how good people are at solving problems.

            by pointing out that how good someone is at solving a problem is not at all related to how we measure intelligence. Intelligence is about how novel your solution is, not at all about how good that solution is.

            When we say 'AI', we tend to mean either
            1. Software that attempts to solve a problem that previously no software system has solved
            2. Software that uses machine-learning in at least some way

            Only if by 'we' you mean people working on AI. For me, 'we' means the general populace, and they don't care about machine learning.

            A pocket calculator doesn't qualify as any of these three, and so isn't considered AI, despite being superhumanly effective in its problem-domain.

            No, a pocket calculator doesn't qualify as AI because it doesn't have any way to recognize a problem and come up with a novel solution; in colloquial terms, it doesn't "understand" the problem, or, to put it in a testable way, it doesn't follow the model I am suggesting.

            I think I realize where the disagreement is originating - I am talking about why a layman doesn't consider something to be AI, and you are talking about what has already been done in the name of AI.