posted by Fnord666 on Tuesday May 29 2018, @01:33AM
from the then-again-what-can? dept.

The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.

While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.

But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.

In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.

[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.

The Conversation

What is your take on this? Do you think AI (as currently defined) can solve any of the problems, man-made and otherwise, of this world?


Original Submission

 
  • (Score: 2) by Wootery on Tuesday May 29 2018, @06:13PM (3 children)

    by Wootery (2341) on Tuesday May 29 2018, @06:13PM (#685769)

    Competence simply requires looking at what everyone else is doing and parroting or solving a problem given a sufficiently exhaustive input data set. Intuition is what handles everything else.

    Seems to me you've introduced another rather unclear and loaded term ('intuition'), and are gradually building a somewhat half-baked theory of mind.

    1. 'Parroting' isn't trivial, as we're seeing with non-'creative' AI tasks like self-driving cars and voice recognition.
    2. There's no bright line between parroting and creativity. Does the work of an undergraduate math student count as 'creative'? Most people think not. How about a math researcher? They're both in the business of coming up with proofs they've not seen before, right? It's just that only the latter produces something worth publishing. Certainly, no current AI system could do the work of either. (Ignoring Gödel's incompleteness theorem for a moment.)
    3. By any sensible definition, every intelligent entity inspects a data-set, does some processing, and produces a result. Efficient intelligences do better than brute-force, and effective ones produce desirable results (by whatever metric). I don't need to define 'creativity' for that to be true: it's true of numerical addition, chess, and yes, of writing poetry, composing music, and battling it out on Soylent News :P

    We really don't need to define understanding here unless you're a machine yourself.

    Disagree. If we're not going to give it a reasonably clear definition, we can't meaningfully reason about it.

    Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

    That doesn't sound right. In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

    The earliest computer scientists never dreamed that computers could do what they do today. Let's not constrain ourselves to the limitations of modern hardware and software, when we're really concerned with the principle.

    Humans are not incapable; they're born capable, it simply takes them time to acquire

    So we humans are stateful machines capable of online machine learning, and we're pre-programmed with certain biases? Well sure. But I'm not seeing the 'in principle' difference between us and computers.
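
    Purely as a toy sketch of what I mean by 'a stateful machine doing online learning' (illustrative Python, not a model of any real system; the initial bias, learning rate, and data are made up):

        # Toy online learner: its state persists between observations, and the
        # initial weight plays the role of a 'pre-programmed bias'.
        def make_learner(initial_bias=0.5, learning_rate=0.1):
            state = {"w": initial_bias}               # the learner's only memory

            def observe(x, target):
                prediction = state["w"] * x           # act on current beliefs
                error = target - prediction           # compare with the world
                state["w"] += learning_rate * error * x   # update beliefs in place
                return prediction

            return observe

        learn = make_learner()                        # 'born' with a bias, then adapts
        for x, y in [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]:
            learn(x, y)                               # learning one example at a time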

    These have to be pre-programmed because you cannot release potentially dangerous machines into the wild with absolutely no idea what kind of solution they're going to come up with for a given problem.

    People do that all the time. They're called 'parents'. We end up assigning moral responsibility to their new 'release', of course. Sometimes we even ritualise doing so. [wikipedia.org]

    Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

    Humans are very effective general learning machines. In principle, computers are capable of everything we're capable of. The universe supports the functionality of the human brain (obviously), and there's nothing magic about our substrate (neurons) vs theirs (transistors).

    Or do you really think that it would be impossible, even just in principle, to simulate the human brain using transistors?

    We already know that the inverse is possible: brains can simulate computers, it just takes an impractically long time. (That's why we build the things, after all.)
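
    To make that concrete, here's a toy sketch (Python, purely illustrative): a minimal Turing-style rule-follower. Nothing in it cares whether the steps are carried out by transistors or by a patient human with pencil and paper; the substrate only changes the speed.

        # A tiny Turing-style machine: a rule table, a tape, a head position.
        # (state, symbol) -> (symbol to write, head movement, next state)
        RULES = {
            ("flip", "0"): ("1", +1, "flip"),
            ("flip", "1"): ("0", +1, "flip"),
            ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
        }

        def run(tape):
            cells = list(tape) + ["_"]
            head, state = 0, "flip"
            while state != "halt":
                write, move, state = RULES[(state, cells[head])]
                cells[head] = write
                head += move
            return "".join(cells).rstrip("_")

        print(run("100110"))   # -> "011001", regardless of what executes the rules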

    If they are pre-programmed, that's human understanding not machine understanding.

    The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

    Existence may technically be finite but it's finite beyond ability to describe in given our storage limitations.

    I don't get you.

  • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:34PM (2 children)

    In principle, a sufficiently powerful computer should be able to simulate the human brain, no?

    No. The programmers would have to understand precisely how the human brain worked for that to be possible. Neuroscientists don't even have this level of understanding at the moment. And if you don't think it needs to be precise, ask the fine folks over at Wine [winehq.org] to school you on emulation.

    Well sure. But I'm not seeing the 'in principle' difference between us and computers.

    Which is what I'd been trying to correct. It appears I've failed. Such is life.

    Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?

    Abso-fucking-lutely. Try running mesa in software rendering mode sometime and you'll see why. Or try running a PS3 emulator on an x86 processor that's only clocked twice as fast as the original hardware. Apples and oranges matters a hell of a lot.

    The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?

    No. The appearance of understanding is not the same as understanding. You can teach any fool enough of a raindance to be able to wire their house for electricity without teaching them why they're doing what they're doing. You're not going to let them loose as a licensed electrician like that, though, because they're either going to die or cause other people to die. Thus the apprenticeship and licensing requirements for electricians; we demand that they understand, not just ape what they see others doing. I'm not saying there's no utility in teaching a machine a raindance, or in letting it raindance things of little importance or danger on its own, but the capability is simply not the same, only the outcome.

    I don't get you.

    No, you don't. Which is why you should not be monkeying with AI. Ever.

    --
    My rights don't end where your fear begins.
    • (Score: 2) by Wootery on Wednesday May 30 2018, @02:09PM (1 child)

      by Wootery (2341) on Wednesday May 30 2018, @02:09PM (#686268)

      Come on Buzz, I was quite clear: in principle. It gets us nowhere for you to write about how difficult it would be. Those points are short-sighted and, frankly, rather obvious. You may as well remind me that no-one has yet successfully simulated a human brain.

      It remains that there is no reason in principle that transistors can't do what neurons can.

      Do you really want to assert that this is beyond the capability of any Turing-complete machine? It seems absurd. It's a physical process. You think physical processes can't be modelled by sufficiently powerful Turing-complete systems? We're not talking about nondeterministic quantum phenomena here.

      Do you think it would be impossible even in principle for any computer system to simulate the brain of a wasp? It's the same physical process at work, just in miniature.

      It appears I've failed.

      You've given me no good reason to believe that the physical processes of the brain follow special rules which are beyond the capabilities of any hypothetical Turing-complete computer, regardless of power.

      This is an extraordinary claim, but you've got nothing to support it.

      Try running mesa in software rendering mode sometime and you'll see why

      Again, you're writing about the performance challenges that face the computer systems of today. What's relevant is whether the computational simulation of a brain is possible in principle, and there seems to me to be every reason to think that the answer is yes: modern computers are quite capable of modelling physical processes, so I see no reason to assume the physical processes of the brain are categorically impossible to model computationally.
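
      As a toy example of what 'modelling a physical process' means here (an illustrative sketch only, nowhere near real brain simulation; the constants are textbook-style placeholders), a leaky integrate-and-fire neuron is just a differential equation stepped forward in time:

          # Leaky integrate-and-fire neuron, Euler-stepped:
          #   dV/dt = (-(V - V_rest) + R * I) / tau
          def simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                       v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
              v, spike_times = v_rest, []
              for step, i_in in enumerate(input_current):
                  dv = (-(v - v_rest) + resistance * i_in) / tau
                  v += dv * dt
                  if v >= v_thresh:            # threshold crossed: record a spike
                      spike_times.append(step * dt)
                      v = v_reset              # and reset the membrane
              return spike_times

          print(simulate([2.0] * 2000))        # constant input -> regular spiking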

      The appearance of understanding is not the same as understanding.

      'Appearance' means behaviour. You're going with a definition of 'understanding' which isn't based on how something acts? I'm reminded of the Chinese Room Argument. You need to explain what you do mean by 'understanding' (as does Searle for that matter).

      the capability is simply not the same, only the outcome

      If two things are equally able to bring about some desired outcome, we say they have equal capability. When discussing capability, we don't care if they have brains or not.

      I remind you of what you said earlier:

      We really don't need to define understanding here unless you're a machine yourself. Computers are incapable of it because it requires more than executing op codes and reading memory addresses.

      Are you saying that 'real understanding' needs consciousness? If so, just say so.

      You seem to be saying that no computer system can be said to have 'understanding', even if it behaves exactly the same way a human behaves. Presumably then you think physical brains are metaphysically magical? No matter what computers do, it still doesn't count!

      • (Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @05:34PM

        In theory, cracking 2^1024-bit EC encryption is possible. You'd just need a universe that was going to last a lot longer than ours. Now, I don't dislike science fiction, but I do prefer to leave it on novel pages or a screen until something gives us an idea that it might actually be possible. Neither current nor proposed hardware has given us any indication of being able to approximate human intelligence. Computers are extremely good at being high-speed idiots but extremely bad at being anything else.
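
        Back-of-envelope (Python; the guess rate is a made-up, generous assumption, and I'm reading that keyspace conservatively as 2^1024 keys rather than a literal 2^1024-bit key):

            # Rough arithmetic behind "a universe that lasts a lot longer than ours".
            # Hypothetical brute-forcer testing 1e18 keys per second; keyspace read
            # conservatively as 2**1024 (a literal 2^1024-bit key would be far worse).
            keyspace = 2 ** 1024
            guesses_per_second = 10 ** 18            # assumed, and absurdly generous
            seconds_per_year = 365.25 * 24 * 3600
            age_of_universe_years = 1.38e10          # ~13.8 billion years

            years = keyspace / guesses_per_second / seconds_per_year
            print(f"{years:.2e} years "
                  f"(~{years / age_of_universe_years:.2e} universe lifetimes)")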

        --
        My rights don't end where your fear begins.