The hysteria about the future of artificial intelligence (AI) is everywhere. There seems to be no shortage of sensationalist news about how AI could cure diseases, accelerate human innovation and improve human creativity. Just looking at the media headlines, you might think that we are already living in a future where AI has infiltrated every aspect of society.
While it is undeniable that AI has opened up a wealth of promising opportunities, it has also led to the emergence of a mindset that can be best described as "AI solutionism". This is the philosophy that, given enough data, machine learning algorithms can solve all of humanity's problems.
But there's a big problem with this idea. Instead of supporting AI progress, it actually jeopardises the value of machine intelligence by disregarding important AI safety principles and setting unrealistic expectations about what AI can really do for humanity.
In only a few years, the pendulum has swung from the dystopian notion that AI will destroy humanity to the utopian belief that our algorithmic saviour is here.
[...] Examples demonstrate that there is no AI solution for everything. Using AI simply for the sake of AI may not always be productive or useful. Not every problem is best addressed by applying machine intelligence to it. This is the crucial lesson for everyone aiming to boost investments in national AI programmes: all solutions come with a cost and not everything that can be automated should be.
What is your take on this? Do you think AI (as currently defined), can solve any of the problems, man-made and otherwise, of this world?
(Score: 2) by Wootery on Tuesday May 29 2018, @06:13PM (3 children)
Seems to me you've introduced another rather unclear and loaded term ('intuition'), and are gradually building a somewhat half-baked theory of mind.
Disagree. If we're not going to give it a reasonably clear definition, we can't meaningfully reason about it.
That doesn't sound right. In principle, a sufficiently powerful computer should be able to simulate the human brain, no?
The earliest computer scientists never dreamed that computers could do what they do today. Let's not constrain ourselves to the limitations of modern hardware and software, when we're really concerned with the principle.
So we humans are stateful machines capable of online machine learning, and we're pre-programmed with certain biases? Well sure. But I'm not seeing the 'in principle' difference between us and computers.
People do that all the time. They're called 'parents'. We end up assigning moral responsibility to their new 'release', of course. Sometimes we even ritualise doing so. [wikipedia.org]
Anyway, surely your point about safety is really a matter of behaviour, and system-correctness, no? Does it matter whether the implementation uses software or hard-wiring?
Humans are very effective general learning machines. In principle, computers are capable of everything we're capable of. The universe supports the functionality of the human brain (obviously), and there's nothing magic about our substrate (neurons) vs theirs (transistors).
Or do you really think that it would be impossible, even just in principle, to simulate the human brain using transistors?
We already know that the inverse is possible: brains can simulate computers, it just takes an impractically long time. (That's why we build the things, after all.)
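That point is easy to make concrete: running a computer just means following a rulebook step by step, and a patient human can do that by hand. A minimal sketch of such a rulebook (the machine, states, and tape here are invented purely for illustration):

```python
# A tiny Turing machine: flips every bit on the tape, then halts.
# Transition table: (state, symbol) -> (symbol to write, head move, next state)
table = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape, state="flip", pos=0):
    cells = list(tape) + ["_"]          # append a blank so we always halt
    while state != "halt":
        write, move, state = table[(state, cells[pos])]
        cells[pos] = write
        pos += move
    return "".join(cells).rstrip("_")

print(run("10110"))  # -> 01001
```

Nothing in that table requires silicon; a person with pencil and paper could execute the same transitions, just absurdly slowly.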
The interesting thing there is the generality, no? If the machine is pre-programmed to be good at driving, but it can then learn to be effective at poetry (i.e. without manually imposed code-changes), it still 'counts', no?
I don't get you.
(Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @12:34PM (2 children)
No. The programmers would have to understand precisely how the human brain worked for that to be possible. Neuroscientists don't even have this level of understanding at the moment. And if you don't think it needs to be precise, ask the fine folks over at Wine [winehq.org] to school you on emulation.
Which is what I'd been trying to correct. It appears I've failed. Such is life.
Abso-fucking-lutely. Try running Mesa in software rendering mode sometime and you'll see why. Or try running a PS3 emulator on an x86 processor that's only clocked twice as fast as the original hardware. Apples and oranges matters a hell of a lot.
No. The appearance of understanding is not the same as understanding. You can teach any fool enough of a raindance to be able to wire their house for electricity without teaching them why they're doing what they're doing. You're not going to let them loose as a licensed electrician like that, though, because they're either going to die or cause other people to die. Thus the apprenticeship and licensing requirements for electricians; we demand that they understand, not just ape what they see others doing. I'm not saying there's no utility in teaching a machine a raindance, or in letting it raindance things of little importance or danger on its own, but the capability is simply not the same, only the outcome.
No, you don't. Which is why you should not be monkeying with AI. Ever.
My rights don't end where your fear begins.
(Score: 2) by Wootery on Wednesday May 30 2018, @02:09PM (1 child)
Come on Buzz, I was quite clear: in principle. It gets us nowhere for you to write about how difficult it would be. Those points are short-sighted and, frankly, rather obvious. You may as well remind me that no-one has yet successfully simulated a human brain.
It remains that there is no reason in principle that transistors can't do what neurons can.
Do you really want to assert that this is beyond the capability of any Turing-complete machine? It seems absurd. It's a physical process. You think physical processes can't be modelled by sufficiently powerful Turing-complete systems? We're not talking about nondeterministic quantum phenomena here.
Do you think it would be impossible even in principle for any computer system to simulate the brain of a wasp? It's the same physical process at work, just in miniature.
You've given me no good reason to believe that the physical processes of the brain follow special rules which are beyond the capabilities of any hypothetical Turing-complete computer, regardless of power.
This is an extraordinary claim, but you've got nothing to support it.
Again, you're writing about the performance challenges that face the computer systems of today. What's relevant is whether the computational simulation of a brain is possible in principle, and there seems to me to be every reason to think that the answer is yes: modern computers are quite capable of modelling physical processes, so I see no reason to assume the physical processes of the brain are categorically impossible to model computationally.
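To make the 'modelling physical processes' point concrete: a single neuron's membrane dynamics are routinely approximated with a simple differential equation. A toy sketch using a leaky integrate-and-fire model (the parameters are made up for illustration; this is not a claim about simulating a real brain):

```python
# Toy leaky integrate-and-fire neuron, integrated with Euler steps.
# Membrane voltage drifts toward the input current; when it crosses
# the threshold, the neuron "spikes" and resets.

def simulate_lif(current=2.0, dt=0.1, steps=1000,
                 v_rest=0.0, v_thresh=1.0, tau=10.0):
    v = v_rest
    spikes = 0
    for _ in range(steps):
        v += (-(v - v_rest) + current) * (dt / tau)
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after a spike
    return spikes

print(simulate_lif())  # stronger input current -> more spikes
```

Whole-brain simulation is a vastly harder engineering problem, but nothing about the underlying dynamics puts it beyond a Turing-complete machine in principle.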
'Appearance' means behaviour. You're going with a definition of 'understanding' which isn't based on how something acts? I'm reminded of the Chinese Room Argument. You need to explain what you do mean by 'understanding' (as does Searle for that matter).
If two things are equally able to bring about some desired outcome, we say they have equal capability. When discussing capability, we don't care if they have brains or not.
I remind you of what you said earlier:
Are you saying that 'real understanding' needs consciousness? If so, just say so.
You seem to be saying that no computer system can be said to have 'understanding', even if it behaves exactly the same way a human behaves. Presumably then you think physical brains are metaphysically magical? No matter what computers do, it still doesn't count!
(Score: 2) by The Mighty Buzzard on Wednesday May 30 2018, @05:34PM
In theory cracking 2^1024-bit EC encryption is possible. You'd just need a universe that was going to last a lot longer than ours. Now I don't dislike science fiction but I do prefer to leave it on novel pages or a screen until something gives us an idea that it might actually be possible. Neither current nor proposed hardware has given us any indication of being able to approximate human intelligence. Computers are extremely good at being high-speed idiots but extremely bad at being anything else.
My rights don't end where your fear begins.
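For what it's worth, the 'longer than the universe' framing checks out on the back of an envelope, even reading the number conservatively as a search space of 2^1024 keys. A rough sketch (the guess rate is an arbitrary, generous assumption):

```python
# Back-of-envelope: brute-forcing a 2^1024 keyspace.
keyspace = 2 ** 1024
guesses_per_second = 10 ** 18        # generous exascale-ish assumption
seconds_per_year = 60 * 60 * 24 * 365
universe_age_years = 13.8e9

years_needed = keyspace / guesses_per_second / seconds_per_year
# How many times over the universe's current age that is:
print(years_needed / universe_age_years)
```

The ratio comes out on the order of 10^272, so "a universe that was going to last a lot longer than ours" is an understatement.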