

posted by on Tuesday February 07 2017, @05:18PM   Printer-friendly
from the this-is-the-way-the-world-ends-not-with-a-bang-but-a-goto dept.

Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.

[...] It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage to the famous person whose job description largely centers around making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is instead shitty AI. Incompetent, bumbling machines.

Bundy notes that most all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence.

https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence

An interesting take on the AI question. What do Soylentils think of this scenario?


Original Submission

 
  • (Score: 1) by moondoctor on Tuesday February 07 2017, @10:33PM

    by moondoctor (2963) on Tuesday February 07 2017, @10:33PM (#464338)

    This drives me nuts; it's one of my pet peeves. Those machine learning programs are truly amazing and reach a certain level of 'intelligence', which muddies the issue. What's more, the distinction between thinking and programming is not drawn by everyone (!), so the current machine learning systems do qualify as 'Artificial Intelligence' in some people's minds. I believe we have some kind of 'soul' that lives in our 'minds' as human beings, and these ML systems are definitely *not* artificial intelligence.

    I'm starting to lean towards maybe thinking it's possible that a bigass quantum/qubit network may be capable of something like thought. Maybe.

  • (Score: 2) by TheRaven on Wednesday February 08 2017, @01:00AM

    by TheRaven (270) on Wednesday February 08 2017, @01:00AM (#464382) Journal

    We've known for decades that a neural network can approximate any mathematical function. The only limitations are the complexity of the training data and the complexity of the network. Ask yourself this: How many times have you walked down the street and thought that you saw someone that you knew, only to discover it was someone completely different? This is using a neural network that's evolved to handle this kind of pattern recognition, with more connectivity than anything that we can build in artificial neural networks, trained with a huge training set. Not only does it still get false positives, it's really hard to reason about when it will trigger false positives.

    This is the core of the problem with machine learning. It generates approximations, and we currently have no way of modelling the failure modes. It will spot correlations but miss anything that doesn't conform to the general pattern. And, increasingly, that data is going to control how companies interact with you. If you're a statistical outlier in any respect, this should make you very, very nervous.
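    To make the point concrete, here's a toy sketch in plain NumPy (entirely illustrative, not anyone's production code): a tiny one-hidden-layer network fits sin(x) well on the interval it was trained on, but far outside that interval its tanh units saturate and it extrapolates to a near-constant value. That's the "statistical outlier" failure mode in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sin(x), sampled only on [-pi, pi]
X = rng.uniform(-np.pi, np.pi, (256, 1))
y = np.sin(X)

# One hidden layer of 32 tanh units -- enough, per the universal
# approximation theorem, to fit sin on a bounded interval
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse_before = float(np.mean((pred0 - y) ** 2))

lr = 0.1
for _ in range(5000):
    h, pred = forward(X)
    err = (pred - y) / len(X)                # mean-squared-error gradient
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(0)

_, pred1 = forward(X)
mse_after = float(np.mean((pred1 - y) ** 2))
print(f"MSE before: {mse_before:.4f}  after: {mse_after:.4f}")

# The fit holds only where the training data lived: far outside
# [-pi, pi] the saturated tanh units give a near-constant output.
_, far = forward(np.array([[20.0]]))
print(f"net(20.0) = {far[0, 0]:+.3f}, sin(20.0) = {np.sin(20.0):+.3f}")
```

    Nothing about the trained weights tells you *where* the approximation stops being trustworthy, which is exactly the reasoning-about-false-positives problem above.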

    --
    sudo mod me up
    • (Score: 1) by moondoctor on Wednesday February 08 2017, @10:22AM

      by moondoctor (2963) on Wednesday February 08 2017, @10:22AM (#464492)

      True, that's all a given. The whole 'neural network' thing doesn't really mean that much to me. Neurons use chemicals, electricity, and who knows what to do many things simultaneously. Computers track 1s and 0s. Like trying to do an oil painting with charcoals, if you ask me.

      We don't know what consciousness is. There is nothing that says it must adhere to mathematics as we currently understand it, or that it won't.

      Neural networks are just algorithms chained together. Not to say that this isn't a viable approach to intelligence, just that we can't know if it is or isn't as we don't know what thought actually is. Nothing wrong with neural nets, imho they are great research tools to get us to the next phase. But, comparing them to a real brain, even a very simple brain, is comedy in my book.

      • (Score: 2) by wonkey_monkey on Wednesday February 08 2017, @07:25PM

        by wonkey_monkey (279) on Wednesday February 08 2017, @07:25PM (#464696) Homepage

        Computers track 1s and 0s. Like trying to do an oil painting with charcoals if you ask me.

        Computers could use those 0s and 1s to simulate neurons, along with their chemistry and electrical activity, if required, to arbitrary precision. We already use computers to simulate things far more complicated than 1s and 0s.
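        For what it's worth, a minimal sketch of what "simulating a neuron with 1s and 0s" looks like: a leaky integrate-and-fire model, where membrane voltage decays toward rest, integrates input current, and fires at a threshold. All parameter values here are illustrative round numbers, not measurements.

```python
# Toy leaky integrate-and-fire neuron; parameters are illustrative.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # millivolts
TAU, DT = 10.0, 0.1                               # milliseconds

def simulate(current, steps):
    """Return spike times (ms) for a constant input current (arbitrary units)."""
    v, spikes = V_REST, []
    for step in range(steps):
        # Euler step of dv/dt = (V_REST - v + current) / TAU
        v += DT * (V_REST - v + current) / TAU
        if v >= V_THRESH:
            spikes.append(step * DT)
            v = V_RESET  # spike, then reset
    return spikes

print(simulate(20.0, 1000))  # supra-threshold drive: regular spiking
print(simulate(5.0, 1000))   # sub-threshold drive: no spikes at all
```

        Crude, but it's continuous dynamics computed entirely in binary arithmetic; refine the model and shrink the time step as far as your hardware allows.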

        We don't know what consciousness is. There is nothing that says it must adhere to mathematics as we currently understand it, or that it won't.

        There's every reason to suspect it does adhere to mathematics and physics and chemistry, because that's how the universe works.

        If intelligence can't be created artificially, in any way, then you basically have to accept that our consciousness is literally magical.

        --
        systemd is Roko's Basilisk
        • (Score: 1) by moondoctor on Thursday February 09 2017, @01:02AM

          by moondoctor (2963) on Thursday February 09 2017, @01:02AM (#464833)

          >We already use computers to simulate things far more complicated than 1s and 0s.

          Yeah, and they aren't that precise or accurate. Like charcoal renditions of oil paintings (if you're generous). The atmospheric models of the North Atlantic area last year during the El Niño were a joke; they couldn't cope with vertical wind shear at all. Even when they are bang on, they don't describe the conditions with much precision. The last two swells I rode were identical according to the models, and one was great and the other kinda sucked. The nuance of sea state is unreal and very difficult to describe, let alone forecast. Snowfall is similar. These are among the most sophisticated models we have, running on the most powerful hardware we have. And while I love them to bits (just poring over charts for tomorrow - it's going to be pumping! (probably)), and they are orders of magnitude more precise and accurate than when I was a kid, in the grand scheme of recreating the oil painting that is our amazing world, it don't happen. (But they are great!! NOAA is amazing.)

          >If intelligence can't be created artificially, in any way, then you basically have to accept that our consciousness is literally magical.

          Bingo! It can't be proven either way at present so to dismiss it outright is naive. I'm not advocating either position.

          Moreover, where the fuck did you get "can't be created artificially, in any way"?

          Wow, talk about putting words in someone's mouth. In another post in this thread I said I thought maybe qubits could do it.

          >suspect it does adhere to mathematics ... because that's how the universe works

          Get back to me when you figure out what dark matter is. Then you can tell me how the universe works. For now we'll have to keep debating it.