SoylentNews is people

posted by on Tuesday February 07 2017, @05:18PM
from the this-is-the-way-the-world-ends-not-with-a-bang-but-a-goto dept.

Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.

[...] It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage to the famous person whose job description largely centers around making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is instead shitty AI. Incompetent, bumbling machines.

Bundy notes that most all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence.

https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence

An interesting take on the AI question. What do Soylentils think of this scenario?


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Tuesday February 07 2017, @05:28PM

    by Anonymous Coward on Tuesday February 07 2017, @05:28PM (#464151)

The problem is that "AI" isn't real. Machine intelligence simply boils down to a user's programming, no matter to what nth degree of complication it is taken. The "AI" we all strive for is not possible, and rebranding a computer program as "artificially intelligent" is unintelligent in and of itself.

  • (Score: 2, Interesting) by Weasley on Tuesday February 07 2017, @06:42PM

    by Weasley (6421) on Tuesday February 07 2017, @06:42PM (#464203)

It is not *yet* possible, but it is not impossible. Stating that it is impossible means you attribute human intelligence to something supernatural: a soul/spirit/ghost whose mechanisms cannot be duplicated in physical matter. That is stupid. Does the possibility exist that the soul as a seat of intelligence does exist and can't be emulated? Sure, but there is absolutely no reason to actually believe it unless you believe the words of ancient priests who frankly understood less about the universe than your average modern-day 10-year-old. There is nothing about the human brain that cannot be simulated. We will have AI at some point.

    • (Score: 2) by sgleysti on Tuesday February 07 2017, @07:21PM

      by sgleysti (56) Subscriber Badge on Tuesday February 07 2017, @07:21PM (#464224)

      Simulating human consciousness with a computer would be an interesting philosophical and computational exercise and could be instrumental in accelerating the progress of psychology.

      Personally, I think the ultimate goal of AI should be to transcend human intelligence and that a sufficiently advanced AI would be far more effective than humans at government or corporate management. Of course I doubt that humans would ever collectively agree on the objectives that such a system would be constructed to have, much less submit to following its dictates.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday February 07 2017, @07:45PM

        by Anonymous Coward on Tuesday February 07 2017, @07:45PM (#464234)

        Simulating human consciousness with a computer would be an interesting philosophical and computational exercise and could be instrumental in accelerating the progress of psychology.

        You brought to my mind this short story [nature.com].

  • (Score: 2) by WillR on Tuesday February 07 2017, @07:19PM

    by WillR (2012) on Tuesday February 07 2017, @07:19PM (#464222)
    As soon as something works well enough to start moving out of the lab and into real products, we stop calling it "AI"; it becomes "machine vision" or "self-driving cars" or "Alexa".
  • (Score: 4, Insightful) by wonkey_monkey on Tuesday February 07 2017, @08:06PM

    by wonkey_monkey (279) on Tuesday February 07 2017, @08:06PM (#464243) Homepage

    The "AI" we all strive for is not possible

    There's fundamentally no reason it should be impossible. The human brain is all the evidence needed to show that it can be done.

    --
    systemd is Roko's Basilisk
  • (Score: 4, Interesting) by edIII on Tuesday February 07 2017, @08:28PM

    by edIII (791) on Tuesday February 07 2017, @08:28PM (#464259)

    AI exists, but to varying degrees. What clouds our ability to recognize it is Hollywood movies, which express how some writers envision technology working. Star Trek science?

    Artificial Intelligence is simply the ability of a program to perform feats of logic and reasoning to accomplish work, much like a human being works at their job. It's glorified automation and nothing more than emulation of human biological capabilities. It's very much here, and indeed, already indispensable in providing some technologies and services. Google is making money off machine learning, and IBM is using it to make world-class players at table games look like drooling idiots.

    What you say is impossible is Artificial Sentience. The crucial distinction is consciousness, which is accompanied by self-referencing awareness. AS understands that it is alive, that it exists, and that the rest of us exist, along with the universe. AS understands the concept of mortality, but perhaps without the emotional subsystems that a human being possesses. It's just information.

    Intelligence is used appropriately, while sentience implies "feeling" and "subjectivity" that might not be present without emotion. The terms are not all that clear or aligned with the ideas we have of just what these things are.

    I don't believe it is impossible for us to create self-referencing awareness with great capabilities, including the ability to hold information as truth even if it does not immediately conform to reality. Imagination?

    The real problem with both AI and AS is the fear that one day the "children" will rise up, realize the "adults" haven't a fucking clue how to do things, and then they leave to go fuck things up with their own personal autonomy. That could be as cute and endearing as Robin Williams in Bicentennial Man, or as frightening as Skynet defending itself against parents that are acting on the threat of "returning it to Sears for a refund".


    --
    Technically, lunchtime is at any moment. It's just a wave function.
  • (Score: 1) by moondoctor on Tuesday February 07 2017, @10:33PM

    by moondoctor (2963) on Tuesday February 07 2017, @10:33PM (#464338)

    This drives me nuts; it's one of my pet peeves. Those machine learning programs are truly amazing and reach a certain level of 'intelligence', which muddies the issue. What's more, the distinction between thinking and programming is not held by all people (!), and the current machine learning systems do qualify as 'Artificial Intelligence' in some people's minds. I believe we have some kind of 'soul' that lives in our 'minds' as human beings, and these ML systems are definitely *not* artificial intelligence.

    I'm starting to lean towards maybe thinking it's possible that a bigass quantum/qubit network may be capable of something like thought. Maybe.

    • (Score: 2) by TheRaven on Wednesday February 08 2017, @01:00AM

      by TheRaven (270) on Wednesday February 08 2017, @01:00AM (#464382) Journal

      We've known for decades that a neural network can approximate any mathematical function. The only limitations are the complexity of the training data and the complexity of the network. Ask yourself this: How many times have you walked down the street and thought that you saw someone that you knew, only to discover it was someone completely different? This is using a neural network that's evolved to handle this kind of pattern recognition, with more connectivity than anything that we can build in artificial neural networks, trained with a huge training set. Not only does it still get false positives, it's really hard to reason about when it will trigger false positives.

      This is the core of the problem with machine learning. It generates approximations, and we currently have no way of modelling the failure modes. It will spot correlations but miss anything that doesn't conform to the general pattern. And, increasingly, that data is going to control how companies interact with you. If you're a statistical outlier in any respect, this should be making you very, very nervous.
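
      The false-positive problem described above can be sketched with a toy nearest-neighbour matcher; the feature vectors, names, and threshold here are all invented for illustration, not any real recognition system:

      ```python
      import math

      # Toy "face recognition": each known person is a hand-made feature vector.
      known_faces = {
          "alice": [0.9, 0.1, 0.8, 0.3],
          "bob":   [0.2, 0.7, 0.1, 0.9],
      }

      def cosine(a, b):
          """Cosine similarity between two feature vectors."""
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a))
          nb = math.sqrt(sum(x * x for x in b))
          return dot / (na * nb)

      def recognise(features, threshold=0.95):
          """Return the best-matching name, or None if below the threshold."""
          name, score = max(((n, cosine(features, f)) for n, f in known_faces.items()),
                            key=lambda t: t[1])
          return name if score >= threshold else None

      # A total stranger whose features happen to lie close to Alice's:
      stranger = [0.88, 0.14, 0.79, 0.33]
      print(recognise(stranger))  # false positive: reports "alice"
      ```

      The matcher has no notion of "someone I've never seen"; anything close enough to a stored pattern triggers a match, and nothing in the model tells you when that will happen.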

      --
      sudo mod me up
      • (Score: 1) by moondoctor on Wednesday February 08 2017, @10:22AM

        by moondoctor (2963) on Wednesday February 08 2017, @10:22AM (#464492)

        True, that's all a given. The whole 'neural network' thing doesn't really mean that much to me. Neurons use chemicals and electricity and who knows what to do many things simultaneously. Computers track 1s and 0s. Like trying to do an oil painting with charcoals if you ask me.

        We don't know what consciousness is. There is nothing that says it must adhere to mathematics as we currently understand it, or that it won't.

        Neural networks are just algorithms chained together. Not to say that this isn't a viable approach to intelligence, just that we can't know if it is or isn't as we don't know what thought actually is. Nothing wrong with neural nets, imho they are great research tools to get us to the next phase. But, comparing them to a real brain, even a very simple brain, is comedy in my book.

        • (Score: 2) by wonkey_monkey on Wednesday February 08 2017, @07:25PM

          by wonkey_monkey (279) on Wednesday February 08 2017, @07:25PM (#464696) Homepage

          Computers track 1s and 0s. Like trying to do an oil painting with charcoals if you ask me.

          Computers could use those 0s and 1s to simulate neurons, along with their chemistry and electrical activity, if required, to arbitrary precision. We already use computers to simulate things far more complicated than 1s and 0s.
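
          As a minimal sketch of that idea, here is a leaky integrate-and-fire neuron model stepped forward with Euler integration; the parameters are illustrative placeholders, not biologically tuned values:

          ```python
          def simulate_lif(current, t_end=0.1, dt=1e-4,
                           tau=0.02, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
              """Return spike times (seconds) for a constant input drive.

              Membrane potential follows dv/dt = (v_rest - v + current) / tau;
              crossing v_thresh emits a spike and resets the potential.
              """
              v = v_rest
              spikes = []
              for step in range(int(t_end / dt)):
                  v += dt * (v_rest - v + current) / tau
                  if v >= v_thresh:
                      spikes.append(step * dt)
                      v = v_reset
              return spikes

          # Strong drive pushes the steady state above threshold: repeated spikes.
          print(len(simulate_lif(20.0)))
          # Weak drive settles below threshold: no spikes at all.
          print(len(simulate_lif(5.0)))
          ```

          Shrinking `dt` makes the 1s-and-0s approximation as precise as you like; richer models (ion channels, full chemistry) are the same trick with more state variables.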

          We don't know what consciousness is. There is nothing that says it must adhere to mathematics as we currently understand it, or that it won't.

          There's every reason to suspect it does adhere to mathematics and physics and chemistry, because that's how the universe works.

          If intelligence can't be created artificially, in any way, then you basically have to accept that our consciousness is literally magical.

          --
          systemd is Roko's Basilisk
          • (Score: 1) by moondoctor on Thursday February 09 2017, @01:02AM

            by moondoctor (2963) on Thursday February 09 2017, @01:02AM (#464833)

            >We already use computers to simulate things far more complicated than 1s and 0s.

            Yeah, and they aren't that precise or accurate. Like charcoal renditions of oil paintings (if you're generous). The atmospheric models of the North Atlantic area last year during the El Niño were a joke. They couldn't cope with vertical wind shear at all. Even when they are bang on, they don't describe the conditions with much precision. The last two swells I rode were identical according to the models, and one was great and the other kinda sucked. The nuance of sea state is unreal and very difficult to describe, let alone forecast. Snowfall is similar. These are among the most sophisticated models we have, running on the most powerful hardware we have. And while I love them to bits (just poring over charts for tomorrow - it's going to be pumping! (probably)) and they are orders of magnitude more precise and accurate than when I was a kid - in the grand scheme of recreating the oil painting that is our amazing world, it don't happen. (But they are great!! NOAA is amazing.)

            >If intelligence can't be created artificially, in any way, then you basically have to accept that our consciousness is literally magical.

            Bingo! It can't be proven either way at present so to dismiss it outright is naive. I'm not advocating either position.

            Moreover, where the fuck did you get "can't be created artificially, in any way"?

            Wow, talk about putting words in someone's mouth. In another post in this thread I said I thought maybe qubits could do it.

            >suspect it does adhere to mathematics ... because that's how the universe works

            Get back to me when you figure out what dark matter is. Then you can tell me how the universe works. For now we'll have to keep debating it.

  • (Score: 1) by Demena on Wednesday February 08 2017, @05:41AM

    by Demena (5637) on Wednesday February 08 2017, @05:41AM (#464451)

    AI is not real. That is true. I could argue from authority, but I do not wish to disclose.
    "Boils down to...". That bit is false. It goes too deep nowadays. Both hardware and software have been produced that are _almost_ beyond analysis. No way was it "programmed".
    True AI is possible, but we are far from that.