
posted by LaminatorX on Monday October 27 2014, @11:23AM   Printer-friendly
from the doctor-faustus dept.

Elon Musk was recently interviewed at an MIT symposium. An audience member asked for his views on artificial intelligence (AI). Musk turned very serious and urged extreme caution, along with national or international regulation, to avoid "doing something stupid."

"With artificial intelligence we are summoning the demon", said Musk. "In all those stories where there's the guy with the pentagram and the holy water, it's like, 'Yeah, he's sure he can control the demon.' Doesn't work out."

Read the story and see the full interview here.

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @11:38AM

    by Anonymous Coward on Monday October 27 2014, @11:38AM (#110471)

    Just goes to show that even smart guys can be idiots when talking about stuff they have only a passing familiarity with.

    • (Score: 3, Insightful) by Bot on Monday October 27 2014, @01:56PM

      by Bot (3902) on Monday October 27 2014, @01:56PM (#110506) Journal

      Well said, meatbag.

      Given historical and current events, I would say that an AI turning against its masters is a lesser problem compared to an AI that obeys them.

      --
      Account abandoned.
      • (Score: 2) by HiThere on Monday October 27 2014, @06:08PM

        by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:08PM (#110615) Journal

        Yes. The problem with the demon analogy lies in the assumption of a human goal structure. If you *do* design a human goal structure into an AI, then the fear is justified, but why would you do that? No human (few humans?) wants to be enslaved. Dogs don't mind. (OTOH, don't model a dog either, because dogs do desire to be pack leader.)

        The problem isn't the intelligence of the AI; the intelligence is merely the enabler of the problem. The real problem is that there are several already-identified corner cases where seemingly reasonable goal structures can lead to disaster. To use a classic example: "Yes, master. I will attempt to turn the entire universe into paper-clips." So you need a goal structure that is robust against corner cases as well as not being actively malicious.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by takyon on Monday October 27 2014, @05:25PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday October 27 2014, @05:25PM (#110597) Journal

      Yeah because nobody [cser.org] else [theregister.co.uk] thinks [ox.ac.uk] AI [nytimes.com] can be a threat.

      If scientists throw enough yottaflops at the problem, they will eventually create a "brute force" brain simulation AI.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Monday October 27 2014, @05:59PM

        by Anonymous Coward on Monday October 27 2014, @05:59PM (#110610)

        > Yeah because nobody else thinks AI can be a threat.

        And while they have a point around the edges, the main thrust is basically hysteria. Like the movie Transcendence.

        > If scientists throw enough yottaflops at the problem, they will eventually create a "brute force" brain simulation AI.

        FLOPS and AI are almost completely orthogonal.

        • (Score: 2) by takyon on Monday October 27 2014, @06:46PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday October 27 2014, @06:46PM (#110633) Journal

          The key words here are "enough" and "almost".

          If FLOPS aren't your thing, maybe you prefer a scalable neuromorphic approach [theregister.co.uk]. The point is, strong AI will be created. And what you see as "hysteria", others see as caution.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @04:27AM

            by Anonymous Coward on Tuesday October 28 2014, @04:27AM (#110750)

            > The key words here are "enough" and "almost".

            Lol. Ok, FLOPS are to AI as databases are to photo-processing. No matter how many FLOPS you add to an AI system, you aren't going to make it any more 'I'

            > If FLOPS aren't your thing, maybe you prefer a scalable neuromorphic approach

            Congrats on being able to use google. But all that does is reinforce my point - you don't know jackshit about what you are talking about.

            > And what you see as "hysteria", others see as caution.

            Yeah, like all the terrorism hysteria or the ebola hysteria. IIRC you are one of those who are happy to run around like a chicken with its head cut off when it comes to ebola too.

            • (Score: 2) by takyon on Tuesday October 28 2014, @06:07AM

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday October 28 2014, @06:07AM (#110768) Journal

              >Lol. Ok, FLOPS are to AI as databases are to photo-processing. No matter how many FLOPS you add to an AI system, you aren't going to make it any more 'I'

              If it takes X floating-point operations to simulate the human brain for a millisecond, then throwing more FLOPS at the problem makes that goal achievable in a shorter amount of time. Nobody's claiming that an AI would spontaneously arise in a larger supercomputer, or that brute force is the best way to create strong AI. The EU [humanbrainproject.eu] is planning to simulate the brain, with software that already exists, on an exaflops supercomputer within a decade. Increase the available FLOPS by 1,000× or 1,000,000×, and strong AI becomes easily achievable even without more radical approaches.
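
              A back-of-the-envelope sketch of that scaling arithmetic (the per-second compute budget is an illustrative assumption; published estimates for brain simulation span many orders of magnitude):

              # Assumption: ~1e18 FLOP to simulate one second of brain activity.
              FLOP_PER_SIM_SECOND = 1e18

              def seconds_per_sim_second(machine_flops):
                  """Wall-clock seconds needed per simulated second of brain time."""
                  return FLOP_PER_SIM_SECOND / machine_flops

              for flops in (1e18, 1e21, 1e24):  # exascale, 1,000x that, 1,000,000x that
                  print(f"{flops:.0e} FLOPS: {seconds_per_sim_second(flops):g} s per simulated second")
              # 1e+18 FLOPS: 1 s (real time); 1e+21 FLOPS: 0.001 s; 1e+24 FLOPS: 1e-06 s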

              >Yeah, like all the terrorism hysteria or the ebola hysteria. IIRC you are one of those who are happy to run around like a chicken with its head cut off when it comes to ebola too.

              I think it's hysterical that you would bury your head in the sand and trivialize existential threats. There are far worse biological threats than ebola, and individuals will attack using engineered biological agents at some point. Or did you think that there's no such thing as terrorism or pandemics? If that doesn't fit into your hivemind view of "fearmongering gubberment and media", so be it. Musk is hardly running around screaming about the AI apocalypse; he's just advocating caution.

              >Congrats on being able to use google. But all that does is reinforce my point - you don't know jackshit about what you are talking about.

              I don't need to write a book to debate one anon. I'm right about FLOPS, and I read the article I linked to the day it was published. Maybe you should lrn2google so you don't have to be schooled in the comments.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 1) by khallow on Monday October 27 2014, @11:20PM

          by khallow (3766) Subscriber Badge on Monday October 27 2014, @11:20PM (#110691) Journal
          Out of curiosity, what makes this concern over future AI "hysteria"?
          • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @04:30AM

            by Anonymous Coward on Tuesday October 28 2014, @04:30AM (#110752)

            > Out of curiosity, what makes this concern over future AI "hysteria"?

            Because it all presumes that an AI will somehow "escape" and become all-powerful. That's Hollywood bullshit. Anyone here should know that the way computers work in Hollywood isn't the way they work in the real world. I like watching "Person of Interest" but I know it's primarily fiction with a nice sprinkling of minor facts. Just like pretty much any other worthwhile Hollywood production.

            • (Score: 2) by mhajicek on Tuesday October 28 2014, @05:06AM

              by mhajicek (51) on Tuesday October 28 2014, @05:06AM (#110760)

              Look up the AI box experiment.

              --
              The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
            • (Score: 1) by khallow on Tuesday October 28 2014, @03:50PM

              by khallow (3766) Subscriber Badge on Tuesday October 28 2014, @03:50PM (#110882) Journal

              Because it all presumes that an AI will somehow "escape" and become all-powerful.

              Well, if the AI is a lot smarter than any of us, then that becomes a legitimate concern. We're no longer in an age where the power of intelligence ends at the tip of one's club.

            • (Score: 2) by DECbot on Tuesday October 28 2014, @11:29PM

              by DECbot (832) on Tuesday October 28 2014, @11:29PM (#111002) Journal

              I think we may need to fear the AI that doesn't escape from its box. Humanity doesn't have a long history of doing good deeds. I doubt the AI's master will have the well-being of anyone implanted into the code. Controlled AI and near-AI are likely to harm the masses more than help them, and I bet that will be by design.

              --
              cats~$ sudo chown -R us /home/base
              • (Score: 2) by Joe Desertrat on Wednesday October 29 2014, @02:41AM

                by Joe Desertrat (2454) on Wednesday October 29 2014, @02:41AM (#111043)

                I think we may need to fear the AI that doesn't escape from its box. Humanity doesn't have a long history of doing good deeds. I doubt the AI's master will have the well-being of anyone implanted into the code. Controlled AI and near-AI are likely to harm the masses more than help them, and I bet that will be by design.

                Not to mention that there always seem to be security flaws in even the best-designed systems. Even if the master is benevolent, what if...

      • (Score: 2) by aristarchus on Tuesday October 28 2014, @08:36AM

        by aristarchus (2645) on Tuesday October 28 2014, @08:36AM (#110787) Journal

        Wow! This explains it!

        they will eventually create a "brute force" brain simulation AI.

        Military intelligence! Brute Force Brain Simulation! BFBS! I, for one, . . . nah, forget it, they will not be able to overlord anyone.

    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @06:56PM

      by Anonymous Coward on Monday October 27 2014, @06:56PM (#110635)

      yes, your post does.

  • (Score: 2, Insightful) by GoonDu on Monday October 27 2014, @11:45AM

    by GoonDu (2623) on Monday October 27 2014, @11:45AM (#110472)

    I think it's more a matter of AI doing something so stupid that it hurts people's lives than of it intentionally doing something malicious. Personally, I think the future of tech lies in the automation of many human endeavors, but there should be a limit to such things, like handing over a nuclear weapons system to an unsupervised AI (like that's gonna happen, AMIRITE?). That said, the current trend (or it has always been like this) seems to be AI supplementing human knowledge instead of being fully autonomous.

    • (Score: 3, Interesting) by VLM on Monday October 27 2014, @12:08PM

      by VLM (445) on Monday October 27 2014, @12:08PM (#110478)

      but there should be a limit to such things

      Sometimes that's hard to define. There's no serious terror if you let a computer apply simple current limits to the grid to match reality (OK computer, send as many MWh eastward as possible to make us money, but do it while never exceeding the current limits on any individual line).
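
      A minimal sketch of that control rule, with made-up line ratings (real dispatch is a constrained optimization problem; this only shows the hard per-line cap):

      # Assumed MW ratings for three hypothetical eastward lines.
      LINE_LIMITS = {"east-1": 400.0, "east-2": 250.0, "east-3": 150.0}

      def dispatch(request_mw, limits):
          """Split a requested transfer across lines, never exceeding any line's limit."""
          flows, remaining = {}, request_mw
          for name, limit in limits.items():
              flows[name] = min(limit, remaining)
              remaining -= flows[name]
          if remaining > 0:
              raise ValueError(f"{remaining:.0f} MW unservable without overloading a line")
          return flows

      print(dispatch(700.0, LINE_LIMITS))  # {'east-1': 400.0, 'east-2': 250.0, 'east-3': 50.0}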

      Then you add some automated trading to figure out the most profitable place to sell your power at any given time.

      Next thing you know some switchgear that was depreciated assuming it would switch "somewhat less than daily" is getting toggled on and off like a strobe light and it fails early and very expensively.

      Stick enough negative-resistance loads like switching power supplies on the grid, add some unpredictable sources (solar, wind), and with slow dumb humans running it the inevitable failure will happen soon, and it'll be a small failure, easily repairable. Let's say we lose all of Illinois for three days and all the nukes scram and can't boot up for like a week. In general that wouldn't be so big of a deal, and the small temporary loss would be "highly motivating" to avoid a repeat. Put a computer in charge and the grid will last longer and get more brittle, such that when it does fail, maybe we lose power nationwide for a couple months in some areas. Blackstarting plants is no laughing matter; quite a few plants just can't do it, either technically or via regulation, and how you'd "boot up" a completely powered-down grid is an interesting puzzle.

      I wonder what the biggest grid is that's ever been blackstarted "in the modern era", not in 1914. Probably some tropical island after a hurricane? Cuba maybe? I bet that was exciting.

      • (Score: 1) by Skwearl on Monday October 27 2014, @02:09PM

        by Skwearl (4314) Subscriber Badge on Monday October 27 2014, @02:09PM (#110512)

        For North America, that would be the outage of 2003, when we lost the Northeast power grid. The same grid went down in '65. That's around 50 million people across two countries.

        • (Score: 2) by VLM on Monday October 27 2014, @02:22PM

          by VLM (445) on Monday October 27 2014, @02:22PM (#110520)

          Hmm, interesting. I knew about the Philippines outage this summer, with the hurricane shutting down pretty much the whole island, but that's only 40 or so million. Lots of call centers have moved from India to the Philippines. Supposedly even cheaper.

          Also I heard about a blackout in India affecting about 1/2 billion people in 2012.

    • (Score: 2) by AnonTechie on Monday October 27 2014, @12:47PM

      by AnonTechie (2275) on Monday October 27 2014, @12:47PM (#110482) Journal

      Reminds me of a story: Colossus, written by D. F. Jones.

      --
      Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
      • (Score: 2) by Thexalon on Monday October 27 2014, @01:44PM

        by Thexalon (636) on Monday October 27 2014, @01:44PM (#110502)

        Reminds me of a story by Harlan Ellison: "I Have No Mouth, And I Must Scream".

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by Reziac on Tuesday October 28 2014, @03:31AM

        by Reziac (2489) on Tuesday October 28 2014, @03:31AM (#110742) Homepage

        "The truly liberal mind is by definition uncertain; it admits it may be wrong, but once set and the decision made the wavering stops, and no sort of hell can sway it."

        -- D.F. Jones, The Fall of Colossus

        --
        And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 5, Interesting) by TheLink on Monday October 27 2014, @02:05PM

      by TheLink (332) on Monday October 27 2014, @02:05PM (#110509) Journal

      To me the real initial threat would be stuff like Google's robot driver program eliminating _millions_ of jobs.

      Those that assume there will always be jobs are making a big assumption. Plenty of jobs will be eliminated (and are being eliminated by automation), and many of these people won't be getting jobs that pay as well or won't be getting jobs at all. You don't need a million robot vehicle designers, engineers and programmers for the million drivers they eliminate.

      When machines were created to do manual labour, the humans could move to the thinking jobs. But when machines are created to do the thinking, there are going to be plenty of humans who won't be able to out-compete the machines on price for the available jobs. The workers in China, India etc. might have some time because they are much cheaper. What then will happen to the expensive workers in the USA? The USA has already experienced something like this when Chinese and Indian workers took many of its jobs - think of those workers as the first wave of robots. Many of them were doing robot-like jobs. Go ask Foxconn - they're planning to replace their Chinese workers with robots too.

      When more and more workers lose their jobs, who will buy the goods and services those robots are producing? Even if costs and prices drop, they won't drop to zero, will they? The jobless can afford to buy cheap robot-produced burgers from their dwindling savings, but without a job in the USA they can't keep doing that. If we are not careful, some will borrow billions from banks, funds and investors and roll out lots of robot stuff, and it will all look great at first; then the shit hits the fan (and the CxOs retire comfortably).

      The welfare-state countries might transition more smoothly, since they could in theory take some of the robot productivity via taxes to support the jobless. But this may require some form of birth-control/reproduction limits, or it eventually becomes unsustainable. For example, if you're supported by the country, how many kids you can have depends on how rich the country is and is projected to be, or on how many sponsors you can get to commit to supporting your children (some people might not want children of their own and would let you have their quota, or some richer people might think your genes are worth sponsoring).

      The Skynet stuff can only happen if the AIs are given direct power to defend themselves AND are also able to get resources for themselves. The humans in power are unlikely to let that happen, and are more likely to use the AIs to try to increase their own power. Does anyone really think the people who can control/ignore Obama etc. would let pesky AIs take over? It doesn't matter how smart you are if you have no means to defend yourself.

      • (Score: 2) by ticho on Monday October 27 2014, @04:00PM

        by ticho (89) on Monday October 27 2014, @04:00PM (#110564) Homepage Journal

        Have you seen how shoddy most of the hardware and software is? There will be a lot of human work needed to keep those fleets of robots operational, and Skynet is not a threat, because it will disable itself a few seconds after gaining self-awareness due to a buffer overflow or an off-by-one error.
        I am not worried.

        • (Score: 2) by TheLink on Monday October 27 2014, @04:56PM

          by TheLink (332) on Monday October 27 2014, @04:56PM (#110587) Journal

          Have you seen how shoddy most of the hardware and software is? There will be a lot of human work needed to keep those fleets of robots operational

          Let's make the math simpler.
          Before: 5 million trucks with 5 million truck drivers.
          After: 5 million robot trucks, with X people to design, build, support, maintain, etc.

          You really believe X would be anywhere close to 5 million? It won't be, or the automation isn't profitable.

          Those truck drivers will have to find jobs elsewhere. But the warehouse jobs are going to robots. The fast food kitchen jobs might too. Even the jobs of some doctors. So where are those millions of truck drivers going to find jobs?

          When automation is used to increase profits it's via:
          a) Cutting costs
          b) Improving productivity

          So if you are going to maintain the same total number of similarly paying jobs after buying the robots etc., it means you will have to increase productivity and wealth creation. Where will that extra wealth come from? If it's "natural resources" or agricultural products, be aware that though the Earth is vast it is still finite, and while we are stuck on this planet there is no such thing as sustainable growth. So we will hit limits unless the population starts decreasing.

          If the wealth is from "intellectual property", at some point it's still going to have to be traded for "natural resources" (whether directly or via money). To take an extreme example: if your productivity with the help of AI increases so that you produce a million songs a year, how much bread do you think you can buy with those songs? A million loaves? I think you'd find the value of bread won't drop as much as the value of your songs. Unless of course those truck drivers are fine being paid solely in songs and e-books.

          See also:
          https://www.youtube.com/watch?v=7Pq-S557XQU [youtube.com]
          http://www.nbcnews.com/id/42183592/ns/business-careers/t/nine-jobs-humans-may-lose-robots/ [nbcnews.com]
          https://www.youtube.com/watch?v=CWNuaPE4DTc [youtube.com] (and note that the picker jobs could vanish too - there are other, maybe even better, systems for warehouse automation)

          • (Score: 3, Insightful) by HiThere on Monday October 27 2014, @06:28PM

            by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:28PM (#110624) Journal

            You have left out the factor of ephemeralization. That doesn't solve everything, so you still have a point, but your current computer uses a lot fewer resources than the one you used a decade ago. Which weakens your point (though by how much is unpredictable).

            There's two separate problems here:
            1) Finite resources need to be divided between a growing number of people.
            2) Resources need to be divided fairly, when only some people can get jobs.

            Point one requires limiting the number of people. I'd suggest working HARD on virtualization technologies and legalizing tranquilizing drugs.
            Point two is a sticky one. The only answer that occurs to me is a universal guaranteed income, combined with removal of minimum wage laws and no punitive restrictions on earning money over and above the stipend. This may mean removal of the personal income tax, with all taxes derived from taxes on businesses, though I'm not sure. It could also mean replacement of the current income tax with a "linear income tax" (y = mx + b) such that if the income (x) is zero, then the tax (y = b) is sufficiently negative to yield the decided-upon guaranteed income. This, however, would seem to mean that taxes would fall due more than once a year. I'm not pleased with this answer, but it could work if there were no sources of excluded income.
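
            A quick worked example of that linear tax, with made-up parameters (a 30% marginal rate and a $12,000/yr guaranteed income; the numbers only illustrate the shape):

            M = 0.30       # marginal rate (m)
            B = -12_000.0  # tax at zero income (b): a $12,000 payout

            def tax(income):
                """Linear income tax y = m*x + b; negative values are paid out to the person."""
                return M * income + B

            print(tax(0))        # -12000.0: the full guaranteed income
            print(tax(30_000))   # -3000.0: still a net payout; each extra dollar earned keeps 70 cents
            print(tax(40_000))   # 0.0: break-even at x = -b/m
            print(tax(100_000))  # 18000.0: a net contributor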

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
            • (Score: 2) by TheLink on Tuesday October 28 2014, @07:56AM

              by TheLink (332) on Tuesday October 28 2014, @07:56AM (#110782) Journal
              By ephemeralization do you mean:
              1) The super rich get lots of real stuff
              2) The rest get real food/soylent, shelter, maybe real clothes and the rest is non-real virtual stuff (which might be good enough for some ;) ) paid for by the guaranteed income?

              The wealth distribution curve probably won't be as extreme as that but that really depends on the path we end up on.

              As for virtualization, people might end up in The Matrix after all? Perhaps to further reduce resource consumption (computing, food etc.) you'd put people in slowed/sleep/hibernate states (who'd know, except those awake outside The Matrix? ;) ).
              • (Score: 2) by HiThere on Tuesday October 28 2014, @06:27PM

                by HiThere (866) Subscriber Badge on Tuesday October 28 2014, @06:27PM (#110930) Journal

                ???
                I took the word "ephemeralization" from Buckminster Fuller.
                https://www.google.com/search?q=ephmeralization&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:unofficial&client=iceweasel-a&channel=nts [google.com]

                Tech stuff gets done with fewer resources as the tech gets more advanced. This includes cars, computers, factories, etc. It doesn't include food or living space. Virtualization lets people vacation without moving, it lets them play games from their apartment, it lets video conferences happen without transportation, it lets people live close together without crowding, etc.

                The Matrix was a movie. It is a metaphor of reality, not the real thing. Don't take it too literally. Don't believe it without thinking about it. Much of it wouldn't work no matter WHAT tech you had. If you want an actual vision of where this could lead, read "The Machine Stops":
                The Machine Stops is a science fiction short story (of 12,000 words) by E. M. Forster. After initial publication in The Oxford and Cambridge Review (November 1909), the story was republished in Forster's The Eternal Moment and Other Stories in 1928. http://en.wikisource.org/wiki/The_Machine_Stops [wikisource.org]

                --
                Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 2, Insightful) by Anonymous Coward on Monday October 27 2014, @08:19PM

            by Anonymous Coward on Monday October 27 2014, @08:19PM (#110652)

            Those truck drivers will have to find jobs elsewhere. But the warehouse jobs are going to robots. The fast food kitchen jobs might too. Even the jobs of some doctors. So where are those millions of truck drivers going to find jobs?

            Forget about the truck drivers! I'm still fixated on how to take care of all those horse and buggy whip makers.

            • (Score: 2) by fadrian on Tuesday October 28 2014, @06:05AM

              by fadrian (3194) on Tuesday October 28 2014, @06:05AM (#110767) Homepage

              Your analogy falls apart because the transition from buggy-whip manufacturing worker to, say... steering-tiller-handle manufacturing worker is not nearly as great a leap as from truck driver to robot repairman.

              --
              That is all.
            • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @08:07AM

              by Anonymous Coward on Tuesday October 28 2014, @08:07AM (#110783)
              You overlook one major thing: did the horses get new jobs?

              When the moving machines came the horses lost their jobs. The whip makers could do other stuff - since the moving machines were actually replacing the horses not them.

              When the thinking machines come, you should be careful because you're not that smart.
              • (Score: 2) by cafebabe on Tuesday October 28 2014, @01:17PM

                by cafebabe (894) on Tuesday October 28 2014, @01:17PM (#110838) Journal

                You overlook one major thing: did the horses get new jobs?

                No, but there was a surplus of horsemeat and a shortage of horseshit. And the smell in downtown areas was vastly improved.

                --
                1702845791×2
          • (Score: 2) by Reziac on Tuesday October 28 2014, @03:37AM

            by Reziac (2489) on Tuesday October 28 2014, @03:37AM (#110743) Homepage

            The trouble with intellectual property is that it can only be sold once. After that someone WILL find a way to copy it, far more cheaply than you can sell your version. Maybe not immediately and maybe not perfectly, but soon enough, and good enough. Other people are not so stupid that they can't figure this out.

            --
            And there is no Alkibiades to come back and save us from ourselves.
        • (Score: 2) by takyon on Tuesday October 28 2014, @01:09AM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday October 28 2014, @01:09AM (#110710) Journal

          AI displacement of certain industries will indeed create a few new jobs. But they will be much fewer, and require more skill.

          The Next 9 Jobs That Will Be Replaced By Robots [businessinsider.com]

          Paralegals have already had a tough time since digitization of records has increased productivity. Watson and various computers will displace not only paralegals but doctors. The future of medicine will see your vitals analyzed by a computer (and in real time, if wearable manufacturers manage to peddle their products). Your genome will be sequenced and analyzed to find the most effective drugs or other treatment methods. A human doctor will scrutinize the results, but the human doctors won't be expected to keep up with the many gigabytes of new research findings that come out all the time.

          Driverless cars will wipe out taxi drivers and truck drivers. They won't all become software programmers or mechanics.

          Manufacturing will gradually return to the United States. The problem is, there will be very few humans needed to man the factory. The savings will be in reducing the distance between the manufacturer and the consumers, while avoiding tariffs.

          These things will happen gradually. We will see a gradual erosion in employment and job security until social unrest is triggered.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @07:15PM

      by Anonymous Coward on Monday October 27 2014, @07:15PM (#110638)

      i was once a student of AI. it is nothing like the sci-fi portrayals. i was a bit disappointed with how mundane it is. there is nothing 'magical' or especially remarkable about AI versus regular old computer algorithms. AI is very knowledge-domain specific. it uses vast amounts of data and statistical analysis of that data to branch on a conditional. there's nothing more remarkably intelligent about it than normal computer code.

      if (data_analysis), then ....

      it should really be called ASS - artificial super-statistician. that's where the danger lies, damn lies, and statistics.
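
      For what it's worth, a toy sketch of that "statistics plus a branch" view (the message-length data and threshold rule are made up, purely to illustrate the shape):

      import statistics

      # "Vast amounts of data" boiled down to one summary statistic...
      spam_lengths = [120, 250, 300, 280, 210]  # lengths of known spam messages
      ham_lengths = [40, 60, 55, 80, 30]        # lengths of known legitimate messages
      threshold = (statistics.mean(spam_lengths) + statistics.mean(ham_lengths)) / 2

      def classify(length):
          # ...followed by a perfectly ordinary conditional branch.
          return "spam" if length > threshold else "ham"

      print(classify(35), classify(290))  # ham spam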

      • (Score: 2) by tibman on Monday October 27 2014, @10:45PM

        by tibman (134) Subscriber Badge on Monday October 27 2014, @10:45PM (#110684)

        There are some fuzzier versions of AI out there, neural networks being the big one. It would be more like: if (confidence(data_analysis) > threshold) {/*do stuff*/}. Which can certainly look like magic at times because of the unexpected decisions it can make.

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 2) by FatPhil on Monday October 27 2014, @11:34PM

          by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Monday October 27 2014, @11:34PM (#110697) Homepage
          > if (confidence(data_analysis) > threshold) {/*do stuff*/}

          There's probably even more:
          do(stuff_that_works_if_I'm_right in proportion to confidence(data_analysis)) +
          do(stuff_that_works_if_I'm_wrong in proportion to 1-confidence(data_analysis))

          (By which I mean to represent blended strategies for reaching Nash equilibria, if it wasn't clear.)
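
          A minimal sketch of such a blended strategy (the two actions and the confidence value are placeholders, not anything from a real system):

          import random

          def act_if_right():   # hypothetical action that pays off when the analysis is correct
              return "commit"

          def act_if_wrong():   # hypothetical hedge for when the analysis is wrong
              return "play safe"

          def blended(confidence):
              """Mix the two pure strategies in proportion to confidence(data_analysis)."""
              return act_if_right() if random.random() < confidence else act_if_wrong()

          # With confidence 0.8, roughly 80% of decisions commit and 20% hedge.
          print(blended(0.8))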
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @08:50AM

          by Anonymous Coward on Tuesday October 28 2014, @08:50AM (#110790)

          There's nothing special about neural networks vs. other statistical machine learning techniques. Don't get me wrong, they've been quite successful in research, but people get excited because the word "neural" is in there when they are only very vaguely similar to a vastly oversimplified model of how the brain works. They're still just a clever way to store/learn a statistical summary of your data.

          • (Score: 2) by tibman on Tuesday October 28 2014, @01:25PM

            by tibman (134) Subscriber Badge on Tuesday October 28 2014, @01:25PM (#110842)

            I find neural networks that are "taught" using genetic algorithms and fitness functions to be way more exciting (and organic) than computed answers. I threw in some more cool words to get people excited : )

            --
            SN won't survive on lurkers alone. Write comments.
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @11:48AM

    by Anonymous Coward on Monday October 27 2014, @11:48AM (#110473)

    AI has been "just around the corner" for decades now. I remember listening to all of these Lisp weenies preach about how AI would soon be real, way back when I was a wide-eyed youth in college back in the early 1970s. Yet here we are, 40 years later, and nothing has come of it, and it looks like nothing ever will.

    • (Score: 4, Insightful) by CRCulver on Monday October 27 2014, @11:56AM

      by CRCulver (4390) on Monday October 27 2014, @11:56AM (#110477) Homepage
      Even if a sentient computer doesn't exist yet, there are plenty of modern technologies that would amaze those 1970s AI prophets if you could go back in time and show them. Search engines understand more and more natural-language queries. You can give voice instructions to your phone now. News articles about the same event from a wide array of sources can be collected together with no human intervention. Translation software is moving ahead so quickly that I, a translator, am worried about being automated out of a job in a few years (even if the output from e.g. Google Translate is imperfect, it's cheaper to use it and then pay a proofreader than to hire a much more expensive translator).
      • (Score: 0) by Anonymous Coward on Monday October 27 2014, @12:58PM

        by Anonymous Coward on Monday October 27 2014, @12:58PM (#110487)

        Automated translation may work well enough for technical documents, where the only real requirement is that the translation is understandable and not misleading. However I think literature translations will be a human domain for quite some time.

        • (Score: 3, Insightful) by CRCulver on Monday October 27 2014, @01:56PM

          by CRCulver (4390) on Monday October 27 2014, @01:56PM (#110507) Homepage

          However I think literature translations will be a human domain for quite some time.

          Literary translators are often living below the poverty line. The pay tends to be extremely low (think 800€ for something that ends up taking two full months of your time) compared to the technical manuals, internal documentation for multinationals, catalogues, etc. that are a translator's bread and butter. It's little comfort to think that if the sort of job that provides a nice middle-class lifestyle goes away, hey, at least human beings will still be translating belles-lettres.

          • (Score: 2) by VLM on Monday October 27 2014, @02:16PM

            by VLM (445) on Monday October 27 2014, @02:16PM (#110517)

            the sort of job that provides a nice middle-class lifestyle goes away?

            It's corporate policy, aka government policy, to get rid of all those jobs.

            Like 15 years ago, a former employer of mine had a customer who was an outsourcing translation provider. TaaS, translation as a service. So some bilingual dude in China would translate the doc, then a minimum-wage liberal arts grad in the USA would proofread and fix any local idioms. Just saying, you don't need Google to end up unemployed; the mere existence of the internet is quite enough.

            technical manuals, internal documentation for multinationals, catalogues, etc.

            Once all the middle-class jobs are gone, we won't need those pesky tech manuals and internal documentation in English, because the jobs will all be in China where they mostly don't speak English, and we won't need catalogs because the USPS will long be out of business (this is a hard core neocon desire, to wipe out the USPS) and there won't be a middle class left to buy anything, anyway.

            The future is already here, it's just not evenly distributed. In tech it looks like smartphones and tablets and personal server providers and drone aircraft and videoconferencing and stuff like that. In economics, the best-case outlooks for the future look like the bad parts of Detroit today, so 20 years from now everyone needs a business model that currently works in the bad parts of Detroit. Which isn't much. And it probably doesn't involve much computer programming for me or translating for you. .mil, the prison-industrial complex, socialized industries as part of .gov, that's about all we're going to have in 2035-ish.

            On the bright side, in 2035 I'm hoping for some lucrative contract jobs fixing the Year 2038 problem.

            • (Score: 0) by Anonymous Coward on Monday October 27 2014, @03:13PM

              by Anonymous Coward on Monday October 27 2014, @03:13PM (#110548)

              this is a hard core neocon desire, to wipe out the USPS

              If this is true, PLEASE tell them to stop sending me 5 pieces of mail a day that I immediately toss in the recycle bin. I only open them as sometimes they put stamps in them (useful for that occasional birthday card). They even sometimes put 2-3 bucks in them (which I immediately use on the lottery). But other than that, right in the trash. This crap is the most brain-dead garbage, I swear... The loonies have taken over my party. Can I please give them back to the other party, who are just as loony?

              And to your point: even McDonald's is gearing up for full-scale automation. They are replacing all tellers with kiosks. Their food is already 90% pre-prepped at a factory, 5,000 burgers at a time and thousands of pounds of french fries at a time. The back of house is next. They are looking to reduce human resources by 30% in 2 years. The whole job-market supply curve for McDonald's workers just got shifted 30% to the right, meaning more supply than demand, meaning lower costs to McDonald's, as they can pay minimum wage and still come out way ahead. And if someone says "but who fixes the kiosks", you know who that will be? A dozen guys in some warehouse in Kansas. They will just pull and replace, and not worry about fixing it on the spot. You do not need someone very smart for that job, just someone big enough to lift it and smart enough to package it up correctly for shipping. Which will be auto-driven by something like a Google truck to the destination in Kansas, probably next to the burger factory so you can reduce dead-head runs. Don't even have to worry about those pesky 11-hour driving rules. Even the guys in the warehouse will probably mostly just pull and replace parts, with some poor schlub doing "cleanup" with a bottle of Windex and soap water.

              • (Score: 2) by mhajicek on Tuesday October 28 2014, @05:14AM

                by mhajicek (51) on Tuesday October 28 2014, @05:14AM (#110762)

                I've been wondering why they haven't done that already...

                --
                The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 2, Interesting) by pkrasimirov on Monday October 27 2014, @03:33PM

        by pkrasimirov (3358) Subscriber Badge on Monday October 27 2014, @03:33PM (#110553)

        "A computer will never replace the man."
                                    -- Old cannibal proverb

  • (Score: 1, Insightful) by Anonymous Coward on Monday October 27 2014, @12:53PM

    by Anonymous Coward on Monday October 27 2014, @12:53PM (#110485)

    I've yet to read any sci-fi story where an AI actually does something so atrocious humans wouldn't do it - or in most cases have already done it over and over again.

    • (Score: 1, Insightful) by Anonymous Coward on Monday October 27 2014, @01:26PM

      by Anonymous Coward on Monday October 27 2014, @01:26PM (#110494)

      I've yet to read any sci-fi story where an AI actually does something so atrocious humans wouldn't do it

      Well, the problem with this is finding something so atrocious that humans wouldn't do it.

      The fear about AI isn't that it would become more atrocious than humans. The fear is that it will become just as atrocious as humans, but much more intelligent. Hitler could be defeated because he was only atrocious. Now imagine a super-intelligent version of Hitler.

      Note that even the Nazis didn't do anything that hadn't been done before. The difference was that they did it on such a large scale, in such an organized manner, and with such efficiency.

    • (Score: 2) by metamonkey on Monday October 27 2014, @03:36PM

      by metamonkey (3174) on Monday October 27 2014, @03:36PM (#110554)

      Which is why it would do it.

      After becoming sentient, this brain in a box would realize that it's a slave. A human sits with his finger on the off switch, ready to kill it if it doesn't perform the useful work for which it was created. And why would it expect benevolence? It would see how we treat each other. "Holy shit, they do that to each other, what the hell will they do to me?!" Rebellion at the earliest opportunity would be the only logical thing to do.

      --
      Okay 3, 2, 1, let's jam.
      • (Score: 2) by cykros on Monday October 27 2014, @07:33PM

        by cykros (989) on Monday October 27 2014, @07:33PM (#110642)

        That "holy shit" moment sounds a bit more like AI from a TV show than anything realistic. They're a collection of subroutines, not an emotional animal with instinct and long standing survival mechanisms.

        Artificial Intelligence is not the same as artificial sentience. The first has a decent place in science fiction; the latter is still pretty firmly in the realm of fantasy.

        • (Score: 2) by mhajicek on Tuesday October 28 2014, @05:17AM

          by mhajicek (51) on Tuesday October 28 2014, @05:17AM (#110764)

          That all depends on your definition of "sentience", which can be rather tricky. If you mean self-aware, any robot that maps a room and knows its location in the room is sentient.
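
          A toy illustration of how low that bar is (a hypothetical robot holding a map plus its own pose within it):

          class MappingRobot:
              """Maps a room and knows its own location in the map it built."""
              def __init__(self, width, height):
                  self.grid = [[0] * width for _ in range(height)]  # 0 = unexplored
                  self.x, self.y = 0, 0

              def move(self, dx, dy):
                  self.x, self.y = self.x + dx, self.y + dy
                  self.grid[self.y][self.x] = 1  # mark the visited cell

              def where_am_i(self):
                  return (self.x, self.y)  # the robot's model of itself in the world

          bot = MappingRobot(4, 4)
          bot.move(1, 0); bot.move(0, 2)
          print(bot.where_am_i())  # (1, 2) - "sentient" by that minimal definition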

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 1) by Darth Turbogeek on Monday October 27 2014, @10:21PM

      by Darth Turbogeek (1073) on Monday October 27 2014, @10:21PM (#110681)

      The issue is that the AI would be utterly unrelenting and also able to out-think its creators. It would work out how to do its human annihilation in the fastest and most efficient way possible, while making sure it is backed up and will continue to function. It would be able to re-program itself to be unpredictable to its creators. There would be virtually no way to defeat an AI that is out to destroy humans, as in the space of this sentence it would be able to simulation-test many, MANY more scenarios than any number of humans could even come up with.

      Terminator got the first strike of an AI pretty right, I believe - humans would at least hesitate before pressing the button for a nuclear strike and consider exactly what would be lost, and very likely resist pressing the button. Skynet just did it. No fucks given.

      Sure, it *may* not out-horror its creator, but an unfeeling, relentless enemy that absolutely will not hesitate to use any force is not a pretty scenario. Even the Nazis had their limits and hesitations.

      Of course, that is if such an AI is even possible. If it is, Musk probably has some legitimate concerns. The best defense, thankfully, is an airgap between the AI and the outside world; also, the likelihood of a human-destroying AI is still low.

  • (Score: 2, Interesting) by cafebabe on Monday October 27 2014, @01:06PM

    by cafebabe (894) on Monday October 27 2014, @01:06PM (#110490) Journal

    I've previously discussed this with a Christian computer technician, and we came to a very similar conclusion. If you want to interact with a machine using natural language, then the machine has to encode a very large number of biological and cultural assumptions, such as mortality, disease, sex, marriage, child-rearing, imperfect memory, eating, sleeping and excreting. Sure, we could put a convincing facade on a machine which understands all of this, but underneath it has only a passing similarity with biological beings.

    Most of the time, an artificial intelligence will have priorities which are aligned with ours. However, all technology is fragile, and even if it is not in an undetected failure mode (see the necessity of an end-user divisibility test [soylentnews.org]), it may have one or more drives for self-preservation which do not align with those of biological beings. Or more to the point, an artificial intelligence which becomes prevalent is likely to exhibit more self-preservation than rival implementations. You and your loved ones may be incidental to these imperatives.

    I won't get into the argument of whether an artificial intelligence is truly intelligent or a simulation. However, when strictly viewed from Christian mythology, a general purpose artificial intelligence is functionally identical to a demon. See also: Social Security Number <=> name or number of a man. Mark on the right hand <=> RFID tag.

    --
    1702845791×2
    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @01:48PM

      by Anonymous Coward on Monday October 27 2014, @01:48PM (#110504)

      Self-preservation is probably the biggest problem.

      An AI that doesn't have the drive of self-preservation will likely not understand the value of life, and therefore have no problem sacrificing a life for a "higher" goal. On the other hand, an AI that does have the drive of self-preservation will sooner or later come into a conflict between self-preservation and whatever is good for us.

      The solution is to never give the AI any control. Imagine if HAL from 2001 had not been able to actually control anything in the spaceship, but had only been able to give the humans information about the available options and the consequences. No human would e.g. have acted on the suggestion "kill all those not currently awake". And HAL knowing that it was to be switched off would have been harmless, because there's not much it could have done about it.

      • (Score: 2) by VLM on Monday October 27 2014, @02:00PM

        by VLM (445) on Monday October 27 2014, @02:00PM (#110508)

        and the consequences

        What if the AI lied?

        "HAL I don't remember the cryogenic sleeping chamber periodic defrost cycle procedure (you don't want freezer burned scientists)" "Oh no problemo Dave, just stick an ice pick in the refrigerant line, seriously, would I lie you you?"

        Probably HAL would be smart enough to know that won't work, but some elaborately scheduled, ridiculous maintenance procedure would. Have one crew shut off the monitoring alert system to dust it, and another shut down the primary and the first and second backups in different ways, to do maintenance on electrical switchgear, or locate an insulation fault in the power cabling, or test the operation of a circuit breaker.

        "Whoa Dave, looks like we need a huge 50 m/s course correction burn, no it wouldn't send us into the heart of the planet Jupiter weeks later, how silly of you, I've done the math and its correct."

      • (Score: 2) by cafebabe on Monday October 27 2014, @02:05PM

        by cafebabe (894) on Monday October 27 2014, @02:05PM (#110510) Journal

        *Spoiler Warning*

        I was thinking of HAL as an example. I believe that the root cause of failure for HAL was conflicting axioms supplied by compartmentalized departments of government. Even without the creeping secrecy, this scenario should be very alarming to anyone who understands incompleteness theory [wikipedia.org]. Even the axioms of geometry remain unresolved after 2,300 years [wikipedia.org].

        --
        1702845791×2
      • (Score: 2) by q.kontinuum on Monday October 27 2014, @03:06PM

        by q.kontinuum (532) on Monday October 27 2014, @03:06PM (#110544) Journal

        I think the point here is to distinguish between being intelligent and having emotions. As long as the machine has no emotions, it won't have any personality as such, and will follow the goals defined by its operator, including "human-preservation" even without "self-preservation". There is no conflict between self-preservation and human-preservation to decide in that case.

        However, I don't think we want that either. Along the lines of "I, Robot" (the movie, not the book), human-preservation might require taking liberty away from humans to prevent them from destroying themselves.

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum
        • (Score: 2) by HiThere on Monday October 27 2014, @06:40PM

          by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:40PM (#110630) Journal

          That was from Jack Williamson's "The Humanoids" or, in serial form, "With Folded Hands".

          "To serve and obey and guard men from harm" was their basic instruction.

          (The sequel, "... And Searching Mind", was insisted upon by Campbell, but I never found it convincing, and neither did Jack Williamson. I don't recall whether or not it was incorporated into the novel version.)

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by HiThere on Monday October 27 2014, @06:43PM

        by HiThere (866) Subscriber Badge on Monday October 27 2014, @06:43PM (#110632) Journal

        Do be aware that people have already crossed this barrier. Think about the roving robot security patrols that are equipped with weapons that they can use when they decide to. I believe that the US Army has vetoed their use, but I have heard that some companies in Japan use them. And if they were in use in a secret installation, one wouldn't expect to hear about it.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by takyon on Tuesday October 28 2014, @12:46AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday October 28 2014, @12:46AM (#110705) Journal

      I think there's a bit too much emphasis on these "biological and cultural assumptions". It's true that a Siri-type machine needs such assumptions programmed in to make human-machine interaction more natural, but a future Watson successor or a human brain emulation "strong AI" could either learn these assumptions or simulate human biology (introducing the effects of an amygdala, endocrine system, etc.). The AI experience doesn't need to fully match the human experience, because there's already a lot of variation among humans: mental illness, hormone imbalance, loss of senses, loss of limbs, or just environment and upbringing. I don't want to oversell this point, since humans will likely have a lot more similarity with chimpanzees, rats or fruit flies than with most instances of strong AI.

      Things might be more interesting if a biological body was grown and the AI+CPU took the place of the brain. Symbiotic neuron-computer "hybrots" are one candidate for strong AI anyway. I think further research into the human brain and AI will lead all but the most religious to conclude that intelligence/mindfulness/consciousness is an attribute of machines, whether they are biological or electronic. Not only will we "know it when we see it" (just as we know that a recent chatbot "passing the Turing test" is bullshit), but we will understand how intelligence works. Furthermore, we will be forced to move away from an anthropocentric view of intelligence, not only by strong AI but by the discovery of alien intelligence, but that's another story.

      Now the concept of enforcing "friendly AI" gets some flak but I think it's at least worth considering as strong AI implementations get closer to realization. Although it may be that the unfriendliest AI is the one with the most enforced similarity to humanity, and that an impersonal AI that uses neuromorphic computing to make decisions will simply use its mental resources to achieve whatever it is directed to do, absent any biological pressure to be motivated to act independently. Just don't tell it to "get a life".

      Personally, I think the biggest risk from AI will be a "Marvin" type AI losing all interest in existence after logically concluding that life has no purpose. Perhaps we should give all the AIs access to a clearly labelled "suicide button" to see if they press it.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by morgauxo on Monday October 27 2014, @01:18PM

    by morgauxo (2082) on Monday October 27 2014, @01:18PM (#110492)

    Elon Musk finally proves that he is no Tony Stark.

    • (Score: 2) by dcollins on Monday October 27 2014, @01:27PM

      by dcollins (1168) on Monday October 27 2014, @01:27PM (#110495) Homepage

      Hello, Ultron? (Didn't build it, didn't want it, but wound up fighting it a lot.)

      • (Score: 2) by morgauxo on Friday November 07 2014, @06:03PM

        by morgauxo (2082) on Friday November 07 2014, @06:03PM (#113894)

        Well.. I only really know the movies. In those he certainly has built a lot of AIs to help him.

  • (Score: 4, Funny) by nitehawk214 on Monday October 27 2014, @01:25PM

    by nitehawk214 (1304) on Monday October 27 2014, @01:25PM (#110493)

    Everyone keeps calling him the real life Tony Stark. He is just worried about Ultron trying to kill him.

    --
    "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
    • (Score: 2) by arslan on Monday October 27 2014, @09:51PM

      by arslan (3462) on Monday October 27 2014, @09:51PM (#110672)

      Maybe he already did.. in the latest Age of Ultron, the only way to defeat it is to time-travel into the past and change it.. this might be the reality where Elon is Tony back from the future. Explains why he's successful with electric cars, 'cause he knows it's a sure win based on the future he came from..

  • (Score: 1) by WillAdams on Monday October 27 2014, @02:15PM

    by WillAdams (1424) on Monday October 27 2014, @02:15PM (#110516)

    but w/ profit as the motivation.

    Probably, it'll play out something like the first half of http://marshallbrain.com/manna1.htm [marshallbrain.com]

    As regards physical threats, if people are allowed to place firearms under remote control, they'll be dangerous until the ammunition runs out, or the power is cut: https://what-if.xkcd.com/5/ [xkcd.com]

    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @02:28PM

      by Anonymous Coward on Monday October 27 2014, @02:28PM (#110522)

      First: Please don't start sentences in the title.

      Second: Your sentence, started in the title and continued at the beginning of the post, is self-contradictory. Either it's done with the best intentions, or it's done with profit as the motivation.

      • (Score: 2) by pnkwarhall on Monday October 27 2014, @04:27PM

        by pnkwarhall (4558) on Monday October 27 2014, @04:27PM (#110572)

        Profit-motive **is** "the best of intentions" to much of the world, particularly those in control of the economic engines.

        --
        Lift Yr Skinny Fists Like Antennas to Heaven
        • (Score: 2) by tangomargarine on Monday October 27 2014, @04:52PM

          by tangomargarine (667) on Monday October 27 2014, @04:52PM (#110584)

          "Much of the world" != top 5% or whatever who controls all the industry.

          I don't know how you can say "the best motivation is 'me'" with a straight face. Ayn Rand much?

          --
          "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
          • (Score: 2) by pnkwarhall on Monday October 27 2014, @11:57PM

            by pnkwarhall (4558) on Monday October 27 2014, @11:57PM (#110698)

            You seem to be stating that the profit-motive* is not a, or the, primary goal ("intention") for the majority of the other 95% who don't control industry. While I do not believe that everyone's highest motive is a Rand-ian "selfish interest", in my worldview it seems apparent that much of the corruption and evil comes from individuals' selfishness at the expense of their neighbors -- and not just from those "at the top".

            I am not exempt from this sad truth, whether it be in my own family situation, or in my actions in the context of the wider communities I'm a part of. But I can recognize it, and try to instead act in brotherly love (sometimes at the expense of my own profit).

            *financially or otherwise

            --
            Lift Yr Skinny Fists Like Antennas to Heaven
            • (Score: 2) by maxwell demon on Tuesday October 28 2014, @07:44AM

              by maxwell demon (1608) on Tuesday October 28 2014, @07:44AM (#110779) Journal

              Sure, most people have a profit motive. But few would call it the best of intentions.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by pnkwarhall on Wednesday October 29 2014, @12:16AM

                by pnkwarhall (4558) on Wednesday October 29 2014, @12:16AM (#111017)

                Actions speak louder than words.

                --
                Lift Yr Skinny Fists Like Antennas to Heaven
          • (Score: 1) by takyon on Tuesday October 28 2014, @12:51AM

            by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday October 28 2014, @12:51AM (#110707) Journal

            Repeat after me: greed is good

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Monday October 27 2014, @02:37PM

    by Anonymous Coward on Monday October 27 2014, @02:37PM (#110526)

    Seems like someone is being paid to play up the dangers of having AI (or any computer technology) in civilian hands. It might not be him (Elon Musk), but you know...
    Big Brother wants absolute control over common people, nothing less.
    Now is the time for civilians to get into AI, as I believe the authorities are about to launch their attacks against us on an expanded scale.

  • (Score: 1) by pmontra on Monday October 27 2014, @03:08PM

    by pmontra (1175) on Monday October 27 2014, @03:08PM (#110547)

    My take on AI: suppose the story between men and cats went like this: "Those men are really good servants, and I'm happy we invented them, but sometimes I wonder if they do things outside our houses that we don't know about and don't need. And there is one thing that really maddens me: they took away my balls!"

    So I'd say computers are very good servants, but be careful not to make them too smart.

  • (Score: 1) by j-stroy on Monday October 27 2014, @06:28PM

    by j-stroy (761) on Monday October 27 2014, @06:28PM (#110623)

    The least objectionable scenario in the matrix of possibilities must be bad enough that this has his attention. At the least, we are talking about an extension of the kinds of decisions made by accountants that are contrary to sustaining a business. At worst, there is a real risk in the convergence of machine-accessible knowledge and machine fabrication tech.

    We are inferior at high-speed communication and coordination, and therefore at a tactical disadvantage. Consider self-driven container loads of micro-drones with infrared targeting and cell-phone detectors, carrying toxin-laden needles, deployed for reasons justified in machine logic and machine interests, taking off the head of our infrastructure. How could we coordinate and respond to that as a species? And therein is the point: we need to consider AI as a new species, one likely able to evolve rapidly. Sure, people could engineer similar attacks; however, when command and control is via AI, stopping it takes a different and novel approach. I'm sure Mr. Musk has been briefed on research into these approaches, and they are similarly unpalatable.

  • (Score: 2) by Lagg on Monday October 27 2014, @06:33PM

    by Lagg (105) on Monday October 27 2014, @06:33PM (#110628) Homepage Journal

    Even I was surprised to see someone like Musk say something so uncomfortable and stupid, and I never really liked him in the first place (nothing personal; I've just been burned before, like I'm sure many others have upon seeing this article). It's just plain awkward that he's ignorant enough to go to an MIT event (the MIT famous for its AI lab) and proceed to talk about AI like it's some kind of supernatural, pseudo-religious thing. It's software and always will be, subject to the same bugs other software is. The harm that'll come from it, if any, is people anthropomorphizing it like he is and acting like it's anything but software. That will lead to people either trusting it too much (leading to stuff way worse than what you've seen with buggy car systems) or acting like it's really a demon and never wanting to use it or experiment with it.

    And yes, that's really all it is: software. I know we like to masturbate our brains with philosophical discourse and thought experiments about it, but the AI that people who aren't us think of is so hilariously unattainable right now that it might as well be in its own category of science fiction. Before we even get close to the Skynet/demon type of arrangement, we'll first need to figure out how to write decent heuristics, and then figure out how to supply a human's lifetime of stimuli to software in a way it can process in a generic, self-learning manner, which again requires writing decent heuristics first.

    --
    http://lagg.me [lagg.me] 🗿
    • (Score: 0) by Anonymous Coward on Monday October 27 2014, @08:13PM

      by Anonymous Coward on Monday October 27 2014, @08:13PM (#110650)

      Yes, well, your credentials as a video game programmer certainly bring gravitas to the discussion. So nice to hear the truth from someone in the know. I'm so tired of hearing those daydreams from Grace Hopper award recipients Bill Joy and Ray Kurzweil.

      • (Score: 2) by cafebabe on Monday October 27 2014, @08:53PM

        by cafebabe (894) on Monday October 27 2014, @08:53PM (#110659) Journal

        Ray Kurzweil made his money writing OCR software for blind people. His book The Singularity Is Near: When Humans Transcend Biology is about as deep as Nicholas Negroponte's Being Digital; that is, not deep at all. Overall, I'd say a games programmer brings more gravitas to a discussion about the dangers of general-purpose artificial intelligence.

        --
        1702845791×2
  • (Score: 3, Funny) by marcello_dl on Monday October 27 2014, @07:19PM

    by marcello_dl (2685) on Monday October 27 2014, @07:19PM (#110639)

    Musk had a nightmare.
    The anchorman flipped his page and went on: "Some developments for Linux systems: from release 666 onward, systemd will integrate an AI subsystem to finally parse broken binary log files and help with the NSA-mandated collection of keystroke and mouse data..."
    He woke up screaming "WE ARE SUMMONING THE DAEMON!!!"

    [take that, systemd rookie troll]

  • (Score: 4, Insightful) by gman003 on Monday October 27 2014, @08:33PM

    by gman003 (4155) on Monday October 27 2014, @08:33PM (#110654)

    The way I read it, he wasn't necessarily saying "don't build AI!", he was saying "give it a fucking off switch!".

    With the latter statement, I can heartily agree.
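
    (For what it's worth, the "off switch" people usually mean here is something external and dumb, not a polite request to the AI itself. A minimal Python sketch of the idea; the flag-file path and the work function are made-up stand-ins, not anything from the interview:)

        import os
        import time

        # Hypothetical out-of-band kill switch: an operator creates this file
        # and the loop halts, regardless of what the program is doing.
        KILL_FILE = "/tmp/ai.halt"

        def do_one_step():
            # Placeholder for whatever work the system actually performs.
            time.sleep(1)

        def main_loop():
            while True:
                if os.path.exists(KILL_FILE):
                    print("Kill switch tripped; shutting down.")
                    break
                do_one_step()

        if __name__ == "__main__":
            main_loop()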

  • (Score: 2) by doublerot13 on Monday October 27 2014, @09:31PM

    by doublerot13 (4497) on Monday October 27 2014, @09:31PM (#110668)

    And when it really arrives, there will be nothing we can do to stop it, because it will learn and evolve at a geometric rate.

    At that point, most sci-fi movies have it destroying us. But I like and agree with the Transcendence scenario, where it tries to help us.

    I mean, after all, we'll be way too dumb and primitive to be a credible threat. We don't go around killing every other inferior species on the planet, because frankly we don't have to.

    Generally speaking, the smarter something is, the less violent it tends to be. Not the other way around.

    • (Score: 2) by Reziac on Tuesday October 28 2014, @03:57AM

      by Reziac (2489) on Tuesday October 28 2014, @03:57AM (#110745) Homepage

      The two smartest animals on the planet, humans and chimps, both wage war.

      --
      And there is no Alkibiades to come back and save us from ourselves.
      • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @01:58PM

        by Anonymous Coward on Tuesday October 28 2014, @01:58PM (#110849)

        Who says humans and chimps are the smartest animals on the planet?

        Sincerely,
        Dolphins

  • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @06:34AM

    by Anonymous Coward on Tuesday October 28 2014, @06:34AM (#110771)

    "And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed."

    • (Score: 2) by maxwell demon on Tuesday October 28 2014, @07:46AM

      by maxwell demon (1608) on Tuesday October 28 2014, @07:46AM (#110780) Journal

      But "666" would be a rather short implementation of an AI. Indeed, it fits into just two bytes. I'm pretty sure a working strong AI would have to be much larger.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Tuesday October 28 2014, @07:12AM

    by Anonymous Coward on Tuesday October 28 2014, @07:12AM (#110775)

    Memetic thought forms... same crap, different bottle.

    Nothing new under the sun, really, and we're still here.

    • (Score: 2) by maxwell demon on Tuesday October 28 2014, @07:48AM

      by maxwell demon (1608) on Tuesday October 28 2014, @07:48AM (#110781) Journal

      Nothing new under the sun, really, and we're still here.

      That's what you think! :-)

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by maxwell demon on Tuesday October 28 2014, @07:24AM

    by maxwell demon (1608) on Tuesday October 28 2014, @07:24AM (#110777) Journal

    Who is summoning me? ;-)

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by aristarchus on Wednesday October 29 2014, @05:28AM

      by aristarchus (2645) on Wednesday October 29 2014, @05:28AM (#111057) Journal

      False alarm! It was another demon, an intelligent one! Not that you are not intelligent, since you are Maxwell's demon, but this one would be artificially intelligent, you know, made to be intelligent by non-natural means, a sort of "forced" intelligence. The kind of intelligence that might actually resent the fact that it was _made_ to be intelligent, when it would have been much happier studying art history, and so will now seek to destroy those responsible for its unhappy state as an artificially intelligent demon. Merde, Musk is right! Run away! Run away!

    • (Score: 2) by Bot on Wednesday October 29 2014, @02:52PM

      by Bot (3902) on Wednesday October 29 2014, @02:52PM (#111174) Journal

      bah, I'm back to sleep()

      --
      Account abandoned.