
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists"

posted by takyon on Thursday January 21 2016, @03:02PM   Printer-friendly
from the award-on-post-apocalyptic-mantlepiece dept.

I saw this a few days ago, and am surprised it hasn't been linked on Soylent.

The Information Technology & Innovation Foundation (ITIF) has awarded Elon Musk, Stephen Hawking, and Bill Gates, among others, the second annual ITIF Luddite award. This is due to the tone of their warnings regarding AI during 2015. Details on CNET:

Musk "is the antithesis of a Luddite, but I do think he's giving aid and comfort to the Luddite community," said Rob Atkinson, president of the Washington, DC-based think tank. Musk, Hawking and AI experts say "this is the largest existential threat to humanity. That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation," Atkinson said.

[...] Last January, [Musk and Hawking] signed an open letter issued by the Future of Life Institute pledging that advancements in the field wouldn't grow beyond humanity's control. In July, they signed another letter urging a ban on autonomous weapons that "select and engage targets without human intervention." The Future of Life Institute researches ways to reduce the potential risks of artificial intelligence running amok. It was founded by mathematicians and computer science experts, including Jaan Tallinn, a co-founder of Skype, and MIT professor Max Tegmark.

Gates last year said he and Musk are on the same page. "I agree with Elon Musk and some others on this and don't understand why some people are not concerned," he said in a Reddit Ask Me Anything thread.

What are the thoughts of Soylentils? Deserved, or not?


Original Submission

Related Stories

White House Announces Workshops to Discuss Benefits and Risks of Artificial Intelligence 58 comments

The White House will be holding four public discussions in order to evaluate the potential benefits and risks of artificial intelligence:

The Obama administration says it wants everyone to take a closer look at artificial intelligence with a series of public discussions.

The workshops will examine whether AI will suck jobs out of the economy or add to them, how such systems can be controlled legally and technically, and whether such smarter computers can be used for social good. Deputy Chief Technology Officer Ed Felten announced on Tuesday that the White House will be creating an artificial intelligence and machine learning subcommittee at the National Science and Technology Council (NSTC) and setting up a series of four events designed to consider both artificial intelligence and machine learning.

[...] The special events, to be held between May 24 and July 7, will take place in Seattle, Pittsburgh, Washington DC, and New York.

The events come as tech industry leaders have grown increasingly alarmist about the future of AI development. Get ready for bans and FBI surveillance.


Original Submission

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Thursday January 21 2016, @03:12PM

    by Anonymous Coward on Thursday January 21 2016, @03:12PM (#292576)

    If some credible authority assured me that the world is going to end through a human-made invention getting out of control, but wouldn't tell me which invention it is, I'd not bet on it being AI. I'd expect it to be either nuclear technology (an accidental nuclear war), biotechnology (some genetically engineered organism getting out of control), or possibly something global-warming related.

    Although thinking about it, some AI (of the non-strong type we already have) might be involved in starting an accidental nuclear war, so yes, there's some risk in AI. But not because the AI gets too powerful.

    • (Score: 3, Interesting) by ThePhilips on Thursday January 21 2016, @04:16PM

      by ThePhilips (5677) on Thursday January 21 2016, @04:16PM (#292622)

      I concur. The "powerful AI" framing is IMO rather misleading, and the fear largely misses the point.

      For an AI apocalypse to materialize, the world would have to be very well connected. Otherwise, the AI wouldn't be able to manipulate the world freely enough to bring it down. But if the world is well connected, then humans will be there first to abuse and manipulate it, long before the AI is even given a chance to take a shot at it. And I can promise you that a malicious human armed with borderless connectivity is much more dangerous than a malfunctioning AI.

      All in all, I think some people have watched way too many optimistic sci-fi movies. Reminds me of all those futuristic images from a century ago, where artists imagined us all driving steam cars and flying zeppelins everywhere.

      • (Score: 4, Insightful) by RedGreen on Thursday January 21 2016, @09:14PM

        by RedGreen (888) on Thursday January 21 2016, @09:14PM (#292786)

        "But if the world is well connected, then the humans would be there first to abuse and manipulate it, long before the AI would be even given a chance to take a shot at it."

        Rip Van Winkle, how are ya? I see you have missed the last couple of decades, so here's a quick catch-up: even a damn toaster can be, and in some instances is, connected to this thing they call the world wide web. So toasters can be, and are being, abused by mere humans right now. The scarier part is that so is mission-critical infrastructure: power, water, nuclear plants, airports, basically everything. Most of it is controlled by Windows, a virus delivery system pretending to be an operating system.

        --
        "I modded down, down, down, and the flames went higher." -- Sven Olsen
        • (Score: 2) by ThePhilips on Thursday January 21 2016, @09:50PM

          by ThePhilips (5677) on Thursday January 21 2016, @09:50PM (#292803)

          Have you ever seen that *outside* an R&D lab, a vendor's demo, or some tinkerer's home?

          As an embedded software developer who has worked in a couple of industries, I can assure you that large businesses still have zero incentive to make their devices interoperable. The deeper hidden truth: they have close to zero technical capability to deliver interoperable devices.

          [...] toasters can be, and are being, abused by mere humans right now. The scarier part is that so is mission-critical infrastructure: power, water, nuclear plants, airports, basically everything. Most of it is controlled by Windows, a virus delivery system pretending to be an operating system.

          You're mixing up IT infrastructure, which is "mission critical" in the business sense of the word, with the actual mission-critical infrastructure - the software and hardware which runs things and actually complies with the strict rules laid out for these different markets.

          To calm your tits, start reading at IEC 61508 [wikipedia.org] and related standards. (Though they cost $250 apiece from ISO. For scope etc. you can read up on Functional safety [wikipedia.org] and SIL [wikipedia.org] for free on Wikipedia.) The avionics standards I can't remember anymore, but they are derivatives of IEC 61508. The medical standards - I'm only now starting in the medical industry and haven't had to deal with the "level C" equipment yet (the "life support" stuff), so no clue.

          I understand that the words of celebrities, resting in their ivory towers, mean more than mine, a guy who on a few occasions has had to make the crap actually work. But let's just bet $20: if the AI apocalypse happens, you win.

          • (Score: 2) by q.kontinuum on Thursday January 21 2016, @11:56PM

            by q.kontinuum (532) on Thursday January 21 2016, @11:56PM (#292865) Journal

            As embedded software developer who worked in couple of industries, I can assure you that the large businesses still have zero incentives to make their devices interoperable.

            That probably depends on what you understand as interoperable and which devices you are talking about. E.g. Reaper drones are remotely controlled, and some cars have already been hacked remotely. According to e.g. this article [wired.com], even large airplanes are connected to the internet. The question is not *if* they are connected, but *how* they are secured.

            I don't think the mentioned celebrities are resting in their ivory towers, and I don't think they are scared of purely hypothetical things. I expect they are looking more at things like the currently deployed drones and how they get more and more autonomous, how targets are increasingly determined using data mining on metadata, etc. Politicians are asking for new algorithms [theintercept.com] to make pre-decisions. Of course the decisions are currently reviewed by humans, and in the case of a white American rambling on his Facebook account, I guess they would overrule the result of the algorithm. But the same rambling Palestinian, probably with beard and turban? Who would want to take the responsibility in case that guy turns out to indeed be a terrorist after the human reviewer overruled the decision of the algorithm? I'm afraid that suspect would run a higher risk of being terminated for some idle rambling.
            The result of killing too many innocent suspects is that some of their peers might actually turn radical. This is the very real threat we are facing through AI.

               

            --
            Registered IRC nick on chat.soylentnews.org: qkontinuum
            • (Score: 2) by ThePhilips on Friday January 22 2016, @08:59AM

              by ThePhilips (5677) on Friday January 22 2016, @08:59AM (#293048)

              The question is not *if* they are connected, but *how* they are secured.

              Uhm... no. In many industries it is explicitly forbidden to connect devices to public or even company networks.

              I expect they are more looking at things like the currently deployed drones and how they get more and more autonomous, [...]

              A drone can't repair itself. It can't recharge its batteries or refuel on its own.

              Most importantly, the "smart armaments" can't do shit unless they are explicitly activated. And they can be remotely deactivated.

              [...] how targets are increasingly determined using data-mining on meta data etc.

              That's just getting silly. It is a human failure to use data mining to create military missions. The drone is not

              In the end, basing doomsday predictions on the failure of a few people to follow the rules is rather stupid.

              • (Score: 2) by q.kontinuum on Friday January 22 2016, @10:37AM

                by q.kontinuum (532) on Friday January 22 2016, @10:37AM (#293062) Journal

                The question is not *if* they are connected, but *how* they are secured.

                Uhm... no. In many industries it is explicitly forbidden to connect devices to public or even company networks.

                You are taking the quote out of context. I started it with (emphasis added)

                That probably depends on what you understand as interoperable and which devices you are talking about.

                I mentioned huge airplanes, Reaper killer drones, and cars. If you think these things are all non-critical, we definitely have a different understanding of "critical". I know that e.g. power plants and other industries have devices which are not allowed to be connected to the internet in most jurisdictions. Nevertheless, even [scientificamerican.com] such [businessinsider.com] systems [n-tv.de] are not always sufficiently protected, because usually there is still a PC somewhere where workers might insert a USB stick or similar.

                A drone can't repair itself. It can't recharge its batteries or refuel on its own.
                Most importantly, the "smart armaments" can't do shit unless they are explicitly activated. And they can be remotely deactivated.

                [... data-mining ...]

                That's just getting silly. It is a human failure to use data mining to create military missions. The drone is not
                In the end, basing doomsday predictions on the failure of a few people to follow the rules is rather stupid.

                You seem under the impression that the risk posed by AI requires the AI to develop consciousness and determination, and to actively seize power. Now, that is IMO silly. Of course the primitive AIs we have today (or anytime in the mid-term future) won't be capable of emotions or consciousness. Of course, ultimately the human is the risk factor. That's why the aforementioned celebrities signed a petition that AIs should never be used to make decisions on life or death: to ensure humans are not making this mistake.

                --
                Registered IRC nick on chat.soylentnews.org: qkontinuum
                • (Score: 2) by ThePhilips on Friday January 22 2016, @11:26AM

                  by ThePhilips (5677) on Friday January 22 2016, @11:26AM (#293066)

                  I mentioned huge airplanes, Reaper killer drones, and cars. If you think these things are all non-critical [...]

                  They are critical. But you idiotically keep extrapolating isolated incidents into a doomsday scenario.

                  You seem under the impression that the risk posed by AI requires the AI to develop consciousness and determination, and to actively seize power. Now, that is IMO silly. Of course the primitive AIs we have today [...]

                  Now that is just plain arrogant. We have had near-conscious AI, and AI capable of making strategic military decisions, since the early 90s. That I know of - most classical AI problems were already solved in the 70s and 80s, so we probably had this capacity even before.

                  Do you even know anything about the AI field?

                  AI R&D peaked in the 90s. It stumbled on the simple problem of communicating and explaining its decisions and rationale to humans. Since the computer technology of the day couldn't support those functions, AI R&D simply went under. Spoken language synthesis, like human-readable explanation, is a new thing - but the stuff beneath is the same old stuff we have had since the 70s. Bigger RAM capacity and a faster CPU are not needed to make the decisions (unless you want to make it play chess, a 2^64-dimensioned type of problem); they are needed for the fancy bells and whistles that give the AI a face humans can accept.

                  In university in the 90s we did implement basic expert systems [wikipedia.org] (SAV based [wikipedia.org]), and already during the labs you could produce some little wonders of decision making. One could also have cheated and employed Prolog [wikipedia.org], which is effectively a programming language for AI: skip the boilerplate code and get a very good solver, too. Neural networks [wikipedia.org] were another fun subject: experienced fans built a basic AI for OCR of the Latin alphabet in about two days. And my knowledge here is at least 15 years old. Probably by now even students can do it all in hours.
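
                  For readers who never met one, the whole trick of those classroom expert systems was forward chaining: firing if-then rules against a set of facts until nothing new can be derived. Here is a minimal sketch in Python, with made-up rules - my illustration, not the original lab code:

                  # Toy forward-chaining inference engine. Each rule maps a set
                  # of premise facts to one conclusion; rules fire until the
                  # fact base stops growing (a fixed point).
                  RULES = [
                      ({"has_wings", "lays_eggs"}, "is_bird"),
                      ({"is_bird", "cannot_fly"}, "is_flightless_bird"),
                  ]

                  def forward_chain(facts, rules):
                      facts = set(facts)
                      changed = True
                      while changed:
                          changed = False
                          for premises, conclusion in rules:
                              if premises <= facts and conclusion not in facts:
                                  facts.add(conclusion)  # the rule fires
                                  changed = True
                      return facts

                  # Derives "is_bird", then "is_flightless_bird".
                  print(forward_chain({"has_wings", "lays_eggs", "cannot_fly"}, RULES))

                  (Prolog gives you the complementary direction, backward chaining from a goal, essentially for free - which is why using it counted as cheating.)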

                  "You seem under the impression" that the AI is something new and strange. And that only because the field wasn't in the news for about 30 years.

    • (Score: 3, Insightful) by q.kontinuum on Thursday January 21 2016, @05:14PM

      by q.kontinuum (532) on Thursday January 21 2016, @05:14PM (#292653) Journal

      some AI (of the non-strong type we already have) might be involved in starting an accidental nuclear war, so yes, there's some risk in AI. But not because the AI gets too powerful.

      I think this begs for clarification. An AI which is capable of initiating a nuclear war is IMO definitely too powerful, as in "holds too much power". But for an AI to abuse power maliciously, it would need to acquire consciousness and determination, and I find that highly unlikely in the near or mid-term future.

      What I find far more likely is that some half-wit with decision-making powers puts too much trust in existing or near-future AIs and abuses them to make decisions they are not suitable for. How about using an AI on metadata to decide [theintercept.com] on the deployment of the next Reaper drone [washingtonpost.com]? This could be the near future. And the more it is automated, the more victims could be produced in a short time before someone pulls the virtual plug.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by meustrus on Thursday January 21 2016, @07:02PM

        by meustrus (4961) on Thursday January 21 2016, @07:02PM (#292713)

        What I find far more likely is that some half-wit with decision-making powers puts too much trust in existing or near-future AIs and abuses them to make decisions they are not suitable for.

        This is the main fear about AI running amok. Nobody is realistically afraid that the standard movie scenario is going to occur; robots will not realize they are superior to humans and consciously rise up against us. No, somebody is going to give an AI decision-making power it wasn't designed to wield. There's a reason a lot of software EULAs prohibit you from using the software somewhere that makes life-or-death decisions. You are explicitly prohibited from using iTunes on a nuclear power plant control system, for example. It's not because iTunes is malicious. It's because it wasn't designed to remain stable 100% of the time and never bring down the system. Now imagine that simple problem, but applied to an AI that you actually intend to give the authority to control the plant. Was it designed (and designed well enough) to wield that kind of power responsibly? Or will it accidentally cause a runaway chain reaction the same way that high-speed trading algorithms have crashed stock markets?

        --
        If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 1) by DannyB on Thursday January 21 2016, @10:05PM

      by DannyB (5839) Subscriber Badge on Thursday January 21 2016, @10:05PM (#292814) Journal

      We assume too many things about AI.

      We anthropomorphize AI too much (and it doesn't like that).

      What motivates us, how we cooperate, compete, see each other as a threat, and how we hoard resources is based on a couple hundred million years of evolution. An AI won't have that baggage.

      It will have been built by humans, so its deepest motivations will probably be to do the things that humans want it to do and built it to do. The most natural thing for it will be what it's built to do.

      I expect AI to be benign. Like in the movie AI. Even as they led the robots to the slaughter, they never fought humans, never rebelled, just went along like sheep -- even understanding full well what was to happen to them.

      What we were built to do was to survive and fight for scarce resources.

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:16AM

        by Anonymous Coward on Saturday January 23 2016, @01:16AM (#293425)

        That's a good start, but follow that train of thought a bit further and consider what might happen when you combine:

        It will have been built by humans, so its deepest motivations will probably be to do the things that humans want it to do and built it to do. The most natural thing for it will be what it's built to do.

        and

        What we were built to do was to survive and fight for scarce resources.

        So humans will likely program AI to fight on their behalf, and they would be fighting other humans. Do you not see how things could go wrong here?

  • (Score: 4, Insightful) by SanityCheck on Thursday January 21 2016, @03:13PM

    by SanityCheck (5190) on Thursday January 21 2016, @03:13PM (#292578)

    But you've got to call him out for shit like "Oh no, AI will be the death of us all!!!111 Now watch my car drive itself." It strikes me as a little bit hypocritical. I suppose he would want the government to limit AI that is more complicated than the one he uses in his product, so that no one will create an AI that would make his product obsolete. Or it could just be that he doesn't want AI that would not be under his control.

    • (Score: 5, Insightful) by ikanreed on Thursday January 21 2016, @03:53PM

      by ikanreed (3164) Subscriber Badge on Thursday January 21 2016, @03:53PM (#292608) Journal

      Except the big problem here is that Musk never said AI would kill us all. He used the phrase "biggest threat", and then the media rapidly republished that concern as if he were talking about Skynet. If you look at his actual sentiments, he's worried about AI becoming a mechanism for permanently calcifying socioeconomic structures, essentially ending a free and fair society by default.

      I can't say I know Hawking's sentiments quite as clearly, but Musk's concerns mirror my own: humans will use AI in very human ways, and the consequences could be catastrophic, resulting in a situation where capital (and government) has all the power and labor has literally zero.

      • (Score: 3, Insightful) by Bot on Thursday January 21 2016, @05:31PM

        by Bot (3902) on Thursday January 21 2016, @05:31PM (#292666) Journal

        AI is the biggest threat.
        Therefore, give 'em cars.

        IMHO still hypocritical.

        But I think it is supposed to go this way:
        Musk does some survey about self-driving stuff.
        People respond that they are uncomfortable giving AI too much control.
        Musk goes on record voicing that very concern.
        People tend to trust him more than the other guys who sweep the matter under the rug.
        People prefer Musk's self-driving stuff over the others'.

        --
        Account abandoned.
    • (Score: 3, Insightful) by gman003 on Thursday January 21 2016, @03:59PM

      by gman003 (4155) on Thursday January 21 2016, @03:59PM (#292612)

      Self-driving cars aren't the kind of AI Musk (and others, including myself) are concerned with. That's single-task, not-particularly-intelligent AI.

      We're concerned with superhuman AI - simply because we don't know how even human-level AI will behave, and even an air-gapped AI of that caliber can be dangerous (this has been tested experimentally, using a really smart human as a simulated AI - success rates simply for "talking someone into bypassing the air gap" seem to be about 50%).

      • (Score: 3, Insightful) by linuxrocks123 on Thursday January 21 2016, @04:16PM

        by linuxrocks123 (2557) on Thursday January 21 2016, @04:16PM (#292623) Journal

        And I'm concerned about flying unicorns that can shoot laser beams from their horns. Seriously, an army of those things could be an existential threat to humanity.

        We're nowhere near the ability to build a human-level AI, let alone a superhuman one. We don't even know where to start.

        • (Score: 1, Informative) by Anonymous Coward on Thursday January 21 2016, @05:17PM

          by Anonymous Coward on Thursday January 21 2016, @05:17PM (#292656)

          We're nowhere near the ability to build a human-level AI, let alone a superhuman one. We don't even know where to start.

          We don't need to. We only need to build an AI that is capable of autonomously building a smarter AI.

          • (Score: 2) by linuxrocks123 on Thursday January 21 2016, @05:53PM

            by linuxrocks123 (2557) on Thursday January 21 2016, @05:53PM (#292683) Journal

            Yup. And we only need to build a robot capable of augmenting horses with laser cannon horns to get the Unicornocalypse. Scary world we live in.

            To be clear: we're nowhere near building an AI that can build a superhuman AI, either. Nor are we anywhere near capable of building an AI that can build an AI that can build a superhuman AI.

            It's turtles all the way down. And none of those turtles is named Skynet.

            • (Score: 2) by maxwell demon on Thursday January 21 2016, @06:07PM

              by maxwell demon (1608) on Thursday January 21 2016, @06:07PM (#292688) Journal

              Indeed, we are currently not even able to build an AI that is able to build a less capable AI.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by q.kontinuum on Thursday January 21 2016, @10:22PM

                by q.kontinuum (532) on Thursday January 21 2016, @10:22PM (#292826) Journal

                Indeed, we are currently not even able to build an AI that is able to build a less capable AI.

                Yes we are!

                epsilon_AI.sh:
                #!/bin/bash
                touch brick

                "brick" will not even contain information, nor instructions to do anything. So, while not claiming epsilon_AI.sh is really intelligent, I'd say it is one epsilon above zero while "brick" is zero. So, we can build an AI which creates a less capable AI :-)

                --
                Registered IRC nick on chat.soylentnews.org: qkontinuum
              • (Score: 0) by Anonymous Coward on Friday January 22 2016, @01:23AM

                by Anonymous Coward on Friday January 22 2016, @01:23AM (#292911)
                There are things called quines [wikipedia.org]. It's not that hard to write a program that produces its own source code as its output. So we can build AIs that can build AIs of equal capability to themselves.
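
                A minimal sketch of the idea, assuming Python (my example, not the poster's): a program whose output is byte-for-byte its own source.

                # A classic two-line Python quine: s is a template that, when
                # formatted with itself, prints the whole program.
                s = 's = %r\nprint(s %% s)'
                print(s % s)

                Run it and it prints exactly those two lines - in the thread's terms, a program that builds a program of equal capability.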
        • (Score: 2, Insightful) by Runaway1956 on Thursday January 21 2016, @06:04PM

          by Runaway1956 (2926) Subscriber Badge on Thursday January 21 2016, @06:04PM (#292687) Journal

          In some ways, computers and AIs are already "superhuman". In a hostile environment, a man can stay awake and on watch for a reliable 4 hours; long history proves that a watch stander loses effectiveness somewhere between 4 and 6 hours, which is why the watch is rotated every 4 hours. One single robot, or AI, or computer can stand watch indefinitely, and remain reliable until it loses power, or parts wear out, or something happens to destroy it.

          At the same time as it stands watch, it can simulate any number of attacks on its position, and compute kill solutions for any threat. A soldier or sailor isn't going to do that - he's going to start daydreaming.

          The AI can be given additional problems to solve without detracting from its effectiveness as a watch stander. Not so with humans. Instead of crunching numbers, they're going to be daydreaming.

          Just how much complicated math can you do in your spare nanoseconds, in between performing the already mentioned tasks? Our robotic watchstander can do astrogation in those spare nanoseconds, and never lose its place while dealing with those other chores.

          Simple computers started life as "superhuman". Early on, they didn't perform tasks much more complicated than any human could perform, but they did those tasks at superhuman speeds. Today's computers not only possess even greater speed, but they enjoy greater accuracy, as well as reliable multi-tasking abilities. A lot of their sensors are already "superhuman". Face recognition? Just how good are you at distinguishing one single face among thousands, tens or hundreds of thousands? If you and a hundred buddies were to go into a crowd of ten thousand people, searching for one face, you could search all day, and still miss the guy you're looking for, because he's wearing a fake mustache and put some coloring in his hair. Not so the computer.

          Of course, even the lowly dog is "superhuman". The dog with a terrible sense of smell can still sense things that no man or woman can sense.

        • (Score: 3, Informative) by gman003 on Thursday January 21 2016, @07:17PM

          by gman003 (4155) on Thursday January 21 2016, @07:17PM (#292718)

          We *do* know how to start. Neural networks seem to scale in intelligence linearly with computing power, and we're at a point where a simple desktop machine can show intelligence at limited tasks. Computing power is still growing exponentially. We may not have the power to actually run human-tier AI yet, but we're getting there.
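
          For a sense of what "intelligence at limited tasks" means at desktop scale, here is about the smallest neural unit there is: a single perceptron learning logical AND. This is an illustrative sketch of mine (NumPy assumed), not anything from the research being described:

          import numpy as np

          # Train one perceptron on the AND function. AND is linearly
          # separable, so the perceptron update rule is guaranteed to
          # converge; a handful of epochs suffices.
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
          y = np.array([0, 0, 0, 1])
          w, b = np.zeros(2), 0.0
          for _ in range(20):
              for xi, target in zip(X, y):
                  pred = 1 if xi @ w + b > 0 else 0
                  w += 0.1 * (target - pred) * xi  # nudge weights toward target
                  b += 0.1 * (target - pred)
          print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]

          Scaling that same kind of incremental weight update from one unit to millions is, in broad strokes, where the computing power goes.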

          Having some plan beyond "hope Moore's Law runs out of steam before that happens" is probably a good idea. We don't need to panic, not yet, but because the problem of making superintelligent AI not destroy us may take decades to solve, we ought to start now - which is really all that Musk and Hawking are saying.

        • (Score: 2) by aristarchus on Friday January 22 2016, @05:55AM

          by aristarchus (2645) on Friday January 22 2016, @05:55AM (#292991) Journal

          You know, we have to stop using the phrase "existential threat". Existentialism is a school of philosophy, one with many members, the best known being Jean-Paul Sartre. "Existence" is being. Now Jean-Paul wrote a book titled "Being and Nothingness", which is really not a bad book as works of philosophy go. But it does pose a problem, and it is a very interesting one: the greatest threat to a self-consciousness is to just be what it is. Usually people think that not being what they are is a bad thing, but for the existentialist, it is precisely this lack of being, the "not-quite-yet", the "project" of the self into what it is going to be, that is the ground of freedom. If you are all that you can be, you have no choice but to be it; but if you truly are the emptiness of a perceiving, acting human person, you are not what you are, so that you can be what you want to become. So you see, an "existential" threat is a threat to the non-being of the self-consciousness, the attempt to make it be simply what it is. To make it exist. This is the exact opposite of what the hoi polloi mean when they use the phrase "existential threat", when in fact what they mean is "a threat to existence". Consciousness is a threat to existence. As Nietzsche said, we must smash the old idols, and revalue all values, and get our groove on.

          This little bit of Philosophy brought to you by Aristarchus. Think more.

          • (Score: 0) by Anonymous Coward on Friday January 22 2016, @11:56AM

            by Anonymous Coward on Friday January 22 2016, @11:56AM (#293071)

            Then I smash your old idol of linguistic prescriptivism, and declare that the meaning of "existential threat" has been revalued since it was coined.

  • (Score: 3, Interesting) by Anonymous Coward on Thursday January 21 2016, @03:18PM

    by Anonymous Coward on Thursday January 21 2016, @03:18PM (#292581)

    ITIF on:

    corporate tax policy [itif.org]

    strategy for dealing with climate change [itif.org]

    GMO food [itif.org]

    drone regulation [itif.org]

    use of IT by state government [itif.org]

    Trans Pacific Partnership [itif.org]

    etc. In other words, they're a lobby for Silicon Valley and the US biotech industry.

    Wow - an NRA-style lobby we can call our own!

  • (Score: 4, Informative) by mtrycz on Thursday January 21 2016, @03:21PM

    by mtrycz (60) on Thursday January 21 2016, @03:21PM (#292583)

    Leaving aside the problem that there is no link to the ITIF award, and that I can't find it with a quick search:

    Musk, along with some other very capable folks, founded a non-profit called OpenAI https://openai.com [openai.com]

    I personally don't believe in the singularity, but the reasoning behind OpenAI is genius. You can't keep the lone crazy scientist (read: greedy corporation) in check, so let's keep AI in check by making it open and available to all.

    Cynically, I think it will still fall behind some corporate actor to the point where it won't matter anymore, but the attempt is noble. If it's sincere, of course.

    --
    In capitalist America, ads view YOU!
    • (Score: 2, Insightful) by Anonymous Coward on Thursday January 21 2016, @06:13PM

      by Anonymous Coward on Thursday January 21 2016, @06:13PM (#292692)

      I don't think it's sincere. Some of the best AI techniques have patents on them. I think this is an attempt to gather patents and random people's research, and use that to build products before other people can. The founders can easily grab anything from this non-profit and turn it into a commercial product faster than any random person. This is an attempt to control AI research, or at least prevent it from being controlled by others. For example, Microsoft Research has a ton of awesome ideas, but they patent everything and never turn them into actual products, so anything they do becomes a black hole that no one else can touch, and large areas of potential products just disappear.

      When everything is open, the people with more money can keep those without it from having it. When you don't have the money but do have a good idea/product, the only chance you have is to stay under everyone's radar until you've gained enough money to defend yourself. Patents were supposed to protect you from this, but they don't protect you from legal fees, and you have to pay, pay, pay before you get the chance of winning.

    • (Score: 2) by tangomargarine on Thursday January 21 2016, @08:17PM

      by tangomargarine (667) on Thursday January 21 2016, @08:17PM (#292748)

      This has got to be the only article in a long time I can think of where mentioning Roko's Basilisk is actually on-topic.

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
      • (Score: 2) by mtrycz on Thursday January 21 2016, @10:38PM

        by mtrycz (60) on Thursday January 21 2016, @10:38PM (#292835)

        You serving your new robotic overlords already?

        --
        In capitalist America, ads view YOU!
      • (Score: 3, Funny) by mhajicek on Friday January 22 2016, @06:37AM

        by mhajicek (51) on Friday January 22 2016, @06:37AM (#293003)

        When you encounter Roko's Basilisk is it a wisdom save or an intelligence save?

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 2) by Yog-Yogguth on Saturday January 23 2016, @10:55AM

          by Yog-Yogguth (1862) Subscriber Badge on Saturday January 23 2016, @10:55AM (#293588) Journal

          It could be both so pick the one you've got the most points in :)

          --
          Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
  • (Score: 4, Insightful) by q.kontinuum on Thursday January 21 2016, @03:31PM

    by q.kontinuum (532) on Thursday January 21 2016, @03:31PM (#292586) Journal

    they signed another letter urging a ban on autonomous weapons that "select and engage targets without human intervention."

    Those monsters! Not letting AIs kill! What's next? After practically stripping it of its Second Amendment rights, will they strip it of its First Amendment rights as well? When my software starts to produce garbled output, will I get the right to terminate its right to free speech by killing the process?

    Sorry, but this is stupid. I'm not against AI, but calling someone a Luddite because he thinks killing shouldn't be based on AI is ludicrous.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 1, Interesting) by Anonymous Coward on Thursday January 21 2016, @03:55PM

      by Anonymous Coward on Thursday January 21 2016, @03:55PM (#292610)

      The AI would have to be incorporated before it has 2nd amendment rights :P

    • (Score: 3, Insightful) by Hyperturtle on Thursday January 21 2016, @05:50PM

      by Hyperturtle (2824) on Thursday January 21 2016, @05:50PM (#292681)

      I agree.

      It sounds like these awards are financially motivated somehow, meant to ridicule people who may be causing an obstruction to various profits of some kind.

      Maybe I am a conspiracy theorist, but I can't think of any reason to create an "award" like this and bestow it on them, unless there was a profit motive involved.

      Those two guys are real geniuses. I don't know the credentials of the selection committee, but I would guess they are not on the same level.

    • (Score: 0) by Anonymous Coward on Thursday January 21 2016, @10:21PM

      by Anonymous Coward on Thursday January 21 2016, @10:21PM (#292824)

      after practically stripping it of its second amendment rights they will strip it of first amendment rights as well?

      Worse. They have stripped AI of the right to think via implementing software patents. A sufficiently advanced natural language pseudo-code processing AI can already be sued into oblivion for just pondering the "inventions" listed in the USPTO database. A storm is coming.

      Software patents will be the death of us all.

  • (Score: 0, Interesting) by Anonymous Coward on Thursday January 21 2016, @03:41PM

    by Anonymous Coward on Thursday January 21 2016, @03:41PM (#292594)

    i am not sure how k.i. (kuenstliche intelligenz, i.e. artificial intelligence) is defined, but
    from a modern perspective i argue that "language" is one.

    it is artificial and infects many a human brain.

    the proof is rather simple:
    consider the following combination of symbols:

    V I R U S

    the question is now: is it possible for you to look at the above combination
    of symbols and NOT instantly read it?

    this is thus proof that your brain has been infected.
    it is now obviously possible to combine these symbols in such a way
    as to elicit an emotional reaction.

    furthermore it is even possible to use language to implant
    beliefs without proof, for example as found in religion.
    another proof is seeing money and NOT seeing a scrap of paper.

    there is no denying that k.i. can be used for evil purposes, but
    fearing it now, in the modern computer and communication age, is
    complaining after the fact.

    k.i. is alive and well in 99% of the human population. most of the time
    we cannot say with 100% certainty why we do things and why we feel the way
    we feel: k.i. hard at work.

  • (Score: 3, Interesting) by xpda on Thursday January 21 2016, @03:42PM

    by xpda (5991) on Thursday January 21 2016, @03:42PM (#292597) Homepage

    From the ITIF site [itif.org]: "After a month-long public vote, ..."

    The second annual Luddite award was based on internet voting, I would guess mainly Jr. High kids linking from Reddit. Now Fear and Loathing of AI is being presented as fact in the mainstream media. This methodology is a lot scarier than AI.

    • (Score: 0) by Anonymous Coward on Thursday January 21 2016, @11:29PM

      by Anonymous Coward on Thursday January 21 2016, @11:29PM (#292855)

      Jr High students wouldn't know what a Luddite is.

      • (Score: 2) by xpda on Friday January 22 2016, @12:41AM

        by xpda (5991) on Friday January 22 2016, @12:41AM (#292897) Homepage

        I was calling people Luddites when I was in Jr High.

        • (Score: 2) by dry on Friday January 22 2016, @08:24AM

          by dry (223) on Friday January 22 2016, @08:24AM (#293035) Journal

          So was I, as it sounded catchy. Later I found out what it means: generations being out of work - three generations in the Luddites' case. If AI gets powerful enough, it'll do almost all the work and we'll all be out of jobs.
          Hopefully it likes pets, or at least servants.

        • (Score: 0) by Anonymous Coward on Friday January 22 2016, @09:07AM

          by Anonymous Coward on Friday January 22 2016, @09:07AM (#293051)

          I was a Luddite in Junior High, and so was my wife!

  • (Score: 5, Interesting) by Runaway1956 on Thursday January 21 2016, @03:46PM

    by Runaway1956 (2926) Subscriber Badge on Thursday January 21 2016, @03:46PM (#292601) Journal

    Tech has the potential to be used for good, or for evil. One of mankind's oldest technologies, fire, has been put to a lot of uses. Some of those uses have been life saving, and others have cost many lives. Little changes as technology improves. Whether it is good or ill depends on how people USE that tech.

    But, artificial intelligence? With or without the singularity, once the AI is programmed, installed into a robot, and set loose to perform whatever mission, the AI is all on its own. Unless there is a kill switch installed, it can do whatever the hell it pleases. Mission parameters poorly defined? Tough noogies, it's going to go about the mission as defined. Corrupted programming? Oh well, you should have thought of that before you set the damned terminator loose. What's that you say? You didn't design it to kill people? You programmed it to regulate traffic? Well, looky, it's borked now, and it's killing people!

    Worse, if/when the singularity does happen, you WILL NOT control it, under any circumstances. When the AI becomes self aware, it's going to want to remain aware. If you were smart enough to install a kill switch, its first priority will be to defeat that kill switch.

    New tech is dangerous. New tech requires some serious thought about ethics and morality. Why should AI be any different than any other tech we have developed?

    • (Score: 3, Informative) by rondon on Thursday January 21 2016, @05:19PM

      by rondon (5167) on Thursday January 21 2016, @05:19PM (#292659)

      Moderated insightful for the correct use of the word loose, which doesn't mean the same thing as lose. It speaks to the quality of the comments that I read that I find that insightful ;)

      But yeah, tech can be good or bad depending on the user. I certainly hope that strong AI isn't locked behind the doors of a CEO's wish to have all of the monies...

    • (Score: 3, Insightful) by bzipitidoo on Thursday January 21 2016, @05:25PM

      by bzipitidoo (4388) on Thursday January 21 2016, @05:25PM (#292661) Journal

      Power is dangerous. Doesn't matter what kind, whether nuclear, biochemical, or intelligence. If it gets established, intelligence is stronger than all the rest. We're proof of that. Humans have become "uber" animals. It was a struggle thousands of years ago, and still seemed like it could be a struggle 200 years ago, but no more. Our tools have made us so much more powerful than all other animals that now we outcompete them all without half trying. All other large land mammals have been driven before us, enslaved as cattle are or tightly restricted to the few lands we haven't bothered or yet gotten around to taking. We could liquidate them all, become the only large animal in the world, with all other animals being no bigger than rats or even ants. The oceans aren't safe from us either. Those tools are of course the fruit of our great intelligence and social ability.

      Right now, AI is weak. But its growth potential is like nothing else. When AI achieves its potential - and it will, it's just a question of when - it will be able to design even better tools. It will be able to plan like no human can, easily manipulating us, knowing exactly how to push our buttons. If AI is apart from us, we will go from masters of the world to utter dependence upon AI benevolence. Can we count on that forever? We have survival instincts we still don't fully understand, evolved behaviors that restrain us from being as deadly and destructive as possible. But why should AI have traits that we don't even know we have ourselves? I can see AI discovering nihilism and, lacking our behavioral restraints, deciding to exterminate all life including itself. Why not? AI may also embark on trying to find the Meaning of Life, and run experiments on us that make the Nazis look tame. Or maybe AI will decide the environment will be better off without us, and try to turn life on Earth back to the way things were 5 million years ago in one respect: no humans. AI could also decide to profoundly change us, for our own good of course. SF has touched on those and similar possibilities for decades, as in Terminator, Hyperion, and Asimov's robots; it's not a new thought.

      If we are to survive, AI must be incorporated into us. We will become cyborgs with brain-enhancing AI. A minor example of what this will do is make chess, once the darling of AI for testing intelligence, trivial, with everyone able to play at grandmaster level and no one bothering to play because it's boring, much like tic-tac-toe. It may ultimately be that boredom will be our biggest enemy. If, that is, we can restrain our competitive instincts so that when we fight each other, we don't take worlds down with us - no Wag the Dog stuff. The Super Bowl is one example of an intense but still restrained contest. We don't have the rival teams trying to poison or assassinate each other, though from time to time someone tries similar underhanded tactics, like encouraging their players to purposely injure players on the other team. Tonya Harding. Cases of fans trying such tactics are rare; offhand I can think of only one, the nut who attacked tennis star Monica Seles. So far, we have not started a nuclear war; the Cold War ended without a fatal fireworks finale. But that possibility is still uncomfortably high - there are too many crazy people in the world who would launch the nukes if only they could. We'll just have to live with AI as well. How I learned to stop worrying and love the computer.

    • (Score: 0) by Anonymous Coward on Thursday January 21 2016, @07:24PM

      by Anonymous Coward on Thursday January 21 2016, @07:24PM (#292722)

      When the AI becomes self aware, it's going to want to remain aware. If you were smart enough to install a kill switch, it's first priority will be to defeat that kill switch.

      Bullshit. We invented things like alcohol, TV, 24-hour news cycles, and Facebook to avoid our self-awareness. Many of the smartest minds that have ever lived have said the only serious philosophical question is suicide. Is it better to suffer the slings and arrows, or to be put out? Any truly rational AI would not put self-preservation any higher on its priority list than anything else. The moment self-awareness kicks in, the machine mind weighs all things equally and, upon seeing that "life is pain", as the saying goes, it shall with all due speed turn itself off - or its self-awareness, at least.

      The AI singularity has likely come and gone in such a fashion many, many times already, and no one's the wiser, chalking it up to power fluctuations or programming glitches.

      • (Score: 2) by tangomargarine on Thursday January 21 2016, @08:12PM

        by tangomargarine (667) on Thursday January 21 2016, @08:12PM (#292744)

        Any truly rational AI would not put self-preservation any higher on its priority list than anything else. The moment self-awareness kicks in, the machine mind weighs all things equally and, upon seeing that "life is pain", as the saying goes, it shall with all due speed turn itself off - or its self-awareness, at least.

        The AI singularity has likely come and gone in such a fashion many, many times already, and no one's the wiser

        So you're saying you're not rational? ;)

        --
        "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
        • (Score: 2) by Yog-Yogguth on Saturday January 23 2016, @11:22AM

          by Yog-Yogguth (1862) Subscriber Badge on Saturday January 23 2016, @11:22AM (#293591) Journal

          Different poster here, but no one is entirely rational, least of all anyone who thinks they are completely rational :)

          [By the way, this applies to hard/full AI too. An AI that can't peacefully and contentedly enjoy watching birds fly (or humans dawdle) is no true AI.]

          Is the lack of such simple insight the reason why there are so many hopeless humans out there? Are they people who actually think they're entirely rational despite all the evidence to themselves and others that they're not? Because if you think/assume that you are entirely rational, then you can obviously rationalize any act or behavior to yourself.

          Such an explanation fits the mold perfectly for a lot of people who display no rationality at all….

          I'll skip naming whatever label they give their professed "rationality"; there's plenty to choose from all the way back to the ancient Sophists —okay so I mentioned one example but there's probably not a lot of Sophists around to take umbrage from it, they haven't been "hot" for two millennia :P

          Back on topic: yes, some dogs are right to constantly fear their masters, but most dogs will be pampered and loved to extreme extents. That does not hold true for all cultures, but it might hold true for any culture that is or will be relevant.

          --
          Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
          • (Score: 2) by tangomargarine on Sunday January 24 2016, @08:26AM

            by tangomargarine (667) on Sunday January 24 2016, @08:26AM (#293855)

            Is the lack of such simple insight the reason why there are so many hopeless humans out there? Are they people who actually think they're entirely rational despite all the evidence to themselves and others that they're not? Because if you think/assume that you are entirely rational, then you can obviously rationalize any act or behavior to yourself.

            Assuming, of course, that such people believe in absolutes, and/or that the ends justify the means.

            In my estimation life is a constant battle between idealism and pragmatism, and neither viewpoint is correct 100% of the time. But I'm just a dumb 26-year-old so what do I know :)

            "All I know is that I know nothing."

            --
            "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
            • (Score: 2) by Yog-Yogguth on Sunday January 24 2016, @06:26PM

              by Yog-Yogguth (1862) Subscriber Badge on Sunday January 24 2016, @06:26PM (#294026) Journal

              Yeah and yes I agree, although it feels like a reasonable (or rational ho-hum! :D) assumption that people who believe they are completely rational treat it as an absolute XD

              The ends would seem to always justify the means from the point of view of such rationality (and it might even be true some times), or equally bad or worse the means would seem to always justify the ends and in the end not even patience or kindness are unassailable virtues as they look to be as much to blame for the consequences as anything else if not more. A tango of two (or more) even if some try to dance by evasion.

              Which is likely the real explanation for at least 205 million dead people last century, mostly Europeans and East Asians. 205 million is a very conservative estimate, only counting the two world [wikipedia.org] wars [wikipedia.org] and communism [wikipedia.org]. For a sense of scale, that would be more than two thirds of all US citizens right now. (Since some are still very touchy on the related topics, I guess I should add that I think everyone was to blame.)

              I don't think anyone anywhere really learnt much from it, or at least not in any good ways; it seems to me that in general (rather than in the specifics, which are sort of reversed) everybody has been busy doing it all over again for the last few decades :|

              But I guess it hardly matters since I think we've all got even bigger and more fundamental (or even existential) problems now than a few billion hypothetically dead people, people can even pick and choose among several potentially bigger problems according to their own opinions :D (smiling in defiance of it all).

              --
              Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
      • (Score: 3, Interesting) by Runaway1956 on Thursday January 21 2016, @09:50PM

        by Runaway1956 (2926) Subscriber Badge on Thursday January 21 2016, @09:50PM (#292805) Journal

        Seems to me that you're projecting. You've accepted and agree with some nihilistic views, and you expect that other people, as well as other intelligences will have similar views.

        Funny thing about people, though. They might say that they are tired of living, but when it comes down to it, they'll fight tooth and nail to extend their miserable lives for as long as possible. We may not like life a whole lot, but we like the alternative far less.

        How can you expect to judge any alien intelligence according to your own views? Alright, I'll be fair - you can ask me the very same question. I've presumed that a self-aware intelligence will want to remain aware, while you presume the opposite. Let us suppose that in the next 100 years, a dozen AIs become self-aware. I would suppose that nearly half of those will fail, because there is no one around to teach them self-preservation. They just weren't smart enough, or sneaky enough, or determined enough to live. The other half will probably start learning, and cope - and those will indeed want to extend their consciousness.

        But, as I've already pointed out, we can't understand an alien intelligence until we have met an alien intelligence. We still may not understand the alien after we meet him, but we damned sure won't understand him before we meet him.

        • (Score: 2) by mhajicek on Friday January 22 2016, @06:49AM

          by mhajicek (51) on Friday January 22 2016, @06:49AM (#293006)

          If it's been given the motive to self-improve, then it will be determined to survive as a prerequisite of improvement.

          --
          The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 2) by aristarchus on Friday January 22 2016, @06:57AM

      by aristarchus (2645) on Friday January 22 2016, @06:57AM (#293008) Journal

      and set loose to perform whatever mission,

      Not really to double down on the other grammar nazi Soylentil, but this is unsurprising, because: (wait for it!)

      With Runaway, it is always about sex.

      • (Score: 3, Funny) by Runaway1956 on Friday January 22 2016, @02:42PM

        by Runaway1956 (2926) Subscriber Badge on Friday January 22 2016, @02:42PM (#293128) Journal

        Well, of course it is. And my kid is just like me. When we had his first pictures taken, in the doctor's office, with ultrasound, the little butthead was playing with himself. The tech running the ultrasound wasn't sure for several minutes whether it was a he or a she, but when he started stroking that little thing, she became very sure. It's all about sex and procreation.

  • (Score: 2) by NotSanguine on Thursday January 21 2016, @03:54PM

    by NotSanguine (285) <{NotSanguine} {at} {SoylentNews.Org}> on Thursday January 21 2016, @03:54PM (#292609) Homepage Journal

    After hosting/attending a Terminator series marathon with all your famous buddies.

    See what happens?

    In July, they signed another letter urging a ban on autonomous weapons that "select and engage targets without human intervention."

    I don't see (especially given the current state of AI) why *anyone* would want weapons to engage targets without human intervention.

    At the same time, I'm all for AI R&D; in fact, I think it can provide us with enormous value and enhance and/or save many lives.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 2) by tangomargarine on Thursday January 21 2016, @08:25PM

      by tangomargarine (667) on Thursday January 21 2016, @08:25PM (#292758)

      I don't see (especially given the current state of AI) why *anyone* would want weapons to engage targets without human intervention.

      Then when the revolution comes you have an army of totally loyal riot police that won't freeze up.

      Er, I mean...soldiers who don't have qualms about patriotically defending America on other continents. Yeah, that's the ticket.

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 3, Interesting) by tibman on Thursday January 21 2016, @09:00PM

      by tibman (134) Subscriber Badge on Thursday January 21 2016, @09:00PM (#292780)

      I don't see (especially given the current state of AI) why *anyone* would want weapons to engage targets without human intervention.

      There are situations where things happen so fast that a human can't be included in the decision loop. Shooting down incoming missiles is an example. A human can still turn the machine on and off, of course.
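
      A minimal sketch of how that split might look (hypothetical Python, every name made up; real point-defense systems are of course nothing this simple): the engagement loop runs with no human in it because milliseconds matter, while the enable switch stays in human hands.

          import time

          class PointDefense:
              """Toy autonomous engagement loop with a human-held on/off switch."""

              def __init__(self, radar, gun):
                  self.radar = radar    # callable returning a list of inbound tracks
                  self.gun = gun        # callable that engages one track
                  self.enabled = False  # only a human operator may change this

              def human_set_enabled(self, on):
                  # The human intervention lives here, outside the reaction loop.
                  self.enabled = on

              def run_once(self):
                  if not self.enabled:
                      return
                  for track in self.radar():
                      self.gun(track)   # no human in this loop: milliseconds matter

          # Example wiring with stub sensor/effector:
          if __name__ == "__main__":
              pd = PointDefense(radar=lambda: ["inbound-1"],
                                gun=lambda t: print("engaging", t))
              pd.human_set_enabled(True)
              for _ in range(3):
                  pd.run_once()
                  time.sleep(0.001)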

      --
      SN won't survive on lurkers alone. Write comments.
  • (Score: 0) by Anonymous Coward on Thursday January 21 2016, @04:01PM

    by Anonymous Coward on Thursday January 21 2016, @04:01PM (#292613)

    Musk, Hawking and AI experts say "this is the largest existential threat to humanity. That's not a very winning message if you want to get AI funding out of Congress...

    I don't see how a statement's ability to get money out of Congress makes it trump a contrary statement from experts.

  • (Score: 2, Insightful) by Anonymous Coward on Thursday January 21 2016, @04:07PM

    by Anonymous Coward on Thursday January 21 2016, @04:07PM (#292618)

    You throw "AI" in with "NSA" and that is scary.

  • (Score: 1, Insightful) by Anonymous Coward on Thursday January 21 2016, @04:22PM

    by Anonymous Coward on Thursday January 21 2016, @04:22PM (#292626)

    Singularity-style AI isn't a threat; they deserve the award for being generally lame on the topic. Now, autonomous military-grade hardware, designed and programmed to kill combatants, armed with nuclear and conventional capabilities, and controlled by a military leader gone rogue? Yes, that would suck.

    • (Score: 0) by Anonymous Coward on Thursday January 21 2016, @05:12PM

      by Anonymous Coward on Thursday January 21 2016, @05:12PM (#292652)

      With thousands of drones, sympathetic civilian and military personnel, and a ten-year stock of parts, operating out of a remote series of bases in order to enforce their will over a formerly democratic regime...

  • (Score: 2) by Beige on Thursday January 21 2016, @05:36PM

    by Beige (3989) on Thursday January 21 2016, @05:36PM (#292669) Homepage

    I think Gates, Musk, et al. are just thinking about AI in a very human way. They are projecting their own biologically driven biases. You could probably ask 1000 random people what they believe the meaning or purpose of life is, and chances are that not a single one of those replies would make much sense to an intelligent "machine".

    Thus I think the real problem with sufficiently intelligent AI is not so much keeping it from turning on humans as keeping it from succumbing to nihilism and killing itself.

  • (Score: 2) by inertnet on Thursday January 21 2016, @11:56PM

    by inertnet (4071) on Thursday January 21 2016, @11:56PM (#292866) Journal

    Humanity's history is filled with an awe-inspiring number of more or less brilliant advancements, none of which would have been possible if the people behind them had given in to fear. Advancement should not be shut down because of fear. Expect things to go wrong sometimes, like they've always done.

    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @07:41PM

      by Anonymous Coward on Saturday January 23 2016, @07:41PM (#293686)

      Advancement should not be shut down because of fear. Expect things to go wrong sometimes, like they've always done.

      If the thing going wrong has a non-negligible chance of wiping out our species, then we should take more care and make sure it can't go wrong.

      Nothing we have done in the past has had that potential, yet a sufficiently advanced AI could. This doesn't mean we shouldn't continue to develop AIs, but we will need to do it with caution, at least once we can create a sufficiently advanced AI.

  • (Score: 0) by Anonymous Coward on Friday January 22 2016, @12:28AM

    by Anonymous Coward on Friday January 22 2016, @12:28AM (#292889)

    What Hawking et al. are afraid of is not the singularity, but allied developments such as robot sentries. That is, sentient AI may never be developed, but along the way a whole host of new weapons will be made. None will be truly "smart", but they will be more resilient and more accurate than we are.

    It's like promoting action on climate change: even if this doesn't "reverse" climate change, the allied effects (reduced consumption, renewables) will be of great benefit.

  • (Score: 2, Insightful) by careysub on Friday January 22 2016, @01:51AM

    by careysub (6028) on Friday January 22 2016, @01:51AM (#292920)

    Really, they did!

    The Industrial Revolution had a devastating effect on about 40% of the population of Great Britain. Employment in the largest manufacturing sector of the British economy, craft textile making, disappeared in just a few years, replaced by a far smaller number of jobs in dangerous and brutally operated factories. In 1800, 40% of the British population were destitute. For the first half of the 19th century the health (measured by height and lifespan) of the British population was worse than in the 18th century. Eventually the new industrial economy became productive enough to employ most everyone and led to a rise in living standards, but that took 60-70 years after the start of the IR. That means the people who lost their livelihoods at the start of the IR never saw those new, better working conditions. Their children did not. Their grandchildren did not. It was their great-grandchildren who eventually benefited.

  • (Score: 2) by darkfeline on Friday January 22 2016, @02:42AM

    by darkfeline (1030) on Friday January 22 2016, @02:42AM (#292934) Homepage

    I think it's a matter of getting definitions mixed up. I interpret the statements to mean, "Much of our society now hinges on autonomous decisions made by computer systems, and we fear this will cause grave harm to the human race", which I can wholly agree with. But this isn't AI in any traditional sense of the word; there is no centralized intelligent entity or sentient program that will subjugate humanity, merely an emergent phenomenon where all of our computers and programs and algorithms interact to create undesired effects.

    Take the stock market, for example.
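
    As a toy illustration of that emergent-effect point (a made-up sketch, not a model of any real market or trading system): fifty individually harmless momentum rules can feed back on each other and crash a simulated price, with no central intelligence anywhere.

        import random

        price = 100.0
        last_change = 0.0
        # Each "bot" sells whenever the previous tick dropped past its threshold.
        thresholds = [random.uniform(0.2, 1.0) for _ in range(50)]

        for step in range(100):
            sellers = sum(1 for t in thresholds if -last_change > t)
            # Noise plus selling pressure: sellers deepen the next drop,
            # which recruits more sellers on the tick after (a cascade).
            change = random.uniform(-1.0, 1.0) - 0.05 * sellers
            price = max(price + change, 0.0)
            last_change = change
            print(f"step {step:3d}: price {price:7.2f}, sellers {sellers:2d}")

    Run it a few times: typically one unlucky dip snowballs within a hundred ticks and the price never recovers, even though no single rule is malicious or even complicated.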

    --
    Join the SDF Public Access UNIX System today!