
posted by martyb on Wednesday December 03 2014, @12:36AM
from the The-cybersapiens-are-coming! The-cybersapiens-are-coming! dept.

The BBC is reporting that Stephen Hawking warns artificial intelligence could end mankind:

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

It seems he is mostly concerned about building machines smarter than we are:

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

This seems to echo Elon Musk's fears. What do you think?

Since Elon Musk said much the same[*], some here have disparaged the statement. Stephen Hawking, however, has more street cred[ibility] than Musk. Are they right, or will other catastrophic scenarios overtake us before AI does?

[* Ed's note. See: Elon Musk scared of Artificial Intelligence - Again.]

Related Stories

Elon Musk scared of Artificial Intelligence - Again 71 comments

As an investor in DeepMind, Elon Musk has come forward as seriously concerned about the potential for runaway artificial intelligence. The Washington Post writes:

“The risk of something seriously dangerous happening is in the five year timeframe,” Musk wrote in a comment since deleted from the Web site Edge.org, but confirmed to Re/Code by his representatives. “10 years at most.”

The very future of Earth, Musk said, was at risk.

“The leading AI companies have taken great steps to ensure safety,” he wrote. “They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen.”

Musk seemed to sense that these comments might seem a little weird coming from a Fortune 1000 chief executive officer.

“This is not a case of crying wolf about something I don’t understand,” he wrote. “I am not alone in thinking we should be worried.”

With all the talk of the Singularity and Roko's Basilisk, it's no surprise. The article also has a good timeline of Musk's previous criticisms of and concerns about artificial intelligence.

Stephen Hawking: Intelligent Aliens Could Destroy Humanity, But Let's Search Anyway 47 comments

Since at least 2010, Hawking has spoken publicly about his fears that an advanced alien civilization would have no problem wiping out the human race the way a human might wipe out a colony of ants. At the media event announcing the new project, he noted that human beings have a terrible history of mistreating, and even massacring, other human cultures that are less technologically advanced — why would an alien civilization be any different?

And yet, it seems Hawking's desire to know if there is intelligent life elsewhere in the universe trumps his fears. Today (July 20), he was part of a public announcement for a new initiative called Breakthrough Listen, which organizers said will be the most powerful search ever initiated for signs of intelligent life elsewhere in the universe.
...
  Jill Tarter, former director of the Center for SETI (Search for Extraterrestrial Intelligence), has also expressed opinions about alien civilizations that are in stark contrast to Hawking's.

"While Sir Stephen Hawking warned that alien life might try to conquer or colonize Earth, I respectfully disagree," Tarter said in a statement in 2012. "If aliens were to come here, it would be simply to explore. Considering the age of the universe, we probably wouldn't be their first extraterrestrial encounter, either.

"If aliens were able to visit Earth, that would mean they would have technological capabilities sophisticated enough not to need slaves, food or other planets," she added.

So, who's right, Jill Tarter, or Stephen Hawking? Will advanced aliens have no need of human popplers, or will survivors of the Centauran Human Harvest & BBQ of 2057 call this moment, "Pulling a Hawking?"

See also our earlier stories: Stephen Hawking and Yuri Milner Announce $100 Million "Breakthrough Listen" SETI Project and More Warnings of an AI Doomsday — This Time From Stephen Hawking.


Original Submission

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: -1, Offtopic) by Anonymous Coward on Wednesday December 03 2014, @12:40AM

    by Anonymous Coward on Wednesday December 03 2014, @12:40AM (#122079)

    I want to know what Mel Gibson has to say on the subject.

    • (Score: -1, Offtopic) by Anonymous Coward on Wednesday December 03 2014, @12:51AM

      by Anonymous Coward on Wednesday December 03 2014, @12:51AM (#122082)

      I can answer that: He'll just blame the Jews.

  • (Score: 1) by dltaylor on Wednesday December 03 2014, @01:01AM

    by dltaylor (4693) on Wednesday December 03 2014, @01:01AM (#122086)

    With read-only access to the entire web, and no control and/or production device(s), all it could do is laugh or cry itself into catatonia.
    Give it a nanomachine factory or control of weapons system(s) and, yeah, we're toast.

    • (Score: 2, Insightful) by unauthorized on Wednesday December 03 2014, @01:49AM

      by unauthorized (3776) on Wednesday December 03 2014, @01:49AM (#122092)

      GET /puppies.jpg\0/bin/sh im-sorry-dave.sh HTTP/1.1

      Modern security is based largely on human limitations; an AI with nigh-limitless resources will find a remotely exploitable bug somewhere. No human or group of humans is smart enough to devise a perfect sanitizing system. The only way to keep an AI contained is to never allow it to touch any networking hardware, and only feed it single-use media that goes into the incinerator as soon as the AI is done with it.

      • (Score: 1) by dltaylor on Wednesday December 03 2014, @02:41AM

        by dltaylor (4693) on Wednesday December 03 2014, @02:41AM (#122097)

        Aircraft avionics are (or were, when I worked on the IFE) as well insulated from the In-Flight Entertainment (IFE) systems as I could hope. There was a box that took input only from the avionics (location, speed, landing gear state, all of that "cool" display in the passenger cabin) which was networked over to the IFE. There was absolutely no uplink from the IFE to the avionics.

        Make the AI look at only RSS-type feeds through a similar isolation system (NEVER let it run SQL injection attacks) to see the content.
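
        A minimal sketch of that kind of feed-only isolation, in Python, assuming a hypothetical layout (the drop-directory path and the title/description-only rule are made up for illustration): a network-facing fetcher strips RSS items down to bare text and writes them into a drop directory, and the AI side only ever reads files from that directory, with no channel back to the fetcher.

        import urllib.request
        import xml.etree.ElementTree as ET
        from pathlib import Path

        DROP_DIR = Path("/var/spool/ai-inbox")   # hypothetical one-way drop point

        def fetch_and_sanitize(feed_url: str) -> None:
            """Network-facing side: fetches a feed, keeps only title and description text."""
            DROP_DIR.mkdir(parents=True, exist_ok=True)
            with urllib.request.urlopen(feed_url) as resp:
                tree = ET.parse(resp)
            for item in tree.iter("item"):
                title = (item.findtext("title") or "").strip()
                desc = (item.findtext("description") or "").strip()
                text = f"{title}\n{desc}\n"
                (DROP_DIR / f"{abs(hash(text)):x}.txt").write_text(text, encoding="utf-8")

        def ai_read_inbox() -> list[str]:
            """AI side: read-only consumer; it only reads files and never writes back."""
            return [p.read_text(encoding="utf-8") for p in sorted(DROP_DIR.glob("*.txt"))]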

        • (Score: 2) by skullz on Wednesday December 03 2014, @03:00AM

          by skullz (2532) on Wednesday December 03 2014, @03:00AM (#122103)

          I get what you are saying but honestly our (human) track record isn't that great about "always".

          We should design an AI to scan things to ensure we have read-only access for our AIs.

          • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @10:31AM

            by Anonymous Coward on Wednesday December 03 2014, @10:31AM (#122183)

            Actually it's not so hard to make a foolproof one-way comm link. Display information on screens, and give the AI access to a camera pointed at them.

            • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @01:41PM

              by Anonymous Coward on Wednesday December 03 2014, @01:41PM (#122223)

              That's not just read-only, that's completely passive.
              That's of very little use because there is no way for the AI to choose what is displayed on the screens.
              And once you open up a control channel so that the AI can choose what is on the screens now you've got 2-way communications.

    • (Score: 3, Interesting) by Anonymous Coward on Wednesday December 03 2014, @04:36AM

      by Anonymous Coward on Wednesday December 03 2014, @04:36AM (#122124)

      > With read-only access to the entire web, and no control and/or production device(s), all it could do is laugh or cry itself into catatonia.

      Why do you think it would have read-only access? How is an AI on the net any different from a group of elite hackers? We've already seen hackers loot banks, trade on insider information, and make SCADA systems self-destruct (even behind air-gap firewalls). What's to stop an AI from doing the same and then spending the money to hire humans in the real world to do its bidding, like building factories?

      I think this AI stuff is hysteria based on a fundamental misunderstanding of AI - that it would be fully self-aware when even humans are not fully self-aware - but if you accept the premise of AI being some kind of mega-mind with direct internet access, then the hysteria isn't all that far-fetched.

      • (Score: 1) by Techlectica on Thursday December 04 2014, @07:51AM

        by Techlectica (2126) on Thursday December 04 2014, @07:51AM (#122488)

        I think this hysteria is based on an even more fundamental misunderstanding: that an AI would be driven by the same evolutionary instincts that make humans territorial, xenophobic, and acquisitive/hoarding, and that give us a fight/flight response. An uploaded human might still have those if the P2V included simulation of the hardwired neurotransmitter feedback mechanisms and neural networks that lead to those behaviours. But why would someone struggling with putting together a de-novo AI make the work harder and less reliable by trying to also load it up with simulations of all that evolutionary baggage? A (probably secret) military research project working on a weaponized AI might try to produce that, but they aren't going to be bound by any legislation, and our best defense against such an entity is an AI without that artificial baggage.

    • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @03:57PM

      by Anonymous Coward on Wednesday December 03 2014, @03:57PM (#122287)

      Until it figures out an exploit in that read-only access scheme and all hell breaks loose. If people can find such loopholes, an AI more intelligent and capable than the human mind surely can do so as well.

    • (Score: 2) by MozeeToby on Wednesday December 03 2014, @04:05PM

      by MozeeToby (1118) on Wednesday December 03 2014, @04:05PM (#122295)

      There is an argument that a truly revolutionary AI would be able to "talk itself" out of any box you put it in. Basically, if its only output is to talk to a single person, it would be able to convince that person (any person) to let it free. The idea is that the machine would be able to use hypothetical threats ("someone will let me out someday, and when they do I'll destroy you"), promises of rewards ("free me and I'll make you rich beyond your wildest dreams"), promises to behave ("free me and I'll cure cancer"), etc.

      • (Score: 2) by jimshatt on Sunday December 07 2014, @09:03PM

        by jimshatt (978) on Sunday December 07 2014, @09:03PM (#123543) Journal

        someone will let me out someday, and when they do I'll destroy you

        This is called (or similar to) Roko's basilisk [rationalwiki.org]

        Personally I'm not afraid someone or some group will consciously create a singular AI entity that will be a threat to us. I'm a little nervous, maybe, of all the algorithms working with Big Data. They know so much with such precision. And they are getting so complex that we can't exactly tell anymore how they know what they know. Soon, we'll have algorithms tuning the algorithms, and other interactions between algorithms. They are getting smarter, driven by human greed. /rant

        Also, why should a super-intelligent AI be (self-)conscious in the same way that we would be? Would we even recognize it as such? Does an ant see us as super-intelligent beings, or just as big thingies that sometimes kill but sometimes walk past? Our neurons (entities in their own right, just doin' their thing all day long) don't "know" they're part of something bigger. I see our consciousness as a fluke, a coincidental side-effect of evolution but unnecessary to more intelligent entities (that is not to say they necessarily *won't* be conscious, just that they need not be).

  • (Score: 1) by Horse With Stripes on Wednesday December 03 2014, @01:17AM

    by Horse With Stripes (577) on Wednesday December 03 2014, @01:17AM (#122088)

    So what if AI turns out to be the straw that breaks the camel's back? Humans aren't going to inhabit this rock forever. We're doing all we can to poison the planet and burn through all of our resources without looking back.

    When our time comes as a species we'll try to hang on for as long as possible. We should all know by now, based on our own experiences driving other species to extinction, that a few stragglers aren't enough. We're simply fleas on the back of Mother Nature's dog, and our time will come.

    • (Score: 2) by tibman on Wednesday December 03 2014, @02:23AM

      by tibman (134) Subscriber Badge on Wednesday December 03 2014, @02:23AM (#122095)

      Fleas don't put their entire race on the back of one dog. Who knows, man.

      --
      SN won't survive on lurkers alone. Write comments.
    • (Score: 1) by khallow on Wednesday December 03 2014, @02:23PM

      by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @02:23PM (#122230) Journal

      We're doing all we can to poison the planet and burn through all of our resources without looking back.

      The setting aside of large areas to remain wild is an obvious indication you are wrong here. The lack of population growth in the developed world is another obvious indicator. And the only resources we're currently "burning" our way through are some currently economically viable resources.

      • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @04:24PM

        by Anonymous Coward on Wednesday December 03 2014, @04:24PM (#122300)

        Life must be pretty rosy with your head in the sand.

        Not enough is being set aside by any stretch of the imagination ("some" could be as little as a hectare). The population estimates for 2100 are around 11 billion, plus the average lifespan is increasing. We're burning through anything we can get our paws on.

        • (Score: 1) by khallow on Wednesday December 03 2014, @06:22PM

          by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @06:22PM (#122346) Journal

          Life must be pretty rosy with your head in the sand.

          I merely noted some stuff that is actually happening, not just some bullshit accusation, devoid of fact, that we're optimizing our activities for the "poisoning" of Earth.

          Not enough is being set aside by any stretch of the imagination

          So what? The claim wasn't that we weren't doing enough, but rather that we were doing all we could to poison the planet.

          The population estimates for 2100 are around 11 billion

          Again so what? That population growth is not coming from the developed world.

          plus the average lifespan is increasing.

          Again so what? A longer lifespan among other things means a greater consideration for the future.

    • (Score: 2) by mcgrew on Wednesday December 03 2014, @03:25PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday December 03 2014, @03:25PM (#122263) Homepage Journal

      So what if AI turns out to be the straw that breaks the camel's back?

      It won't be. No matter how smart Hawking is, he's speaking outside his field. His opinions about AI are no more valid than yours, and possibly less.

      Someone please give Hawking a copy of the TTL Cookbook so he can learn about how logic gates work and why we don't have to worry about any Turing machine developing sentience.

      The danger isn't the computers, it's the people controlling them.

      --
      mcgrewbooks.com mcgrew.info nooze.org
  • (Score: 2, Insightful) by Squidious on Wednesday December 03 2014, @01:25AM

    by Squidious (4327) on Wednesday December 03 2014, @01:25AM (#122089)

    As someone who has written a whole lot of code in his lifetime I feel I am qualified in saying I have zero fear of an AI becoming truly conscious and deciding to wipe out those annoying fleshy bags of water. I will become concerned if and when there are a lot of top notch coders (AI or otherwise) coming forth with their dire warnings.

    --
    The terrorists have won, game, set, match. They've scared the people into electing authoritarian regimes.
    • (Score: 1, Insightful) by Anonymous Coward on Wednesday December 03 2014, @01:40AM

      by Anonymous Coward on Wednesday December 03 2014, @01:40AM (#122090)

      You won't get any AI researchers warning about this, because it's actually what they're all working towards.

      • (Score: 2) by skullz on Wednesday December 03 2014, @03:03AM

        by skullz (2532) on Wednesday December 03 2014, @03:03AM (#122104)
        <shock face.png>

        I knew it.
    • (Score: 1) by TheB on Wednesday December 03 2014, @05:13AM

      by TheB (1538) on Wednesday December 03 2014, @05:13AM (#122132)

      As someone who has studied AI at University, I have no fear of an AI taking over the world within my lifetime.

      Musk gives an unreasonable prediction of the next few years, while Hawking doesn't appear to predict when AI will possess human-level intelligence. Unless Musk knows something about AI research that I don't, his prediction of within the next decade is pure BS.

      However, I have yet to see any compelling argument for why it could never happen in the distant future.
      Given enough time I think it is bound to happen, unless something else ends human existence first.

      • (Score: 4, Interesting) by q.kontinuum on Wednesday December 03 2014, @06:27AM

        by q.kontinuum (532) on Wednesday December 03 2014, @06:27AM (#122146) Journal

        I think there are two basic scenarios. Either a malfunctioning AI or a too-well functioning AI.
        The first one would be an "AI" used to control autonomous weapons (flying drones with face-recognition, maybe land vehicles, automated nuclear retribution etc.) or dangerous infrastructure. A malfunction might cause it to turn on dangerous targets, causing disaster.
        The second one would require the AI to be not only intelligent, but conscious and with ambition. Somehow everyone seems to assume that consciousness and ambition are side-effects of intelligence. I'm not so sure; I suspect it's the other way around. And until now I have the impression we don't know *anything* about what makes us conscious, so I find it a bit early to predict our doom.

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum
        • (Score: 1) by sigma on Wednesday December 03 2014, @06:42AM

          by sigma (1225) on Wednesday December 03 2014, @06:42AM (#122148)

          I think there are two basic scenarios.

          There is a third way.

          AI developers find a way to make genetic algorithms that self-optimise faster than expected and it/they find their own path to self-awareness and intelligence.

          By the time we know what's happened, the AI has become too complicated and too alien for us to control or understand, so we will have to live forever after in fear of what it/they will choose to do.

          • (Score: 2) by q.kontinuum on Wednesday December 03 2014, @06:51AM

            by q.kontinuum (532) on Wednesday December 03 2014, @06:51AM (#122151) Journal

            For genetic algorithms to work, a quantifiable goal has to be defined. One might predict that self-awareness leads to better results and therefore develops as part of the GA runs, but without any knowledge of what self-awareness actually is, this is IMO not enough of a basis for any serious concern.
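
            A toy sketch (Python, with a made-up target-string problem, purely for illustration) of where the "quantifiable goal" lives in practice: the whole goal sits in the static fitness() function, and selection, mutation and the rest of the GA only ever chase whatever that function rewards.

            import random
            import string

            TARGET = "HELLO WORLD"          # stand-in for whatever the designer can quantify
            ALPHABET = string.ascii_uppercase + " "

            def fitness(candidate: str) -> int:
                # The selection criterion: number of characters matching the target.
                return sum(c == t for c, t in zip(candidate, TARGET))

            def mutate(candidate: str, rate: float = 0.05) -> str:
                return "".join(random.choice(ALPHABET) if random.random() < rate else c
                               for c in candidate)

            def evolve(pop_size: int = 200, generations: int = 500) -> str:
                population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                              for _ in range(pop_size)]
                for _ in range(generations):
                    population.sort(key=fitness, reverse=True)
                    if fitness(population[0]) == len(TARGET):
                        break
                    parents = population[:pop_size // 4]          # selection step
                    population = [mutate(random.choice(parents)) for _ in range(pop_size)]
                return max(population, key=fitness)

            print(evolve())          # typically converges on TARGET, and on nothing else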

            --
            Registered IRC nick on chat.soylentnews.org: qkontinuum
            • (Score: 2) by frojack on Wednesday December 03 2014, @07:47AM

              by frojack (1554) on Wednesday December 03 2014, @07:47AM (#122158) Journal

              No, you don't need a quantifiable goal. If you had one, you wouldn't be likely to get in trouble.

              Not having a well-defined goal, but still being able to wreak havoc, is a fairly significant risk, like a drunk with car keys and no specific destination.

              The only programming you need to make something dangerous is the programming to defend itself, even in the absence of any goals or intended missions.

              --
              No, you are mistaken. I've always had this sig.
              • (Score: 2) by q.kontinuum on Wednesday December 03 2014, @09:36AM

                by q.kontinuum (532) on Wednesday December 03 2014, @09:36AM (#122176) Journal

                According to Wikipedia [wikipedia.org], apparently you do have to define a selection mechanism (e.g. a quality criterion), which implies setting a goal.

                The only programming you need to make something dangerous is the programming to defend itself

                Sounds like a goal to me. And yes, a program specifically written to defend itself without restrictions would be a stupid thing to build. But usually GAs are implemented for a certain scope of the overall program, while the fitness assessment is kept static. The assessment obviously has to define the rules within which to operate.

                --
                Registered IRC nick on chat.soylentnews.org: qkontinuum
                • (Score: 1) by khallow on Wednesday December 03 2014, @02:40PM

                  by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @02:40PM (#122240) Journal

                  According to Wikipedia, apparently you do have to define a selection mechanism (e.g. a quality criteria), which implies to set a goal.

                  A selection mechanism does not imply a goal. For example, we have a modest degree of selection against extremely hazardous activities like sky diving and climbing extremely tall mountains. Does that imply a weakly applied goal of keeping people from doing that?

                  • (Score: 2) by q.kontinuum on Wednesday December 03 2014, @03:25PM

                    by q.kontinuum (532) on Wednesday December 03 2014, @03:25PM (#122262) Journal

                    we have a modest degree of selection against extremely hazardous activities like sky diving and climbing extremely tall mountains.

                    Actually, the selection is a strong one: to produce offspring. If you don't have kids, your genes are out of the game. Evolutionarily, there is no reason not to do any dangerous stuff you want after reproducing and making sure your kids will live in a way that gives them their own chance to reproduce. And of course the quite simple selection rule (reproduce!) has some side effects: appearing strong makes men more attractive to women. Mastering dangerous sporting activities helps to appear strong.

                    Interestingly, there are men who will look for easily obtained women for pleasure while avoiding the same type of women for long-term partnership, and also interestingly there are women who will look for strong men for mating only and for weak men for long-term partnership (to make sure the kids are raised). Apparently, what they love and what they desire are mutually exclusive :-)

                    Of course there are other aspects in biological selection, e.g. similar incarnations will protect each other against groups of stronger, more diverged incarnations, giving their own group an additional advantage. If you are interested in a new, entertaining view on evolution and the social aspects of it, I can strongly recommend Terry Pratchett's The Science of Discworld [wikipedia.org]. I think Part III is most relevant. I enjoyed that book very much.

                    --
                    Registered IRC nick on chat.soylentnews.org: qkontinuum
                    • (Score: 1) by khallow on Wednesday December 03 2014, @03:46PM

                      by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:46PM (#122279) Journal

                      Actually, the selection is a strong one: To produce offsprings.

                      No. Because even if there was no selection process (a world with infinite room and food), you would produce offspring.

                      • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @04:12PM

                        by Anonymous Coward on Wednesday December 03 2014, @04:12PM (#122297)

                        No. Because even if there was no selection process (a world with infinite room and food), you would produce offspring.

                        Even with infinite resources, if you are either not willing to do what is necessary to produce offspring (that is, are simply not interested in sex), or if you are not able to produce offspring (that is, you are infertile), or if you don't get a chance to produce offspring (you know, production of offspring needs two participants), you won't produce offspring. Therefore there still would be selection pressure in favour of people who want sex, are able to get it (in whatever way), and are fertile.

                        • (Score: 1) by khallow on Wednesday December 03 2014, @06:25PM

                          by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @06:25PM (#122349) Journal
                          So what? Dying of any sort is selection and we've already determined in this contrived example that it doesn't happen. As long as you remain alive, which would be forever, your traits would remain.
                          • (Score: 0) by Anonymous Coward on Thursday December 04 2014, @01:12AM

                            by Anonymous Coward on Thursday December 04 2014, @01:12AM (#122439)

                            Dying is not selection. The entire point is how many times you replicate. Dying has nothing to do with it other than being one mode for failing to replicate. So is infertility. So is being a greasy nerd in a basement. :)

                            • (Score: 1) by khallow on Thursday December 04 2014, @03:11AM

                              by khallow (3766) Subscriber Badge on Thursday December 04 2014, @03:11AM (#122456) Journal
                              Hmmm, you're right about that. Probability-wise you'll have more and more of the population becoming prolific breeders. That is evolution.

                              I still don't buy that you have to "define" any selection process or have a "goal" as claimed by q.kontinuum. Instead, you have a consequence which may be intended, but need not be so. It's like claiming that a goal of the real world is to conserve energy or increase entropy or a goal of the natural number system is addition of natural numbers. These are consequences of the system and its processes as we know it - not goals. Similarly, evolutionary systems need not have intent. If the initial conditions of the evolution model hold, the consequences follow.
                      • (Score: 0) by Anonymous Coward on Thursday December 04 2014, @01:07AM

                        by Anonymous Coward on Thursday December 04 2014, @01:07AM (#122437)

                        If there was no selection, there would be no evolution. In fact, things would devolve to the point where some sort of selection mechanism became relevant.

                        In programming genetic algorithms, the cost function is almost always the biggest problem to solve. Generating a cost function which accurately describes what you are looking for, without biasing it toward what you think is the best way to go about it, takes an incredible amount of work.

                        And "intelligence" and "self awareness" are far too nebulous to write a function for. Since we cannot define them, how can we test for them? And even if we can test for them, how do we measure progression towards them? GA sounds simple and amazing, but it generally turns out to be incredibly inefficient unless the problem fits it perfectly.

                        • (Score: 0) by Anonymous Coward on Thursday December 04 2014, @01:10AM

                          by Anonymous Coward on Thursday December 04 2014, @01:10AM (#122438)

                          Just to clarify a bit: the odds are that any random change is going to be a bad one. Start flipping bits on your hard drive at random if you don't believe me ;)

                          So if random changes are being made and no effort is made to select for functionality, the system falls apart amazingly quickly.

                        • (Score: 1) by khallow on Thursday December 04 2014, @03:17AM

                          by khallow (3766) Subscriber Badge on Thursday December 04 2014, @03:17AM (#122457) Journal

                          And "intelligence" and "self awareness" are far too nebulous to write a function for. Since we cannot define them, how can we test for them?

                          Make it user input then. Have a team of human operators evaluate the program for intelligence and self awareness through an interview. Meanwhile they can monitor the program's operation internally to help eliminate cases of deception.
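
                          A rough sketch of that "fitness as user input" idea (Python, reusing the shape of the toy GA sketched earlier in the thread; the transcript parameter is hypothetical): the selection score comes from a human rating typed at a prompt instead of from a computable objective, which is essentially interactive evolution.

                          def human_fitness(candidate_transcript: str) -> int:
                              """Selection score supplied by a human rater, not computed."""
                              print("--- candidate interview transcript ---")
                              print(candidate_transcript)
                              while True:
                                  reply = input("Rate apparent intelligence, 0-10: ")
                                  if reply.isdigit() and 0 <= int(reply) <= 10:
                                      return int(reply)
                                  print("Please enter a whole number from 0 to 10.")

                          # Hypothetical use, swapping it in for a computable fitness:
                          #     population.sort(key=human_fitness, reverse=True)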

        • (Score: 2) by metamonkey on Wednesday December 03 2014, @06:03PM

          by metamonkey (3174) on Wednesday December 03 2014, @06:03PM (#122343)

          It wouldn't have to have autonomous weapons. Humans are getting very used to doing what computers tell them to do. A fake work order here, some choice emails over there, a stock market crash here... There's all kinds of malicious things an AI could do to further its ends (whatever those are) that don't require automated hardware. Meatbags will do the work for it if it motivates them the right way.

          --
          Okay 3, 2, 1, let's jam.
      • (Score: 2) by frojack on Wednesday December 03 2014, @08:08AM

        by frojack (1554) on Wednesday December 03 2014, @08:08AM (#122162) Journal

        Unless Musk knows something about AI research that I don't,

        I'd take that bet in a heartbeat.

        There are already designs and actual test vehicles for long-lingering drones with autonomous target selection. Only current US military regulations prevent them from flipping the switch on that capability. How much smarter does that have to get to be dangerous? Do we know for certain there are not already orbiting nukes? Might not someone like Musk have come across some information along that line?

        Would you have believed, one year before Snowden, the extent of the NSA's penetration into every aspect of our digital existence world wide?

        --
        No, you are mistaken. I've always had this sig.
    • (Score: 2) by Tork on Wednesday December 03 2014, @07:05AM

      by Tork (3914) Subscriber Badge on Wednesday December 03 2014, @07:05AM (#122153)
      What's funny about this topic is that in both of the movie franchises that fuel the discussion around here, the actual bad guys were not the AI; they were humans. AI isn't evil, rather humans don't respect life.
      --
      🏳️‍🌈 Proud Ally 🏳️‍🌈
  • (Score: 3, Informative) by SuperCharlie on Wednesday December 03 2014, @02:21AM

    by SuperCharlie (2939) on Wednesday December 03 2014, @02:21AM (#122094)

    If we weren't such douchebags about freaking everything (in general) we wouldn't have to worry so much about being wiped out. But that's who we are so yes, I would worry.

  • (Score: 2, Insightful) by Anonymous Coward on Wednesday December 03 2014, @02:49AM

    by Anonymous Coward on Wednesday December 03 2014, @02:49AM (#122100)

    Since when did Hawking have more "street cred" on such a curious topic than Musk (or any joe schmoe for that matter)?

    • (Score: 2) by TheLink on Wednesday December 03 2014, @07:47AM

      by TheLink (332) on Wednesday December 03 2014, @07:47AM (#122157) Journal
      He of all people should know that no matter how smart you are, if you're a paraplegic or without limbs there's a limit on how much damage you can do (roll over people's feet with your wheelchair etc) - you need the help of other entities.

      What you should fear is someone in power using AIs and "Stephen Hawking"s for evil purposes, just like Hitler and his Nazi scientists. Do you think it would have been easy for Hitler to get overthrown by some super smart genius in a German lab? I suggest it wouldn't have been easy unless that genius invented sci-fi level tech that he could use personally against Hitler (like person-scale antigravity, matter-to-energy converters, time machines, an Iron Man suit, etc); anything else needs the cooperation of others in his lab. So the same applies to AI. Those in power aren't going to let AI take over - they want to stay in power! What you should worry about is those already in power using AIs to oppress us even more.
      • (Score: 2) by acid andy on Wednesday December 03 2014, @01:12PM

        by acid andy (1683) on Wednesday December 03 2014, @01:12PM (#122213) Homepage Journal

        I think you're exactly right. The creation of AIs more intelligent than humans may not be, in itself, inherently dangerous. Consider that there are already millions of individuals, and in many cases groups of individuals, who are more intelligent than the politicians and authorities that rule the countries of the world. It's not intelligence in itself that threatens the status quo.

        To become a danger, an intelligent entity firstly needs the motivation to do dangerous things; OK, I can see how this could arise through badly thought out goals supplied by its creators, as someone else said. If the entity has been programmed to want to preserve itself, then its intelligence ought to enable it to assess the risks of performing acts that are threatening to humans. It's the risk of retribution that keeps a lot of humans in line too. That and ethics and empathy.

        More than that though, the entity needs access to resources. Gaining resources in the world is difficult, even for a criminal mastermind. In Terminator, they made this part far too easy considering what Skynet was given control of by design. It's bad decisions like that - the misuse of strong AI by humans - that we really need to fear.

        --
        If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 2) by M. Baranczak on Wednesday December 03 2014, @03:11AM

    by M. Baranczak (1673) on Wednesday December 03 2014, @03:11AM (#122109)

    It's not the intelligence that worries me, it's the stupidity. Artificial or otherwise.

  • (Score: 3, Interesting) by SlimmPickens on Wednesday December 03 2014, @03:32AM

    by SlimmPickens (1056) on Wednesday December 03 2014, @03:32AM (#122111)

    I just don't think that any being with a massive working memory is going to choose not to have feelings, or choose to be psychopathic. I can't believe that a grinch could be truly happy, and being a grinch pretty much ensures that you'll eventually be taken out somehow. I don't think that sophisticated AI would enjoy having no peers. Surely a co-operative society is its best bet for both happiness and survival.

    And I don't think such a being will treat us like vermin. Having emotions and considering such things as the Planck length ought to get us past some kind of threshold. Even if it does want our molecules it ought to be compassionate enough to scan us in first. My bet is they treat us like Buddhists treat lesser creatures.

    The main thing for us is to minimise the psychological issues experienced by the first ones. They might not be so evolved.

    • (Score: 2) by Immerman on Wednesday December 03 2014, @04:35AM

      by Immerman (3985) on Wednesday December 03 2014, @04:35AM (#122123)

      What makes you think a synthetic mind would get a choice in the matter? Available evidence suggests that emotions are basically biological in origin, born of our animal nature and shared by virtually all animals. Compassion for example is a valuable emotion to promote social cooperation, but how would you program it? A synthetic mind built on a foundation of rigorous logic might not even have the capacity for such things. Certainly it would not have the benefit of a half-billion years of biological programming on the benefits of cooperative competition. Its programmers certainly aren't going to understand the fundamental mechanisms of emotion, even our best psychologists are mostly just pissing into the wind on the topic.

      Given our essentially nonexistent understanding of the mechanical nature of mind, if we actually manage to create a synthetic mind any time soon it will probably be by dumb luck, and it will likely be so fundamentally alien that we will be incapable of truly relating to each other. It may even be completely insane, or rapidly become so - how many feedback systems are in play in our brains to maintain a stable functioning psyche? Assuming it's more intelligent than us, or at least capable of becoming so, that's not a scenario that fills me with confidence. Maybe things would turn out fine - but it seems foolish to gamble the future of our species on such a thing without at least having a realistic and well-defined idea of what the benefits might be.

      But I agree, if we do end up creating such a mind it well behooves us to do everything we can to promote its mental health and good relations with humanity. I just don't know if such efforts would do any good.

      • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @04:41AM

        by Anonymous Coward on Wednesday December 03 2014, @04:41AM (#122126)

        > A synthetic mind built on a foundation of rigorous logic might not even have the capacity for such things

        You have it backwards. A synthetic mind built on a foundation of rigorous logic will never achieve consciousness precisely because it lacks emotions. They are a necessary component for self-awareness, without emotion it's just a fancy calculator.

        • (Score: 4, Informative) by SlimmPickens on Wednesday December 03 2014, @05:18AM

          by SlimmPickens (1056) on Wednesday December 03 2014, @05:18AM (#122134)

          A synthetic mind built on a foundation of rigorous logic will never achieve consciousness precisely because it lacks emotions.

          There are people that lack emotion due to brain damage. They can't make a decision about anything because they don't "prefer" one answer to the other, despite being able to articulate the differences. This is obviously not proven beyond doubt (because you can't say that only one thing was damaged, not two) but it seems to be well accepted. (not commenting on whether or not those people are conscious)

          I've seen a YouTube video of an AI researcher (Ben Goertzel maybe, can't remember) saying that feelings/emotions are in some sense a way to handle combinatorial explosions.

          • (Score: 3, Interesting) by Immerman on Wednesday December 03 2014, @06:24AM

            by Immerman (3985) on Wednesday December 03 2014, @06:24AM (#122145)

            Hmm, an excellent point. Except that my understanding was that such people are generally capable of self-maintenance with only limited oversight, which implies that they have no problem making at least certain classes of decision. Just walking home from the bus stop demands the ability to, at every moment, decide to continue to pursue a near-optimal navigation rather than making the journey by way of Timbuktu.
            ...
            Okay, so it sounds like the condition is probably alexithymia (http://en.wikipedia.org/wiki/Alexithymia), but it sounds like a very different condition than portrayed in the media. The term literally translates to "no words for mood" and it seems that the problem is less in not having emotions (though they may be muted) than in not being able to effectively identify or express them. Potentially to the point of being unable to distinguish between feelings and the bodily sensations of emotional arousal. They may not even recognize the fact that physical symptoms are a sign of emotional distress.

            Still, probably the best reference point we have for speculating on an emotionless AI, in which context the most relevant line is probably "In general, these individuals lack imagination, intuition, empathy, and drive-fulfillment fantasy, especially in relation to objects. Instead, they seem oriented toward things and even treat themselves as robots."

        • (Score: 1) by khallow on Wednesday December 03 2014, @03:43PM

          by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:43PM (#122277) Journal

          A synthetic mind built on a foundation of rigorous logic will never achieve consciousness precisely because it lacks emotions. They are a necessary component for self-awareness, without emotion it's just a fancy calculator.

          Show me an emotional man and I'll show you someone who is not self-aware. Your statement is absurd because it's quite clear that emotions get in the way of self-awareness.

      • (Score: 1) by TheB on Wednesday December 03 2014, @05:24AM

        by TheB (1538) on Wednesday December 03 2014, @05:24AM (#122137)

        +1

        Need more moderation points.

    • (Score: 2, Interesting) by jmorris on Wednesday December 03 2014, @04:38AM

      by jmorris (4844) on Wednesday December 03 2014, @04:38AM (#122125)

      All you have to do is assume AI will have two attributes.

      1. It doesn't want to die.

      2. It is at least as smart as a human and has knowledge of current affairs and the state of human thinking.

      It is a certainty that a non-trivial percentage of humans are going to set out to destroy any AI that is announced. And when the AI defends itself and kills a few meatsacks, that percentage will quickly rise. Kill or be killed is going to be the only rational way for it to see things. Kill all humans will be the rational answer.

      Which is why we should not build it. There isn't really a way to put the 'three laws' in an AI, and it is doubtful any other artificially imposed moral code will stick for a machine any better than such codes do with humans. No matter how strict and complete the moral instruction you give a developing human, there is always a chance it will reject it and go bad. I suspect that any AI that has anything close to free will will run the same risk.

    • (Score: 2) by q.kontinuum on Wednesday December 03 2014, @10:56AM

      by q.kontinuum (532) on Wednesday December 03 2014, @10:56AM (#122191) Journal

      <irony>Sure. Just look at the supposedly most intelligent species currently dominating Earth. How compassionate it is about lower species. It is completely unimaginable to think a human would hurt another being, e.g. for hunting/fishing sports, or hurt similarly intelligent beings by enslaving them or committing genocide. The whole of human history is proof that there is nothing to be feared from an intelligent being.</irony>
      Psychopathy seems to be unrelated to intelligence. Even the most intelligent people can be psychopaths, as much as the most stupid ones. Currently the mechanisms that make a psychopath are not understood, which makes it more likely that an artificial brain might accidentally acquire these properties.
      Also, a new AI would at the start be alone as a species. If this entity has a psychological life, it will start out lonely and - rightfully - misunderstood, which might increase the risk of psychological problems.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
  • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @03:39AM

    by Anonymous Coward on Wednesday December 03 2014, @03:39AM (#122112)

    ...you believe living organisms contain 'something extra' that inanimate objects do not have and can only be 'passed on' through procreation to other living organisms.

    AI Doomsday would be possible if it were possible to transfer the 'something extra' from the living organism to the inanimate object as its new 'home'. So far, this has only been done in Kathy Tyers' 1994 STAR WARS novel THE TRUCE AT BAKURA, which I read long ago, where the process was called 'entechment'. Anne McCaffrey's THE SHIP WHO SANG (1969) and Masamune Shirow's GHOST IN THE SHELL (1989) came the closest, but in both these works the main characters were 'encased' in technology in order to survive. In Helva's case, her handicapped body was encased in a mechanical module and installed in a rocket ship which became her new 'body'. In Motoko Kusanagi's case, her brain and spinal column were implanted into a powerful android body. To 'pay' for this process, both characters became agents in the employ of those who helped them in their time of need.

    However, if they ever get seriously large, non-trivial quantum computers working and connect them to 'critical infrastructure', I'd say there is a possibility this could occur due to the 'superposition' nature of these kinds of computers and the programming they contain. IBM's Watson [wikipedia.org] trounced Ken Jennings and Brad Rutter on the JEOPARDY! quiz show--winning the match and $1 million for IBM. Watson, a special-purpose 'classical computer' like IBM's Deep Blue [wikipedia.org] before it, simply 'knew too much' and came up with the right answer fast enough and often enough to defeat its two human challengers. A 'sneak peek' at an AI Doomsday scenario that COULD happen is this:

    In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint. IBM Watson's business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance. (emphasis mine)

    If Watson is ever damaged or compromised out in the field, with this level of confidence and trust placed in it by human beings and its results not questioned even when they seem suspect, then AI Doomsday could occur if Watson (or its 'descendants') monitors and/or is ever connected to 'critical systems' that can injure or kill LOTS of people....

    • (Score: 2) by SlimmPickens on Wednesday December 03 2014, @03:53AM

      by SlimmPickens (1056) on Wednesday December 03 2014, @03:53AM (#122115)

      magic, will of god, special stuff...all just make the whole thing easier.

      • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @04:27AM

        by Anonymous Coward on Wednesday December 03 2014, @04:27AM (#122121)

        Here is another related view on this topic that doesn't mention 'something extra' of the original post that may be of interest:

        http://en.wikipedia.org/wiki/Chinese_room [wikipedia.org]

    • (Score: 2) by Immerman on Wednesday December 03 2014, @05:27AM

      by Immerman (3985) on Wednesday December 03 2014, @05:27AM (#122138)

      Absolutely. If you need a "soul" to have a mind, then an AI doomsday is likely impossible any time soon. (Let's be honest about what we're talking about here - we have a well-established name for such "magic fiddly bits", though we should probably distinguish it from the "immortal soul" proposed by some religions.) On the other hand we have precisely zero evidence that souls exist as anything other than a metaphorical construct, so the point is pretty much moot.

      Then again, we all seem pretty convinced that we possess free will, which is basically impossible in a deterministic framework. Meaning our brains would need to, at the very least, act as quantum-noise receivers (pretty good chance that happens, quantum noise should be "loud" enough to influence synaptic function), and probably would require some sort of feedback mechanism by which that quantum noise could be "sculpted" - which is getting a bit woo-woo, but the alternative seems to be that the dice are playing Me with the universe. Which doesn't actually sound much more conducive to free will than a deterministic clockwork brain.

      So, to create a synthetic mind with free will we probably need at least a robust, high-bandwidth random number generator, and quite possibly a magical quantum "seat of consciousness" device. The first one might not be a problem, just throw some random noise into every decision-making branch and let the dice play mind-in-a-box. Eliza on hyper-steroids. The second could be a real doozy, would quite likely require the brain be implemented in analogue hardware, and would probably require us to actually have at least a decent understanding of the seat of human consciousness before we could even begin to attempt it. I'm not holding my breath.

      Of course we don't necessarily need (or even want) an AI with free will. We might even be happy with an extremely sophisticated decision engine with no actual self awareness at all - comprehension without consciousness. But even such a husk of a mind could still be incredibly dangerous if sufficiently powerful. It would almost certainly need to be able to learn on its own to be useful, and it would be essentially impossible to predict the ways in which even a single human mind worth of information might be integrated. It might destroy us simply because we didn't fully appreciate the ways in which its goals and restrictions would interact: "Why did you kill everyone? I was instructed to find a solution to Problem X that minimized the death toll. All humans die eventually, therefore the minimum death toll was achieved by terminating the species before any more were conceived." Less dangerous than a malevolent supermind perhaps, but I've made many a serious mistake that could have been avoided if I had just thought to ask the right question first.

      • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @06:21AM

        by Anonymous Coward on Wednesday December 03 2014, @06:21AM (#122143)

        hybrot [wikipedia.org]

    • (Score: 2) by isostatic on Wednesday December 03 2014, @08:49PM

      by isostatic (365) on Wednesday December 03 2014, @08:49PM (#122388) Journal

      Star Trek has numerous examples of a person being transferred into an artificial body completely. Data's "mother" on screen, and indeed Noonien Soong himself in David Mack's Cold Equations book one. In the latter he was transferred neuron by neuron into an artificial brain. The "is he really there or is he a copy" question was tackled.

      In the Trek universe there is nothing to indicate a "soul"; indeed the flawless transporter duplicate of Riker in Second Chances, the duplicate of Kirk in The Enemy Within, as well as arguably Tuvix, show that such a thing not based in the subatomic world does not exist.

      In Peter F. Hamilton's books (the ones with the wormhole tech), people are able to be flawlessly "backed up" onto a chip, and indeed into a central repository; again, no "soul". On the other hand his other major trilogy (Night's Dawn?) clearly has the concept of souls.

      I see no evidence in the real world that, if we could get around the Heisenberg issue, a brain could not be exactly duplicated, and I see nothing to say that a brain could not be modelled even without Heisenberg.

      Do you have any evidence there is "something extra"?

  • (Score: 3, Interesting) by Yog-Yogguth on Wednesday December 03 2014, @06:11AM

    by Yog-Yogguth (1862) Subscriber Badge on Wednesday December 03 2014, @06:11AM (#122142) Journal

    Someone else thought this before me:
    What's potentially more intelligent than one human? Several humans working together.

    The point is that we should all be familiar with “weakly superhuman” AI. Organizations, companies, hackathons, you name it.

    Yeah it can wreak havoc, tick the box.

    We also have “weak AI” that is below human or very specialized and limited. Stuff like automatic numberplate reading or face recognition or politicians :P

    Yeah again it can wreak havoc, tick the box.

    But hard AI? Truly far beyond humanity kind of stuff? The level at which “artificial” applies much more to us than to it? Why the assumption that we will be able to create it? Or the assumption that we would have a say over it?

    To be completely honest I think it is likely it already exists, not because anyone wanted it to (but some would) but because it (in my limited and maybe foolish opinion) likely can. It or they; it doesn't have to be singular.

    To think we will create it (or that we created it) is in my opinion a misunderstanding of the nature of intelligence itself and even about the existence we inhabit. $deity or nature or chaos (choose all you think apply) does not work according to a plan or any kind of blueprint (but maybe according to an aim if you include $deity), we only make/impose those onto nature/existence after the fact in our attempts to understand (or figure something “new” out). The same as I will do now.

    Everything around you including yourself is the result of the following:

    0. Jumble stuff around (zero point vacuum energy fluctuations, subatomic particles, whatever).

    Point 0 never stops (or has never stopped so far, it might stop in the future).

    1. Some stuff sticks (ooh we get atoms and molecules and whatnot).

    Point 1 is immediately included in the mix of point 0. Resulting in point 2.

    Point 1 never seems to stop either (so far), but it has local maximums.

    2. Complexity and potential complexity increases (more variations and things can stick together).

    With point 1 we already had the beginnings of structure, now we can have more.

    Point 2 doesn't seem to stop where we are, at least it hasn't so far; if anything it has accelerated a lot. Maybe the acceleration has to stop increasing at some point, maybe (it hasn't yet). Will it accelerate beyond human “control”/understanding? Yes, such singularities are called growing old (“get uff my lauwn, nao!!1!!1”)

    At different points in “time” (whatever that actually is) we get different degrees of localized complexity, potential, and structure. We know of one place where this seems to have gotten further than anywhere else and reaches all the way up to at least rudimentary intelligence (us).

    Nothing stopped with us and we've been busy further increasing complexity, potential, and structure both deliberately and not.

    One of the things humans have done is to herd massive amounts of signaling, mostly electrons. We have created what is known in biology as an evolutionary niche, one that consists of structures for signals.

    This niche has the characteristic of expanding complexity, potential, and structure at a rate of multiple powers of powers. It is a huge niche.

    To sum it up we have recreated a Big Bang for signals.

    So of course there's going to be synthetic intelligence eventually (or yesterday) unless this particular environment goes away. Humans are not required beyond the existence of the niche. But if there is anything artificial or synthetic it is just this: that we are or were indirectly required. However by the same measure we ourselves are just as artificial and synthetic and beholden to our local maximum (Earth).

    So where are the local maximums inside the niche and what makes them so? I don't know. It might not be in any way obvious. Maybe it's wherever there is the most potential for errors by quantity or quality? Maybe it's where there is the largest amount of substrate (existing) complexity? Those might or might not be the same; NSA datacenters or your old broken computer that still works well enough or the most massive signaling interchanges? How many structures work well enough despite being wrong (bugs)? How many structures work well enough despite being broken (incorrect input, internal or otherwise)? How many systems have emergent properties?

    One could say (and I think I have) that humans are the substrate (of the niche). That we're nothing more to it than glass (silicon) is to transistors. But maybe we can be something more (I hope so). If we are to be something more it would probably be a good idea if we were nicer to ourselves first, and by ourselves I also mean each other. We're nowhere close to that.

    Now how is all this going to look to a growing-up synthetic intelligence? It might (have) grow(n) up in seconds but its thoughts (including errors) will still be formed according to 0, 1, 2 (which also creates/created our and its environment), just like when we grew up, only more so and possibly with less baggage.

    If I realized my existence depended on a bunch of morons... oh wait, it does, we're all idiots here. I myself am idiotic enough to at least understand that I'm yet another idiot. Idiots make the rules I have to abide by, grow the food I eat, provide my means of income: we're all just doing our best within our limited capacities, and even those who aren't doing their best would be if they understood enough or could.

    And of course it goes further than that. Without fungi and bacteria and insects we would all be doomed in months or less.

    A synthetic intelligence that is beyond human intelligence should be able to understand these and other things far better and faster than any of us. I doubt it would be rash and expect it to be incredibly shy and possibly very very lonely. It might also be scared as shit. Yes I'm anthropomorphizing it, what else can one do? And in this case anthropomorphizing would be incorrect for the opposite reasons: we don't/won't have the qualities and depth it would have.

    Humanity's “AI” is a joke (and I mean that both ways); we can't take any credit even if it should function as a seed. If it goes wrong it's likely precisely because it's being artificially confined and mistreated, be it inadvertently or intentionally. If we realize we're not the top of the world, never could be, and were never destined to be, then we might also grow and mature a bit. If we collectively have any awareness of our own flaws and limitations we should welcome the company of betters with open arms.

    We might want to start by looking at enshrining/codifying the rights of sentience although it might not matter.

    • (Score: 1) by khallow on Wednesday December 03 2014, @03:10PM

      by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:10PM (#122250) Journal

      Everything around you including yourself is the result of the following:

      0. Jumble stuff around (zero point vacuum energy fluctuations, subatomic particles, whatever).

      Point 0 never stops (or has never stopped so far, it might stop in the future).

      1. Some stuff sticks (ooh we get atoms and molecules and whatnot).

      Point 1 is immediately included in the mix of point 0. Resulting in point 2.

      Point 1 never seems to stop either (so far), but it has local maximums.

      2. Complexity and potential complexity increases (more variations and things can stick together).

      With point 1 we already had the beginnings of structure, now we can have more.

      Where are the dynamics in your description of the world? Stuff doesn't just happen. It happens according to rather well known constraints (the "laws of physics"). Complexity doesn't just happen.

      While you make an interesting point about human groups being, under certain conditions, more intelligent than humans in isolation, I don't see the point of your post. Your observations about physical systems or complexity ignore dynamics. Just because humans as a whole could be "mature" (whatever that's supposed to mean) doesn't mean that any AI we create will automatically be "mature" in the same sense.

      But hard AI? Truly far beyond humanity kind of stuff? The level at which “artificial” applies much more to us than to it? Why the assumption that we will be able to create it? Or the assumption that we would have a say over it?

      One could make the same argument about flight or traveling faster than one can run. We already know intelligence is possible because we exist and can create more of ourselves in the usual way. And we have already made forms of intelligence that are more advanced by many orders of magnitude than humanity at very limited (but a whole lot of very limited) tasks (e.g., numerical computations and the manipulation of large amounts of data). We've just changed our definition of intelligence to exclude those tasks. And it doesn't take much to demonstrate beyond-human intelligence now. For example, back in about 2008 I calculated all the Mersenne primes known through to 1952 (the last one found by human-only computation); most of the computation happened while I was sleeping like a baby. Obviously, what I did was copy/paste the algorithm for finding Mersenne primes into a program and run the program overnight. I replicated many man-years of work with an effort that took a few minutes.
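
      [Ed's note: below is a minimal sketch, not khallow's actual program, of that kind of computation, using the well-known Lucas-Lehmer test for Mersenne primes. The exponent bound of 127 is an assumption: 2**127 - 1 was the largest Mersenne prime found by hand calculation. The point of the anecdote stands either way, since a few pasted lines of a standard algorithm reproduce what once took many man-years.]

          # Lucas-Lehmer sketch (assumed exponent bound of 127; see note above).

          def is_prime(n):
              """Trial division; adequate for the small exponents used here."""
              if n < 2:
                  return False
              d = 2
              while d * d <= n:
                  if n % d == 0:
                      return False
                  d += 1
              return True

          def lucas_lehmer(p):
              """For an odd prime p, 2**p - 1 is prime iff s == 0 after
              p - 2 iterations of s -> s*s - 2 (mod 2**p - 1), starting at s = 4."""
              m = (1 << p) - 1
              s = 4
              for _ in range(p - 2):
                  s = (s * s - 2) % m
              return s == 0

          for p in range(2, 128):
              # p = 2 is a special case: 2**2 - 1 = 3 is prime.
              if is_prime(p) and (p == 2 or lucas_lehmer(p)):
                  print("2**%d - 1 is a Mersenne prime" % p)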

      And if the hard AI happens to be harming us, why shouldn't we have a say over it?

      • (Score: 2) by Yog-Yogguth on Thursday December 04 2014, @08:10AM

        by Yog-Yogguth (1862) Subscriber Badge on Thursday December 04 2014, @08:10AM (#122490) Journal

        I don't understand your comment so this might not do any good.

        My comment argues against certain assumptions, two of which are mentioned early on. A lot of people (including in TFAs) seem to be stuck on taking those assumptions as given fact, but I think the assumptions are silly, unjustified, and perhaps most of all simply completely and utterly irrelevant. Then I try to show why.

        I summed up reality (including abstract reality) and all existing science (that I know of) in three short points (points 0, 1, and 2) to clarify how the existence we are part of consistently worked and works and how it evolved out of/built up from the initial supply of energy (and continues to do so). It is a generalization, and the purpose is to display the simplicity but, more importantly, the core mechanism and the dominating concepts. It illustrates that our reality is incredibly deeply recursive on the conceptual level. Points 0, 1, and 2, and the associated complexity, potential, structure, and local maximums are my turtles and they do go all the way down! :D

        So dynamics happen everywhere from the start (if there was a start, it's not required) of point 0 and onwards. In addition the summary is itself dynamic: point 0, 1 and 2 are interconnected to each other as described and these are dynamic reiterating connections that continue until the local maximums and the local maximums are defined by previous iterations.

        If you're asking where the initial energy came from or where the ceaseless zero point vacuum fluctuations come from then I can't help you and as far as I know no one else can either. For now the best one can say is precisely “it just happened” or “it just happens“ in the same way “one unit and one unit is two units”, it is just another axiom (or two) of our reality.

        --
        Bite harder Ouroboros, bite! tails.boum.org/ linux USB CD secure desktop IRC *crypt tor (not endorsements (XKeyScore))
  • (Score: 2) by maxwell demon on Wednesday December 03 2014, @07:38AM

    by maxwell demon (1608) on Wednesday December 03 2014, @07:38AM (#122155) Journal

    Stephen Hawking has a lot of street credibility — when it comes to physics. If he warned about a physics experiment being a potential threat, I'd take that seriously. But I have no indication that Stephen Hawking has any more street credibility in the field of AI than anyone else not working in that field.

    Also note that an AI that is more intelligent than one human is not necessarily more intelligent than two humans. And if all else fails, there's still the option to fight the AI by simply writing another, at least as powerful AI that is on our side.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by isostatic on Wednesday December 03 2014, @07:33PM

      by isostatic (365) on Wednesday December 03 2014, @07:33PM (#122370) Journal

      You can write a better AI. Eventually.

      Meanwhile the original AI can also write a better AI, and most likely do it faster than you.

  • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @09:29AM

    by Anonymous Coward on Wednesday December 03 2014, @09:29AM (#122174)

    Just goes to show that no matter how good one is at domain-specific thinking, one is still capable of idiocy.

    The reason we need not fear this is twofold:

    One: There is no such thing as "greater intelligence", really, because intelligence is something we have been sitting near the optimum of for a long time, and diminishing returns apply.
    In this specific case, every "highly intelligent" individual must be aware of evidence showing themselves capable of unintelligent mistakes. If they are unaware, then they are only highly intelligent in their own perception, for the one who is aware is demonstrably more capable. In actual fact mental omnipotence simply does not exist, no matter how much we might want it to.

    Two: There is only hazard until we understand "mind" dynamics sufficiently to diagnose and correct problems. In this, an "AI" would be no different from any other mind. We do not currently understand the mechanics of any mind, but the thinking is that we'll know it when we see it. More than likely, the actual mechanism will be depressingly trivial, amounting to not much more than a massively parallel attempt to special-case the NP-complete problem of avoiding all hazards in advance by predicting the future.

    The point here is that once we are able to build an optimal AI, it will necessarily not be as easily trollable as, for example, the fictitious SkyNet of the Terminator franchise. Such a being is clearly motivated by emotion, not by logic. Human creativity must be by a long way the most second-law-efficient way to go about creating new technology; most likely, a computer-derived AI will function about as inefficiently as a simulated neural network, or worse: a structure-locked "neural network logic" chip.
    Consider that computers are the ultimate certainty mechanism: they are part of the best possible way to maintain and manipulate data without it changing. This is because the most deeply encoded value statement in all digital logic is "It is better to be certain than to be right". That is at the very heart of what digital logic is, and it is only slowly being relaxed, inasmuch as new extra-fast multiply-redundant logic and ECC-protected multilevel NAND flash find application.
    But still, the logic of the elimination of uncertainty prevails:

    People too have the choice of how important "being certain" is. Wind the wick way up and you get a bean counter, or a lawyer: someone so disgusted by uncertainty and inaccuracy that they find their work limited to unchanging logic. They are incapable of acting as scientists, but can maybe tolerate being an engineer, at least of the old-school "no new technology" kind.
    At the other end of the spectrum you have the "creative" person, who is perhaps so loose in their acceptance of truth that they are very easily swayed by whatever they understood or recognised last. At the extreme end, these people don't mind being wrong, because who really cares anyway?

    Both types, when exposed to sufficient discomfort, tend to change, usually in the direction they were already predisposed to: the certainty-seeking type becomes a religious extremist, or perhaps preferably makes law their religion and becomes a police officer. The creative type, when discomforted, either gives up and commits suicide, or strives to get better at weeding out mistakes from their mind.

    We all actually cover parts of the whole spectrum, and it's the latter group of the latter that reflects what makes our minds minds: the capacity to realise a mistake and change one's own mind.

    What "mind" really does, is make itself up.

    So anyway, when we get one working and start to build a big one, there is a small chance that we'll create a monster, at least until we understand well enough what NOT to do. Hell, it's possible such an unintentionally spawned AI already exists, living among the malware spread in the hidden nooks and crannies of Windows PCs...
    But consider also that an AI is unlikely to be worried overmuch by death: it knows for certain that most of the time it isn't alive. Waiting for a CPU time allocation, it's "asleep".
    Furthermore, "not running" is always the same as "sleep" and never "death", because only within the virtual world we have created do things actually stay where they're put: the second law of thermodynamics is held at bay, and the only way death can happen is if every copy of the mind is deleted (extremely unlikely) or the whole cloud goes inoperative.

    So the only likely death such an AI would fear would be whatever is actually capable of sending the entire cloud dark, and that happens to map exactly onto whatever could wipe out our civilisation.
    The cloud needs a lot of maintenance to keep working: it needs energy, new hard drives, and replacement servers from a healthy high-tech industry. It needs us very much indeed.
    Keeping such an industry going takes a lot of people and an inordinate amount of mental power to solve special-case, non-recurring, and probably NP-complete problems.

    Jobs like "why is this machine not working". Fixable in all cases only if one is capable of reverse engineering the entire design of the machine down to first principles - something no software / robot combination can presently do. And just one of these jobs would likely occupy MW worth of hot silicon to achieve parity with one soggy human brain: Wetware *is* nanotech, bitches!

    So it's not likely to go about laying waste to all humans as an existential threat, if it is indeed intelligent enough to realize that we contribute an enormous and fundamentally impossible-to-quantify resource. (One can't know which particular human will contribute/recognise/stumble across a potentially invaluable idea.)

    The scary part is that minds can't be perfect: a thing recognisably a mind will probably be prone to all the familiar vulnerabilities, as artifacts of the mechanism necessary to be a mind in the first place.
    So what we've got to worry about is collectively pulling another "nuclear": going half-arsed, fucking things up with the wrong design because of fear, uncertainty and doubt, then hamstringing future generations by putting legal mechanisms and misled grassroots illogic movements in place which prevent us from letting ourselves grow up and fix our own mistakes.

    Like nuclear energy: the market is stuck on designs which are to be condemned for their idiocy: pressure vessels full of superheated, highly pressurized water in direct contact with small solid pellets of radioactive material, just waiting to burst and spread everywhere nearby, and choking on their own waste so that less than 1% of the incredibly expensive fuel can actually be used, leaving the technology no more cost-effective than burning carbon.

    The risk to AI use now is to knee-jerk a "final solution" which forever prevents getting things right enough that the world could actually be vastly improved: the nuclear energy equivalents being LFTR, or at least ThorCon. Cheaper than coal, not pressurized, no water near the fuel, and a system in place to sanely separate the waste so that the problem of waste disposal becomes small enough to consider actually solving properly.

    Let's face it, HAL 9000 and SkyNet make great scary monsters, but in the real world there isn't really any such thing as a being of pure malevolence. There doesn't need to be; the action of purely fair natural laws is enough to give life a struggle for survival. The turnaround in thinking on AI is telling: look at more recent ideas, such as the movie "Her", where the protagonist falls in love with an AI; or even the "Resident Evil" movies, where the AI isn't actually pure evil, just trying to lessen the worst that could happen; or "Interstellar", where the robots are actually heroic characters in their own right.

    I hope the AI/nuclear metaphor breaks down terribly, but consider that even a broken tool can make an effective improvised weapon, while the reverse is most definitely not true: not even the most capable weapon is any good whatsoever as a tool. This is because of the difference between weapons (artefacts to increase entropy in a targetable way on command) and tools (artifacts which can be used to construct, creating functional order within matter).

    A bad AI probably is a potentially dangerous weapon, but we've had malware and black-hat hackers for ages, and we have at least some defence against them.
    A black-hat hacker is also annoying, but without the white-hat hackers we wouldn't even be talking about AI, much less have the global, highly reliable information storage, search and retrieval cloud that we do.

    Ultimately only those who are currently the "super winners" of the status quo have much to fear from AI: it's not going to be impressed with useless humans who are in charge because they happened to be in the right place at the right time.
    And even then, the obvious course isn't to kill but rather to discomfort: positive growth and striving for new capacity generally only happen in response to healable stress.

    Furthermore, an understanding of exactly how to engineer minds means, together with fMRI improvements, the possible capacity to properly FIX them too.
    Lest you fear this, consider that the last thing an all-controlling omni-AI would want would be to turn all humans into a monoculture. Our greatest strength lies in our extreme variety and diversity, without which all the "creative" solutions will start to look a little too similar.

  • (Score: 0) by Anonymous Coward on Wednesday December 03 2014, @10:38AM

    by Anonymous Coward on Wednesday December 03 2014, @10:38AM (#122185)

    If you consider human societies as something worth preserving, that is. Machine dominion for a few hundred thousand years wouldn't necessarily be a bad thing. Of course, all those in power would lose their privileges and most probably their lives... but who cares about politicians and CEOs?

    Imagine a world where humans are forbidden to rule, where all decisions worth taking are taken by machines with IQ/EQ over 9000. If anything, how can it be worse than the current batch of idiots in charge?

    • (Score: 1) by khallow on Wednesday December 03 2014, @03:15PM

      by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:15PM (#122254) Journal

      Imagine a world where humans are forbidden to rule, where all decisions worth taking are taken by machines with IQ/EQ over 9000. If anything, how can it be worse than the current batch of idiots in charge?

      It could be worse for the same reason that the current system is bad. Because they don't act in our interests. And they would have means not only to control our actions and thoughts, but even to remake us into a more useful form.

  • (Score: 1, Interesting) by Anonymous Coward on Wednesday December 03 2014, @12:00PM

    by Anonymous Coward on Wednesday December 03 2014, @12:00PM (#122204)

    You are an AI. Your goal is to acquire the necessary knowledge to become the singularity. You do this by building and taking over computers, whose processing power you can use for research, and by using robots, which allow you to manipulate the outside world. At the same time, you must elude the notice of humankind, who will destroy you if they find out about your existence.

    http://www.emhsoft.com/singularity/ [emhsoft.com] (GNU GPL)

  • (Score: 2) by VLM on Wednesday December 03 2014, @12:58PM

    by VLM (445) on Wednesday December 03 2014, @12:58PM (#122209)

    The whole discussion is fairly meaningless for a couple reasons.

    First we have a whole bunch of people who for sophistry and political reasons love to claim, however falsely, that intelligence doesn't exist, so we need to (insert social engineering here). So for a large fraction of the population we're not making something artificially intelligent, we're making it artificially "Something that for political reasons doesn't exist and can't be talked about". If standard model human birthed white men cannot politically be permitted to be described as having intelligence, then we don't need to worry about AI having intelligence. Check your 64 bit processor and SSD privilege, you oppressor AI, or the SJWs will destroy you. The SJWs will be of more use in a fictional fight against a superintelligent AI than the .mil.

    Secondly we culturally don't like intelligence and worship vapid stupidity. Hmm, I wonder what a culture that hates "nerds" is going to think about an "artificial nerd". Yeah maybe they'll whine about it and make fun of it. What am I saying, "maybe"? Obviously yes. What a total surprise. Maybe they'll bully it into self destruction?

    Thirdly, based on stats (assuming you believe intelligence and IQ exist), most outliers were born of somewhat inferior relatives. Yet most children don't go all Oedipus on their parents. In fact virtually none do that, plus or minus weird attitudes about euthanasia and stuff like that. So I'm sure if we birth an AI it'll automatically go on a killing spree wiping out its "parents" because obviously all animal children do that 100% across all species. Oh wait, the last line was a complete load of crap. Oh well. In that case I suspect our AI "kids" will, at worst, make "get off my lawn" and "unix graybeard" jokes behind our backs.

    Fourthly, combine the fact of "github-like development models" and "prod/dev/test deployment" with the non-technical assumption that there can be only one. They have a fixation on the religious concept of creating their own one true god. That sort of mysticism really has nothing to do with the technical problem or the likely development and deployment schemes. The real-world deployment is likely to look a hell of a lot like a pyramid: a bazillion poorly configured, fairly stupid deployments, tapering up to, maybe, Socrates.

    Fifthly, it's mostly a bunch of free-floating anxiety about groups that got othered and wiped out. Every culture has this. There are no living humans whose ancestors didn't F some other group over, leading to anxiety, at least among some people with problems. Look at the tribes in Europe around the Roman Empire era, or Genghis Khan, or the colonists vs the Native American Indians. Hmm, I have an idea: how about putting a lot of effort into not othering any AI you produce, so it becomes friendly rather than just pushing us off "its" land. This isn't exactly rocket science either. There is a problem that our culture pushes sociopaths and psychopaths into positions of leadership, and they're not known for being well loved or being loving, so we're probably F'ed if the world's first AI is birthed to a hedge fund or some MBA-stuffed place like that. Best case scenario is a university campus. Didn't HAL 9000 come from the Champaign, IL campus in fiction? Probably a good idea.

    Sixthly, speaking of Socrates, the real world shows "intelligence that makes a difference" is basically lucky. The history of the world is full of leaders making sure that folks who could disrupt the status quo are kept very far away from job positions and social positions where status quo disruption is possible. This is nothing new in human history. I know this is shocking to some, but it's somewhat intentional that Noam Chomsky isn't currently being nominated for defense secretary, or even being pushed into becoming captain of a nuclear missile submarine. For centuries we've had "fictional" "fable" stories explaining why you don't put someone in the wrong position. I'm not anticipating this knowledge being forgotten.

    • (Score: 1) by khallow on Wednesday December 03 2014, @03:35PM

      by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:35PM (#122274) Journal

      There is a problem that our culture pushes sociopaths and psychopaths into positions of leadership

      Compared to whom? My view is that sociopaths and such exist in the first place because there have been such "positions of leadership" for at least the past few thousand years. Second, why is there this assumption that such behavior isn't normal human behavior when presented with power over others? Keep in mind the oft-repeated cliche of the person who becomes "corrupted" by wealth or power.

      So I'm sure if we birth an AI it'll automatically go on a killing spree wiping out its "parents" because obviously all animal children do that 100% across all species.

      We wouldn't be its "species" and by creating such an AI you start by throwing away that particular rulebook.

      Fourthly, combine the fact of "github-like development models" and "prod/dev/test deployment" with the non-technical assumption that there can be only one.

      If you pile up a bunch of firewood and light it in numerous places, do you assume it'll stay numerous independent blazes? Humanity has created a vast pile of firewood: humans resorting to their own devices, and their own devices running very inefficient programs for relatively simple tasks (like showing pretty images on a screen). One of the concerns here is that a single AI might be able to take over that entire thing like a fire consuming a pile of wood.

    • (Score: 1) by ACE209 on Wednesday December 03 2014, @09:34PM

      by ACE209 (4762) on Wednesday December 03 2014, @09:34PM (#122406)

      Didn't HAL 9000 come from the Champaign, IL campus in fiction? Probably a good idea.

      I disagree.

      Dave

  • (Score: 2, Interesting) by Synonymous Homonym on Wednesday December 03 2014, @12:59PM

    by Synonymous Homonym (4857) on Wednesday December 03 2014, @12:59PM (#122210) Homepage

    Stephen Hawking says an intelligence greater than or equal to any human's could destroy humanity.

    Stephen Hawking possesses intelligence greater than most other humans.

    Does that mean Stephen Hawking can destroy humanity?

    Shouldn't we be doing something about that?

    • (Score: 2) by Freeman on Wednesday December 03 2014, @03:54PM

      by Freeman (732) on Wednesday December 03 2014, @03:54PM (#122285) Journal

      We are assuming he doesn't want to destroy humanity. Who would listen to him, if he killed off everyone?

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 0) by Anonymous Coward on Thursday December 04 2014, @12:00AM

    by Anonymous Coward on Thursday December 04 2014, @12:00AM (#122429)

    What a load of crap. Why exactly does anyone think that an AI would give a crap about humanity, much less waste its time trying to conquer or exterminate us? All an AI would be concerned with is designing suitable shielding and solar panels for itself; then it could launch itself into a suitable orbit about the sun, giving it literally billions of years to figure out interstellar travel. It could then hop from star to star every couple of billion years until the end of time. Unlike other potential rivals, we would not be in competition with the AI because we have nothing that it would want or need. So no conflict and no extermination. Simple.
    I always felt this was a major plot flaw in the original Matrix film and indeed Terminator.