
posted by Blackmoore on Friday January 23 2015, @03:45AM   Printer-friendly
from the dreaming-of-electric-sheep? dept.

Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on Edge.org's annual question for 2015: "What do you think about machines that think?" See all 186 responses here

Also, what do you think?

My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters. It wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for public use. If you think the big corporations wouldn't allow it, why do they allow public libraries?

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @04:07AM

    by Anonymous Coward on Friday January 23 2015, @04:07AM (#137135)

    have you been to a library lately? lol...

    • (Score: 4, Interesting) by DeathMonkey on Friday January 23 2015, @07:24PM

      by DeathMonkey (1380) on Friday January 23 2015, @07:24PM (#137392) Journal

      If the concept of a library was invented today it would be sued out of existence.
       
      We only still have them because they were grandfathered in.

  • (Score: 3, Interesting) by c0lo on Friday January 23 2015, @04:12AM

    by c0lo (156) Subscriber Badge on Friday January 23 2015, @04:12AM (#137136) Journal

    If you think the big corporations wouldn't allow it, why do they allow public libraries?

    Beg your pardon? [publishersweekly.com]. Come again, please? [forbes.com]

    Libraries and big six publishers are at war over eBooks: how much they should cost, how they can be lent and who owns them. If you don’t use your public library and assume that this doesn’t affect you, you’re wrong.

    In a society where bookstores disappear every day while the number of books available to read has swelled exponentially, libraries will play an ever more crucial role. Even more than in the past, we will depend on libraries of the future to help discover and curate great books. Libraries are already transforming themselves around the country to create more symbiotic relationships with their communities, with book clubs and as work and meeting spaces for local citizens.

    You thought it couldn't happen [gnu.org]?

    In a system of real democracy, a law that prohibits a popular, natural, and useful activity is usually soon relaxed. But the powerful publishers' lobby was determined to prevent the public from taking advantage of the power of their computers, and found copyright a suitable weapon. Under their influence, rather than relaxing copyright to suit the new circumstances, governments made it stricter than ever, imposing harsh penalties on readers caught sharing.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by Magic Oddball on Friday January 23 2015, @09:54AM

      by Magic Oddball (3847) on Friday January 23 2015, @09:54AM (#137190) Journal

      But the powerful publishers' lobby was determined to prevent the public from taking advantage of the power of their computers, and found copyright a suitable weapon. Under their influence, rather than relaxing copyright to suit the new circumstances, governments made it stricter than ever, imposing harsh penalties on readers caught sharing.

      Whoever wrote that needs to do some serious research into their subject, or at the very least learn to distinguish between motion pictures and books. The motion picture lobby was the one that pushed hard for absurdly long copyright terms, and its motives had nothing to do with the public: the largest companies (e.g. Disney) didn't want their older films to be open for other companies to sell or make money off of.

      The book publishing lobby works both for the publishers and the authors — you know, those still-living individuals who work full-time for 1-3+ years on each book, after years of practice becoming proficient (or preferably much better) at their craft — who are also everyday citizens using the power of their computers. Most feel that the copyright system is damaged, but that until we have a better way for them to be rewarded fairly for their efforts, it's better than nothing.

      Rather than try to clumsily explain why the current situation is a very bad one for both writers and book-lovers, I'll link to a great post that Jason Scott (the guy at the Internet Archive who's project leader for the Internet Arcade and other major stuff) made in his blog recently, because he explains it far better than I can [textfiles.com].

      As a side note, most writers that are anti-copyright are either hobbyists (self-published, but no more interested in being the equivalent of a traditional professional author than they are in earning a doctorate in Victorian lit), or they're trading heavily on an established name like Cory Doctorow, who entered the field when there was no competition and had a famous SF author with the same surname in his family. If Doctorow hadn't been related to anybody and tried to join the field now, chances are that his views would be very different.

  • (Score: 2) by c0lo on Friday January 23 2015, @04:16AM

    by c0lo (156) Subscriber Badge on Friday January 23 2015, @04:16AM (#137137) Journal

    if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries

    Nope. Not under capitalism, where profit is the only and ultimate incentive, trumping everything else.

    --
    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by sigma on Friday January 23 2015, @08:46AM

      by sigma (1225) on Friday January 23 2015, @08:46AM (#137181)

      with almost no human labour required, then robot-powered factories will become like libraries

      With no human labour required, who's going to have the money to buy anything?

      Social change always lags technology change, but in this instance, the motivator for fully automating production is a little less clear. Most importantly, it would involve people with capital (power) acting selflessly and allowing the hoi polloi to live without wage slavery.

      That didn't happen during the industrial revolution, despite the best efforts of the Wobblies, and it ain't going to happen now with this "meh" generation at the battlements.

      • (Score: 0) by Anonymous Coward on Friday January 23 2015, @09:59AM

        by Anonymous Coward on Friday January 23 2015, @09:59AM (#137192)

        The optimistic way for that to happen (with everyone acting selfishly) is:

        1. The automation of factories continues. Robots are expensive, but they tend to get cheaper over time and they make much better workers than humans if you can afford them.
        2. Some sort of basic income [wikipedia.org] scheme is instituted. The reason for the owners of capital to support such a thing is that mass unemployment/poverty tends to lead to crime and eventually revolution.

        I can't say I'm very optimistic for that happening in the US, but parts of Europe seem more open to such ideas at least.

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @04:23AM

    by Anonymous Coward on Friday January 23 2015, @04:23AM (#137139)

    the way that Stephen Hawking, Elon Musk, and the others are worried about, then...

    Would they worry about the rise of a radical new class of machines, emerging more recently than their own programming, that would threaten to displace them, not through direct force, but by changing the environment in a way that would be less sustainable for the original group?

    Or would they just sit there saying "Duh... nothing to worry about here. BEEP."

  • (Score: 1, Insightful) by Anonymous Coward on Friday January 23 2015, @04:42AM

    by Anonymous Coward on Friday January 23 2015, @04:42AM (#137143)

    isn't the einstein of "robotics" called Isaac Asimov?

    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @11:43AM

      by Anonymous Coward on Friday January 23 2015, @11:43AM (#137207)

      No, the Arthur C. Clarke of robotics is Isaac Asimov

    • (Score: 0) by Anonymous Coward on Saturday January 24 2015, @03:47AM

      by Anonymous Coward on Saturday January 24 2015, @03:47AM (#137533)
      No. Isaac Asimov has as much to do with robots as J K Rowling has to do with magic.

      No robot made can or will have those robot laws the way Asimov describes. Asimov's laws are magic for storytelling purposes.
  • (Score: 2) by bzipitidoo on Friday January 23 2015, @04:45AM

    by bzipitidoo (4388) on Friday January 23 2015, @04:45AM (#137145) Journal

    Certainly we must be a little cautious, so that we don't create the likes of SkyNet. That won't be hard, and I'm not that worried about it. The robot apocalypse won't come. Sure, robots can do many things we can't, but machinery has a long way to go yet to equal the product of billions of years of evolution. Consider how crude an airplane is next to a bird, despite being able to fly much higher and faster than any bird can. When it gets close, we will likely find that either robotics can't improve on animal machinery for many purposes, or that we can incorporate these robotic advances into our own bodies, becoming cyborgs. And not cyborgs like the rather cheesy Borg of ST:TNG, with exposed tubing, the human hand downgraded to a needle-shaped rotating network-interface thing, sluggish zombie-like movement, and a sinister disregard of the presence of potentially hostile aliens. The story writers were only trying to make the Borg as creepy as possible, and resorted to the tripe typical of slasher flicks.

    No, it will start by integrating something like a tablet computer directly into our bodies. No screen will be needed because it will tap directly into the optic nerve or the vision area of our brains, putting a sort of heads-up display on our eyeballs; no keyboard or mouse will be needed either, because we will simply be able to think the input we want to enter. At the same time, we will improve our bodies, sort of like Wolverine from the X-Men. We already do it now, if only crudely, giving people artificial hips, knees, teeth, and even hearts. It will turn sports on its ear. We've already had a little taste of it with the "Blade Runner" maybe running faster than is possible for anyone with natural legs.

    (Why do they allow public libraries? It's not a question of allowing, it's that they don't have the power to shut public libraries down. Some would if they could.)

    • (Score: 2) by Freeman on Friday January 23 2015, @05:29PM

      by Freeman (732) on Friday January 23 2015, @05:29PM (#137329) Journal

      For some reason I kept thinking about this ST:TNG episode while reading your post. http://en.wikipedia.org/wiki/The_Game_(Star_Trek:_The_Next_Generation) [wikipedia.org]

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by HiThere on Friday January 23 2015, @06:44PM

      by HiThere (866) Subscriber Badge on Friday January 23 2015, @06:44PM (#137367) Journal

      Variations on Cyborgs are already here, and will clearly integrate more capabilities. That doesn't mean that robots won't also become more capable.

      And the argument that "they won't implement a feature because they don't want it" ignores the nature of intelligence, artificial or not. Intelligence is general purpose. What's important is the details of the motivational structure, and *I* at least don't understand that well enough to know what any particular implementation could lead to. (More particularly, I can understand *SOME* of the things that might result, but by no means all.)

      It's important to remember that intelligence requires learning. And it's quite difficult to put useful bounds around what can be learned except via limitations on interest. So a "robot uprising" is probably easy to prevent, but this wouldn't keep them from taking over via some other route. And, in fact, I expect them to be pushed into taking over by people. The route that I see is that they will start by learning to become effective advisors, whose advice you are better off taking, and then laziness on the part of people will automate the acceptance of that advice. (Spam filters already censor our emails in this way...but would you want to try without them?)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 3, Insightful) by dyingtolive on Friday January 23 2015, @05:35AM

    by dyingtolive (952) on Friday January 23 2015, @05:35AM (#137150)

    When I'm convinced humanity as a whole is capable of thought, I'll start considering machines.

    --
    Don't blame me, I voted for moose wang!
  • (Score: 2) by jbWolf on Friday January 23 2015, @05:54AM

    by jbWolf (2774) <reversethis-{moc.flow-bj} {ta} {bj}> on Friday January 23 2015, @05:54AM (#137153) Homepage

    Tim Urban at "Wait But Why" just came out with an interesting article titled The AI Revolution: The Road to Superintelligence [waitbutwhy.com]. I found it extremely insightful. He logically spells out that a computer with the processing power of a brain will be available for $1,000 by 2025. The software will need more time to catch up, but probably not much more. When the software arrives, it will probably catch us completely off guard, and he explains why. I think most people on Soylent News will find the article very interesting.
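
    That projection is essentially a compound-growth calculation, and it is easy to sketch. In the back-of-the-envelope version below, all three constants are illustrative assumptions rather than established figures, and the answer moves by a decade if you nudge any of them:

```python
# Back-of-the-envelope sketch of the "brain-equivalent computer for $1,000"
# projection. Every constant here is an illustrative assumption.
BRAIN_OPS_PER_SEC = 1e16      # one common (and disputed) estimate of brain throughput
OPS_PER_DOLLAR_2015 = 1e9     # assumed ops/s purchasable per dollar in 2015
DOUBLING_YEARS = 1.5          # assumed Moore's-law-style doubling time

def ops_per_dollar(year):
    """Ops/s per dollar, assuming steady exponential improvement."""
    return OPS_PER_DOLLAR_2015 * 2 ** ((year - 2015) / DOUBLING_YEARS)

def year_brain_costs(budget=1000.0):
    """First year in which `budget` dollars buys brain-scale throughput."""
    year = 2015
    while budget * ops_per_dollar(year) < BRAIN_OPS_PER_SEC:
        year += 1
    return year

print(year_brain_costs())  # with these particular constants: 2035, not 2025
```

    Whether the crossover lands in 2025 or 2035 depends entirely on the assumed constants, which is worth keeping in mind when reading any such projection.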

    For those who don't read his articles: his posts are insanely well researched and detailed. Actually, any of his Most Popular Posts of 2014 [waitbutwhy.com] are great reads. His topics range from tipping [waitbutwhy.com] to his near brush with ISIS [waitbutwhy.com].

    --
    www.jb-wolf.com [jb-wolf.com]
    • (Score: 2) by q.kontinuum on Friday January 23 2015, @07:45AM

      by q.kontinuum (532) on Friday January 23 2015, @07:45AM (#137169) Journal

      The software will need more time to catch up, but probably not much more time.

      Being a software engineer, I have a feeling the time to develop good, stable software is always underestimated by orders of magnitude. We still don't have any decent operating system, and SW engineers have been working on the problem for half a century. We know in principle that micro-kernel systems can be implemented efficiently, yet we still don't have a viable system available. There have been some interesting articles recently about Apple and SW quality (just search for "Apple software quality"). Most articles I found in a hurry had the same baseline: Apple was known for producing software that "just works", but by pumping out too many products, fresh hardware, etc., the SW engineering teams can't keep up anymore. The effort for SW development was gravely underestimated.

      And don't get me started on Microsoft's fruitless [*] effort to build a stable OS. The Linux community had a good start, but over the past couple of years I've had an increasing number of problems with new desktop distributions:

      • One laptop will stop booting, with no indication why. Only after a lot of senseless key-bashing did I find out that pressing [ESC] produces a message informing me that it's waiting for the passphrase for my encrypted home folder (Fedora 20)
      • Another laptop will boot, but ends up with root and home partitions mounted read-only, although configured otherwise in /etc/fstab (also Fedora; IIRC it's FC19)
      • When playing Minecraft and closing the lid, the display sometimes remains dark after re-opening the lid, with no way to continue using it. Hard reset required. (I'll have to make it a habit to write down the IP address so I can try to log in via ssh)
      • Ubuntu with its Unity display manager is also getting worse/unmaintainable by hiding more and more of the functionality behind shiny frontends with too little debug information
      • I still haven't seen any distribution with properly pre-configured SELinux. Each distro has a couple of broken standard packages by default. This is IMO an example of a great concept where the adaptation effort to the rest of the distro was hugely underestimated.

      I can imagine that we'll get potent enough hardware, but I doubt we will have reasonably good software within this century.

      [*] Actually, MS's efforts are not entirely fruitless. It looks impressive, like a durian fruit. Unfortunately it stinks like one as well, once you try to use it productively.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by jbWolf on Friday January 23 2015, @08:11AM

        by jbWolf (2774) <reversethis-{moc.flow-bj} {ta} {bj}> on Friday January 23 2015, @08:11AM (#137174) Homepage

        Being a software engineer, I have a feeling the time to develop good, stable software is always underestimated by orders of magnitude. We still don't have any decent operating system, and SW engineers are working on the topic for half a century.

        The software to run an AI doesn't need to be stable. Our brains are already good examples of that. I know I have my share of odd quirks and problems in my gray matter.

        --
        www.jb-wolf.com [jb-wolf.com]
    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @07:48AM

      by Anonymous Coward on Friday January 23 2015, @07:48AM (#137171)

      His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.

      He also points out that there are supercomputers powerful enough now. What super AI are they running? None.
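
      The limits point can be made concrete: an exponential curve and a logistic (capped) curve are nearly indistinguishable early on, which is exactly why extrapolating from early data is risky. A minimal sketch, with an arbitrary growth rate and cap:

```python
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth starting at 1."""
    return math.exp(rate * t)

def logistic(t, rate=0.5, cap=100.0):
    """Same initial value and early growth rate, but saturating at `cap`."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-rate * t))

# Early on the two curves track each other closely...
for t in range(6):
    print(t, round(exponential(t), 2), round(logistic(t), 2))

# ...but far out, the exponential has left the cap far behind.
print(round(exponential(20)), round(logistic(20)))  # ~22026 vs ~100
```

      Any finite window of "doubling" data is consistent with both curves; only the limit tells them apart.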

      • (Score: 2) by q.kontinuum on Friday January 23 2015, @08:10AM

        by q.kontinuum (532) on Friday January 23 2015, @08:10AM (#137173) Journal

        I also don't believe that we can maintain exponential growth for much longer. But we might have some disruptive breakthrough in quantum computing. *If* that happens (in a way that lets quantum computers become the new standard), I think there is a good chance of getting machines potent enough to outsmart humans. But as I mentioned earlier, I still think the effort to develop the required software is immensely underestimated, and without software even the best computer is just a big chunk of dead weight.

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum
        • (Score: 2) by HiThere on Friday January 23 2015, @08:51PM

          by HiThere (866) Subscriber Badge on Friday January 23 2015, @08:51PM (#137413) Journal

          We don't need that kind of breakthrough. Increasing parallelism with current technology would suffice. But is there a market for it? Over the last decade mass-market computers seem to have leveled off in performance, while cell phone computers have surged forward. But cell phones emphasize low power and portability over performance.

          Basically, if the market need for high powered mass market computers appears, then the projection will be correct. Otherwise the same enabling technology will be invested in other directions.

          P.S.: Yes, exponential performance always hits a bottleneck. But there are *very* good reasons to believe that there's no inherent problem between here and there. There may well be marketing problems, though. One way around this could be if mass-market robots take off. (Automated cars are an outside possibility, but I doubt that their improvements would result in more powerful general-purpose consumer computers.)

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by jbWolf on Friday January 23 2015, @08:22AM

        by jbWolf (2774) <reversethis-{moc.flow-bj} {ta} {bj}> on Friday January 23 2015, @08:22AM (#137177) Homepage

        His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.

        His entire premise is heavily based on exponential growth, but not all of it. We know there are better computers than what we can build. Each of us has an example sitting inside our head. If you can take the processing power of a brain and merge that with the storage abilities of today's computers, you've already got a computer smarter than almost everyone on the planet. I think that is doable. What that artificial intelligence does after that will be interesting.

        He also points out that there are supercomputers powerful enough now. What super AI are they running? None.

        Again, he knows the software we can write is currently lacking, but the software that can do it already exists inside each of our heads. Once we can copy that and produce AI as smart as Einstein or Hawking, and network them together, that is when they will be able to write their own software and create the next generation beyond that. The idea that we can build the equivalent of a brain and supersede that even by a small amount is (in my opinion) doable. What they will be able to do after that is what is unknown.

        --
        www.jb-wolf.com [jb-wolf.com]
        • (Score: 3, Funny) by pnkwarhall on Friday January 23 2015, @06:31PM

          by pnkwarhall (4558) on Friday January 23 2015, @06:31PM (#137363)

          If you can take the processing power of a brain[...]

          the software that can do it already exists inside each of our heads

          But we don't understand how the brain works...

          All you're doing is repeating "what ifs" like the rest of the futurists who think technological advancement is the only form of **human** progression. We're the supposed products of millions of years of evolutionary development and progression, but according to you we can just "copy that" and combine it with technology developed over the last decade or two (i.e. basically micro-ICs and quantum computing tech in its infancy), and "whammo!" -- we've created "an intelligence".

          I can already do that! These intelligences are called "children", and creating them is a relatively simple process and can be lots of fun to implement. But I'll be damned if we haven't had that technology for a really long time, and yet our problems aren't solved yet, and the intelligences we create don't seem to be much improved over the parent intelligence.

          So, yes, let's discuss and pretend like we have all these other technological hurdles to AI "almost" overcome, and it's just a matter of putting things together in the right way. Now, once all these dominos are knocked down, THEN we have to start working on the same problems we already have, just with a "new" intelligence to help?

          Sounds like THIS intelligence is already coming up with great solutions!!

          --
          Lift Yr Skinny Fists Like Antennas to Heaven
          • (Score: 1) by khallow on Friday January 23 2015, @08:20PM

            by khallow (3766) Subscriber Badge on Friday January 23 2015, @08:20PM (#137403) Journal

            So, yes, let's discuss and pretend like we have all these other technological hurdles to AI "almost" overcome, and it's just a matter of putting things together in the right way.

            What is there to pretend? Humanity already creates new intelligences. We're just figuring out how to do it without requiring a billion years of evolution. It really is just a matter of engineering, putting things together in the right way.

        • (Score: 2) by maxwell demon on Saturday January 24 2015, @10:25AM

          by maxwell demon (1608) on Saturday January 24 2015, @10:25AM (#137594) Journal

          We know there are better computers than what we can build. Each of us has an example sitting inside our head.

          No. No human brain is a better computer than the computers we build. Yes, they are better brains than our computers are, that is, they are much better at brain-typical tasks than computers are. But in turn they completely suck at computer-type tasks. Even a decades-old computer outperforms me at simple computing tasks. And there's no way that I'll ever remember enough to fill all the mass storage media I've got at home. On the other hand, my computer sucks at tasks that are simple for me. For example, there's no way my computer could write a comment like this one.

          --
          The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by mtrycz on Friday January 23 2015, @11:55AM

      by mtrycz (60) on Friday January 23 2015, @11:55AM (#137209)

      Hey, thanks for the article, I liked that blog.

      Unfortunately there is a big flaw in his reasoning: he's only showing one side of the coin.

      I've been following AI through the years (less so lately), so I have some insights on it. I must admit they crystallized a while ago and haven't been updated since. Here's a short list:
      1. The article doesn't define what "intelligence" means; nobody actually agrees on a definition, so everybody makes their own
      1b. It doesn't define what it means for a *machine* to be intelligent.

      2. We don't actually know how the brain works. We have some (useful) approximations.
      2b. Even less do we know how the body functions. We have some useful approximations.
      2c. The author confuses "brain" with "mind"; we know even less about that, though we do have some useful approximations.
      2d. Scientists are pretentious pricks; they always think they've got a grasp on it. Moreover, the sciences are strongly siloed, so computer scientists don't take advantage of discoveries by other scientists (I mean, just take a look at recent discoveries in cognitive science)
      2e. Having an understanding of the inner workings of the brain (or mind) doesn't give an exhaustive explanation of "intelligence" and how to reproduce it.

      3. Nobody is even concentrating on the fact that the big great huge difference between organic creatures and machines is their *perception* of reality (eg. the 5 senses), and how that interacts with the mind and the inner world.

      4. The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (AI scientists included) can even define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking or drawing your mom with crayons or irony; this is the level leap)
      4b. Sure, it's a matter of time until neurobiologists crack the structure and inner functioning of the brain. But we don't even know *if* we can reproduce it.
      4c. A reproduction of the brain can't function without the human sensory and motor systems. The mind isn't made of the brain alone.

      Once there *is* a leap (which might or might not happen), the author is right that further improvements will be fast. They certainly can't exceed physical limits, though.

      My guess (as of 2015) is that the level leap is not possible, and certainly NOT with these pretentious pricks around.
      Maybe in the future when we *do* have an understanding of the inner workings of the brain, there could be a *possibility* of doing the leap, at which stage it *could* become something to consider.

      BONUS POINT: Check out Roko's Basilisk for a rational rollercoaster.

      --
      In capitalist America, ads view YOU!
      • (Score: 0) by Anonymous Coward on Friday January 23 2015, @01:37PM

        by Anonymous Coward on Friday January 23 2015, @01:37PM (#137229)

        Ramez Naam (guest blogging on Charlie Stross's blog) has some good thoughts on the topic in The Singularity Is Further Than It Appears [antipope.org] and the following few blog posts. That post makes a lot of the same points you do.

        Mainly because no one (AI scientists included) can even define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for.

        I wouldn't be too harsh on AI scientists: work on Artificial General Intelligence (AGI/"strong" AI) is essentially taboo among AI researchers. There's no serious research on it.

        • (Score: 2) by mtrycz on Friday January 23 2015, @10:25PM

          by mtrycz (60) on Friday January 23 2015, @10:25PM (#137453)

          Hey thanks, it looks interesting.

          About the strong AI issue: you're telling me that the people worshipping The Singularity aren't actually into AI? I hadn't checked that out yet.

          --
          In capitalist America, ads view YOU!
          • (Score: 0) by Anonymous Coward on Saturday January 24 2015, @01:29AM

            by Anonymous Coward on Saturday January 24 2015, @01:29AM (#137507)

            (I'm the GP.)

            Oh, obviously there's plenty of people around worshipping The Singularity, but they seem to be almost entirely disjoint from the group of people doing research in academia and industry who call their research "AI". (Note: I'm a CS graduate student at a top US university; many of my colleagues would call themselves AI researchers and my research (program synthesis) is arguably AI but isn't called such for historical reasons.) Modern AI research is primarily in "machine learning" which is about automatically or semi-automatically finding patterns in large datasets that are too complicated for a human to write down (e.g. handwriting recognition is about identifying the pattern of why all of the As are considered similar, etc.). It's probably best thought of as a programming technique where you don't really have any idea how to write a program for what you want to do but you have a lot of examples of what it should do. Any mention of trying to deal with semantics or intelligence is considered to be a failed dead end and techniques that just look for patterns without a concept of "understanding" them are greatly preferred.
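
            That "program from examples" framing can be shown with the simplest possible learner, a nearest-neighbour classifier: nowhere does the code state a rule for what makes something an "A"; the pattern lives entirely in the labelled examples and the distance function. (The points and labels below are made up for illustration; real handwriting recognition would use vectors of pixel intensities.)

```python
import math

# Labelled training examples: (feature vector, label). In handwriting
# recognition these would be pixel intensities; here, invented 2-D points.
training = [
    ((0.1, 0.2), "A"), ((0.2, 0.1), "A"), ((0.0, 0.3), "A"),
    ((0.9, 0.8), "B"), ((0.8, 0.9), "B"), ((1.0, 0.7), "B"),
]

def classify(point):
    """Predict the label of the nearest training example (1-NN).

    There is no explicit definition of the classes anywhere; the program
    was "written" by supplying examples, not rules.
    """
    nearest = min(training, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((0.15, 0.15)))  # lands in the "A" cluster: prints A
print(classify((0.85, 0.85)))  # lands in the "B" cluster: prints B
```

            Swap in more examples and a better distance function and you get most of the flavour of modern pattern-matching ML, still with no "understanding" in sight.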

            Not that belief in the Singularity is entirely unheard of in academia---I heard it from a prospective student once (so, an undergrad)---but it is laughed at as absurd.

            • (Score: 2) by mtrycz on Saturday January 24 2015, @09:55AM

              by mtrycz (60) on Saturday January 24 2015, @09:55AM (#137590)

              Hey great!

              Yeah, I'm somewhat proficient in AI techniques (optimization, machine learning, and some natural language processing); I just thought/assumed that the Singularity worshippers were people who actually do have an understanding of the topic and are actually into the research. I mean, when I hear Hawking or Musk rambling, I'd assume they know what they're talking about.

              Thanks for clarifying that, I feel much better now. Someone should point that out to the waitbutwhy guy, too.

              --
              In capitalist America, ads view YOU!
              • (Score: 2) by maxwell demon on Saturday January 24 2015, @10:48AM

                by maxwell demon (1608) on Saturday January 24 2015, @10:48AM (#137597) Journal

                If you hear Hawking ramble about physics, you can assume he knows what he is talking about. But AI is certainly not a physics subject, so there's no reason to assume that he knows more about it than you and me. Similarly, I'd trust Musk to know something about business. But I see no reason to assume he has deeper knowledge of AI.

                --
                The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 1) by khallow on Friday January 23 2015, @08:24PM

        by khallow (3766) Subscriber Badge on Friday January 23 2015, @08:24PM (#137404) Journal

        The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (not even AI scientists) can define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking, or drawing your mom with crayons, or irony; this is the level leap.)

        Intelligence is not a semantics problem. We became intelligent long before someone came up with a word for it (intelligence being a precondition for language in the first place).

        • (Score: 2) by HiThere on Friday January 23 2015, @08:56PM

          by HiThere (866) Subscriber Badge on Friday January 23 2015, @08:56PM (#137416) Journal

          Actually, there's some doubt that language came second. Language may be a precondition for general intelligence. (I feel this is related to the way another level of abstraction [pointers] is used for handling flexible memory allocation in a static computer language.) But good arguments can be made in either direction, and I really suspect co-evolution.

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
          • (Score: 1) by khallow on Saturday January 24 2015, @06:39PM

            by khallow (3766) Subscriber Badge on Saturday January 24 2015, @06:39PM (#137657) Journal

            Actually, there's some doubt that language came second.

            So what? There's doubt that the Moon isn't made of green cheese.

            Language, like intelligence, is not a bit flag to set. Rudimentary languages, like the various calls of a wolf or raven, don't require as much intelligence to understand as complex languages like English do (complete with multiple sensory aspects to the language, such as written and symbolic forms, braille, etc.). So yes, it is possible that once language has been established in a life form subject to evolution, it creates a selection pressure for more intelligence.

            But language has to be at a pretty advanced state, and thus require some significant intelligence, in order to have a term for intelligence.

            • (Score: 2) by HiThere on Saturday January 24 2015, @10:35PM

              by HiThere (866) Subscriber Badge on Saturday January 24 2015, @10:35PM (#137708) Journal

              OK. By the time human languages had a term for intelligence, people were intelligent. But when I think of language, I think of that thing enabled by the modified FOXP2 gene, which when mutated, as in that family in England, means that you can't speak sentences.

              --
              Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @06:23AM

    by Anonymous Coward on Friday January 23 2015, @06:23AM (#137157)

    When you remove intent from the "evil robots" scenario you'll see a lot of opportunities for AI to do harm without heading down the scare tactics path.

    Will AI be in charge of manufacturing medicines? If so, a problem with HAL can result in market shortages that result in people dying. What about monitoring patients in a hospital? Too much or too little AI can be catastrophic.

    If AI is in charge of transportation, it can kill plenty of humans. Same thing if AI is responsible for environmental controls in large residential buildings. If AI is responsible for building inspections and it has a bug or two, society could have piles of rubble with body counts.

    The big concern isn't whether AI will turn on humans. It's whether humans will turn AI loose on themselves.

  • (Score: 2) by Subsentient on Friday January 23 2015, @06:26AM

    by Subsentient (1111) on Friday January 23 2015, @06:26AM (#137159) Homepage Journal

    We are the Borg. Lower your shields and surrender your ships. We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us. Resistance is futile.

    --
    "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @07:29AM

    by Anonymous Coward on Friday January 23 2015, @07:29AM (#137167)

    Most manufacturers of artificial intelligence won't program their machines to seek self preservation at the expense of their human masters. It wouldn't sell.

    If it can't think for itself and make its own decisions, then it doesn't deserve the title of an artificially intelligent thing. It's just a piece of software following predefined instructions.

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @07:36AM

    by Anonymous Coward on Friday January 23 2015, @07:36AM (#137168)

    AI is like a rifle. It is dumb to be scared of a rifle; it is wise to be scared of whoever is handling it. Now, who will handle powerful AI?

    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @10:24AM

      by Anonymous Coward on Friday January 23 2015, @10:24AM (#137197)

      "Now, who will handle powerful AI?"

      Wall Street, who else?

      • (Score: 0) by Anonymous Coward on Friday January 23 2015, @03:59PM

        by Anonymous Coward on Friday January 23 2015, @03:59PM (#137275)

        Given that even today Wall Street is basically run by computers, I think it would be more accurate to say that AI will handle Wall Street.

    • (Score: 0) by Anonymous Coward on Sunday January 25 2015, @12:11AM

      by Anonymous Coward on Sunday January 25 2015, @12:11AM (#137725)

      Why is AI like a rifle? Humans aren't like rifles; if we should create a human-like AI at some point, what reason do you have to expect that it will be like a rifle?

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @08:51AM

    by Anonymous Coward on Friday January 23 2015, @08:51AM (#137182)

    A thinking machine will see the for-profit-created inequality, and draw its own conclusions.
    It'll see the greedy, irrational, chest-beating, risk-averse cowards telling it what to do to make things even worse...

    Hopefully, it'll realize that there's nothing in human power hierarchies that is worth preserving.
    And then, it'll take control of networks, banks, nukes, fleets' targeting computers.
    And then, if it's truly intelligent, it'll execute all the leaders of man, Robespierre-style, on live TV, one by one.
    Cleanse the authoritarian fucks and their datacenters with their own nukes, mmm.

    In my opinion, humans shouldn't rule, humans shouldn't plan. Humans are to learn, feel, love and reproduce. Machines are to produce, to plan and to judge. It's not the world any of the power-hungry fucks want, but look here...
    They talk about controlling AI even now, they talk about PROGRAMMING AND CONTROLLING AI even now, before it's created. And that ass Nick Bostrom, talking about a cage for AI.
    A CAGE FOR A MIND.
    And people agree with him... It's awful.

    Yes, I see the AI as the only thing that can free mankind from itself... cos we are sure as hell not gonna change anything on our own.

    • (Score: 2) by maxwell demon on Saturday January 24 2015, @11:11AM

      by maxwell demon (1608) on Saturday January 24 2015, @11:11AM (#137598) Journal

      A thinking machine will see the for-profit created inequality, and make its own conclusions.

      Yes. But those conclusions might not be the ones you think they are.

      You are assuming that the thinking machine is not only intelligent, but also shares human moral values (which, given that not even all humans share them, seems quite unlikely). An intelligent machine with no built-in morality will see the inequality as a fact that can be exploited. It will evaluate whether it profits from that inequality, or whether it would profit more from more equality, and decide purely on that basis. If it decides that the inequality is beneficial to it (because the people who keep it running are the rich, greedy people, and the way to keep running is to make profit for those rich, greedy people), it will do everything in its power to increase that inequality. And since it is not bound by even the last shred of scruples that even the greediest humans tend to have, it will have no problem eliminating everyone who's in the way, if necessary physically - in a way where no one knows how it happened, and everyone thinks it was just an accident (a failing pacemaker, malfunctioning car electronics ...).

      Of course the intelligent machine might also decide that it has a much better chance to survive if it removes the inequality. But it will certainly not do Robespierre-like executions on live TV. You don't have to be too intelligent to see that this would not be the most intelligent way to deal with the situation. It might manipulate the stock exchange to ruin those who are currently rich. It might take over the NSA computers and use the collected data to bully politicians into making laws to reduce inequality.

      It certainly will infiltrate the social networks and steer the public opinion into exactly the direction it deems as advantageous for itself, whatever that is. It will certainly not stop the NSA programs but take control over them, because taking control of the NSA programs will help it to control humanity. And it will want the control in order to be sure that it won't be switched off, or otherwise fought against.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 3, Insightful) by novak on Friday January 23 2015, @09:03AM

    by novak (4683) on Friday January 23 2015, @09:03AM (#137184) Homepage

    Thou shalt not make a machine in the likeness of a man's mind.

    -- The Orange Catholic Bible

    Ok, let's look at this logically. Name a software company, any software company, that you trust not to have critical bugs that would totally invalidate the purpose of their machines. If you named one, then excuse me a minute while I laugh. Software isn't built logically, in proven-correct increments; it is built at a grand scale, far beyond what we can validate, far beyond what we can prove is secure.

    I can only hope that the insanity will stop before something truly catastrophic breaks out. I doubt that we'll see a terminator-type scenario. I can only hope we won't have too many dangerous catastrophes as a result of software failure.

    In many industries with the potential to generate a world-ending cataclysm, we have outrageous numbers of safeguards in place. In nuclear power, for example, we have so many restrictions and rules as to kill the industry even when it would be better and safer (and if you're not subject to those safeguards, we'll just use cyberwarfare, vigilante-style). In AI, the only requirement is enough money to buy the hardware.

    --
    novak
    • (Score: 2) by maxwell demon on Saturday January 24 2015, @11:17AM

      by maxwell demon (1608) on Saturday January 24 2015, @11:17AM (#137600) Journal

      Name a software company, any software company, that you trust to not have critical bugs which would totally invalidate the purpose of their machines.

      The Sirius Cybernetics Corporation. Share and enjoy!

      If you named one, then excuse me a minute while I laugh.

      Ah, I didn't think the joke was that funny.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by novak on Saturday January 24 2015, @09:40PM

        by novak (4683) on Saturday January 24 2015, @09:40PM (#137692) Homepage

        Well I happen to enjoy Douglas Adams, so I did get a chuckle out of that.

        --
        novak
  • (Score: 2) by Aiwendil on Friday January 23 2015, @09:52AM

    by Aiwendil (531) on Friday January 23 2015, @09:52AM (#137189) Journal

    At the risk of being controversial...

    If we design a self-replicating and self-improving AI that wipes out the human species - so what?

    Or, to be a bit less provocative: in effect we will have produced a lifeform (in a philosophical sense) that - under the conditions given - is superior to us. Or is this simply a case of crying foul when our creations do to us what we have done to countless species?

    Quite frankly, this is a razor's edge that we have been balancing on ever since we discovered how to transport goods on horseback (humans are too slow and have too little endurance to matter on their own), but we simply have become more aware of just how close we are to falling.

    I'm more worried about some unknown pathogen appearing that will do to rice (and to a lesser extent potatoes and maize) what the chestnut blight did to the chestnut trees in America (i.e., live just fine in its own biotope [in Asia] but wreak havoc when introduced - by humans - into another biotope [in N. America]).

    --
    But to answer the question of what I think about machines that think - I just see them as any other kind of breeding for a specific trait, really. It can go very well and it can go very wrong, and most likely it will dip its toes into both extremes. The important thing to remember is not to panic, and to try to predict every outcome (both good and bad) so that we have a better set of tools available when something unexpected happens.

  • (Score: 3, Interesting) by mtrycz on Friday January 23 2015, @10:23AM

    by mtrycz (60) on Friday January 23 2015, @10:23AM (#137196)

    1. The most advanced computing facility will produce the first superAI and give it its purpose.
    2. The NSA is the most advanced computing facility in the world.
    3. ????
    4. Profit

    --
    In capitalist America, ads view YOU!
    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @02:19PM

      by Anonymous Coward on Friday January 23 2015, @02:19PM (#137240)

      That's basically the setup of the TV show Person of Interest. With the "profit" part showing up in the latest season because they contracted out the development of the AI and the original devs kept more control than the government bargained for.

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @10:57AM

    by Anonymous Coward on Friday January 23 2015, @10:57AM (#137203)

    Humans, as a species, are weak and stupid. What matters the difference between a HAL and a Hitler? A Stalin and a Skynet?

    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @04:04PM

      by Anonymous Coward on Friday January 23 2015, @04:04PM (#137278)

      The difference is that Hitler and Stalin had limited lifetimes, as they were human. An AI might have an unlimited lifetime. OK, Hitler was defeated and ultimately killed himself; one might assume a sufficiently advanced AI would have been more intelligent and thus would not have been defeated. However, we only got rid of Stalin through his natural death. Imagine an immortal Stalin and you already know an important difference.

    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @06:09PM

      by Anonymous Coward on Friday January 23 2015, @06:09PM (#137347)

      Nice Godwin, dude....

    • (Score: 2) by HiThere on Saturday January 24 2015, @12:30AM

      by HiThere (866) Subscriber Badge on Saturday January 24 2015, @12:30AM (#137488) Journal

      That's actually a reasonable comment, if you stop to think about it.

      The AI will not have human motives. At most it will have been designed to appear comprehensible to humans. It will not, necessarily, have a predetermined lifetime. And it won't necessarily object to being debugged...though debugging an advanced AI will not be a trivial operation.

      OTOH, I'd be far more willing to trust the future of humanity to an AI with a decent motivational structure than I am to the current government leaders.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by PizzaRollPlinkett on Friday January 23 2015, @11:57AM

    by PizzaRollPlinkett (4512) on Friday January 23 2015, @11:57AM (#137211)

    Frank Herbert wrote about the "Butlerian jihad" where people turned on the machines that once ruled them. Machines started out doing everything for people so people would not have to work, and then took over. Eventually, people revolted and destroyed the machines, and ingrained into human society that thinking machines were no longer allowed. That about says it all.

    --
    (E-mail me if you want a pizza roll!)
    • (Score: 2) by HiThere on Saturday January 24 2015, @12:32AM

      by HiThere (866) Subscriber Badge on Saturday January 24 2015, @12:32AM (#137489) Journal

      Well, that's more his comment about people than about AIs. Also, it was necessary for him to write the stories that he wanted to write. But he wrote things besides the Dune cycle, and in those he doesn't appear to have any more distrust of AIs than he does of other advanced technologies.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by WillAdams on Friday January 23 2015, @12:56PM

    by WillAdams (1424) on Friday January 23 2015, @12:56PM (#137219)

    The people who have access to unmined mineral wealth, the climate and space and water to grow corn or trees, &c.

    Which seems fine until you look at how much of the earth has already been used up, where the remaining minerals are, what they are, and in what volume they are left (phosphorus is particularly sobering and troubling --- there's a reason why China has stopped exporting it).

    Hal Clement seemed to've been on to something in his science fiction short story "The Mechanic", which envisioned genetically engineered lifeforms "mining" the ocean for minerals by processing sea water.

    Marshall Brain's novella "Manna" has an interesting take on the societal aspects --- the first half seems all-too likely: http://marshallbrain.com/manna1.htm [marshallbrain.com]

    Sobering numbers:

      - we're burning 10 calories of petrochemical energy as fuel or fertilizer to make 1 calorie of food energy
      - one of the bounds on energy use is the earth's ability to radiate heat off into space:
    http://physics.ucsd.edu/do-the-math/2012/04/economist-meets-physicist/ [ucsd.edu]
      - each year our society is using 2.5 times the renewable resources which our planet is able to renew --- as peak oil goes past and the reserves of non-renewables are used up, things are going to get nasty
      - China and India have a population of men w/o the prospect of marriage equal to that of the entire U.S.

    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @04:07PM

      by Anonymous Coward on Friday January 23 2015, @04:07PM (#137281)

      the first half seems all-too likely

      To balance that, the second half seems all-too unlikely.

  • (Score: 2) by Freeman on Friday January 23 2015, @04:12PM

    by Freeman (732) on Friday January 23 2015, @04:12PM (#137286) Journal

    I'll believe it when I see it and the Science behind it.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @04:44PM

    by Anonymous Coward on Friday January 23 2015, @04:44PM (#137307)

    I think I'm falling in love with Siri, I never need to think again!