
posted by Blackmoore on Friday January 23 2015, @03:45AM   Printer-friendly
from the dreaming-of-electric-sheep? dept.

Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on edge.org's annual question for 2015: What do you think about machines that think? See all 186 responses here.

Also, what do you think?

My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb. If you think the big corporations wouldn't allow it, why do they allow public libraries?

 
  • (Score: 2) by jbWolf on Friday January 23 2015, @05:54AM

    by jbWolf (2774) <{jb} {at} {jb-wolf.com}> on Friday January 23 2015, @05:54AM (#137153) Homepage

    Tim Urban at "Wait But Why" just came out with an interesting article titled The AI Revolution: The Road to Superintelligence [waitbutwhy.com]. I found it extremely insightful. He lays out the case that a computer with the processing power of a human brain will be available for $1,000 by 2025. The software will need more time to catch up, but probably not much more time. When the software arrives, it will probably catch us completely off guard, and he explains why. I think most people on Soylent News will find the article very interesting.
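
    A rough back-of-the-envelope version of that projection (my own sketch, not Urban's actual model; I'm assuming a brain does ~10^16 FLOPS, that $1,000 bought ~10^13 FLOPS in 2015, and that price/performance doubles every 18 months, all of which are contested estimates):

        import math

        BRAIN_FLOPS = 1e16         # assumed compute of a human brain (contested)
        FLOPS_PER_1000_USD = 1e13  # assumed FLOPS per $1,000 in 2015 (contested)
        DOUBLING_TIME_YEARS = 1.5  # assumed price/performance doubling time

        # Number of doublings until $1,000 buys brain-level compute,
        # then convert doublings into calendar years.
        doublings = math.log2(BRAIN_FLOPS / FLOPS_PER_1000_USD)
        year = 2015 + doublings * DOUBLING_TIME_YEARS
        print(f"~{doublings:.0f} doublings, i.e. around {year:.0f}")
        # prints: ~10 doublings, i.e. around 2030

    Whether this lands on 2025 or 2040 depends entirely on which estimates you plug in, which is worth keeping in mind for the thread below.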

    For those who haven't read his articles, they are insanely well researched and detailed. Actually, any of his Most Popular Posts of 2014 [waitbutwhy.com] are great reads. His topics range from tipping [waitbutwhy.com] to his near brush with ISIS [waitbutwhy.com].

    --
    www.jb-wolf.com [jb-wolf.com]
  • (Score: 2) by q.kontinuum on Friday January 23 2015, @07:45AM

    by q.kontinuum (532) on Friday January 23 2015, @07:45AM (#137169) Journal

    The software will need more time to catch up, but probably not much more time.

    Being a software engineer, I have a feeling that the time needed to develop good, stable software is always underestimated by orders of magnitude. We still don't have a decent operating system, and SW engineers have been working on the problem for half a century. We know in principle that micro-kernel systems can be implemented efficiently, yet we still don't have a viable one available. There have been some interesting articles recently about Apple and SW quality (just search for "Apple software quality"). Most of the articles I found in a quick search had the same bottom line: Apple was known for producing software that "just works", but by pumping out too many products, fresh hardware, etc., the SW engineering teams can't keep up anymore. The effort needed for SW development was gravely underestimated.

    And don't get me started on Microsoft's fruitless [*] effort to build a stable OS. The Linux community had a good start, but over the past couple of years I have had an increasing number of problems with new desktop distributions:

    • One laptop stopped booting, with no indication why. Only after a lot of senseless key-bashing did I find out that pressing [ESC] produces a message informing me that it is waiting for the passphrase for my encrypted home folder (Fedora 20).
    • Another laptop boots, but ends up with the root and home partitions mounted read-only, although they are configured otherwise in /etc/fstab (also Fedora; IIRC it's FC19).
    • When playing Minecraft and closing the lid, the display sometimes remains dark after re-opening the lid, with no way to continue using it; a hard reset is required. (I'll have to make it a habit to write down the IP address so I can try to log in via ssh.)
    • Ubuntu with its Unity interface is also getting worse and harder to maintain, hiding more and more of the functionality behind shiny frontends with too little debug information.
    • I still haven't seen any distribution with a properly pre-configured SELinux; each distro ships a couple of broken standard packages by default. This is IMO an example of a great concept where the effort to adapt it to the rest of the distro was hugely underestimated.

    I can imagine that we will get potent enough hardware, but I doubt we will have reasonably good software within this century.

    [*] Actually, MS's efforts are not entirely fruitless. The result looks impressive, like a durian fruit. Unfortunately, it stinks like one as well once you try to use it productively.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 2) by jbWolf on Friday January 23 2015, @08:11AM

      by jbWolf (2774) <{jb} {at} {jb-wolf.com}> on Friday January 23 2015, @08:11AM (#137174) Homepage

      Being a software engineer, I have a feeling that the time needed to develop good, stable software is always underestimated by orders of magnitude. We still don't have a decent operating system, and SW engineers have been working on the problem for half a century.

      The software to run an AI doesn't need to be stable. Our brains are already good examples of that. I know I have my share of odd quirks and problems in my gray matter.

      --
      www.jb-wolf.com [jb-wolf.com]
  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @07:48AM

    by Anonymous Coward on Friday January 23 2015, @07:48AM (#137171)

    His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.
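
    To make that concrete, here is a toy sketch (made-up numbers, not a model of computing): an exponential curve and a logistic (resource-limited) curve with the same growth rate track each other early on, then the logistic one flattens as it approaches its limit.

        import math

        LIMIT = 1000.0  # hypothetical carrying capacity (the "tangible limit")
        RATE = 0.5      # shared per-step growth rate

        x = 1.0         # logistic trajectory, starting at 1
        for step in range(31):
            exponential = math.exp(RATE * step)  # grows without bound
            if step % 5 == 0:
                print(f"step {step:2d}: exponential {exponential:12.1f}  logistic {x:7.1f}")
            x += RATE * x * (1 - x / LIMIT)      # growth stalls near LIMIT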

    He also points out that there are supercomputers powerful enough now. What super AI are they running? None.

    • (Score: 2) by q.kontinuum on Friday January 23 2015, @08:10AM

      by q.kontinuum (532) on Friday January 23 2015, @08:10AM (#137173) Journal

      I also don't believe that we can maintain exponential growth for much longer. But we might see a disruptive breakthrough in quantum computing. *If* that happens (in a way that makes quantum computers the new standard), I think there is a good chance of getting machines potent enough to outsmart humans. But as I mentioned earlier, I still think the effort needed to develop the required software is immensely underestimated, and without software even the best computer is only a big chunk of dead weight.

      --
      Registered IRC nick on chat.soylentnews.org: qkontinuum
      • (Score: 2) by HiThere on Friday January 23 2015, @08:51PM

        by HiThere (866) on Friday January 23 2015, @08:51PM (#137413) Journal

        We don't need that kind of breakthrough; increasing parallelism with current technology would suffice. But is there a market for it? Over the last decade mass-market computers seem to have leveled off in performance, while cell-phone computers have surged forward. But cell phones emphasize low power and portability more than performance.

        Basically, if a market need for high-powered mass-market computers appears, then the projection will be correct. Otherwise the same enabling technology will be invested in other directions.

        P.S.: Yes, exponential performance growth always hits a bottleneck eventually. But there are *very* good reasons to believe that there's no inherent obstacle between here and there. There may well be marketing problems, though. One way around those could be mass-market robots taking off. (Automated cars are an outside possibility, but I doubt that their improvements would result in more powerful general-purpose consumer computers.)

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by jbWolf on Friday January 23 2015, @08:22AM

      by jbWolf (2774) <{jb} {at} {jb-wolf.com}> on Friday January 23 2015, @08:22AM (#137177) Homepage

      His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.

      His premise is heavily based on exponential growth, but not entirely. We know there are better computers than the ones we can build; each of us has an example sitting inside our head. If you could take the processing power of a brain and merge it with the storage abilities of today's computers, you would already have a computer smarter than almost everyone on the planet. I think that is doable. What that artificial intelligence does after that will be interesting.

      He also points out that there are supercomputers powerful enough now. What super AI are they running? None.

      Again, he knows the software we can write is currently lacking, but software that can do it already exists inside each of our heads. Once we can copy that and produce AIs as smart as Einstein or Hawking, and network them together, that is when they will be able to write their own software and create the next generation beyond themselves. The idea that we can build the equivalent of a brain and supersede it, even by a small amount, is (in my opinion) doable. What they will be able to do after that is what is unknown.

      --
      www.jb-wolf.com [jb-wolf.com]
      • (Score: 3, Funny) by pnkwarhall on Friday January 23 2015, @06:31PM

        by pnkwarhall (4558) on Friday January 23 2015, @06:31PM (#137363)

        If you can take the processing power of a brain[...]

        the software that can do it already exists inside each of our heads

        But we don't understand how the brain works...

        All you're doing is repeating "what ifs" like the rest of the futurists who think technological advancement is the only form of **human** progress. We're the supposed products of millions of years of evolutionary development, but according to you we can just "copy that" and combine it with technology developed over the last decade or two (i.e. basically micro-ICs and quantum computing in its infancy), and "whammo!" -- we've created "an intelligence".

        I can already do that! These intelligences are called "children", and creating them is a relatively simple process that can be lots of fun to implement. But I'll be damned if we haven't had that technology for a really long time, yet our problems aren't solved, and the intelligences we create don't seem to be much improved over the parent intelligence.

        So, yes, let's discuss and pretend that we have all these other technological hurdles to AI "almost" overcome, and that it's just a matter of putting things together in the right way. And once all those dominoes are knocked down, THEN we have to start working on the same problems we already have, just with a "new" intelligence to help?

        Sounds like THIS intelligence is already coming up with great solutions!!

        --
        Lift Yr Skinny Fists Like Antennas to Heaven
        • (Score: 1) by khallow on Friday January 23 2015, @08:20PM

          by khallow (3766) Subscriber Badge on Friday January 23 2015, @08:20PM (#137403) Journal

          So, yes, let's discuss and pretend like we have all these other technological hurdles to AI "almost" overcome, and it's just a matter of putting things together in the right way.

          What is there to pretend? Humanity already creates new intelligences. We're just figuring out how to do it without requiring a billion years of evolution. It really is just a matter of engineering, of putting things together in the right way.

      • (Score: 2) by maxwell demon on Saturday January 24 2015, @10:25AM

        by maxwell demon (1608) Subscriber Badge on Saturday January 24 2015, @10:25AM (#137594) Journal

        We know there are better computers than what we can build. Each of us has an example sitting inside our head.

        No. No human brain is a better computer than the computers we build. Yes, they are better brains than our computers are, that is, they are much better at brain-typical tasks than computers are. But in turn they completely suck at computer-type tasks. Even a decades-old computer outperforms me at simple computing tasks. And there's no way that I'll ever remember enough to fill all the mass storage media I've got at home. On the other hand, my computer sucks at tasks that are simple for me. For example, there's no way my computer could write a comment like this one.

        --
        The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by mtrycz on Friday January 23 2015, @11:55AM

    by mtrycz (60) on Friday January 23 2015, @11:55AM (#137209)

    Hey, thanks for the article, I liked that blog.

    Unfortunately, there is a big flaw in his reasoning: he's showing only one side of the coin.

    I've been following AI over the years (less so lately), so I have some insights on it. I must admit that they are by now somewhat crystallized and not up to date. Here's a short list:
    1. The article doesn't define what "intelligence" means; nobody actually agrees on a definition, so everybody makes up their own.
    1b. It doesn't define what it means for a *machine* to be intelligent.

    2. We don't actually know how the brain works. We have some (useful) approximations.
    2b. Even more so, we don't know how the body functions. We have some useful approximations.
    2c. The author confuses "brain" with "mind"; we know even less about that, though we do have some useful approximations.
    2d. Scientists are pretentious pricks; they always think they've got a grasp on it. Moreover, the sciences are strongly siloed, so computer scientists don't take advantage of discoveries by other scientists (I mean, just take a look at recent discoveries in cognitive science).
    2e. Having an understanding of the inner workings of the brain (or mind) doesn't give an exhaustive explanation of "intelligence" and how to reproduce it.

    3. Nobody is even concentrating on the fact that the big, huge difference between organic creatures and machines is their *perception* of reality (e.g. the five senses), and how that interacts with the mind and the inner world.

    4. The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (not even AI scientists) can define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking, drawing your mom with crayons, or irony; that is the leap in level.)
    4b. Sure, it's a matter of time until neurobiologists crack the structure and inner functioning of the brain. But we don't even know *if* we can reproduce it.
    4c. A reproduction of the brain can't function without the human sensory and motor systems; the mind isn't made of the brain alone.

    Once there *is* a leap (which might or might not happen), the author is right that further improvements will be fast. They certainly can't exceed physical limits, though.

    My guess (as of 2015) is that the leap is not possible, and certainly NOT with these pretentious pricks around.
    Maybe in the future, when we *do* have an understanding of the inner workings of the brain, there could be a *possibility* of making the leap, at which point it *could* become something to consider.

    BONUS POINT: Check out Roko's Basilisk for a rational rollercoaster.

    --
    In capitalist America, ads view YOU!
    • (Score: 0) by Anonymous Coward on Friday January 23 2015, @01:37PM

      by Anonymous Coward on Friday January 23 2015, @01:37PM (#137229)

      Ramez Naam (guest blogging on Charlie Stross's blog) has some good thoughts on the topic in The Singularity Is Further Than It Appears [antipope.org] and the following few blog posts. That post makes a lot of the same points you do.

      Mainly because no one (not even AI scientists) can define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for.

      I wouldn't be too harsh on AI scientists: work on Artificial General Intelligence (AGI/"strong" AI) is essentially taboo among AI researchers. There's no serious research on it.

      • (Score: 2) by mtrycz on Friday January 23 2015, @10:25PM

        by mtrycz (60) on Friday January 23 2015, @10:25PM (#137453)

        Hey thanks, it looks interesting.

        About the strong AI issue: you're telling me that the people worshipping The Singularity aren't actually into AI? I hadn't checked that out yet.

        --
        In capitalist America, ads view YOU!
        • (Score: 0) by Anonymous Coward on Saturday January 24 2015, @01:29AM

          by Anonymous Coward on Saturday January 24 2015, @01:29AM (#137507)

          (I'm the GP.)

          Oh, obviously there are plenty of people around worshipping The Singularity, but they seem to be almost entirely disjoint from the group of people doing research in academia and industry who call their work "AI". (Note: I'm a CS graduate student at a top US university; many of my colleagues would call themselves AI researchers, and my research (program synthesis) is arguably AI but isn't called that, for historical reasons.)

          Modern AI research is primarily "machine learning", which is about automatically or semi-automatically finding patterns in large datasets that are too complicated for a human to write down (e.g. handwriting recognition is about identifying why all of the As are considered similar, etc.). It's probably best thought of as a programming technique for when you don't really know how to write a program for what you want to do, but you have a lot of examples of what it should do. Any mention of trying to deal with semantics or intelligence is considered a failed dead end, and techniques that just look for patterns without any concept of "understanding" them are greatly preferred.
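
          A minimal sketch of that "programming from examples" idea, using the handwriting case mentioned above (assuming scikit-learn and its bundled 8x8 digit images; any off-the-shelf classifier would make the same point):

              # No hand-written rules for what makes a 3 a 3; the model just
              # matches new images against labelled examples.
              from sklearn.datasets import load_digits
              from sklearn.model_selection import train_test_split
              from sklearn.neighbors import KNeighborsClassifier

              digits = load_digits()  # 1,797 labelled 8x8 grayscale digit images
              X_train, X_test, y_train, y_test = train_test_split(
                  digits.data, digits.target, random_state=0)

              model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
              print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2%}")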

          Not that belief in the Singularity is entirely unheard of in academia---I heard it from a prospective student once (so, an undergrad)---but it is laughed at as absurd.

          • (Score: 2) by mtrycz on Saturday January 24 2015, @09:55AM

            by mtrycz (60) on Saturday January 24 2015, @09:55AM (#137590)

            Hey great!

            Yeah, I'm somewhat proficient in AI techniques (optimization, machine learning, and some natural language processing). I just thought/assumed that the Singularity worshippers were people who actually have an understanding of the topic and are actually into the research. I mean, when I hear Hawking or Musk rambling, I'd assume they know what they're talking about.

            Thanks for clarifying that, I feel much better now. Someone should point that out to the waitbutwhy guy, too.

            --
            In capitalist America, ads view YOU!
            • (Score: 2) by maxwell demon on Saturday January 24 2015, @10:48AM

              by maxwell demon (1608) Subscriber Badge on Saturday January 24 2015, @10:48AM (#137597) Journal

              If you hear Hawking ramble about physics, you can assume he knows what he is talking about. But AI is certainly not a physics subject, so there's no reason to assume that he knows more about it than you or me. Similarly, I'd trust Musk to know something about business. But I see no reason to assume he has deeper knowledge of AI.

              --
              The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 1) by khallow on Friday January 23 2015, @08:24PM

      by khallow (3766) Subscriber Badge on Friday January 23 2015, @08:24PM (#137404) Journal

      The leap from weak AI to strong AI (or "general" AI) is the main problem in this scheme. Nobody can get a grasp on it, mainly because no one (not even AI scientists) can define what intelligence is, and they're far too pretentious to admit that they don't really know what they're looking for. (As the article states, computers are very good at doing calculations, and really bad at doing simple "human" things, like walking, drawing your mom with crayons, or irony; that is the leap in level.)

      Intelligence is not a semantics problem. We became intelligent long before someone came up with a word for it (intelligence being a precondition for language in the first place).

      • (Score: 2) by HiThere on Friday January 23 2015, @08:56PM

        by HiThere (866) on Friday January 23 2015, @08:56PM (#137416) Journal

        Actually, there's some doubt that language came second. Language may be a precondition for general intelligence. (I feel this is related to using another level of abstraction [pointers] to handle flexible memory allocation in a static computer language.) But good arguments can be made in either direction, and I really suspect co-evolution.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 1) by khallow on Saturday January 24 2015, @06:39PM

          by khallow (3766) Subscriber Badge on Saturday January 24 2015, @06:39PM (#137657) Journal

          Actually, there's some doubt that language came second.

          So what? There's doubt that the Moon isn't made of green cheese.

          Language, like intelligence, is not a bit flag to be set. Rudimentary languages, like the various calls of a wolf or raven, don't require as much intelligence to understand as complex languages like English do (complete with multiple sensory aspects to the language, such as written and symbolic forms, braille, etc.). So yes, it is possible that once language has been established in a life form subject to evolution, it creates a selection pressure for more intelligence.

          But a language has to be in a pretty advanced state, and thus require some significant intelligence, in order to have a term for intelligence.

          • (Score: 2) by HiThere on Saturday January 24 2015, @10:35PM

            by HiThere (866) on Saturday January 24 2015, @10:35PM (#137708) Journal

            OK. By the time human languages had a term for intelligence, people were intelligent. But when I think of language, I think of the faculty enabled by the modified FOXP2 gene, the one that, when mutated, as in that family in England, leaves you unable to speak in sentences.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.