
posted by Blackmoore on Friday January 23 2015, @03:45AM
from the dreaming-of-electric-sheep? dept.

Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on edge.org's 2015 annual question: What do you think about machines that think? See all 186 responses here.

Also, what do you think?

My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for anyone to use. If you think the big corporations wouldn't allow it, why do they allow public libraries?

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @07:48AM (#137171)

    His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.

    He also points out that there are supercomputers powerful enough now. What super AI are they running? None.

  • (Score: 2) by q.kontinuum (532) on Friday January 23 2015, @08:10AM (#137173) Journal

    I also don't believe that we can maintain exponential growth for much longer. But we might see some disruptive breakthrough in quantum computing. *If* that happens (in a way that makes quantum computers the new standard), I think there is a good chance of getting machines potent enough to outsmart humans. But as I mentioned earlier, I still think the effort to develop the required software is immensely underestimated, and without software even the best computer is just dead weight.

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 2) by HiThere (866) on Friday January 23 2015, @08:51PM (#137413) Journal

      We don't need that kind of breakthrough. Increasing parallelism with current technology would suffice. But is there a market for it? Over the last decade, mass-market computers seem to have leveled off in performance while cell phone computers have surged forward. But cell phones emphasize low power and portability more than performance.

      Basically, if a market need for high-powered mass-market computers appears, then the projection will be correct. Otherwise the same enabling technology will be invested in other directions.

      P.S.: Yes, exponential performance growth always hits a bottleneck eventually. But there are *very* good reasons to believe that there's no inherent problem between here and there; there may well be marketing problems, though. One way around that would be if mass-market robots take off. (Automated cars are an outside possibility, but I doubt their improvements would result in more powerful computers for users outside the car.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 2) by jbWolf (2774) <{jb} {at} {jb-wolf.com}> on Friday January 23 2015, @08:22AM (#137177) Homepage

    His entire premise is based on exponential growth. Exponential growth doesn't last forever. There are tangible limits.

    His premise is heavily based on exponential growth, but not entirely. We know there are better computers than any we can build: each of us has an example sitting inside our head. If you can take the processing power of a brain and merge it with the storage abilities of today's computers, you've already got a computer smarter than almost everyone on the planet. I think that is doable. What that artificial intelligence does after that will be interesting.

    He also points out that there are supercomputers powerful enough now. What super AI are they running? None.

    Again, he knows the software we can write is currently lacking, but software that can do it already exists inside each of our heads. Once we can copy that and produce AIs as smart as Einstein or Hawking, and network them together, that is when they will be able to write their own software and create the next generation beyond that. Building the equivalent of a brain and exceeding it, even by a small amount, is (in my opinion) doable. What they will be able to do after that is what remains unknown.

    --
    www.jb-wolf.com [jb-wolf.com]
    • (Score: 3, Funny) by pnkwarhall (4558) on Friday January 23 2015, @06:31PM (#137363)

      If you can take the processing power of a brain[...]

      the software that can do it already exists inside each of our heads

      But we don't understand how the brain works...

      All you're doing is repeating "what ifs" like the rest of the futurists who think technological advancement is the only form of **human** progression. We're the supposed products of millions of years of evolutionary development and progression, but according to you we can just "copy that" and combine it with technology developed over the last decade or two (i.e. basically micro-ICs and quantum computing tech in its infancy), and "whammo!" -- we've created "an intelligence".

      I can already do that! These intelligences are called "children", and creating them is a relatively simple process that can be lots of fun to implement. But I'll be damned if we haven't had that technology for a really long time, yet our problems still aren't solved, and the intelligences we create don't seem to be much improved over the parent intelligence.

      So, yes, let's discuss and pretend like we have all these other technological hurdles to AI "almost" overcome, and it's just a matter of putting things together in the right way. Now, once all these dominoes are knocked down, THEN we have to start working on the same problems we already have, just with a "new" intelligence to help?

      Sounds like THIS intelligence is already coming up with great solutions!!

      --
      Lift Yr Skinny Fists Like Antennas to Heaven
      • (Score: 1) by khallow (3766) Subscriber Badge on Friday January 23 2015, @08:20PM (#137403) Journal

        So, yes, let's discuss and pretend like we have all these other technological hurdles to AI "almost" overcome, and it's just a matter of putting things together in the right way.

        What is there to pretend? Humanity already creates new intelligences. We're just figuring out how to do it without requiring a billion years of evolution. It really is just a matter of engineering, putting things together in the right way.

    • (Score: 2) by maxwell demon (1608) Subscriber Badge on Saturday January 24 2015, @10:25AM (#137594) Journal

      We know there are better computers than what we can build. Each of us has an example sitting inside our head.

      No. No human brain is a better computer than the computers we build. Yes, they are better brains than our computers are; that is, they are much better at brain-typical tasks than computers are. But in turn they completely suck at computer-typical tasks. Even a decades-old computer outperforms me at simple computing tasks, and there's no way that I'll ever remember enough to fill all the mass storage media I've got at home. On the other hand, my computer sucks at tasks that are simple for me. For example, there's no way my computer could write a comment like this one.

      --
      The Tao of math: The numbers you can count are not the real numbers.