
posted by Blackmoore on Friday January 23 2015, @03:45AM
from the dreaming-of-electric-sheep? dept.

Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on edge.org's annual question for 2015: "What do you think about machines that think?" All 186 responses are collected there.

Also, what do you think?

My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for anyone to use. If you think the big corporations wouldn't allow it, why do they allow public libraries?

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by novak (4683) on Friday January 23 2015, @09:03AM (#137184)

    Thou shalt not make a machine in the likeness of a man's mind.

    -- The Orange Catholic Bible

    OK, let's look at this logically. Name a software company, any software company, that you trust to not have critical bugs which would totally invalidate the purpose of their machines. If you named one, then excuse me a minute while I laugh. Software isn't built logically, in proven-correct increments; it is built at a grand scale, far beyond what we can validate, far beyond what we can prove is secure.

    I can only hope the insanity stops before something truly catastrophic breaks out. I doubt we'll see a Terminator-type scenario; my hope is simply that we won't have too many dangerous catastrophes as a result of software failure.

    In many industries with the potential to cause a world-ending cataclysm, we have outrageous numbers of safeguards in place. In nuclear power, for example, there are so many restrictions and rules that they kill the industry even where it would be better and safer (and if you're not subject to those safeguards, we'll just use vigilante-style cyberwarfare). In AI, the only requirement is enough money to buy the hardware.

    --
    novak
  • (Score: 2) by maxwell demon (1608) on Saturday January 24 2015, @11:17AM (#137600)

    > Name a software company, any software company, that you trust to not have critical bugs which would totally invalidate the purpose of their machines.

    The Sirius Cybernetics Corporation. Share and enjoy!

    > If you named one, then excuse me a minute while I laugh.

    Ah, I didn't think the joke was that funny.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by novak (4683) on Saturday January 24 2015, @09:40PM (#137692)

      Well, I happen to enjoy Douglas Adams, so I did get a chuckle out of that.

      --
      novak