
posted by Blackmoore on Friday January 23 2015, @03:45AM   Printer-friendly
from the dreaming-of-electric-sheep? dept.

Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on Edge.org's 2015 annual question: "What do you think about machines that think?" See all 186 responses here.

Also, what do you think?

My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for public use. If you think the big corporations wouldn't allow it, why do they allow public libraries?

  • (Score: 0) by Anonymous Coward on Friday January 23 2015, @06:23AM (#137157)

    When you remove intent from the "evil robots" scenario, you'll see plenty of opportunities for AI to do harm without heading down the scare-tactics path.

    Will AI be in charge of manufacturing medicines? If so, a problem with HAL could result in market shortages that leave people dying. What about monitoring patients in a hospital? Too much or too little AI can be catastrophic.

    If AI is in charge of transportation, it can kill plenty of humans. The same goes if AI is responsible for environmental controls in large residential buildings. And if AI is responsible for building inspections and has a bug or two, society could have piles of rubble with body counts.

    The big concern isn't whether AI will turn on humans. It's whether humans will turn AI loose on themselves.