Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on edge.org's annual question for 2015: What do you think about machines that think? See all 186 responses here.
Also, what do you think?
My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for public use. If you think the big corporations wouldn't allow it, why do they allow public libraries?
(Score: 2) by Aiwendil on Friday January 23 2015, @09:52AM
At the risk of being controversial...
If we design a self-replicating and self-improving AI that wipes out the human species - so what?
Or, to be a bit less provocative: in effect we will have produced a lifeform (in a philosophical sense) that - under the conditions given - is superior to us. Or is this simply a case of crying foul when our creations do to us what we have done to countless species?
Quite frankly, this is a razor's edge we have been balancing on ever since we discovered how to transport goods on horseback (humans are too slow and have too little endurance to matter on their own); we have simply become more aware of just how close we are to falling.
I'm more worried about some unknown pathogen appearing that will do to rice (and to a lesser extent potatoes and maize) what the chestnut blight did to the chestnut trees in America (i.e. live just fine in its own biotope [in Asia] but wreak havoc when introduced - by humans - into another biotope [in N. America]).
--
But to answer the question of what I think about machines that think: I just see them as any other kind of breeding for a specific trait, really. It can go very well and it can go very wrong, and most likely it will dip its toes into both extremes. The important thing is to not panic, and to try to predict the possible outcomes (both good and bad) so that we have a better set of tools available when something unexpected happens.