The U.N. has begun discussions on "lethal autonomous robots": weapons that take the next step beyond today's operator-controlled drones to machines that select and kill their targets entirely on their own.
"Killer robots would threaten the most fundamental of rights and principles in international law," warned Steve Goose, arms division director at Human Rights Watch.
Are we too far down the rabbit hole, or can we come to reasonable and humane limits on this new world of death-by-algorithm?
(Score: 0) by Anonymous Coward on Thursday May 15 2014, @02:26PM
Wow, modded to +4 insightful. I'd mod you to +5 if I had points. I was going to say the same thing a few weeks ago when everyone was like omg Crimea.
Part of me hopes that humanity wipes itself out in a full nuclear exchange. No, all of me: when I think about the technologies humanity is beginning to develop, especially in biotech, I'm frankly frightened to be on the same rock as everyone else. A full nuclear exchange may be the best thing to hope for.
*communications interrupted*
Earth abides. The fallout would eventually decay. Life may evolve again. Apologies to Sagan.
There are no benevolent Overlords to save this species, no Karellen to ride in and save us from ourselves.
I often think that if intelligence were to evolve again on Earth, it would find evidence of a previous intelligence that nearly extinguished all life on the planet, and perhaps it would meditate on that and somehow learn to be better.
Or it could be the case that evolving intelligent life is impossible without the evolutionary baggage that leads to such abominations. It would explain why there seems to be no evidence for intelligent life elsewhere.
Of course they want to build Terminators. It was inevitable. Sick, sick, sick bastards.
Humanity is at a point where it could use its technology to build a utopia. Instead, it uses that technology to find better ways to kill and oppress.
Anyone want to know where the Borg or the Necromongers come from? You're looking at the species that will make it happen.