Artificial intelligence made enormous strides in 2016, so it is fitting that one of the year's hit TV shows was an exploration of what it means for machines to gain consciousness. But how close are we to building the brains of Westworld's hosts for real? I'm going to look at some recent AI research papers and show that the hosts aren't quite as futuristic as you might think.
This long article discusses how plausible the robots in the show are, touching on neural networks, image compression, memory, and unintended emergent goals of AI systems. Well worth a read, even if you have not seen the show (contains spoilers).
-- submitted from IRC
(Score: 2, Insightful) by Anonymous Coward on Wednesday January 11 2017, @08:19PM
But how close are we to building the brains of Westworld's hosts for real?
Not close at all.
Are we done here now?
Seriously, these are dumb questions to start with. We don't even understand how our own brains work yet, and we think we can do it in machines? What we are doing with machines at this point is setting up feedback loops that enable limited pattern recognition, nothing more, nothing less.
FFS, artificial intelligence my ass...
(Score: 0) by Anonymous Coward on Wednesday January 11 2017, @09:10PM
Meh, if we did understand how our own brains work, then anyone who didn't would be at such a disadvantage that they might never know the breakthrough had been achieved. Therefore your conclusion is unsupported.
(Score: 2) by tibman on Wednesday January 11 2017, @09:13PM
Not sure why understanding the human brain is a prerequisite for AI. I don't think the goal is to build an artificial human brain; it's to build a machine with the outward appearance of intelligence. If we wanted to simulate a human brain, then obviously we would have to understand it completely.
SN won't survive on lurkers alone. Write comments.
(Score: 2) by SomeGuy on Thursday January 12 2017, @01:48AM
Probably because most people equate "AI" with the ability to process human emotions and other human-specific behaviors. This is, of course, incorrect. If humans ever encountered "intelligent" aliens from another planet, the aliens would almost certainly have completely different emotions and such. But in the movies, AIs almost always have some level of human emotional understanding. The fact is, a machine doesn't get happy, it doesn't get sad, it doesn't laugh at your jokes... it just runs programs.
Also, take from that that AIs cannot quite make decisions like humans do, nor should they, since human decisions often involve emotion and huge potential for error. Yet if one wants a machine to make decisions in a similar way, the seemingly quickest route is to copy how the human brain works. One might expect such an "AI" to be more compassionate and understanding, but either way you know it will go KILL ALL HUMANS :P
(Score: 2) by krishnoid on Wednesday January 11 2017, @09:51PM
We don't even understand how our own brains work yet, and we think we can do it in machines?
Do we have to? If we could just simulate cerebral structures and neural electrical activity -- the physics of the how, if not the psychology -- wouldn't that be good enough?
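That "simulate the physics" idea already exists in miniature form. As a minimal sketch (not from the article, and with illustrative textbook parameters rather than biologically calibrated ones), here is a leaky integrate-and-fire model of a single neuron's membrane potential, the kind of electrical-activity simulation being described:

```python
def simulate_lif(current, steps=1000, dt=0.1,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau=10.0, resistance=10.0):
    """Count spikes fired by a leaky integrate-and-fire neuron.

    current     -- constant input current (arbitrary units)
    steps, dt   -- number of Euler integration steps and step size (ms)
    v_rest      -- resting membrane potential (mV)
    v_thresh    -- spike threshold (mV)
    v_reset     -- potential after a spike (mV)
    tau         -- membrane time constant (ms)
    resistance  -- membrane resistance (scales the input drive)
    """
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # Membrane potential leaks back toward rest while being
        # driven up by the input current.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_thresh:  # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes
```

A stronger input current drives the potential to threshold faster, so the spike count rises with the input; no input means the neuron sits at rest and never fires. Scaling this up from one equation to billions of coupled ones is, of course, exactly the hard part.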
(Score: 4, Funny) by darnkitten on Wednesday January 11 2017, @11:20PM
Heck, we can't even build a gunslinger robot that looks like Yul Brynner...
(Score: 2) by wisnoskij on Friday January 13 2017, @03:29AM
You don't need to understand how intelligence or consciousness works to design AI. Even dumb, limited-task AI can be too complicated to understand, yet it is still possible for us to design it.