John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake," has said "Hasta La Vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post (https://www.facebook.com/100006735798590/posts/2547632585471243/) he declares that he is going to work on artificial general intelligence.
What are the chances he can pull it off, and what could go wrong?
John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I Get Too Old':
Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.
AGI, or strong AI, is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.
[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:
When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.
Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?
(Score: 4, Insightful) by acid andy on Friday November 15 2019, @04:30PM (1 child)
IMHO there are two big hurdles in achieving strong AI.
Firstly, as I understand it, the number of synaptic connections in a human brain is still impractical to accurately simulate in anything close to real time with current hardware. It's an open question though whether something with a human-like, or at least mammal-like, capacity for general learning can be achieved with a much smaller number of connections and a simplified neural network model.
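For a rough sense of that scale, here is a back-of-envelope estimate in Python. Every constant is an assumption — commonly cited ballpark figures, not measurements: ~10^14 synapses, a 1 kHz update rate, and ~10 floating-point operations per synapse update.

    # Back-of-envelope: compute needed to update every synapse in a human
    # brain in real time. All constants are rough assumptions.
    SYNAPSES = 1e14          # ~10^14 synaptic connections (estimates range to 10^15)
    TIMESTEP_HZ = 1000       # simulate at 1 ms resolution
    FLOPS_PER_UPDATE = 10    # assumed cost of one simple synapse-model update

    required = SYNAPSES * TIMESTEP_HZ * FLOPS_PER_UPDATE
    print(f"required: {required:.0e} FLOP/s")        # ~1e18 FLOP/s

    # For scale: Summit, the top supercomputer of 2019, peaks near 2e17 FLOP/s.
    print(f"that is ~{required / 2e17:.0f}x Summit's peak")

On those assumptions you need about an exaflop sustained, several times the peak of the fastest machine of 2019 — which is exactly why simplified neuron models and much smaller networks are the practical route.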
I think the second hurdle is that animal brains and senses have evolved many specialized regions and structures that are pre-engineered (if you like) to excel at various specific tasks, which makes thinking about, modeling, and responding to the environment much easier and more effective. For this reason I don't think strong AI can be achieved any time soon if the approach is just building the largest possible general-purpose neural network or information processing engine, hooking up some sensors and expecting it to learn. I expect such a system could form a big part of it, but the hard work I think is necessary is to develop all these specialized subsystems: something for vision and spatial modeling, something for language processing, maybe something for understanding time and anticipating future events, and probably something to deal with motivation (emotions, mood and alertness). I think if we can hone all those sorts of subsystems and plug them all into a big neural network we'd really be getting somewhere very interesting. Thanks to big data, things like computer vision and probably some bits of language processing might not be too far off. Some of the other subsystems could be a challenge though.
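A toy sketch of that modular idea, in Python: a handful of specialized subsystems preprocess raw input, and a shared core only ever sees their compact outputs. Everything here — the module names, the stub feature vectors, the averaging "core" — is hypothetical scaffolding to illustrate the shape of the architecture, not a working design.

    from typing import Dict, List

    class Subsystem:
        """Stand-in for a pre-engineered module (vision, language, etc.)."""
        def __init__(self, name: str):
            self.name = name

        def process(self, raw_input: str) -> List[float]:
            # A real module would run a specialized model here; this stub
            # just emits a fixed-size "feature vector" derived from the input.
            return [len(raw_input) % 7 / 7.0, len(self.name) / 10.0]

    class CoreNetwork:
        """Stand-in for the big general-purpose learner the subsystems feed."""
        def integrate(self, features: Dict[str, List[float]]) -> float:
            flat = [x for vec in features.values() for x in vec]
            return sum(flat) / len(flat)  # placeholder for actual learning

    subsystems = [Subsystem(n) for n in ("vision", "language", "time", "motivation")]
    core = CoreNetwork()

    features = {s.name: s.process("sensor frame #42") for s in subsystems}
    print("integrated state:", core.integrate(features))

The design point is only the interface: the core never touches raw sensor data, it works on whatever the pre-built subsystems hand it.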
I really want to see this happen but at the same time I feel sure it will unleash hell simply because of how humanity will misuse it.
Welcome to Edgeways. Words should apply in advance as spaces are highly limited.
(Score: 0) by Anonymous Coward on Saturday November 16 2019, @05:58PM
How about something simpler? Do we really understand how single celled creatures work?
https://www.researchgate.net/publication/259824963_Detailed_Process_of_Shell_Construction_in_the_Photosynthetic_Testate_Amoeba_Paulinella_chromatophora_Euglyphid_Rhizaria [researchgate.net]
https://bogology.org/what-we-do/in-the-lab/testate-amoebae/ [bogology.org]
https://www.youtube.com/watch?v=UlGg2wt-wqI [youtube.com]
https://www.youtube.com/watch?v=JnlULOjUhSQ [youtube.com]
If neurons are very stupid and brains are smart just by the organization of neurons, then have humans managed to figure out how to create human organizations that are much smarter than the individual brains, and not merely more capable?
Or perhaps neurons and some other single-celled creatures aren't as stupid as many assume? I'm not claiming they're as smart as humans, but it seems like many consider neurons to be merely dumb components, not very much smarter than transistors.
Do current AI neurons really work like this: https://www.nature.com/news/2005/050620/full/news050620-7.html [nature.com]
And this: http://www.nature.com/nrn/journal/v11/n5/abs/nrn2822.html [nature.com]
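For contrast with the dendritic-computation results in those links, this is the entire computation performed by a standard artificial neuron — the textbook point-neuron model; the example inputs and weights are arbitrary illustrative values:

    import math

    def artificial_neuron(inputs, weights, bias):
        """Textbook point-neuron: sigmoid of a weighted sum."""
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # Three inputs, three arbitrary weights.
    print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))

A weighted sum and a squashing function is all there is; the single-cell behavior described in the linked papers (dendritic spikes, local nonlinear integration) has no counterpart in that model.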
From what I see, lots of animals with brains aren't really that much smarter than the single-celled creatures in some of the videos I linked to (some of those creatures build intricate shells that are typical of their species, not random, and if there isn't enough material for another shell they don't stupidly try to reproduce [1]).
So one of my hypotheses is that single-celled creatures solved the problem of thinking first, and brains were initially more for solving the various problems of controlling a multicellular body: redundancy, interfacing, signal boosting/processing, etc. Only later did brains evolve for smartness.
And thus if we want to figure out the basics of how brains think, we might be better off starting by _thoroughly_ figuring out and understanding how single-celled creatures think.
[1] https://archive.org/stream/biologicalbullet70mari/biologicalbullet70mari_djvu.txt [archive.org]