John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake," has said "Hasta La Vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post (https://www.facebook.com/100006735798590/posts/2547632585471243/) he declares that he is going to work on artificial general intelligence.
What are the chances he can pull it off, and what could go wrong?
John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I Get Too Old':
Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.
AGI or strong AI is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.
[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:
When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.
Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?
(Score: 0) by Anonymous Coward on Saturday November 16 2019, @05:58PM
How about something simpler? Do we really understand how single-celled creatures work?
https://www.researchgate.net/publication/259824963_Detailed_Process_of_Shell_Construction_in_the_Photosynthetic_Testate_Amoeba_Paulinella_chromatophora_Euglyphid_Rhizaria [researchgate.net]
https://bogology.org/what-we-do/in-the-lab/testate-amoebae/ [bogology.org]
https://www.youtube.com/watch?v=UlGg2wt-wqI [youtube.com]
https://www.youtube.com/watch?v=JnlULOjUhSQ [youtube.com]
If individual neurons are very stupid and brains are smart purely through the organization of those neurons, then shouldn't humans have managed to figure out how to create human organizations that are much smarter than the individual brains, and not merely more capable?
Or perhaps neurons and some other single-celled creatures aren't as stupid as many assume? I'm not claiming they're as smart as humans, but it seems many consider neurons to be merely dumb components, not much smarter than transistors.
Do current AI neurons really work like this: https://www.nature.com/news/2005/050620/full/news050620-7.html [nature.com]
And this: http://www.nature.com/nrn/journal/v11/n5/abs/nrn2822.html [nature.com]
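For contrast with those articles on biological neurons, the "neuron" in today's machine-learning systems really is that simple: a weighted sum of its inputs plus a bias, passed through a fixed nonlinearity. A minimal sketch (the weights and inputs here are arbitrary illustration values, not from any trained model):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum (dot product) of inputs and weights, plus a bias term.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A fixed sigmoid activation squashes the result into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Example: three inputs, three arbitrary weights, one bias.
out = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1)
print(round(out, 4))
```

That's the whole unit; everything else in a neural network is layers of these plus a training rule for adjusting the weights, which is exactly why comparisons to the dendritic computation described in the linked papers are contentious.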
From what I see, lots of animals with brains aren't really that much smarter than the single-celled creatures in some of the videos I linked to (some of those creatures build intricate shells that are typical of their species, not random, and if there isn't enough material for another shell they don't stupidly try to reproduce[1]).
So one of my hypotheses is that single-celled creatures solved the problem of thinking first, and brains were initially more for solving the various problems of controlling a multicellular body: redundancy, interfacing, signal boosting/processing, etc. Only later did brains evolve for smartness.
And thus, if we want to figure out the basics of how brains think, we might be better off starting by figuring out and understanding _thoroughly_ how single-celled creatures think.
[1] https://archive.org/stream/biologicalbullet70mari/biologicalbullet70mari_djvu.txt [archive.org]