
posted by Fnord666 on Friday November 15 2019, @12:26PM   Printer-friendly
from the skynet-anyone? dept.

John Carmack Sets Out To Create General AI

John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake", has said "Hasta La Vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post (https://www.facebook.com/100006735798590/posts/2547632585471243/) he declares that he is going to work on artificial general intelligence.

What are the chances he can pull it off, and what could go wrong?
 

John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I Get Too Old':

Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.

AGI or strong AI is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.

[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:

When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.

Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?


Original Submission #1 | Original Submission #2

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Interesting) by Anonymous Coward on Friday November 15 2019, @12:59PM (2 children)

    The more people and groups working on artificial general intelligence, the better.

    Artificial general intelligence has probably been created already. It's just languishing under military control.

  • (Score: 4, Interesting) by Anonymous Coward on Friday November 15 2019, @06:51PM (1 child)

    I used to take a not dissimilar view. Then I ended up working with neural-network-based systems for a couple of years. Now I think anything vaguely resembling intelligence is probably impossible with current techniques, and we're most likely on our way to another AI winter once full self-driving vehicles prove impossible.

    Why? Pretty simple. "AI" is driven by correlations, and it can pick up on some remarkable correlations that humans are not capable of. You can get from 0 to 90% super easily, and at that point it looks like you're going to have created Data within a decade. Getting from 90 to 99% is a lot harder but still quite doable, and by then your own results start to feel magical.

    As a silly example, I was able to give an arbitrary pattern to my network and it could generally predict the next digits. Of course it was just picking up on patterns and correlations, but it felt genuinely intelligent, the way we might feel the first time we beam 2 3 5 out into space and get back 7 11 13. (As an aside: no, it could not do primes.) At this point you're damned near positive you're going to be able to create Data. Then you start pushing for 99.9%, and things start getting really, really hard. And by the time you start pushing for 99.99%, it becomes increasingly clear that you're headed hard and fast for some asymptote.
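    A toy version of that digit-prediction experiment can be sketched with a simple frequency-counting model. This is an illustrative stand-in for the commenter's network, not their actual code; every name here is hypothetical:

    ```python
    from collections import Counter, defaultdict

    def train_next_digit_model(sequence, context=3):
        """Count, for each length-`context` window, which value follows it."""
        model = defaultdict(Counter)
        for i in range(len(sequence) - context):
            key = tuple(sequence[i:i + context])
            model[key][sequence[i + context]] += 1
        return model

    def predict_next(model, recent, context=3):
        """Return the most frequent follower of the last `context` values, or None."""
        key = tuple(recent[-context:])
        if key not in model:
            return None
        return model[key].most_common(1)[0][0]

    # A repeating pattern is trivially "learned" from local correlations alone.
    pattern = [1, 2, 3, 4] * 10
    model = train_next_digit_model(pattern)
    print(predict_next(model, [2, 3, 4]))  # 1

    # The same approach has nothing to say about primes: the next prime is not
    # a local correlation of the previous ones, so an unseen context yields None.
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    prime_model = train_next_digit_model(primes)
    print(predict_next(prime_model, [31, 37, 41]))  # None
    ```

    A lookup table like this nails repeating patterns, which is exactly why correlation-driven results can feel magical while carrying no understanding of, say, primality.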

    You can still do some cool things below that asymptote. For instance, I worked on financial tech stuff, and it performed far better than humans. But if you want to apply this to a field like driving, let alone some generalized field? No, it's simply not going to work. If we achieve anything like self-driving, I imagine we're going to have extensive 'hand coded' LIDAR (and probably also RADAR) systems constantly sanity-testing everything. A fintech system can afford some head-scratching decisions so long as the general outcome outperforms humans. A car can't handle you occasionally deciding to t-bone a concrete wall, even if the other 99.99% of the time you drive like a superhuman.

    I think the most probable outcome is us ditching automation altogether, since even 'driver assistance' is probably going to do more harm than good once the driver zones out. But if we do keep self-driving vehicles, I expect to see these systems with extensive 'hand coded' checks, driving only on white-listed paths that are further hand-tuned, just like Waymo seems to be doing. The final result may look like AI, but I think the emphasis is very much going to be on the "A" part there.
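    The 'hand coded' sanity-test architecture described above, where a deterministic sensor check can veto the learned system no matter how confident it is, might be sketched like this. All function names, thresholds, and units are illustrative assumptions, not any vendor's actual stack:

    ```python
    def lidar_clear_ahead(lidar_ranges_m, min_gap_m=5.0):
        """Hand-coded check: every forward LIDAR return is farther than min_gap_m."""
        return all(r > min_gap_m for r in lidar_ranges_m)

    def gated_throttle(model_throttle, lidar_ranges_m):
        """Apply the learned throttle only if the deterministic check passes;
        otherwise brake, regardless of the model's confidence."""
        if lidar_clear_ahead(lidar_ranges_m):
            return max(0.0, min(1.0, model_throttle))  # clamp to a sane range
        return 0.0  # hard override: the rare failure case must not hit the wall

    # The model says "90% throttle", but a return at 2 m forces a full stop.
    print(gated_throttle(0.9, [12.0, 2.0, 30.0]))   # 0.0
    print(gated_throttle(0.9, [12.0, 25.0, 30.0]))  # 0.9
    ```

    The design point is that the learned component never has the last word: a cheap, auditable rule sits between the model and the actuators, trading away some of the model's performance for a bounded worst case.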