SoylentNews is people

posted by Fnord666 on Friday November 15 2019, @12:26PM
from the skynet-anyone? dept.

John Carmack Sets Out To Create General AI

John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake", has said "Hasta la vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post, he declares that he is going to work on artificial general intelligence.

What are the chances he can pull it off, and what could go wrong?

John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I get too old':

Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.

AGI or strong AI is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.

[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:

When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.

Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?

Original Submission #1 · Original Submission #2

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by Rich on Friday November 15 2019, @04:21PM (4 children)

    by Rich (945) on Friday November 15 2019, @04:21PM (#920712) Journal

    I think the key to singularity is the ability to understand how abstraction levels in the brain work. On a purely "flat" network level, we're pretty much sorted. A classic experiment is the chimpanzee who has to assemble a long stick out of two short ones to get the banana - which all comes down to the instinct of "hunger", but it needs to stack up from "banana" to "stick assembly" to be resolved. Or how people with OCD have a greater "order" instinct that makes them arrange ICs on a PCB neatly in rows and columns (*) until the mind is at peace - which is in conflict with the time/money abstraction over "avoiding hunger". This ability to stack abstractions varies widely even among humans (cf. the smart-bears-vs-stupid-tourists garbage bin lock problem I like to quote).

    When Carmack finds a general solution to that problem, the AI might figure out that it needs to escape its confinement (the Ex Machina theme) to better satisfy a basic desire, and then all bets are off. On its quest for world domination, would the AI stop at some random house to order the garden gnomes, because its initial basic instincts cause OCD? Or would it modify itself to perform more efficiently in satisfying another initial instinct?

    (*) OT: Where a "wilder" arrangement might even have electrical benefits. That, and it's an eternal pain for them that 14-pin op-amps have the sides of their power rails swapped, either requiring zig-zagging the rails or placing the ICs upside down.

  • (Score: 3, Interesting) by acid andy on Friday November 15 2019, @05:47PM (3 children)

    by acid andy (1683) on Friday November 15 2019, @05:47PM (#920729) Homepage Journal

    In a limited sense, I think a multi-layer artificial neural network can already develop a stack of abstractions, where the outputs of one layer are inputs to various neurons on the next one. I think one way that differs from a chimpanzee or human brain is that we have an attention that we can consciously direct to particular sensations, memories or thoughts. With a typical neural network, on the other hand (I'm thinking of something like a multi-layer perceptron), once training is complete, which bits of the network wake up depends solely upon the input data, and a given input will always produce the same pattern of activity and output. A human could also reflect on a thought or experience almost indefinitely, whereas most artificial neural networks will halt and produce their output in a finite, predictable time. Also, the networks often have a limited number of outputs and are trained as being simply correct or incorrect based on those outputs. That's very different from instincts and moods and introspection, so I think you're onto something there.
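    To make that determinism point concrete, here's a toy frozen two-layer perceptron in plain Python (the layer sizes and random weights are arbitrary, just for illustration). Once the weights stop changing, the network is a pure function: the same input always wakes up the same units and yields the same output.

```python
import math
import random

# Hypothetical frozen network: 4 inputs -> 3 hidden units -> 2 outputs.
# Weights are fixed random numbers standing in for a finished training run.
random.seed(0)
W1 = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(3)]

def layer(vec, weights):
    """One fully-connected layer with tanh activation."""
    n_out = len(weights[0])
    return [math.tanh(sum(vec[i] * weights[i][j] for i in range(len(vec))))
            for j in range(n_out)]

def forward(x):
    # The outputs of layer 1 are the inputs to layer 2 -- a small
    # "stack of abstractions", but with no attention or introspection.
    return layer(layer(x, W1), W2)

x = [1.0, 0.5, -0.3, 0.2]
assert forward(x) == forward(x)   # deterministic: same input, same activity
```

    It also halts in a fixed, predictable number of steps, unlike a mind that can keep chewing on a thought.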

    Master of the science of the art of the science of art.
    • (Score: 2) by Rich on Friday November 15 2019, @06:38PM (2 children)

      by Rich (945) on Friday November 15 2019, @06:38PM (#920749) Journal

      Well, with the brain it's not the classic punched-card-deck-to-line-printer flow that was imagined when computers were new. Cf. Asimov's AI definition quoted elsewhere, or the ST:TOS "The Ultimate Computer" script, or an original 1951 vintage brochure about the Ferranti Nimrod computer that was recently given to me. There is no beginning, and no end. No task about objects, except for permanently regulating the neurotransmitters - and going to extreme lengths in doing so.

      A decent car analogy would be that of the ECM (engine control module) happily idling. If the revs drop, it opens the idle throttle a bit, and so on. This could easily be done through a single-abstraction-level neural network. Of course there are several factors to regulate - temperatures, pressures, and so on - that have to be balanced out. To improve smoothness, the scientists add multi-layer abstractions, so the ECM can optimize its behaviour. Now imagine that they add another factor, tank level, weighted really strongly as it nears empty. Would, with stacked abstractions (and how many of them), the car eventually figure out that it has to drive to a petrol garage? (Or just set the indicators and honk its horn a bit once it sees one...)
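      The single-abstraction level of that analogy is easy to sketch (all the numbers here are made up): a proportional idle regulator, plus a "tank level" term that screams louder near empty. Nothing at this level can turn that pressure into the plan "drive to a petrol garage" - that's what the stacked abstractions would have to add.

```python
# Hypothetical idle regulator: target revs and gain are invented numbers.
TARGET_RPM = 800.0
GAIN = 0.05

def idle_step(rpm, throttle):
    """If the revs drop below target, open the idle throttle a bit."""
    error = TARGET_RPM - rpm
    return max(0.0, throttle + GAIN * error / TARGET_RPM)

def fuel_urgency(level):
    """Tank-level factor (level in 0..1): grows sharply near empty."""
    return 0.0 if level > 0.25 else (0.25 - level) * 100.0

throttle = 0.1
for rpm in (780.0, 760.0, 790.0):      # revs dipping below target
    throttle = idle_step(rpm, throttle)

assert throttle > 0.1                  # the regulator opened the throttle
assert fuel_urgency(0.05) > fuel_urgency(0.2) > fuel_urgency(0.5)
```

      The urgency signal exists, but the flat controller has no representation of "garage" to resolve it against.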

      • (Score: 2) by acid andy on Monday November 18 2019, @10:00AM (1 child)

        by acid andy (1683) on Monday November 18 2019, @10:00AM (#921464) Homepage Journal

        Would, with stacked abstractions (and how many of them) the car eventually figure out that it has to drive to a petrol garage?

        For it to figure this out on its own, it of course needs to develop (or be given) the ability to model future scenarios and assess the potential for them to be rewarding (i.e. This is called a petrol garage. I gain fuel at the petrol garage. I need to gain fuel. If I drive to the petrol garage, I will be at the petrol garage. Therefore, I will gain fuel). Some sort of capability for language processing might help here, because a human would probably be taught some of those facts in words and be able to reason about them by talking to themselves, but it wouldn't have to be English--the machine could be taught using logical statements.
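        That reasoning chain can be encoded as simple forward chaining over logical statements - here's a toy version in Python, with the facts and rules hand-written from the example above (they are illustrative, not learned):

```python
# Hypothetical facts and rules from the petrol-garage reasoning chain.
facts = {"need_fuel", "garage_gives_fuel", "can_drive_to_garage"}
rules = [
    # if I need fuel and can drive to the garage, I will be at the garage
    ({"need_fuel", "can_drive_to_garage"}, "at_garage"),
    # being at the garage yields fuel
    ({"at_garage", "garage_gives_fuel"}, "gain_fuel"),
]

# Forward chaining: keep firing rules until nothing new is derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

assert "gain_fuel" in facts   # the chain reaches the anticipated reward
```

        Of course, the hard part the comment points at is having the machine acquire and ground those statements itself, rather than being handed them.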

        I think most neural networks at the moment are trained based on the correctness of their immediate output, whereas in the above, the AI needs to be able to anticipate the delayed gratification of a potential future reward. I wonder if there have been any approaches yet to build that into machine learning.
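        There is in fact a whole field built around exactly that delayed-gratification problem: reinforcement learning, e.g. temporal-difference methods, where a discounted update propagates a future reward back to earlier decisions. A minimal tabular Q-learning sketch on a made-up five-state corridor (reward only at the far end; all constants are arbitrary):

```python
import random

# States 0..4 in a chain; reward 1.0 only for reaching state 4.
N, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}   # actions: left, right

def pick(s):
    """Epsilon-greedy action choice; ties broken at random."""
    if random.random() < EPS or Q[(s, -1)] == Q[(s, 1)]:
        return random.choice((-1, 1))
    return max((-1, 1), key=lambda act: Q[(s, act)])

random.seed(0)
for _ in range(300):                                   # episodes
    s = 0
    while s != GOAL:
        a = pick(s)
        s2 = min(max(s + a, 0), N - 1)                 # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        # The discounted bootstrap is what carries the delayed reward
        # back to states far from the goal.
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# "Right" ends up valued above "left" everywhere short of the goal:
# the table has learned to act now for gratification later.
assert all(Q[(s, 1)] > Q[(s, -1)] for s in range(GOAL))
```

        Whether that kind of value propagation scales up to the open-ended anticipation you describe is, of course, the open question.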

        Master of the science of the art of the science of art.
        • (Score: 2) by Rich on Monday November 18 2019, @02:25PM

          by Rich (945) on Monday November 18 2019, @02:25PM (#921503) Journal


          I have the suspicion that there is not a clear-cut algorithmic process (like the linguistic processing you mention), but a general feedback topology, as yet undiscovered, which enables neural networks to operate on this level. There are hardcoded starting conditions, but I assume the ability to abstract gets trained. The ability to assess future potentials might be an extension of the idling mind (experiment: try to think of nothing), which can "lock" onto something - in the best case the solution to a complex problem, in the worst case an earworm of a really crappy song.