posted by Fnord666 on Friday November 15 2019, @12:26PM   Printer-friendly
from the skynet-anyone? dept.

John Carmack Sets Out To Create General AI

John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake", has said "Hasta La Vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post (https://www.facebook.com/100006735798590/posts/2547632585471243/) he declares that he is going to work on artificial general intelligence.

What are the chances he can pull it off, and what could go wrong?
 

John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I get too old':

Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.

AGI, or strong AI, is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine-learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.

[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:

When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.

Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?


Original Submission #1 | Original Submission #2

Related Stories

Oculus to Begin Requiring Facebook Accounts to Use VR Headsets

Mandatory Socialization: Facebook Accounts To be Required for Oculus Headsets

Signaling the end to any remaining degrees of separation between Facebook and its VR headset division, Oculus, today the social media company announced that it will be further integrating the two services. Coming this fall, the company will begin sunsetting stand-alone Oculus accounts as part of an effort to transition the entire Oculus ecosystem over to Facebook. This will start in October, when all new Oculus accounts and devices will have to sign up for a Facebook account, while support for existing stand-alone accounts will be retired entirely at the start of 2023.

Previously: Facebook to Buy Rift Maker Oculus VR for $2bn
Facebook/Oculus Ordered to Pay $500 Million to ZeniMax
Founder of Oculus VR, Palmer Luckey, Departs Facebook
Facebook Announces Oculus Go for $200
Facebook's Zuckerberg Wants to Get One Billion People in VR
Facebook Launches Oculus Go, a $200 Standalone VR Headset
Oculus Co-Founder Says there is No Market for VR Gaming
John Carmack Steps Down at Oculus to Pursue AI Passion Project
Facebook is Developing its Own OS to Reduce Dependence on Android


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 1, Insightful) by Anonymous Coward on Friday November 15 2019, @12:41PM (3 children)

    by Anonymous Coward on Friday November 15 2019, @12:41PM (#920664)

    I don't know enough about him to know whether he'll "succeed" or not in the stated enterprise, but I envy anyone who can just give up their job for a hobby.

    • (Score: 0) by Anonymous Coward on Friday November 15 2019, @12:46PM (2 children)

      by Anonymous Coward on Friday November 15 2019, @12:46PM (#920670)

      I mean, he started by stealing resources while at Softdisk, then ousted founding employees in a bid to control more shares when they sold out to EA (which fell through; they instead sold out to ZeniMax a few years later, when Doom 3's id Tech 4 failed in the marketplace for licensing purposes). Then he was involved with that jankiness with Palmer whatever-his-name-was at Oculus stealing tech.

      And now he's abandoning that to go fuck off on another pet project that will fail like Armadillo Aerospace did.

      Good luck John, any relevance you had is long since passed. 5-10 years of stardom sure has allowed you 20 years of coasting! Not unlike my dad.

      • (Score: 0) by Anonymous Coward on Friday November 15 2019, @12:56PM

        by Anonymous Coward on Friday November 15 2019, @12:56PM (#920675)

        The highlights of his career include making Commander Keen, open sourcing Doom 3, and collecting that Facefook bread. He has done plenty.

      • (Score: 2, Funny) by Anonymous Coward on Friday November 15 2019, @03:43PM

        by Anonymous Coward on Friday November 15 2019, @03:43PM (#920703)

        Good luck John, any relevance you had is long since passed. 5-10 years of stardom sure has allowed you 20 years of coasting! Not unlike my dad.

        It sounds like you have a lot of anger about your dad's life choices. Does that come into play in other areas of your life too?

        Perhaps you'd like to talk about it?

  • (Score: 4, Interesting) by VLM on Friday November 15 2019, @12:45PM (4 children)

    by VLM (445) on Friday November 15 2019, @12:45PM (#920668)

    Can he do it?

    Probably not.

    The rep I've heard of his coding style is that it's detail-oriented and wisely/highly optimized, his bug-fixing/optimization loop is fast, and he dogfoods really well, so his stuff is actually enjoyable, works, and users love it. I've not studied his code in detail, however.

    I mean, anyone could make something like "Doom" in 2019 that takes multiple GHz-speed cores, a huge graphics card, and a billion lines of (crappy) code from a team of hundreds of low-productivity people, like modern studios do every day; he shipped it successfully in '93.

    That's not really where AI is today, is it? Where something cool works in a large research lab, there's an obvious application, but it's not been productized and optimized to run on everyone's desks quite yet, and adequate hardware has JUST arrived that will JUST barely work?

    So if his secret-sauce special skill is of no use, he's just kind of an average member of management at the new place?

    I mean, if he was going to code on a project to make the world's most popular and addictive new genre of game that runs on low-power, rarely-charged, wrist-mounted fitness trackers, I'd believe that and expect success from the guy. But the current plan is unlikely to succeed.

    • (Score: 1, Interesting) by Anonymous Coward on Friday November 15 2019, @12:59PM (2 children)

      by Anonymous Coward on Friday November 15 2019, @12:59PM (#920676)

      The more people and groups working on artificial general intelligence, the better.

      Artificial general intelligence has probably been created already. It's just languishing under military control.

      • (Score: 4, Interesting) by Anonymous Coward on Friday November 15 2019, @06:51PM (1 child)

        by Anonymous Coward on Friday November 15 2019, @06:51PM (#920759)

        I used to take a not dissimilar view. Then I ended up working with neural-network-based systems for a couple of years. Now I think anything vaguely resembling intelligence is probably impossible with current techniques, and we're most likely on our way to another AI winter once full self-driving vehicles prove impossible.

        Why? Pretty simple. "AI" is driven by correlations, and it can pick up on some remarkable correlations that humans are not capable of. You can get from 0% to 90% super easily, and it looks like you're going to have created Data within a decade. Getting from 90% to 99% is a lot harder, but still really quite doable. And at this point, your own results start to feel magical.

        As a silly example, I was able to give an arbitrary pattern to my network and it could generally pick the next digits. Of course it was just picking up on patterns and correlations, but it really felt genuinely intelligent - like we might feel the first time we beam 2 3 5 out into space and get back 7 11 13. As an aside, no, it could not do primes.

        In any case, now you're damned near positive you're going to be able to create Data. Then you start pushing for 99.9%, and things start getting really, really hard. And by the time you start pushing for 99.99%, it becomes increasingly clear that you're headed hard and fast for some asymptote.
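        For the curious, a minimal sketch of that digit-pattern experiment (the pattern, window size, and network shape here are my own stand-ins, not the actual system I worked with):

            import numpy as np
            from sklearn.neural_network import MLPRegressor

            # A trivially periodic sequence; the network only ever sees windows of it.
            pattern = [1, 2, 3, 1, 2, 3, 1, 2, 3] * 20
            window = 4

            # Sliding (window -> next digit) training pairs.
            X = np.array([pattern[i:i + window] for i in range(len(pattern) - window)])
            y = np.array([pattern[i + window] for i in range(len(pattern) - window)])

            net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
            net.fit(X, y)

            print(net.predict([[1, 2, 3, 1]]))  # close to 2: the correlation is learned
            # It feels intelligent, but it is only interpolating the training
            # distribution - nothing here could ever continue the primes.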

        You can still do some cool things below that asymptote. For instance, I worked on financial tech stuff, and it performed far better than humans. But if you want to apply this to a field like driving, let alone some generalized field? No, it's simply not going to work. If we achieve anything like self-driving, I imagine we're going to have extensive 'hand-coded' LIDAR (and probably also RADAR) systems constantly sanity-testing everything. A fintech system can afford some head-scratching decisions so long as the general outcome outperforms humans. A car can't handle you occasionally deciding to t-bone a concrete wall, even if the other 99.99% of the time you drive like a super-human.

        I think the most probable outcome is us ditching automation altogether, since even 'driver assistance' is probably going to do more harm than good once the driver zones out. But if we do keep self-driving vehicles, I expect to see these systems with extensive 'hand-coded' components, driving only on white-listed paths which are further hand-tuned, just like Waymo seems to be doing. And the final result may look like AI, but I think the emphasis is very much going to be on the "A" part there.
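        To put a number on that (purely back-of-envelope; the decision rate is an assumption):

            # "99.99% correct" per decision, at an assumed 10 control
            # decisions per second, over one hour of driving:
            decisions_per_second = 10
            error_rate = 1e-4
            print(decisions_per_second * 3600 * error_rate)  # ~3.6 mistakes per hour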

    • (Score: 2, Interesting) by Anonymous Coward on Friday November 15 2019, @06:15PM

      by Anonymous Coward on Friday November 15 2019, @06:15PM (#920741)

      In my opinion the secret sauce of individuals like Carmack is not any particular thing, but simply the overall skill set and brain that enable him to do what he does. For instance, if Carmack had chosen to pursue, e.g., cosmological research instead of software development, I would generally expect that he would have managed to excel there as well.

      In many ways I find that many completely different tasks really boil down to the same thing at some level: assimilating information, obtaining a sufficiently intuitive understanding of it in a logical and clear fashion, and then applying a good dose of creativity and cleverness to use it in novel ways. I used to be much more a fan of tabula rasa, but I find real-life experience tends to leave less and less room to believe in such things as the years pass.

      So of course none of this means he'll succeed or have any impact whatsoever. But I do think it means he has a vastly higher chance of doing so than, e.g., your average grad student focused on AI.

  • (Score: 0) by Anonymous Coward on Friday November 15 2019, @02:08PM (5 children)

    by Anonymous Coward on Friday November 15 2019, @02:08PM (#920687)

    Please stop using the "AI" terminology for dumb things.
    Asimov had a clear definition of AI: a deterministic logic system, with predesigned methods to treat arbitrary situations by arguments based on a few basic principles.
    It's artificial because it's not intelligence, but essentially a predetermined, unique sequence of statements/actions.
    The person on computer support who insists that you reboot your computer before going to the next step in the list is an example of "artificial intelligence" since they only act according to the predetermined list.

    In the meantime it was discovered that if you use a black box, or a set of black boxes arranged in various logical patterns, you get better results than trying to use just logic.
    And today this is what's being used, with the black boxes being generated in various somewhat random ways.

    But there's nothing artificial about this intelligence.
    These are dumb agents, but they work in the same way that our brains do.
    We have highly specialized hardware implementations of various functionalities, which come precombined in certain patterns (involuntary reflexes are a manifestation of these). But we also have general purpose brain-mass that gets wired in arbitrary ways in order to address specific problems that we encounter along the way.

    Nobody is working on artificial intelligence today. People are working on non-biological intelligence, if you want to emphasize that the hardware support is "artificial" (as in made by humans rather than naturally occurring).

    • (Score: 3, Interesting) by ikanreed on Friday November 15 2019, @03:19PM (4 children)

      by ikanreed (3164) Subscriber Badge on Friday November 15 2019, @03:19PM (#920696) Journal

      This is rambling nonsense. No one crowned Isaac Asimov king of all AI because he wrote some fun mysteries about them.

      AI is the study, in part or in whole, of replicating human intelligence with man-made machines. No more, no less.

      A shitty neural net whose only ability is to classify animals: that's AI.
      An engine built on replicating the structure of human brains with machine circuitry: that's AI.
      A search engine that tries to find the best match for a string with some understanding of semantics: that's AI.
      A dumbass chatbot that is entirely traditional functional programming, but is designed to trick people into thinking it's human: that's fucking AI.

      • (Score: 0) by Anonymous Coward on Friday November 15 2019, @03:38PM (1 child)

        by Anonymous Coward on Friday November 15 2019, @03:38PM (#920702)

        It's not rambling nonsense.
        If a future man-made machine shows human-like intelligence, it should have human rights.
        If you guys start out by calling it "artificial", it will have strong repercussions for legal discussions.
        And yes, I think animal rights in general should also depend on level of intelligence.

      • (Score: 2) by stormreaver on Friday November 15 2019, @09:16PM (1 child)

        by stormreaver (5101) on Friday November 15 2019, @09:16PM (#920801)

        AI is the study, in part or in whole, of replicating human intelligence with man-made machines. No more, no less.

        I agree with you here, but then you go on to list three out of four examples that violate your definition. Those three examples are better classified as expert systems, not artificial intelligence.

        I have a different definition of what I would consider true artificial intelligence: a system that combines software and hardware which can be taught to do arbitrary things it wasn't programmed to do, and to do so without any additional programming. It gets even closer if it is programmed with no predispositions, but finds some things it observes to be interesting, and other things to be uninteresting. Such a system would be able to identify its own weaknesses and strengths, and decide what it wants to do about them (if anything).

        The closer we get to that ability, the closer we get to artificial intelligence. At this point, we're not even a single step closer to that than we were 50 years ago.

        • (Score: 2) by ikanreed on Friday November 15 2019, @09:30PM

          by ikanreed (3164) Subscriber Badge on Friday November 15 2019, @09:30PM (#920810) Journal

          They attempt to replicate one aspect of human intelligence, that's why I said part or whole. Whether the technologies contained therein can scale up to replicate all of it is irrelevant.

  • (Score: 4, Interesting) by Rich on Friday November 15 2019, @04:21PM (4 children)

    by Rich (945) on Friday November 15 2019, @04:21PM (#920712) Journal

    I think the key to singularity is the ability to understand how abstraction levels in the brain work. On a purely "flat" network level, we're pretty much sorted. A classic experiment is the chimpanzee who has to assemble a long stick out of two short ones to get the banana - which all comes down to the instinct of "hunger", but it needs to stack up from "banana" to "stick assembly" to be resolved. Or how people with OCD have a greater "order" instinct that makes them arrange ICs on a PCB neatly in rows and columns (*) until the mind is at peace - which is in conflict with the time/money abstraction over "avoiding hunger". This ability to stack abstractions varies widely even among humans (cf. the smart-bears-vs-stupid-tourists garbage bin lock problem I like to quote).

    When Carmack finds a general solution to that problem, the AI might figure out that it needs to escape its confinement (the Ex Machina theme) to better satisfy a basic desire, and then all bets are off. On its quest for world domination, would the AI stop at some random house to order the garden dwarves, because its initial basic instincts cause OCD? Or would it modify itself to perform more efficiently in satisfying another initial instinct?

    (*) OT: where a "wilder" arrangement might even have electrical benefits. That, and it's an eternal pain for them that 14-pin op-amps have the sides of their power rails swapped, requiring them either to zig-zag the rails or to place the ICs upside down.

    • (Score: 3, Interesting) by acid andy on Friday November 15 2019, @05:47PM (3 children)

      by acid andy (1683) on Friday November 15 2019, @05:47PM (#920729) Homepage Journal

      In a limited sense, I think a multi-layer artificial neural network can already develop a stack of abstractions, where the outputs of one layer are inputs to various neurons on the next one. I think one way that differs from a chimpanzee or human brain is that we have an attention that we can consciously direct to particular sensations, memories or thoughts. With a typical neural network, on the other hand (I'm thinking of something like a multi-layer perceptron), once training is complete, which bits of the network wake up depends solely upon the input data, and a given input will always produce the same pattern of activity and output.

      A human could also reflect on a thought or experience almost indefinitely, whereas most artificial neural networks will halt and produce their output in a finite, predictable time. Also, the networks often have a limited number of outputs and are trained as being simply correct or incorrect based on those outputs. That's very different from instincts and moods and introspection, so I think you're onto something there.
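      To illustrate the determinism point, a minimal sketch (the weights are arbitrary stand-ins for a trained model):

          import numpy as np

          rng = np.random.default_rng(0)
          W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # fixed after training
          W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # fixed after training

          def forward(x):
              h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
              return h @ W2 + b2               # output layer

          x = np.array([1.0, 0.5, -0.2, 0.0])
          print(np.array_equal(forward(x), forward(x)))   # True: no attention,
          # no internal state - the same input always wakes up the same bits
          # of the network, in bounded, predictable time.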

      --
      Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
      • (Score: 2) by Rich on Friday November 15 2019, @06:38PM (2 children)

        by Rich (945) on Friday November 15 2019, @06:38PM (#920749) Journal

        Well, with the brain it's not the classic punched-card-deck-to-line-printer flow that was imagined when computers were new. Cf. the quote of Asimov's AI definition elsewhere, or the ST:TOS "The Ultimate Computer" script, or an original 1951-vintage brochure about the Ferranti Nimrod computer that was recently given to me. There is no beginning and no end. No task about objects, except for permanently regulating the neurotransmitters - and going to extreme lengths in doing so.

        A decent car analogy would be that of the ECM happily idling. If the revs drop, it opens the idle throttle a bit, and so on. This could easily be done through a single-abstraction-level neural network. Of course there are several factors to regulate - temperatures, pressures, and so on - that have to be balanced out. To improve smoothness, the scientists add multi-layer abstractions, so the ECM can optimize its behaviour. Now imagine that they add another factor, tank level, and it is weighted really strongly as the tank nears empty. Would the car, with stacked abstractions (and how many of them?), eventually figure out that it has to drive to a petrol garage? (Or just set the indicators and honk its horn a bit once it sees one...)
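        The single-abstraction-level version really is trivially small - a sketch, with all constants made up:

            # One regulated quantity, one correction rule. Nothing in this
            # loop could ever come to represent "drive to a petrol garage".
            target_rpm, throttle = 800.0, 0.09
            rpm = 9000.0 * throttle - 100.0      # crude static engine map + load
            for _ in range(50):
                error = target_rpm - rpm
                throttle += 0.0001 * error       # idle correction
                rpm = 9000.0 * throttle - 100.0
            print(round(rpm, 1))                 # settles at ~800.0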

        • (Score: 2) by acid andy on Monday November 18 2019, @10:00AM (1 child)

          by acid andy (1683) on Monday November 18 2019, @10:00AM (#921464) Homepage Journal

          Would the car, with stacked abstractions (and how many of them?), eventually figure out that it has to drive to a petrol garage?

          For it to figure this out on its own, it of course needs to develop (or be given) the ability to model future scenarios and assess the potential for them to be rewarding (i.e. This is called a petrol garage. I gain fuel at the petrol garage. I need to gain fuel. If I drive to the petrol garage, I will be at the petrol garage. Therefore, I will gain fuel). Some sort of capability for language processing might help here, because a human would probably be taught some of those facts in words and be able to reason about them by talking to themselves, but it wouldn't have to be English - the machine could be taught using logical statements.
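          That chain of statements is mechanical enough to run as-is; a toy forward-chaining sketch, using only the facts from the parenthetical above:

              facts = {"need(fuel)", "at(road)"}
              rules = [
                  ({"need(fuel)"}, "goal(be_at(garage))"),
                  ({"goal(be_at(garage))", "at(road)"}, "do(drive_to(garage))"),
                  ({"do(drive_to(garage))"}, "at(garage)"),
                  ({"at(garage)"}, "gain(fuel)"),   # fuel is gained at the garage
              ]

              changed = True
              while changed:                        # fire rules until nothing new derives
                  changed = False
                  for premises, conclusion in rules:
                      if premises <= facts and conclusion not in facts:
                          facts.add(conclusion)
                          changed = True

              print("gain(fuel)" in facts)          # True: the inference chain closes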

          I think most neural networks at the moment are trained based on the correctness of their immediate output, whereas in the above, the AI needs to be able to anticipate the delayed gratification of a potential future reward. I wonder if there have been any approaches yet to build that into machine learning.
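          For what it's worth, reinforcement learning is the branch of machine learning aimed at exactly this kind of delayed reward. A minimal tabular Q-learning sketch on a made-up five-step "drive to the garage" chain (all numbers are illustrative):

              import random

              n_states = 5                                # state 4 = at the garage
              Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = stay, 1 = drive on
              alpha, gamma = 0.5, 0.9                     # learning rate, discount factor

              for _ in range(500):
                  s = 0
                  while s < n_states - 1:
                      a = random.choice([0, 1])               # explore at random
                      s2 = s + 1 if a == 1 else s
                      r = 1.0 if s2 == n_states - 1 else 0.0  # reward only at the end
                      Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                      s = s2

              print([round(max(q), 2) for q in Q])
              # Values rise toward the garage: the earliest state has learned
              # that "drive on" pays off four steps later.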

          --
          Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
          • (Score: 2) by Rich on Monday November 18 2019, @02:25PM

            by Rich (945) on Monday November 18 2019, @02:25PM (#921503) Journal

            Precisely.

            I have the suspicion that there is not a clear-cut algorithmic process (like the linguistic processing you mention), but a general feedback topology, yet undiscovered, which enables neural networks to operate on this level. There are hardcoded starting conditions, but I assume the ability to abstract gets trained. The ability to assess future potentials might be an extension of the idling mind (experiment: try to think of nothing), which can "lock" onto something - in the best case the solution to a complex problem, in the worst case an earworm of a really crappy song.

  • (Score: 4, Insightful) by acid andy on Friday November 15 2019, @04:30PM (1 child)

    by acid andy (1683) on Friday November 15 2019, @04:30PM (#920713) Homepage Journal

    Can he do it?

    IMHO there are two big hurdles in achieving strong AI.

    Firstly, as I understand it, the number of synaptic connections in a human brain is still impractical to accurately simulate in anything close to real time with current hardware. It's an open question though whether something with a human-like, or at least mammal-like, capacity for general learning can be achieved with a much smaller number of connections and a simplified neural network model.
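    Rough orders of magnitude, using common textbook estimates (the update rate is my own assumption):

        synapses = 1.0e14   # ~100 trillion synaptic connections
        rate_hz  = 10       # assume ~10 updates per synapse per second

        print(f"{synapses * rate_hz:.0e} synaptic updates per second")  # 1e+15
        # Petascale work for every simulated second, even at a single
        # floating-point operation per synaptic update.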

    I think the second hurdle is due to the fact that animal brains and senses have evolved many specialized regions and structures that are pre-engineered (if you like) to excel at various specific tasks, which makes thinking, modeling and responding to the environment much easier and more effective. For this reason I don't think strong AI can be achieved any time soon if the approach is one of just building the largest possible general-purpose neural network or information processing engine, hooking up some sensors and expecting it to just learn.

    I expect such a system could form a big part of it, but the hard work that I think is necessary is to develop all these specialized subsystems - for example something for vision and spatial modeling, something for language processing, maybe something for understanding of time and anticipating future events, and probably something to deal with motivation - emotions, mood and alertness. I think if we can hone all those sorts of subsystems and plug them all into a big neural network we'd really be getting somewhere very interesting. Thanks to big data of course, things like computer vision and probably some bits of language processing might not be too far off. Some of the other subsystems could be a challenge though.
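    As a minimal sketch of that wiring (the module names are just my examples above; the interface is an assumption):

        from typing import Callable, Dict, List

        # Each specialized subsystem pre-digests raw input into features
        # for one shared, general-purpose core (a stand-in stub here).
        Subsystem = Callable[[object], List[float]]

        class Core:
            def __init__(self) -> None:
                self.subsystems: Dict[str, Subsystem] = {}

            def plug_in(self, name: str, module: Subsystem) -> None:
                self.subsystems[name] = module

            def step(self, percept: object) -> List[float]:
                features: List[float] = []
                for module in self.subsystems.values():
                    features.extend(module(percept))   # gather every module's view
                return features                        # would feed the big shared network

        core = Core()
        core.plug_in("vision", lambda p: [0.0])        # placeholder modules
        core.plug_in("language", lambda p: [1.0])
        core.plug_in("motivation", lambda p: [0.5])
        print(core.step("some sensory input"))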

    Should he?

    I really want to see this happen but at the same time I feel sure it will unleash hell simply because of how humanity will misuse it.

    --
    Welcome to Edgeways. Words should apply in advance as spaces are highly limite—
    • (Score: 0) by Anonymous Coward on Saturday November 16 2019, @05:58PM

      by Anonymous Coward on Saturday November 16 2019, @05:58PM (#921014)

      the number of synaptic connections in a human brain is still impractical to accurately simulate in anything close to real time with current hardware.

      How about something simpler? Do we really understand how single-celled creatures work?
      https://www.researchgate.net/publication/259824963_Detailed_Process_of_Shell_Construction_in_the_Photosynthetic_Testate_Amoeba_Paulinella_chromatophora_Euglyphid_Rhizaria [researchgate.net]

      P. chromatophora has a siliceous shell made of brick-like scales. These scales are varied in size and shape. How a P. chromatophora cell makes this shell is still a mystery. We examined shell construction process in P. chromatophora in detail using time-lapse video microscopy. The new shell was constructed by a specialized pseudopodium that laid out each scale into correct position, one scale at a time.

      https://bogology.org/what-we-do/in-the-lab/testate-amoebae/ [bogology.org]
      https://www.youtube.com/watch?v=UlGg2wt-wqI [youtube.com]
      https://www.youtube.com/watch?v=JnlULOjUhSQ [youtube.com]

      If neurons are very stupid and brains are smart just by the organization of neurons, then have humans managed to understand how to create human organizations that are much smarter than the individual brains, and not merely more capable?

      Or perhaps neurons and some other single-celled creatures aren't as stupid as many assume? I'm not claiming they're as smart as humans, but it seems like many are treating neurons as merely dumb components, not very much smarter than transistors.

      Do current AI neurons really work like this: https://www.nature.com/news/2005/050620/full/news050620-7.html [nature.com]
      And this: http://www.nature.com/nrn/journal/v11/n5/abs/nrn2822.html [nature.com]

      the second hurdle is due to the fact that animal brains and senses have evolved many specialized regions and structures that are pre-engineered

      From what I see, lots of animals with brains aren't really that much smarter than the single-celled creatures in some of the videos I linked to (some of those creatures build intricate shells that are typical of their species - not random - and if there isn't enough material for another shell they don't try to stupidly reproduce[1]).

      So one of my hypotheses is that single celled creatures solved the problem of thinking first, and brains were initially more for solving the various problems of controlling a multicellular body. Redundancy, interfacing, signal boosting/processing etc. Only later did brains evolve for smartness.

      And thus if we want to figure out the basics of how brains think, we might be better off starting by figuring out and understanding _thoroughly_ how single-celled creatures think.

      [1] https://archive.org/stream/biologicalbullet70mari/biologicalbullet70mari_djvu.txt [archive.org]

      While experimenting with Pontigulasia vas, it was found that
      reproduction could be prevented if the cultures were kept free of
      substances used in shell construction.

      The following experiments were made to test the effect of culturing
      Pontigulasia vas without shell materials. The cultures were run in
      pairs, one was supplied with powdered sand or glass, the other was not.

      The rest of these Pontigulasia were given sand to determine if
      their power of reproduction had been affected. After some delay
      division took place. An individual from Culture 3 gave a typical
      reaction. This animal made no effort at first to collect shell materials
      but began to do so three days later. By the fourth day it had produced
      a normal offspring. It appears, therefore, that the power of
      reproduction had not been permanently affected.

      During the experiments the actions of the Pontigulasia without
      shell materials were interesting. Much of the time was spent moving
      about on the bottom of the watch glasses without any attempt to feed.
      At such times the pseudopods would become ragged in outline with a
      wide hyaline area at the ends. This type of pseudopod is usually
      associated with the collection of test materials. Undoubtedly these
      animals would have collected sand had it been present. After a day or
      two of such moving about the animals would begin to feed again. At
      other times they would go into a quiescent state for several days before
      feeding.

  • (Score: 0) by Anonymous Coward on Friday November 15 2019, @06:36PM

    by Anonymous Coward on Friday November 15 2019, @06:36PM (#920748)

    Whatever it is, if it can cook, clean, and tell me every day what a great guy I am ... well, I would be inclined to call that intelligent ^_^
