
posted by Fnord666 on Friday November 15 2019, @12:26PM   Printer-friendly
from the skynet-anyone? dept.

John Carmack Sets Out To Create General AI

John Carmack, programmer extraordinaire and developer of seminal titles like "Doom" and "Quake", has said "Hasta La Vista" to his colleagues at Oculus to set out for a new challenge. In a Facebook post (https://www.facebook.com/100006735798590/posts/2547632585471243/) he declares that he is going to work on artificial general intelligence.

What are the chances he can pull it off, and what could go wrong?
 

John Carmack Steps Down at Oculus to Pursue AI Passion Project 'Before I Get Too Old':

Legendary coder John Carmack is leaving Facebook's Oculus after six years to focus on a personal project — no less than the creation of Artificial General Intelligence, or "Strong AI." He'll remain attached to the company in a "Consulting CTO" position, but will be spending all his time working on, perhaps, the AI that finally surpasses and destroys humanity.

AGI or strong AI is the concept of an AI that learns much the way humans do, and as such is not as limited as the extremely narrow machine learning algorithms we refer to as AI today. AGI is the science fiction version of AI — HAL 9000, Replicants and, of course, the Terminator. There are some good ones out there, too — Data and R2-D2, for instance.

[...] Carmack announced the move on Facebook, where he explained that the uncertainty about such a fascinating and exciting topic is exactly what attracted him to it:

When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague "line of sight" to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old.

Skynet? Singularity? With great power comes great responsibility. Can he do it? Should he?


Original Submission #1 · Original Submission #2

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Friday November 15 2019, @02:08PM (5 children)

    by Anonymous Coward on Friday November 15 2019, @02:08PM (#920687)

    Please stop using the "AI" terminology for dumb things.
    Asimov had a clear definition of AI: a deterministic logic system, with predesigned methods to treat arbitrary situations by arguments based on a few basic principles.
    It's "artificial" because it's not intelligence, but essentially a predetermined, unique sequence of statements/actions.
    The person on computer support who insists that you reboot your computer before going to the next step in the list is an example of "artificial intelligence" since they only act according to the predetermined list.

    In the meantime, it was discovered that if you use a black box, or a set of black boxes arranged in various logical patterns, you get better results than trying to use just logic.
    And today this is what's being used, with the black boxes being generated in various somewhat random ways.
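    The commenter's point can be sketched in a few lines. The sketch below is illustrative only (the names `make_box`, `run_box`, and `train` are invented for this example): a tiny "black box" of random weights is never told what function to compute; it is simply perturbed at random, and perturbations that make fewer mistakes on a target behaviour (here, logical OR) are kept.

    ```python
    import random

    # The "black box": a tiny, randomly generated linear unit.
    # Nothing in here encodes the target function; the weights are just numbers.
    def make_box():
        return [random.uniform(-1, 1) for _ in range(3)]  # w1, w2, bias

    def run_box(box, x1, x2):
        return 1 if box[0] * x1 + box[1] * x2 + box[2] > 0 else 0

    # The behaviour we want the box to end up exhibiting: logical OR.
    CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    def errors(box):
        return sum(run_box(box, *inp) != out for inp, out in CASES)

    # "Generated in various somewhat random ways": sample random perturbations
    # and keep whichever box makes no more mistakes than the current best.
    def train(seed=0, steps=2000):
        random.seed(seed)
        best = make_box()
        for _ in range(steps):
            candidate = [w + random.uniform(-0.5, 0.5) for w in best]
            if errors(candidate) <= errors(best):
                best = candidate
        return best
    ```

    No line of `train` mentions OR; the logic emerges from random search against a fitness signal, which is the sense in which the result is a "black box" rather than a designed logic system.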

    But there's nothing artificial about this intelligence.
    These are dumb agents, but they work in the same way that our brains do.
    We have highly specialized hardware implementations of various functionalities, which come precombined in certain patterns (involuntary reflexes are a manifestation of these). But we also have general purpose brain-mass that gets wired in arbitrary ways in order to address specific problems that we encounter along the way.

    Nobody is working on artificial intelligence today. People are working on non-biological intelligence, if you want to emphasize that the hardware support is "artificial" (as in made by humans rather than naturally occurring).

  • (Score: 3, Interesting) by ikanreed on Friday November 15 2019, @03:19PM (4 children)

    by ikanreed (3164) on Friday November 15 2019, @03:19PM (#920696) Journal

    This is rambling nonsense. No one crowned Isaac Asimov king of all AI because he wrote some fun mysteries about them.

    AI is the study, in part or in whole, of replicating human intelligence with man-made machines. No more, no less.

    A shitty neural net whose only ability is to classify animals: that's AI.
    An engine built on replicating the structure of human brains with machine circuitry: that's AI.
    A search engine that tries to find the best match for a string with some understanding of semantics: that's AI.
    A dumbass chatbot that is entirely traditional functional programming, but is designed to trick people into thinking it's human: that's fucking AI.
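    The last item in that list describes the oldest trick in the book, ELIZA-style pattern matching. A minimal sketch of the idea (the rules here are invented for illustration; real scripted chatbots use far larger rule sets):

    ```python
    import re

    # Hard-coded (pattern, response-template) pairs. No learning, no model:
    # the captured text is simply echoed back inside a canned reply.
    RULES = [
        (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bbecause (.*)", re.I), "Is that the real reason?"),
    ]
    DEFAULT = "Tell me more."

    def reply(message):
        # First matching rule wins; trailing punctuation is trimmed
        # before splicing the user's own words into the template.
        for pattern, template in RULES:
            match = pattern.search(message)
            if match:
                return template.format(match.group(1).strip().rstrip(".!?"))
        return DEFAULT
    ```

    Entirely "traditional" programming, yet good enough to fool some people some of the time, which is exactly the definitional tension the thread is arguing about.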

    • (Score: 0) by Anonymous Coward on Friday November 15 2019, @03:38PM (1 child)

      by Anonymous Coward on Friday November 15 2019, @03:38PM (#920702)

      It's not rambling nonsense.
      If a future man-made machine shows human-like intelligence, it should have human rights.
      If you guys start out by calling it "artificial," it will have strong repercussions for legal discussions.
      And yes, I think animal rights in general should also depend on level of intelligence.

    • (Score: 2) by stormreaver on Friday November 15 2019, @09:16PM (1 child)

      by stormreaver (5101) on Friday November 15 2019, @09:16PM (#920801)

      AI is the study, in part or in whole, of replicating human intelligence with man-made machines. No more, no less.

      I agree with you here, but then you go on to list three out of four examples that violate your definition. Those three examples are better classified as expert systems, not artificial intelligence.

      I have a different definition of what I would consider true artificial intelligence: a system that combines software and hardware which can be taught to do arbitrary things it wasn't programmed to do, and to do so without any additional programming. It gets even closer if it is programmed with no predispositions, but finds some things it observes to be interesting, and other things to be uninteresting. Such a system would be able to identify its own weaknesses and strengths, and decide what it wants to do about them (if anything).

      The closer we get to that ability, the closer we get to artificial intelligence. At this point, we're not even a single step closer to that than we were 50 years ago.

      • (Score: 2) by ikanreed on Friday November 15 2019, @09:30PM

        by ikanreed (3164) on Friday November 15 2019, @09:30PM (#920810) Journal

        They attempt to replicate one aspect of human intelligence, that's why I said part or whole. Whether the technologies contained therein can scale up to replicate all of it is irrelevant.