
posted by Fnord666 on Friday January 20 2017, @04:09PM   Printer-friendly
from the it's-AIs-all-the-way-down dept.

Submitted via IRC for TheMightyBuzzard

Google and others think software that learns to learn could take over some work done by AI experts.

Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google's other artificial intelligence research group, DeepMind.

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Insightful) by ledow on Friday January 20 2017, @04:39PM

    by ledow (5567) on Friday January 20 2017, @04:39PM (#456590) Homepage

    Is this really "new"? Genetic algorithms were training and selecting GAs 20+ years ago when I was just a CS student.

    It was an obvious "insight" even then; I know because I had it early on when learning about GAs, and then discovered people had already been doing it for years.
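
    To be concrete, the kind of thing I mean is a toy sketch like the one below (all names and numbers made up, nothing from the article or any particular paper): an outer GA whose individuals are settings for an inner GA, selected by how well the inner GA performs when run with them.

        import random

        def inner_ga(mutation_rate, tournament_size, length=40, pop_size=30, gens=50):
            # Plain "child" GA on OneMax (maximise the number of 1-bits); returns the best fitness found.
            def fitness(bits):
                return sum(bits)
            population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
            for _ in range(gens):
                new_population = []
                for _ in range(pop_size):
                    # Tournament selection, then per-bit mutation.
                    parent = max(random.sample(population, tournament_size), key=fitness)
                    child = [bit ^ (random.random() < mutation_rate) for bit in parent]
                    new_population.append(child)
                population = new_population
            return max(fitness(ind) for ind in population)

        def outer_ga(outer_pop=8, outer_gens=10):
            # The "parent" GA: each individual is a (mutation_rate, tournament_size) setting
            # for the inner GA, scored by how well the inner GA does when run with it.
            settings = [(random.uniform(0.001, 0.3), random.randint(2, 6)) for _ in range(outer_pop)]
            for _ in range(outer_gens):
                ranked = sorted(settings, key=lambda s: inner_ga(*s), reverse=True)
                survivors = ranked[: outer_pop // 2]
                mutants = [(max(0.001, m * random.uniform(0.5, 1.5)),
                            max(2, t + random.choice([-1, 0, 1])))
                           for m, t in survivors]
                settings = survivors + mutants
            return max(settings, key=lambda s: inner_ga(*s))

        print("evolved inner-GA settings:", outer_ga())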

    The problem is that improvements like that are slow at the best of times, and the process is totally uncontrolled, so the result is often neither very usable nor predictable. As soon as you set it to work on a stock market, or chip design, or whatever other task, you can't rely on its results no matter how deeply you trained it, or it trained itself.

    And, despite making large advances, we still don't actually have AI or anything like it. We have only a sufficiently advanced magic show.

    And - like interviewing using fake interview questions - all that happens if you train a piece of software to create something that passes a certain test is that you get software "good" at passing tests. Not intelligent. Not necessarily useful. Not even actually understanding the concept, purpose, theory or structure of the test: it just learns how to get a bigger score on a certain test. Which could be achieved by anything from fudging the results, to breaking the test, to exploiting a weakness in the scoring, to outright cheating, such that it passes the test but is still no use at processing the language itself.

  • (Score: 0) by Anonymous Coward on Friday January 20 2017, @05:33PM

    by Anonymous Coward on Friday January 20 2017, @05:33PM (#456618)

    It may be obvious, but someone has to do it. Props to the authors for trying this, even if they did not yet manage to build SkyNet.

  • (Score: 2) by slinches on Friday January 20 2017, @06:41PM

    by slinches (5049) on Friday January 20 2017, @06:41PM (#456638)

    How is that any different from how humans behave?

  • (Score: 1) by Weasley on Friday January 20 2017, @07:01PM

    by Weasley (6421) on Friday January 20 2017, @07:01PM (#456651)

    The only difference between this and us is scale. We're just the result of millions of years of nature training us to pass a lot of tests. The media and some people would have us believe that AI will soon be able to pass all the same tests we can, but I think that's much further out, though I'm not extremely confident in that. Algorithms training algorithms might actually start a Kurzweilian utopia/dystopia/exponentopia.

  • (Score: 0) by Anonymous Coward on Friday January 20 2017, @07:47PM

    by Anonymous Coward on Friday January 20 2017, @07:47PM (#456668)

    GAs training and selecting GAs? This makes very little sense to me. The evaluator of the 'parent' is presumably optimizing towards the same evaluation as the 'child' (and if not, then what is the point?), in which case you're not actually picking and choosing anything, just exponentially increasing the timescale of an elementary GA for no real reason, since it's basically doing some sort of very complex g(f(a)) to get f(a)! If you can reference any papers or even articles it would be much appreciated, because I have a rather difficult time imagining this actually being done.
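
    For what it's worth, the only version I can even picture is something like the toy sketch below (names and numbers entirely made up, and a dumb hill climb standing in for the inner GA to keep it short), where the outer level scores a search strategy by how the inner optimiser does across several freshly drawn problems, rather than re-scoring a single solution:

        # Toy illustration only, not from any of the papers being discussed.
        import random

        def inner_search(target, step_size, iters=200):
            # Inner optimiser: fixed-step random hill climb minimising (x - target)^2 on ONE problem.
            x = 0.0
            for _ in range(iters):
                candidate = x + random.uniform(-step_size, step_size)
                if (candidate - target) ** 2 < (x - target) ** 2:
                    x = candidate
            return (x - target) ** 2  # final error on this problem

        def strategy_score(step_size, trials=20):
            # Outer objective: how well the inner optimiser does on average over many problems
            # when handed this step size. This is not the same function as any single inner fitness.
            return sum(inner_search(random.uniform(-10, 10), step_size) for _ in range(trials)) / trials

        for step in (0.01, 0.1, 1.0, 5.0):
            print(f"step={step}: mean final error {strategy_score(step):.3f}")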

  • (Score: 2) by JoeMerchant on Friday January 20 2017, @09:09PM

    by JoeMerchant (3937) on Friday January 20 2017, @09:09PM (#456709)

    The problem with algorithmic approaches, whether GAs or others, is that they may make generational gains in efficiency or accuracy, but those gains have (historically) always tapered off and reached some asymptotic limit.

    If there ever is an algorithm that continues to improve with each successive generation without human intervention, that would be news. It would also likely be SkyNet, and so would probably take control of the news to protect itself.

    --
    🌻🌻 [google.com]