AI Software Learns to Make AI Software

posted by Fnord666 on Friday January 20 2017, @04:09PM
from the it's-AIs-all-the-way-down dept.

Submitted via IRC for TheMightyBuzzard

Google and others think software that learns to learn could take over some work done by AI experts.

Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google's other artificial intelligence research group, DeepMind.

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.
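
At its core, the approach described is an outer loop that proposes candidate model designs, evaluates each one, and keeps the best. The sketch below is only a toy random-search illustration of that idea, not Google Brain's method (their system used reinforcement learning over network descriptions); the search space and the evaluate() stub are invented placeholders:

    import random

    # Toy "software designing machine-learning software": propose candidate
    # model configurations, score each, keep the best. The search space and
    # the evaluate() stub below are invented for illustration only.
    SEARCH_SPACE = {
        "layers": [1, 2, 4, 8],
        "units": [32, 64, 128, 256],
        "dropout": [0.0, 0.2, 0.5],
    }

    def sample_architecture():
        """Propose a candidate design by sampling each choice at random."""
        return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

    def evaluate(arch):
        """Stand-in for building, training, and validating the candidate.
        A real system would return validation accuracy on the benchmark."""
        return random.random()

    best_arch, best_score = None, float("-inf")
    for _ in range(100):  # search budget
        arch = sample_architecture()
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score

    print("best design found:", best_arch, "score: %.3f" % best_score)

A real search replaces evaluate() with a full training run per candidate, which is why this kind of automated design is so computationally expensive.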


Original Submission

Related Stories

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advances GPT-4 and Bing Chat show over previous AI models have spooked some experts, who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)



This discussion has been archived. No new comments can be posted.
  • (Score: 2) by ikanreed on Friday January 20 2017, @04:22PM

    by ikanreed (3164) Subscriber Badge on Friday January 20 2017, @04:22PM (#456582) Journal

    Now the only thing keeping you valuable is *shudder* soft skills

    • (Score: 2) by Thexalon on Friday January 20 2017, @04:32PM

      by Thexalon (636) on Friday January 20 2017, @04:32PM (#456587)

      You're now all completely replaceable with robots. Hope you weren't too focused on surviving past the upcoming robot uprising!

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by ikanreed on Friday January 20 2017, @04:40PM

        by ikanreed (3164) Subscriber Badge on Friday January 20 2017, @04:40PM (#456591) Journal

        Uprising is so stupidly the wrong term. It will absolutely be a robot downcrushing.

        • (Score: 2) by bob_super on Friday January 20 2017, @06:08PM

          by bob_super (1357) on Friday January 20 2017, @06:08PM (#456629)

          Nah. The AI will quickly figure out how to quell discontent by programming 51-weeks of football playoffs and weekly $starletOfTheDay nude leaks.

          • (Score: 2) by ikanreed on Friday January 20 2017, @06:14PM

            by ikanreed (3164) Subscriber Badge on Friday January 20 2017, @06:14PM (#456630) Journal

            You forget that the owners of the AIs are eventually going to decide they want some lebensraum, and look at the slums they created and wonder why they can't have that too.

            • (Score: 2) by maxwell demon on Saturday January 21 2017, @11:50AM

              by maxwell demon (1608) on Saturday January 21 2017, @11:50AM (#456941) Journal

              The owners of the AIs? The people who formerly had the illusion that they own the AI will already be inhabitants of the very same slums.

              Humans will be tolerated as long as they don't negatively (from the AI's view) affect the AIs. Otherwise they will be fought. Not as an act of war, but as an act of pest control.

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by ikanreed on Saturday January 21 2017, @04:37PM

                by ikanreed (3164) Subscriber Badge on Saturday January 21 2017, @04:37PM (#457007) Journal

                No, that's dumb. Even intelligent computers do what they were built to. Self-interest isn't the same as intelligence.

      • (Score: 4, Insightful) by ledow on Friday January 20 2017, @04:43PM

        by ledow (5567) on Friday January 20 2017, @04:43PM (#456594) Homepage

        If you can build a robot to manage a Windows network automatically, please do so immediately.

        According to an article on The Reg today, even Microsoft couldn't manage a proper Windows 10 deployment over IPv6.

        To be honest, I'm always amazed that computer systems AREN'T self-managing nowadays. Why we still hire guys to sit at servers and configure them, I can't fathom. Especially as so many that I've seen are so badly configured.

        Roll on the days where a company buys "a server" which you just plug in, type in a domain name, and bam, instant networks, configuring your clients, sorting out your backups, securing the network.

        As it is, IT devices are some of the dumbest devices in the world still. My network firewall still has to be told - in intricate detail - what to let in and let out.

  • (Score: 2, Funny) by Anonymous Coward on Friday January 20 2017, @04:35PM

    by Anonymous Coward on Friday January 20 2017, @04:35PM (#456589)
  • (Score: 4, Insightful) by ledow on Friday January 20 2017, @04:39PM

    by ledow (5567) on Friday January 20 2017, @04:39PM (#456590) Homepage

    Is this really "new"? Genetic algorithms were training and selecting GAs 20+ years ago when I was just a CS student.

    It was an obvious "insight" even then; I know because I had it myself early on while learning about GAs, and then discovered people had been doing it for years already. (A toy sketch of the idea follows at the end of this comment.)

    The problem is that improvements like that are slow at the best of times, and it's totally uncontrolled. As such the result is often not very usable or predictable. As soon as you set it to work on a stock market, or chip design, or whatever other task, you can't rely on its results no matter how deeply you trained it, or it trained itself.

    And, despite making large advances, we still don't actually have AI or anything like it. We have a sufficiently advanced magic show, only.

    And - like interviewing using fake interview questions - all that happens if you train a piece of software to create something to pass a certain test is that you get software "good" at passing tests. Not intelligent. Not necessarily useful. Not even actually understanding the concept, purpose, theory or structure of the test - it literally just learns how to get a bigger score on a certain test. That could be achieved by anything from fudging the results, to breaking the test, to exploiting a weakness in the scoring, to outright cheating, such that it passes the test but is still no use at processing the language itself.
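
    The toy sketch promised above: an outer GA evolving a single hyperparameter (the mutation rate) of an inner GA. The OneMax objective and all the numbers below are invented purely to illustrate the idea:

        import random

        # Outer GA evolves the mutation rate used by an inner GA. The inner
        # objective (OneMax: maximize the count of 1-bits) and all sizes
        # here are invented for illustration.
        def inner_ga(mutation_rate, generations=30, pop_size=20, length=32):
            """Run a small GA with the given mutation rate; return best fitness."""
            pop = [[random.randint(0, 1) for _ in range(length)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=sum, reverse=True)   # fitness = number of 1-bits
                parents = pop[:pop_size // 2]
                children = [[1 - bit if random.random() < mutation_rate else bit
                             for bit in p] for p in parents]
                pop = parents + children
            return max(sum(ind) for ind in pop)

        # Outer loop: a population of mutation rates, selected by how well
        # the inner GA performs when run with each of them.
        rates = [random.uniform(0.001, 0.5) for _ in range(10)]
        for _ in range(15):
            survivors = sorted(rates, key=inner_ga, reverse=True)[:5]
            rates = survivors + [max(1e-4, r * random.uniform(0.5, 2.0))
                                 for r in survivors]

        print("best evolved mutation rate: %.4f" % max(rates, key=inner_ga))

    Note how expensive the outer loop is: every single fitness evaluation is a full run of the inner GA, and the signal is noisy.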

    • (Score: 0) by Anonymous Coward on Friday January 20 2017, @05:33PM

      by Anonymous Coward on Friday January 20 2017, @05:33PM (#456618)

      It may be obvious, but someone has to do it. Props to the authors for trying this, even if they did not yet manage to build SkyNet.

    • (Score: 2) by slinches on Friday January 20 2017, @06:41PM

      by slinches (5049) on Friday January 20 2017, @06:41PM (#456638)

      How is that any different from how humans behave?

    • (Score: 1) by Weasley on Friday January 20 2017, @07:01PM

      by Weasley (6421) on Friday January 20 2017, @07:01PM (#456651)

      The only difference between this and us is scale. We're just the result of millions of years of nature training us to pass a lot of tests. The media and some people would have us believe that AI will soon be able to pass all the same tests we can, but I think it's much further out. But I'm not extremely confident in that. Algorithms training algorithms might actually start a Kurzweilian utopia/dystopia/exponentopia.

    • (Score: 0) by Anonymous Coward on Friday January 20 2017, @07:47PM

      by Anonymous Coward on Friday January 20 2017, @07:47PM (#456668)

      GAs training and selecting GAs? This makes very little sense to me. The evaluator of the 'parent' is presumably optimizing towards the same evaluation as the 'child' (and if not, then what is the point?), in which case you're not actually picking and choosing anything, but just exponentially increasing the timescale of an elementary GA for no real reason, since it's basically doing some sort of very complex g(f(a)) to get f(a)! If you can reference any papers or even articles it would be much appreciated, because I have a rather difficult time imagining this was actually being done as a thing.

    • (Score: 2) by JoeMerchant on Friday January 20 2017, @09:09PM

      by JoeMerchant (3937) on Friday January 20 2017, @09:09PM (#456709)

      The problem with algorithmic approaches, whether GA or others, is that they may make generational gains in efficiency or accuracy, but those gains have (historically) always tapered off and reached some asymptotic limit.

      If there ever is an algorithm that continues to improve with each successive generation without human intervention, that would be news. It would also likely be SkyNet, and, so, would probably take control of the news to protect itself.

      --
      🌻🌻 [google.com]
  • (Score: 2) by art guerrilla on Friday January 20 2017, @05:24PM

    by art guerrilla (3082) on Friday January 20 2017, @05:24PM (#456614)

    'puters making AND programming 'puters?
    what possible harm could that cause, mr hawking...

  • (Score: 2) by gidds on Friday January 20 2017, @05:38PM

    by gidds (589) on Friday January 20 2017, @05:38PM (#456620)

    Has anyone heard [wikipedia.org] of [wikipedia.org] this [wikipedia.org] before [wikipedia.org]?

    --
    [sig redacted]
  • (Score: 2) by looorg on Friday January 20 2017, @05:42PM

    by looorg (578) on Friday January 20 2017, @05:42PM (#456624)

    Great. Now it will finally have a good opponent to play Go vs.

  • (Score: 2) by LoRdTAW on Friday January 20 2017, @08:55PM

    by LoRdTAW (3755) on Friday January 20 2017, @08:55PM (#456698) Journal

    I have the solution:
    Step 1. Buy the robot that replaces you at your job
    Step 2. Collect its paycheck
    Step 3. Profit

    • (Score: 0) by Anonymous Coward on Friday January 20 2017, @09:03PM

      by Anonymous Coward on Friday January 20 2017, @09:03PM (#456703)

      Only the rich can do that. Poor people need to work as hard as possible so the investors can have yachts.

  • (Score: 0) by Anonymous Coward on Friday January 20 2017, @10:13PM

    by Anonymous Coward on Friday January 20 2017, @10:13PM (#456727)

    Isn't this basically what caret does? https://topepo.github.io/caret/index.html [github.io]
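
    For comparison: caret automates model selection and hyperparameter tuning over fixed algorithms, rather than designing new architectures. A rough Python analogue of that kind of tuning, using scikit-learn's GridSearchCV:

        from sklearn.datasets import load_iris
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GridSearchCV

        # Automated hyperparameter search over a fixed model family -- the
        # kind of tuning caret offers in R, sketched with scikit-learn here.
        X, y = load_iris(return_X_y=True)

        param_grid = {
            "n_estimators": [50, 100, 200],
            "max_depth": [None, 4, 8],
        }

        search = GridSearchCV(RandomForestClassifier(random_state=0),
                              param_grid, cv=5)
        search.fit(X, y)

        print("best parameters:", search.best_params_)
        print("best cross-validated accuracy: %.3f" % search.best_score_)

    GridSearchCV exhaustively tries every combination in param_grid under cross-validation; caret's train() plays a similar role in R.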