
posted by chromas on Friday February 15 2019, @12:12PM   Printer-friendly
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
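The generate-the-next-token loop described above can be sketched in a few lines. This is only a toy illustration of the autoregressive idea behind GPT2: the real model is a large Transformer trained on millions of documents, while the bigram table here is a hypothetical stand-in predictor.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count word -> next-word transitions from a training corpus."""
    counts = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def generate(counts, prompt, n_words, seed=0):
    """Extend the prompt one word at a time, sampling each next word
    from the transitions observed after the current last word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        candidates = counts.get(out[-1])
        if not candidates:  # no known continuation: stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the system is fed text and asked to write the next few words "
          "based on its predictions of what should come next")
model = train_bigrams(corpus)
print(generate(model, "the system", 5))
# -> the system is fed text and asked
```

GPT2 does the same thing conceptually, but each "next word" distribution is computed from the entire preceding context by a neural network rather than looked up in a table, which is what lets it keep track of style and subject across whole paragraphs.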

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by bzipitidoo on Friday February 15 2019, @01:10PM (5 children)

    by bzipitidoo (4388) on Friday February 15 2019, @01:10PM (#801510) Journal

    I agree. They have an overly high opinion of themselves and their work, if this isn't a cheap marketing ploy.

Nuclear bombs are frightfully easy to design. Just smash 2 bricks of plutonium or uranium-235 together; that's all it takes. What keeps the lid on that one is the extreme difficulty and expense of obtaining the material. The reason for precision is to set off the chain reaction with a minimum amount of material. Lack of precision can be overcome by the costly approach of making the bricks bigger.

  • (Score: 2, Insightful) by khallow on Friday February 15 2019, @01:29PM (1 child)

    by khallow (3766) Subscriber Badge on Friday February 15 2019, @01:29PM (#801519) Journal

    if this isn't a cheap marketing ploy

    That has my vote.

    • (Score: 0) by Anonymous Coward on Friday February 15 2019, @06:48PM

      by Anonymous Coward on Friday February 15 2019, @06:48PM (#801712)

      As soon as I read Elon Musk in the article I rolled my eyes. Enough said.

  • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:09PM

    by JoeMerchant (3937) on Friday February 15 2019, @04:09PM (#801609)

    As I understand it, the radiation is the easy part to handle in making A-bomb components. The chemicals involved in refinement of Uranium, Plutonium, etc. are far more nasty and difficult to deal with - the environmental protection requirements for refinery workers (e.g. yellowcake, centrifuge operators, etc.) are extreme.

    On the other hand, when the robot revolution (singularity?) comes, they're not going to have much trouble at all building bodies that withstand both the radiation and the corrosive/toxic chemicals, not to mention no problems working in the mines... yet another way we are so very very screwed once robots start building and maintaining robots which can themselves build and maintain robots. This AI generated text - propaganda will be just one part of the war which meatbags seem doomed to lose.

  • (Score: 1) by Ethanol-fueled on Friday February 15 2019, @04:49PM (1 child)

    by Ethanol-fueled (2792) on Friday February 15 2019, @04:49PM (#801646) Homepage

    You can see the examples. This thing is not even close to being "dangerous," the only danger here is that "fake news" is a fad now. "Fake news," like "racism" or "racist," is a phrase that's been tossed around in all different directions so much it no longer means anything.

AI still sucks when it comes to words. Whether it's generating text as in TFA's example, or analyzing a writing style and generating text from that plus a second input, it's barely one step above LaDarius, and only with the gimmick of using AI rather than substitution.

    • (Score: 2) by DeathMonkey on Friday February 15 2019, @06:49PM

      by DeathMonkey (1380) on Friday February 15 2019, @06:49PM (#801713) Journal

      Well I wasn't even sure what threat they were worried about so I took a look at the article.

      It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes.

Ok, I agree with the researchers now. In fact, we should probably dust off and nuke the entire site from orbit, just to be safe.