
posted by chromas on Friday February 15 2019, @12:12PM   Printer-friendly
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
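For a concrete sense of what "predicting what should come next" means, here is a minimal sketch of that kind of next-token generation. It assumes the smaller GPT-2 checkpoint that OpenAI later published and the Hugging Face transformers library; the prompt and sampling parameters are illustrative, not taken from the article or from OpenAI's withheld model.

    # Sketch of autoregressive text generation with a small GPT-2 checkpoint.
    # Assumes: pip install transformers torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Feed the system some text, from a few words to a whole page.
    prompt = "The creators of a revolutionary AI system"
    inputs = tokenizer(prompt, return_tensors="pt")

    # The model repeatedly predicts the next token given everything so far,
    # sampling from the most likely candidates at each step.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

The output continues the prompt in a matching style; quality depends heavily on model size, which is part of why only the smaller checkpoints were initially considered safe to release.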

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

 
  • (Score: 2) by ilsa on Friday February 15 2019, @07:11PM (1 child)

    by ilsa (6082) Subscriber Badge on Friday February 15 2019, @07:11PM (#801724)

    Yeah, that's the problem. You can't close Pandora's box once it's opened. But IMO adding barriers to its use will at least help limit how quickly and how widely the damage spreads.

  • (Score: 2) by JoeMerchant on Friday February 15 2019, @10:23PM

    by JoeMerchant (3937) on Friday February 15 2019, @10:23PM (#801791)

    But, what kind of barriers? Block publication of academic articles related to advancement of the tech? How about academic articles related to detection of the tech? Or, do we just send out agents to break kneecaps on anyone who's interested in the subject?

    A huge problem with the internet is that information crosses borders freely, and this is basically a pure information play. You can try to erect the Great Firewall of China against it, but nothing like that has been effective to date.

    --
    🌻🌻 [google.com]