
posted by chromas on Friday February 15 2019, @12:12PM   Printer-friendly
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good, and the risk of malicious use so high, that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
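The generation loop described above is just repeated next-token prediction. As a rough illustration only (a toy bigram model, nothing like GPT2's actual transformer architecture), the principle can be sketched like this:

```python
import random

def train_bigrams(corpus):
    """Count which words follow which in a training corpus."""
    words = corpus.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Repeatedly predict a plausible next word from the current one."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no known continuation; stop early
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", length=5))
```

GPT2 does the same thing at vastly greater scale: instead of a lookup table of word pairs, a large neural network conditions on the entire prompt to predict each next token.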

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

  • (Score: 2) by takyon on Friday February 15 2019, @07:31PM


    "Deepfake abuse" = stuff we don't like, theoretical harm, and mostly First Amendment protected activity. Maybe you can nail people with a harassment, stalking, or child porn charge, but the majority of it should be legal.

    "Just make sure it stays complicated enough that the average person can't use it on a whim."

    Who needs to make sure? I welcome someone making it easy enough for the average person to use.

    Given the many thousands of people working on this kind of thing, it only takes one person to decide it should be more accessible and create user-friendly tools toward that end. It can't really be stopped. What are you going to do about it? SWAT them? Make sharing AI algorithms illegal?

    Hiring away AI researchers and hoarding code like OpenAI is doing will only delay the inevitable. Maybe by a few years at most.

    I would celebrate OpenAI being hacked and all of their code being leaked. They can live up to their name that way. Maybe wait a decade or two until they develop "strong AI" and try to keep that from the world by screeching about Terminator.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]