posted by chromas on Friday February 15 2019, @12:12PM
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
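
As a rough illustration of that predict-the-next-token loop, here is a minimal sketch of GPT2-style generation, assuming the Hugging Face transformers library and the small GPT2 weights that OpenAI did release publicly (the full model described in the article was withheld):

    # Minimal sketch: autoregressive text generation with the small,
    # publicly released GPT2 model; the full model was not released.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Encode a prompt; the model then predicts one token at a time,
    # each choice conditioned on the prompt plus everything it has
    # generated so far.
    prompt = "The creators of a revolutionary AI system"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    output = model.generate(
        input_ids,
        max_length=80,   # prompt plus continuation, measured in tokens
        do_sample=True,  # sample from the distribution, not just the top token
        top_k=40,        # consider only the 40 likeliest next tokens per step
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))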

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Informative) by ilsa on Friday February 15 2019, @02:39PM (8 children)

    by ilsa (6082) Subscriber Badge on Friday February 15 2019, @02:39PM (#801538)

    Deepfakes have proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

    These technologies have their uses, and people who genuinely need them will figure out a way to make use of them. But they should not be commoditized so that any yahoo can just download and use it.

  • (Score: 3, Touché) by takyon on Friday February 15 2019, @02:54PM (4 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @02:54PM (#801543) Journal

    Deepfakes have proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

    Citation needed. I can think of plenty of great things to do with "these kinds of technologies". Creating photorealistic "actors" from scratch, for instance, or more realistic animations, randomly generated background worlds, the list goes on.

    But they should not be commoditized so that any yahoo can just download and use it.

    Who are you to decide that? Are you prepared to imprison or kill coders for working on "AI" algorithms?

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Friday February 15 2019, @03:15PM (1 child)

      by Anonymous Coward on Friday February 15 2019, @03:15PM (#801556)

      IDK... AC has a point... I also do not want Yahoo to have easy access to this tech; or any tech really...

      • (Score: 3, Informative) by takyon on Friday February 15 2019, @03:31PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @03:31PM (#801565) Journal

        Finally. The end of OPEN SORES. A.I. is so incredibly dangerous, that there is no way that we can allow neckbeards to share OPEN SORES code anymore. Luckily, we can have ClosedAI hire all of the world's top A.I. researchers and provide them a living writing code that will never see the light of day. Anyone who tries to break the A.I. embargo will be captured or killed by the FBI, NSA, CIA, et al. Good thing we have an effective surveillance state to enable that.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Insightful) by ilsa on Friday February 15 2019, @07:08PM (1 child)

      by ilsa (6082) Subscriber Badge on Friday February 15 2019, @07:08PM (#801722)

      Citation needed. I can think of plenty of great things to do with "these kinds of technologies". Creating photorealistic "actors" from scratch, for instance, or more realistic animations, randomly generated background worlds, the list goes on.

      What do you mean, "Citation needed"? Have you been living under a rock and intentionally avoiding all forms of news? Literally, a quick google search for "deepfake abuse" will find countless articles about the damage that is being done. Here: http://lmgtfy.com/?iie=1&q=deepfake+abuse [lmgtfy.com]

      And yes, the examples you cite are exactly what I was thinking of. And studios have the skills and resources to do exactly that, which is why *they are doing it already*. My point is that giving the average person access to this kind of technology will be overwhelmingly more harmful than beneficial, because the average person cannot be trusted to be responsible.

      But they should not be commoditized so that any yahoo can just download and use it.

      Who are you to decide that? Are you prepared to imprison or kill coders for working on "AI" algorithms?

      Who hurt you? Seriously, I've suggested nothing even remotely of the sort and I can't even fathom how you got here. Let coders work on it. Let researchers use it. Just make sure it stays complicated enough that the average person can't use it on a whim.

      • (Score: 2) by takyon on Friday February 15 2019, @07:31PM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday February 15 2019, @07:31PM (#801730) Journal

        "Deepfake abuse" = stuff we don't like, theoretical harm, and mostly First Amendment protected activity. Maybe you can nail people with a harassment, stalking, or child porn charge, but the majority of it should be legal.

        Just make sure it stays complicated enough that the average person can't use it on a whim.

        Who needs to make sure? I welcome someone making it easy enough for the average person to use.

        Given the many thousands of people working on this kind of thing, it only takes one person to decide it should be more accessible and create user-friendly tools toward that end. It can't really be stopped. What are you going to do about it? SWAT them? Make sharing AI algorithms illegal?

        Hiring away AI researchers and hoarding code like OpenAI is doing will only delay the inevitable. Maybe by a few years at most.

        I would celebrate OpenAI being hacked and all of their code being leaked. They can live up to their name that way. Maybe wait a decade or two until they develop "strong AI" and try to keep that from the world by screeching about Terminator.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:16PM (2 children)

    by JoeMerchant (3937) on Friday February 15 2019, @04:16PM (#801617)

    Deepfakes have proven beyond any doubt that making these kinds of technologies easily accessible to the unwashed masses will accomplish nothing but create new avenues for abuse, with zero actual positive value.

    Good for them, but unfortunately this kind of tech is all too copyable, distributable, and readily available to anyone who cares enough to turn over a rock or two to get it, much like encryption and steganography. It's part of the landscape; the world is going to have to deal with it, even if it doesn't get included in the base distribution of Android, iOS, and Windows.

    --
    🌻🌻 [google.com]
    • (Score: 2) by ilsa on Friday February 15 2019, @07:11PM (1 child)

      by ilsa (6082) Subscriber Badge on Friday February 15 2019, @07:11PM (#801724)

      Yeah, that's the problem. You can't close Pandora's box once it's opened. But IMO adding barriers to its usage will at least help limit how quickly and how widely the damage spreads.

      • (Score: 2) by JoeMerchant on Friday February 15 2019, @10:23PM

        by JoeMerchant (3937) on Friday February 15 2019, @10:23PM (#801791)

        But, what kind of barriers? Block publication of academic articles related to advancement of the tech? How about academic articles related to detection of the tech? Or, do we just send out agents to break kneecaps on anyone who's interested in the subject?

        A huge problem with the internet is that information crosses borders freely, and this is basically a pure information play. You can try to erect the Great Firewall of China against it, but nothing like that has been effective to date.

        --
        🌻🌻 [google.com]