
SoylentNews is people

posted by chromas on Friday February 15 2019, @12:12PM   Printer-friendly
from the that's-just-what-the-bot-wants-you-to-believe! dept.

New AI fake text generator may be too dangerous to release, say creators

The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.

OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public, in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
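The loop described above — take some text, predict the most likely continuation, append it, repeat — can be illustrated with a deliberately crude sketch. This is not GPT2 (which is a large Transformer network); it is a hypothetical bigram model, with all names made up for illustration, showing only the predict-next-word idea:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record which words follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, seed, length=10, rng=None):
    """Extend the seed word by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no known continuation; stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the"))
```

The gap between this and GPT2 is the predictor: a bigram table only remembers one previous word, while GPT2 conditions on up to a full page of context, which is why its output stays on topic and in style.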

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

More like ClosedAI or OpenAIEEEEEE.

Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by BsAtHome on Friday February 15 2019, @12:34PM (9 children)

    by BsAtHome (889) on Friday February 15 2019, @12:34PM (#801499)

    If they do not release it today, then tomorrow will come another. Not releasing it is just like "security-by-obscurity". It never works and always ends with the knowledge being shared anyway (anybody can get the plans for a bomb or a broom). Any "good" tech can be used for "bad" purposes. Any "bad" tech can be used for "good" purposes. The technology has no concept of good or bad. That is a human construct.

It would be better to release the system, fully documented, and then show how you may defeat it as well. If it can't be defeated, then show ways to cope with such a system in place. The fears that this may be used in an adversarial setting may be reasonable, but /anything/ can be used that way. If a student wants to cheat, he/she will cheat. Just ask the right questions instead of relying on some anonymous writings. If you want to make a fake story or fake news, you do not need this technology. Humans are perfectly capable of producing bullshit.

And then, if these are real researchers, how does "full disclosure" fit into this? How can any other research group reproduce the findings? They are saying: "We struck gold, we won't tell you where, how, or how much, just believe us! Oh, and we don't show the gold either." That is just bad science.

  • (Score: 2) by bzipitidoo on Friday February 15 2019, @01:10PM (5 children)

    by bzipitidoo (4388) on Friday February 15 2019, @01:10PM (#801510) Journal

    I agree. They have an overly high opinion of themselves and their work, if this isn't a cheap marketing ploy.

    Nuclear bombs are frightfully easy to design. Just smash 2 bricks of plutonium or uranium 235 together, that's all it takes. What keeps the lid on that one is the extreme difficulty and expense in obtaining material. The reason for precision is to set off the chain reaction with a minimum amount of material. Lack of precision can be overcome by the costly approach of making the bricks bigger.

    • (Score: 2, Insightful) by khallow on Friday February 15 2019, @01:29PM (1 child)

      by khallow (3766) Subscriber Badge on Friday February 15 2019, @01:29PM (#801519) Journal

      if this isn't a cheap marketing ploy

      That has my vote.

      • (Score: 0) by Anonymous Coward on Friday February 15 2019, @06:48PM

        by Anonymous Coward on Friday February 15 2019, @06:48PM (#801712)

        As soon as I read Elon Musk in the article I rolled my eyes. Enough said.

    • (Score: 2) by JoeMerchant on Friday February 15 2019, @04:09PM

      by JoeMerchant (3937) on Friday February 15 2019, @04:09PM (#801609)

      As I understand it, the radiation is the easy part to handle in making A-bomb components. The chemicals involved in refinement of Uranium, Plutonium, etc. are far more nasty and difficult to deal with - the environmental protection requirements for refinery workers (e.g. yellowcake, centrifuge operators, etc.) are extreme.

      On the other hand, when the robot revolution (singularity?) comes, they're not going to have much trouble at all building bodies that withstand both the radiation and the corrosive/toxic chemicals, not to mention no problems working in the mines... yet another way we are so very very screwed once robots start building and maintaining robots which can themselves build and maintain robots. This AI generated text - propaganda will be just one part of the war which meatbags seem doomed to lose.

      --
      🌻🌻 [google.com]
    • (Score: 1) by Ethanol-fueled on Friday February 15 2019, @04:49PM (1 child)

      by Ethanol-fueled (2792) on Friday February 15 2019, @04:49PM (#801646) Homepage

      You can see the examples. This thing is not even close to being "dangerous," the only danger here is that "fake news" is a fad now. "Fake news," like "racism" or "racist," is a phrase that's been tossed around in all different directions so much it no longer means anything.

AI still sucks when it comes to words. Whether it's generating text like in the TFA example, or analyzing a writing style and generating text from that and a second input, it's barely one step above LaDarius, and only with the gimmick of using AI rather than substitution.

      • (Score: 2) by DeathMonkey on Friday February 15 2019, @06:49PM

        by DeathMonkey (1380) on Friday February 15 2019, @06:49PM (#801713) Journal

        Well I wasn't even sure what threat they were worried about so I took a look at the article.

        It was trained on a dataset containing about 10m articles, selected by trawling the social news site Reddit for links with more than three votes.

Ok, I agree with the researchers now. In fact, we should probably dust off and nuke the entire site from orbit, just to be safe.

  • (Score: 0) by Anonymous Coward on Friday February 15 2019, @01:31PM

    by Anonymous Coward on Friday February 15 2019, @01:31PM (#801520)

    Well, maybe they just wait until that system has written their article for them. ;-)

  • (Score: 2) by meustrus on Friday February 15 2019, @05:04PM (1 child)

    by meustrus (4961) on Friday February 15 2019, @05:04PM (#801654)

It would be better to release the system, fully documented, and then show how you may defeat it as well. If it can't be defeated, then show ways to cope with such a system in place.

    Presumably their desire to "discuss the ramifications of the technological breakthrough" includes finding such ways of defeating it. Obscurity can only be a temporary solution, after all.

Of course this still raises the question of why the research is being announced now, before those discussions are finished. Probably it's a play for more grant money, or, less cynically, a play to get more people involved by broadcasting the topic.

    --
    If there isn't at least one reference or primary source, it's not +1 Informative. Maybe the underused +1 Interesting?
    • (Score: 3, Touché) by captain normal on Friday February 15 2019, @06:22PM

      by captain normal (2205) on Friday February 15 2019, @06:22PM (#801699)

      You think Elon Musk needs to hustle grant money? Maybe they just put too much faith in the power of words.

      --
"Everyone is entitled to his own opinion, but not to his own facts." --Daniel Patrick Moynihan