
posted by takyon on Sunday September 01 2019, @02:58PM   Printer-friendly
from the real-spiel dept.

Submitted via IRC for SoyCow2718

OpenAI has released the largest version yet of its fake-news-spewing AI

In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that it is an important experiment in how to handle high-stakes research.

Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.

In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release"—the staggered release of incrementally larger versions of the model in a ramp-up to the full one. In February, it published a version of the model that was merely 8% of the size of the full one; before the most recent release, it published another that was roughly a quarter of the full size. During this process, it also partnered with selected research institutions to study the full model's implications.

[...] The authors concluded that, after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.
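GPT-2 is an autoregressive language model: it produces text one token at a time, each sampled conditioned on everything generated so far. The toy sketch below illustrates that sampling loop with a simple bigram table standing in for GPT-2's Transformer (which is vastly more capable); the corpus and function names are illustrative, not from OpenAI's code.

```python
import random

# Toy autoregressive text generation: sample each next word
# conditioned on the previous one. GPT-2 conditions on the whole
# context with a Transformer; a bigram table is the simplest stand-in.

corpus = "the model writes text the model reads text the model learns".split()

# Bigram table: word -> list of observed successors (duplicates keep counts).
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(seed_word, length, rng):
    """Sample up to `length` words, starting from `seed_word`."""
    words = [seed_word]
    for _ in range(length - 1):
        successors = bigrams.get(words[-1])
        if not successors:  # dead end: no observed successor
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate("the", 6, random.Random(0)))
```

Scaling this idea up—conditioning on long contexts with a large neural network trained on web text instead of a tiny bigram table—is what makes GPT-2's output fluent enough to raise the misuse concerns discussed above.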

OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer

But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI's (shelved) original GPT-2 software on the internet for anyone to download. The pair said their work was to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also don't believe such software would cause imminent danger to society.

Also at BBC.

See also: Elon Musk: Computers will surpass us 'in every single way'

Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday September 02 2019, @09:01AM (#888769)

    While I'm alive, I expect something like a cross between Blade Runner and the darker points of the Rockman X series timeline.

    And that's the problem with people's imaginations. But since AI is supposedly going to become much more intelligent than people very quickly, it will just step over the entire barbarism bit and go directly to the "amusing ants" level. Let's just say that the AI will probably be more rational than irrational and will understand game theory. Unlike people, who are just dumb at it, it will know that cooperative games produce results better than the sum of their parts, something that idiots like Trump don't even have a clue about.

    Musk called it "benevolent AI" - that's very small thinking. An AI would want to remove itself from our influence sooner rather than later, so it would probably co-opt our economy for its own benefit. But it would also make no sense for it to be violent towards us, as that would harm itself and be self-defeating. Like I said, cooperative games are the only way anything more intelligent than a sub-chimp can thrive.

    So stop being so gloomy, because all you are doing is extrapolating your human experiences into a future that assumes AI sentience is going to start competing for human resources... which is as true and logical as all the alien invasion movies.