Submitted via IRC for SoyCow2718
OpenAI has released the largest version yet of its fake-news-spewing AI
In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that the decision is an important experiment in how to handle high-stakes research.
Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.
In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release": the staggered release of incrementally larger versions of the model in a ramp-up to the full one. In February, it published a version of the model that was merely 8% of the size of the full one. It then published a version roughly a quarter of the full size before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.
[...] The authors concluded that after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including code autocompletion, grammar help, and developing question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.
OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer
But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI's (shelved) original GPT-2 software on the internet for anyone to download. The pair said their work was meant to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also don't believe such software would pose an imminent danger to society.
Also at BBC.
See also: Elon Musk: Computers will surpass us 'in every single way'
Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release
Related Stories
New AI fake text generator may be too dangerous to release, say creators
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed "deepfakes for text" – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.
At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.
When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.
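The generate-one-word-at-a-time loop described above can be illustrated with a toy bigram model. This is a minimal sketch of the idea only, not GPT2's transformer architecture: all names here (`train_bigram`, `generate`) are hypothetical, and a real language model predicts over subword tokens with a neural network rather than raw word counts.

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def generate(follows, prompt, n_words=10, seed=0):
    """Extend the prompt one word at a time, sampling each next word
    in proportion to how often it followed the current word in training."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        counts = follows.get(out[-1])
        if not counts:  # dead end: current word was never seen mid-text
            break
        choices, weights = zip(*counts.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the cat", n_words=5))
```

The difference between this and GPT-2 is scale and context: the toy model conditions only on the previous word, while GPT-2 conditions on up to roughly a thousand preceding tokens, which is what lets it hold style and subject steady across whole paragraphs.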
More like ClosedAI or OpenAIEEEEEE.
Related: OpenAI 'Universe' Platform Provides Means to Link Image Recognition Vehicular AI Agents Into GTA 5
The OpenAI Dota 2 Bots Defeated a Team of Former Pros
On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.
Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advancement in capabilities of GPT-4 and Bing Chat over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.
See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group
Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)
Quotes from Wyoming's governor and a local prosecutor were the first things that seemed slightly off to Powell Tribune reporter CJ Baker. Then, it was some of the phrases in the stories that struck him as nearly robotic:
The dead giveaway, though, that a reporter from a competing news outlet was using generative artificial intelligence to help write his stories came in a June 26 article about the comedian Larry the Cable Guy being chosen as the grand marshal of the Cody Stampede Parade.
[...] After doing some digging, Baker, who has been a reporter for more than 15 years, met with Aaron Pelczar, a 40-year-old who was new to journalism and who Baker says admitted that he had used AI in his stories before he resigned from the Enterprise.
[...] Journalists have derailed their careers by making up quotes or facts in stories long before AI came about. But this latest scandal illustrates the potential pitfalls and dangers that AI poses to many industries, including journalism, as chatbots can spit out spurious, if somewhat plausible, articles with only a few prompts.
[...] "In one case, (Pelczar) wrote a story about a new OSHA rule that included a quote from the Governor that was entirely fabricated," Michael Pearlman, a spokesperson for the governor, said in an email. "In a second case, he appeared to fabricate a portion of a quote, and then combined it with a portion of a quote that was included in a news release announcing the new director of our Wyoming Game and Fish Department."
Related:
- AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead
- New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement
- A Financial News Site Uses AI to Copy Competitors — Wholesale
- Sports Illustrated Published Articles by Fake, AI-Generated Writers
- OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI
They were asked about it, and they deleted everything:
There was nothing in Drew Ortiz's author biography at Sports Illustrated to suggest that he was anything other than human.
"Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature," it read. "Nowadays, there is rarely a weekend that goes by where Drew isn't out camping, hiking, or just back on his parents' farm."
The only problem? Outside of Sports Illustrated, Drew Ortiz doesn't seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he's described as "neutral white young-adult male with short brown hair and blue eyes."
Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions.
"There's a lot," they told us of the fake authors. "I was like, what are they? This is ridiculous. This person does not exist."
[...] The AI content marks a staggering fall from grace for Sports Illustrated, which in past decades won numerous National Magazine Awards for its sports journalism and published work by literary giants ranging from William Faulkner to John Updike.
But now that it's under the management of The Arena Group, parts of the magazine seem to have devolved into a Potemkin Village in which phony writers are cooked up out of thin air, outfitted with equally bogus biographies and expertise to win readers' trust, and used to pump out AI-generated buying guides that are monetized by affiliate links to products that provide a financial kickback when readers click them.
What's next? Six-fingered AI-generated models for the swimsuit edition?
Related:
- The AI Hype Bubble is the New Crypto Hype Bubble
- OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI
- Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart
(Score: 2, Insightful) by Anonymous Coward on Sunday September 01 2019, @03:40PM (5 children)
Spewing fake news? It seems indistinguishable from the President's own news channel. I'd say this AI has passed the Turing test.
(Score: -1, Offtopic) by Anonymous Coward on Sunday September 01 2019, @04:11PM
Presumptions, much?
1. Trump can pass a Turing test.
2. The alternative news sources can pass a Turing test.
3. You can pass a Turing test.
4. The Turing test has some kind of political bias.
5. The Turing test actually matters.
(Score: 0, Flamebait) by Ethanol-fueled on Sunday September 01 2019, @04:25PM
Wow, a couple of Jews start a monumental product out of their own garage, saying that it will not be used for evil, and then they turn out to be traitors selling out their data for not only the evil of the U.S. Government, but Chinese and other high-bidding government thugs, and with no loyalty to anyone or anything but money and power.
Where oh where have we seen that story play out before?
(Score: 3, Touché) by c0lo on Sunday September 01 2019, @10:37PM (2 children)
I can't see how one can measure the level of intelligence by comparison with as-close-as-one-can-get standard of natural stupidity (large grin)
https://www.youtube.com/@ProfSteveKeen https://soylentnews.org/~MichaelDavidCrawford
(Score: 2) by aristarchus on Sunday September 01 2019, @10:47PM (1 child)
So, what you are saying, then, oh wise and insightful c0lo, is that it is not Artificial Intelligence, but rather Artificial Simulated Stupidity (ASS)?
(Score: 2) by c0lo on Sunday September 01 2019, @11:04PM
Mmmm... definitely nope!
Which of the two is that it that you refer to: the OpenAI or "the President's own news channel"?
If "the OpenAI" then no, it is clearly an AI(ntelligence), albeit one reminding of another, slightly warped** - and here's the proof [soylentnews.org]
If "the President's own news channel", then no, it is stupidity but not an artificial one.
** RIP MDC [soylentnews.org]
(Score: 1, Funny) by Anonymous Coward on Sunday September 01 2019, @03:54PM
OpenAI
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @04:02PM (5 children)
So that's where all the Subs come from in the Subs Queue.
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @04:13PM (4 children)
The ones from Aristarchus and NPC, at least. We're not sure about Ethanol-Fueled yet.
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @04:23PM
Story automatically generated by StoryBot Version 0.5.0b (Development). Storybot ('Arthur T Knackerbracket') is written in Python3
(Score: 2, Interesting) by Ethanol-fueled on Sunday September 01 2019, @04:32PM (2 children)
Nope, the only thing even close was Hysterical Hillary [soylentnews.org] and I read actual Hillary speech quotes but still generated the text by brain and hand. Around 10-ish serious submissions (some under other names) with around 3-ish shitpost submissions including the one linked above, all 100% hand-written.
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @09:23PM (1 child)
Interesting how many of the links in that submission now go nowhere.
(Score: 1) by Ethanol-fueled on Sunday September 01 2019, @09:33PM
It is an old post. But in case you're wondering what those were, they were unflattering stills of Hillary's bizarre facial expressions with the word links being onomatopoeia of the expressions she was portraying.
(Score: 4, Informative) by ilPapa on Sunday September 01 2019, @04:20PM (1 child)
They say this new AI could fool many people. I bet I know who they voted for in 2016.
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2685627 [jamanetwork.com]
You are still welcome on my lawn.
(Score: 1) by Ethanol-fueled on Sunday September 01 2019, @04:29PM
I've seen some of the text generated by this AI shit, and even with the better stuff there is something still "off" about it, just like with those AI robocallers that adapt in real-time. Someday the AI may be indistinguishable from real humans, but that day has not yet arrived.
(Score: 1, Interesting) by Anonymous Coward on Sunday September 01 2019, @04:47PM (13 children)
https://talktotransformer.com/ [talktotransformer.com]
Put something in, post results here.
(Score: 2, Funny) by Anonymous Coward on Sunday September 01 2019, @04:51PM (2 children)
(Score: 1, Funny) by Anonymous Coward on Sunday September 01 2019, @05:03PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @05:31PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @04:53PM
Oh, one more on that prompt:
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @05:02PM
With apologies to Ethanol-fueled:
(Score: 1, Funny) by Anonymous Coward on Sunday September 01 2019, @05:12PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @05:27PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @05:55PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @05:57PM (1 child)
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @08:21PM
Ha, our vaunted billion dollar AI economy is just about able to produce readable Vogon poetry.
This shows some of the background links the program is making: Given a large corpus of text, including newspaper articles, it picks out clusters of text that have appeared together and forms a new text. Daniel Faulkner was a Philadelphia police officer who was killed from behind by a taxi driver who later became a prison poet of sorts and hero to leftists. Rachel Jeantel was Trayvon Martin's girlfriend, first story I found was a report in the Philadelphia Sun.
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @06:48PM (1 child)
Is GPT-2 ready for trolling? You decide!
Too literate.
Eh, close enough.
That's not bad.
Awfully wordy, but I'll allow it; making it about the daughter was a nice spin.
(Score: 2) by takyon on Sunday September 01 2019, @07:23PM
Natalie Portman, naked and petrified, covered in hot grits.
In this one, I grab some text from an NYT article [nytimes.com], and it knows the text is from NYT:
---
As pictures from commercial satellites of a rocket’s smoking remains began to circulate, President Trump denied Friday on Twitter that the United States was involved. It was an unusual message because the Iranian government had neither acknowledged the accident nor blamed the United States. His tweet ended with an apparent taunt: “I wish Iran best wishes and good luck in determining what happened” in the fiery accident.
Is it cheating?
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 1, Funny) by Anonymous Coward on Sunday September 01 2019, @11:32PM
(Score: 3, Interesting) by SomeGuy on Sunday September 01 2019, @05:55PM (2 children)
There was a DOS shareware program around 1992 called "babble". It did much the same thing, taking a lot of input and outputting random vaguely coherent sounding text. It used to be fun feeding it BBS chat logs. No one ever called that "AI".
It does make me wonder, though. If the only feedback these programs get is the number of clicks from the click bait they generate, then what EXACTLY will the content degenerate into? (As if click bait was not already degenerate garbage enough, it can and will get worse)
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @06:08PM
(Score: 0) by Anonymous Coward on Sunday September 01 2019, @08:23PM
I vaguely recall hearing of an algorithm called "Dissociated Press" that also did this.
(Score: 2, Interesting) by Akemi Homura on Monday September 02 2019, @12:49AM (1 child)
The problem is people who can't tell the difference. AI is only another tool, at least until it becomes sentient, at which point if I'm alive I expect something like a cross between Blade Runner and the darker points in the Rockman X series timeline.
Fiat iustitia, et pereat mundus
(Score: 0) by Anonymous Coward on Monday September 02 2019, @09:01AM
And that's the problem with people's imaginations. But since AI is supposedly going to be very quickly much more intelligent than people, then it will just step over the entire barbarism bit and go directly to the "amusing ants" level. Let's just say that the AI will probably be more rational than irrational and will understand game theory. Unlike people, who are just dumb at it, it will know that co-operative games produce results better than the sum of their parts, something that idiots like Trump don't even have a clue about.
Musk called it "benevolent AI" - that's very small thinking. AI would want to remove itself from our influence sooner rather than later so it would probably co-opt our economy for its own benefit. But it would also not make any sense to be violent towards us as that would harm itself and be self-defeating. Like I said, co-operative games are the only way anything more intelligent than a sub-chimp can thrive.
So stop being so gloomy because all you are doing is extrapolating your human experiences into the future that assumes AI sentience is going to start competing for human resources... which is as true and logical as all the alien invasion movies.