https://www.theverge.com/2020/8/16/21371049/gpt3-hacker-news-ai-blog
College student Liam Porr used the language-generating AI tool GPT-3 to produce a fake blog post that recently landed in the No. 1 spot on Hacker News, MIT Technology Review reported. Porr was trying to demonstrate that the content produced by GPT-3 could fool people into believing it was written by a human. And, he told MIT Technology Review, "it was super easy, actually, which was the scary part."
So to set the stage in case you're not familiar with GPT-3: It's the latest version of a series of AI autocomplete tools designed by San Francisco-based OpenAI, and has been in development for several years. At its most basic, GPT-3 (which stands for "generative pre-trained transformer") auto-completes your text based on prompts from a human writer.
[...] OpenAI decided to give access to GPT-3's API to researchers in a private beta, rather than releasing it into the wild at first. Porr, who is a computer science student at the University of California, Berkeley, was able to find a PhD student who already had access to the API and who agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a blog post headline and intro. It generated a few versions of the post, and Porr chose one for the blog, copy-pasted from GPT-3's version with very little editing.
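For readers curious what such a script looks like in practice, here is a minimal sketch of the kind of workflow described above, assuming the 2020-era `openai` Python package used by private-beta testers. The headline, intro text, engine name, and sampling parameters are illustrative assumptions, not Porr's actual settings.

import openai

openai.api_key = "sk-..."  # placeholder for a private-beta API key

# Hypothetical headline and intro; Porr's real prompts are not published here.
headline = "Why simple habits beat complicated productivity systems"
intro = "Most advice about getting things done adds complexity instead of removing it."
prompt = f"{headline}\n\n{intro}"

# Ask GPT-3 for several candidate continuations so a human can pick
# the most convincing one, mirroring the select-and-post step Porr used.
response = openai.Completion.create(
    engine="davinci",     # base GPT-3 engine in the 2020 beta
    prompt=prompt,
    max_tokens=700,       # roughly a short blog post
    temperature=0.7,      # some variety between candidates
    n=3,                  # generate a few versions of the post
)

for i, choice in enumerate(response["choices"]):
    print(f"--- candidate {i} ---")
    print(choice["text"])

The human-in-the-loop part is simply reading the candidates and copy-pasting the best one; no fine-tuning or post-processing is implied by the article.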
The post went viral in a matter of a few hours, Porr said, and the blog had more than 26,000 visitors. He wrote that only one person reached out to ask if the post was AI-generated, although several commenters did guess GPT-3 was the author.
Previously:
(2020-08-14) OpenAI's New Language Generator GPT-3 is Shockingly Good
(Score: 2) by PiMuNu on Tuesday August 18 2020, @09:16AM (4 children)
It is rather a readable article - pure nonsense of course, but compared to computer-generated articles from just a few years ago it flows rather well. It would be nice if TFA had a link to "raw" unedited output, just so that one can see how much editing has gone on...
(Score: 2) by RamiK on Tuesday August 18 2020, @10:05AM
The last "Shockingly Good" post (linked above) discussed a a machine generated film screenplay and other short stories and news pieces [arr.am] that were good even by professional writers' standards minus a couple of stylistic hiccups that an editor could be improved upon like repeating a noun instead of using an obvious synonym.
The same blog posted a follow-up of sorts expanding on the semi-auto process involved in producing these machine-generated pieces here: https://arr.am/2020/08/11/ai-fan-fiction-or-barry-by-terry-pratchett-gpt-3/ [arr.am]
Regardless, it's here, it works, and if Hollywood wasn't on lockdown right now with all new productions on hold, I'm sure someone would have made some noise about it on a talk show or something.
(Score: 2) by driverless on Wednesday August 19 2020, @04:04AM
It's still a lot better than the older MBR-1 generated texts. EBR-2 was a step along the way, but GPT-3 beats both of them hands down.
(Score: 0) by Anonymous Coward on Wednesday August 19 2020, @06:04AM (1 child)
I think the big picture here is that nobody is reading the voluminous output that our best-and-brightest are forced to write to keep employed nowadays. The goal has shifted to producing semi-believable garbage and nobody in their right mind reads any of it. At best, you assign a 1st year grad. student to waste 6 months of their life - well, life's tough and that's a valuable lesson too.
(Score: 2) by PiMuNu on Wednesday August 19 2020, @06:33AM
No. This was not a research article. It was an article on some junk blog that got aggregated into a news feed.