200 percent BuzzFeed stock rise might signal start of a "pivot to AI" media trend:
On Thursday, an internal memo obtained by The Wall Street Journal revealed that BuzzFeed is planning to use ChatGPT-style text synthesis technology from OpenAI to create individualized quizzes and potentially other content in the future. After the news hit, BuzzFeed's stock rose 200 percent. On Friday, BuzzFeed formally announced the move in a post on its site.
[...] "The creative process will increasingly become AI-assisted and technology-enabled. If the past 15 years of the internet have been defined by algorithmic feeds that curate and recommend content, the next 15 years will be defined by AI and data helping create, personalize, and animate the content itself. Our industry will expand beyond AI-powered curation (feeds), to AI-powered creation (content). AI opens up a new era of creativity, where creative humans like us play a key role providing the ideas, cultural currency, inspired prompts, IP, and formats that come to life using the newest technologies."
Related Stories
The tool could let teachers spot plagiarism or help social media platforms fight disinformation bots:
Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we're reading are written by a human or not.
These "watermarks" are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.
For example, since OpenAI's chatbot ChatGPT was launched in November, students have already started cheating by using it to write essays for them. News website CNET has used ChatGPT to write articles, only to have to issue corrections amid accusations of plagiarism. Building the watermarking approach into such systems before they're released could help address such problems.
In studies, these watermarks have already been used to identify AI-generated text with near certainty. Researchers at the University of Maryland, for example, were able to spot text created by Meta's open-source language model, OPT-6.7B, using a detection algorithm they built. The work is described in a paper that's yet to be peer-reviewed, and the code will be available for free around February 15.
[...] There are limitations to this new method, however. Watermarking only works if it is embedded in the large language model by its creators right from the beginning. Although OpenAI is reputedly working on methods to detect AI-generated text, including watermarks, the research remains highly secretive. The company doesn't tend to give external parties much information about how ChatGPT works or was trained, much less access to tinker with it. OpenAI didn't immediately respond to our request for comment.
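The Maryland paper's approach (per its public description) biases a model toward a pseudorandom "green list" of tokens seeded by the preceding token, then detects the watermark by counting how often that bias shows up. Below is a minimal toy sketch of that detection idea only; the hashing scheme, token handling, and thresholds here are illustrative assumptions, not the researchers' actual algorithm.

```python
import hashlib
import math

def _token_hash(token: str) -> int:
    """Deterministic integer hash of a token (illustrative choice)."""
    return int(hashlib.sha256(token.encode()).hexdigest(), 16)

def green_fraction(tokens: list[str], green_ratio: float = 0.5) -> float:
    """Fraction of tokens falling in the 'green list' derived from the
    previous token. Watermarked text should sit well above green_ratio;
    ordinary text should land near it."""
    if len(tokens) < 2:
        return 0.0
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Seed the vocabulary partition on the previous token, then test
        # whether the current token lands in the green half.
        seed = _token_hash(prev)
        if (seed ^ _token_hash(cur)) % 1000 < green_ratio * 1000:
            hits += 1
    return hits / (len(tokens) - 1)

def watermark_z_score(frac: float, n: int, green_ratio: float = 0.5) -> float:
    """One-proportion z-test: how far the observed green fraction sits
    above chance. A large positive z suggests the text is watermarked."""
    return (frac - green_ratio) * math.sqrt(n) / math.sqrt(
        green_ratio * (1 - green_ratio)
    )
```

The key property is that detection needs no access to the model itself, only knowledge of the seeding scheme; a z-score of, say, 4 or more over a few hundred tokens would flag the text as almost certainly watermarked. This is also why the method only works if the model's creators embed it from the start.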
Related:
- BuzzFeed Preps AI-Written Content While CNET Fumbles
- Seattle Public Schools Bans ChatGPT; District 'Requires Original Thought and Work From Students'
- ChatGPT Arrives in the Academic World
- OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of
These read like a proof of concept for replacing human writers:
Earlier this year, when BuzzFeed announced plans to start publishing AI-assisted content, its CEO Jonah Peretti promised the tech would be held to a high standard.
"I think that there are two paths for AI in digital media," Peretti told CNN. "One path is the obvious path that a lot of people will do — but it's a depressing path — using the technology for cost savings and spamming out a bunch of SEO articles that are lower quality than what a journalist could do, but a tenth of the cost."
[...] Indeed, the first AI content BuzzFeed published — a series of quizzes that turned user input into customized responses — was an interesting experiment, avoiding many of the missteps that other publishers have made with the tech.
It doesn't seem like that commitment to quality has held up, though. This month, we noticed that with none of the fanfare of Peretti's multiple interviews about the quizzes, BuzzFeed quietly started publishing fully AI-generated articles that are produced by non-editorial staff — and they sound a lot like the content mill model that Peretti had promised to avoid.
[...] A BuzzFeed spokesperson told us that the AI-generated pieces are part of an "experiment" the company is doing to see how well its AI writing assistance incorporates statements from non-writers.
The linked article includes many laughable examples of bland and similar phrases in multiple stories published on the site.
Previously: BuzzFeed Preps AI-Written Content While CNET Fumbles
(Score: 2) by PiMuNu on Wednesday February 08, @10:05AM (5 children)
If all the media outlets move to AI to generate content, who writes the training data (i.e. the actual content)?
(Score: 3, Insightful) by Ox0000 on Wednesday February 08, @10:55AM (1 child)
In 5-10 years' time, 90% of the internet will be AIs talking to AIs, training themselves on AI-generated content. The results will probably be similar to what happens with biological inbreeding.
We will also yearn for how 'little' energy the cryptocurrency craze of the '20s wasted in comparison to what AI will consume. The returns on this wasted energy will most likely track those of the 2020s NFT madness.
And if — big if — we ever search for "how to turn off the internet", we will only find articles extolling the virtues of AIs and how dangerous the road ahead would be without them. These articles will be convincing enough to fool all but a few.
I, for one, do not welcome our new AI overlords...
(Score: 0) by Anonymous Coward on Wednesday February 08, @05:43PM
I've seen the future: it is like cable television at the end of the movie, when they insert a 5-minute block of adverts every 5 minutes. Except the content is replaced with regurgitated advertainment targeted to whatever your phone recorded you talking about last.
(Score: 2) by JoeMerchant on Wednesday February 08, @01:03PM
Journalism has been a mess of copy-pasta for decades now. Automation of the process will put it in full Ouroboros mode.
I foresee AI trained to recognize novel content, which hopefully would then be fed to meatbags for ground truth validation, but more likely will just get highlighted and fed to news junkies regardless of any basis in fact, fantasy, or malicious fabrication.
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by looorg on Wednesday February 08, @03:31PM (1 child)
There is probably enough content around already for ChatGPT etc. to train on to fill BuzzFeed, CNET, and the like — certainly for the bland news snippets and quizzes. It doesn't need an expensive human to do any of that.
(Score: 0) by Anonymous Coward on Wednesday February 08, @05:48PM
Who reads this shit anyway? Isn't the lie of "content" being exposed as worthless chum to drive views of the propaganda?
(Score: 2) by legont on Thursday February 09, @02:39AM
Can't copyright trolls kill AI "generated" content? Most stuff in training sets is copyrighted, isn't it? Or perhaps die trying.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.