
from the all-the-made-up-news-that's-fit-to-print dept.
Quotes from Wyoming's governor and a local prosecutor were the first things that seemed slightly off to Powell Tribune reporter CJ Baker. Then, it was some of the phrases in the stories that struck him as nearly robotic:
The dead giveaway, though, that a reporter from a competing news outlet was using generative artificial intelligence to help write his stories came in a June 26 article about the comedian Larry the Cable Guy being chosen as the grand marshal of the Cody Stampede Parade.
[...] After doing some digging, Baker, who has been a reporter for more than 15 years, met with Aaron Pelczar, a 40-year-old who was new to journalism and who Baker says admitted that he had used AI in his stories before he resigned from the Enterprise.
[...] Journalists have derailed their careers by making up quotes or facts in stories long before AI came about. But this latest scandal illustrates the potential pitfalls and dangers that AI poses to many industries, including journalism, as chatbots can spit out spurious if somewhat plausible articles with only a few prompts.
[...] "In one case, (Pelczar) wrote a story about a new OSHA rule that included a quote from the Governor that was entirely fabricated," Michael Pearlman, a spokesperson for the governor, said in an email. "In a second case, he appeared to fabricate a portion of a quote, and then combined it with a portion of a quote that was included in a news release announcing the new director of our Wyoming Game and Fish Department."
Related:
- AI Threatens to Crush News Organizations. Lawmakers Signal Change Is Ahead
- New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement
- A Financial News Site Uses AI to Copy Competitors — Wholesale
- Sports Illustrated Published Articles by Fake, AI-Generated Writers
- OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI
Related Stories
Submitted via IRC for SoyCow2718
OpenAI has released the largest version yet of its fake-news-spewing AI
In February OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Some within the AI research community argued it was a smart precaution; others wrote it off as a publicity stunt. The lab itself, a small San Francisco-based for-profit that seeks to create artificial general intelligence, has firmly held that it is an important experiment in how to handle high-stakes research.
Now six months later, the policy team has published a paper examining the impact of the decision thus far. Alongside it, the lab has released a version of the model, known as GPT-2, that's half the size of the full one, which has still not been released.
In May, a few months after GPT-2's initial debut, OpenAI revised its stance on withholding the full code to what it calls a "staged release"—the staggered release of incrementally larger versions of the model in a ramp-up to the full one. In February, it published a version of the model that was merely 8% of the size of the full one. It then published another version, roughly a quarter of the size of the full one, before the most recent release. During this process, it also partnered with selected research institutions to study the full model's implications.
[...] The authors concluded that after careful monitoring, OpenAI had not yet found any attempts at malicious use but had seen multiple beneficial applications, including in code autocompletion, grammar help, and developing question-answering systems for medical assistance. As a result, the lab felt that releasing the most recent code was ultimately more beneficial. Other researchers argue that several successful efforts to replicate GPT-2 have made OpenAI's withholding of the code moot anyway.
OpenAI Can No Longer Hide Its Alarmingly Good Robot 'Fake News' Writer
But it may not ultimately be up to OpenAI. This week, Wired magazine reported that two young computer scientists from Brown University—Aaron Gokaslan, 23, and Vanya Cohen, 24—had published what they called a recreation of OpenAI's (shelved) original GPT-2 software on the internet for anyone to download. The pair said their work was meant to prove that creating this kind of software doesn't require an expensive lab like OpenAI (backed by $2 billion in endowment and corporate dollars). They also don't believe such software would cause imminent danger to society.
Also at BBC.
See also: Elon Musk: Computers will surpass us 'in every single way'
Previously: OpenAI Develops Text-Generating Algorithm, Considers It Too Dangerous to Release
They were asked about it, and they deleted everything:
There was nothing in Drew Ortiz's author biography at Sports Illustrated to suggest that he was anything other than human.
"Drew has spent much of his life outdoors, and is excited to guide you through his never-ending list of the best products to keep you from falling to the perils of nature," it read. "Nowadays, there is rarely a weekend that goes by where Drew isn't out camping, hiking, or just back on his parents' farm."
The only problem? Outside of Sports Illustrated, Drew Ortiz doesn't seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he's described as "neutral white young-adult male with short brown hair and blue eyes."
Ortiz isn't the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions.
"There's a lot," they told us of the fake authors. "I was like, what are they? This is ridiculous. This person does not exist."
[...] The AI content marks a staggering fall from grace for Sports Illustrated, which in past decades won numerous National Magazine Awards for its sports journalism and published work by literary giants ranging from William Faulkner to John Updike.
But now that it's under the management of The Arena Group, parts of the magazine seem to have devolved into a Potemkin Village in which phony writers are cooked up out of thin air, outfitted with equally bogus biographies and expertise to win readers' trust, and used to pump out AI-generated buying guides that are monetized by affiliate links to products that provide a financial kickback when readers click them.
What's next? Six-fingered AI-generated models for the swimsuit edition?
Related:
- The AI Hype Bubble is the New Crypto Hype Bubble
- OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI
- Inside the Secret List of Websites That Make AI Like ChatGPT Sound Smart
One of the most highly trafficked financial news websites in the world is creating AI-generated stories that bear an uncanny resemblance to stories published just hours earlier by its competitors:
Investing.com, a Tel Aviv-based site owned by Joffre Capital, is a financial news and information hub that provides a mix of markets data and investing tips and trends. But increasingly, the site has been relying on AI to create its stories, which often appear to be thinly veiled copies of human-written stories published elsewhere.
[...] Pere Monguió, the head of content at FXStreet, told Semafor in an email that he and his team noticed several months ago that Investing was publishing stories similar to their site's articles. FXStreet's 60-person team monitors and quickly analyzes developments in global currencies. By pumping out AI articles, Investing was eroding FXStreet's edge, Monguió said.
"Using AI to rewrite exclusive content from competitors is a threat to journalism and original content creation," he said.
[...] "This isn't truly a new thing," Lawrence Greenberg, senior vice president and chief legal officer at The Motley Fool, said in an email. "We have seen, and acted against, people plagiarizing our content from time to time, and if you're right about what's going on, AI has achieved a level of human intelligence that copies good content and makes it mediocre."
See also: Sports Illustrated Published Articles by Fake, AI-Generated Writers
New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement
The New York Times on Wednesday filed a lawsuit against Microsoft and OpenAI, the company behind popular AI chatbot ChatGPT, accusing the companies of creating a business model based on "mass copyright infringement," stating their AI systems "exploit and, in many cases, retain large portions of the copyrightable expression contained in those works:"
Microsoft both invests in and supplies OpenAI, providing it with access to the Redmond, Washington, giant's Azure cloud computing technology.
The publisher said in a filing in the U.S. District Court for the Southern District of New York that it seeks to hold Microsoft and OpenAI to account for the "billions of dollars in statutory and actual damages" it believes it is owed for the "unlawful copying and use of The Times's uniquely valuable works."
[...] The Times said in an emailed statement that it "recognizes the power and potential of GenAI for the public and for journalism," but added that journalistic material should be used for commercial gain with permission from the original source.
"These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise," the Times said.
Media outlets are crying foul over AI companies using their content to build chatbots. They may find friends in the Senate:
More than a decade ago, the normalization of tech companies carrying content created by news organizations without directly paying them — cannibalizing readership and ad revenue — precipitated the decline of the media industry. With the rise of generative artificial intelligence, those same firms threaten to further tilt the balance of power between Big Tech and news.
On Wednesday, lawmakers in the Senate Judiciary Committee referenced their failure to adopt legislation that would've barred the exploitation of content by Big Tech in backing proposals that would require AI companies to strike licensing deals with news organizations.
Richard Blumenthal, Democrat of Connecticut and chair of the committee, joined several other senators in supporting calls for a licensing regime and for a framework clarifying that intellectual property laws don't protect AI companies that use copyrighted material to build their chatbots.
[...] The fight over the legality of AI firms eating content from news organizations without consent or compensation is split into two camps: Those who believe the practice is protected under the "fair use" doctrine in intellectual property law that allows creators to build upon copyrighted works, and those who argue that it constitutes copyright infringement. Courts are currently wrestling with the issue, but an answer to the question is likely years away. In the meantime, AI companies continue to use copyrighted content as training materials, endangering the financial viability of media in a landscape in which readers can bypass direct sources in favor of search results generated by AI tools.
[...] A lawsuit from The New York Times, filed last month, pulled back the curtain on negotiations over the price and terms of licensing its content. Before suing, the paper said it had been talking for months with OpenAI and Microsoft about a deal, but the talks produced no agreement. Against the backdrop of AI companies crawling the internet for high-quality written content, news organizations have been backed into a corner, forced to decide whether to accept lowball offers to license their content or spend the time and money to sue. Some companies, like Axel Springer, took the money.
It's important to note that under intellectual property laws, facts are not protected.
Also at Courthouse News Service and Axios.
Related:
- New York Times Sues Microsoft, ChatGPT Maker OpenAI Over Copyright Infringement
- Report: Potential NYT lawsuit could force OpenAI to wipe ChatGPT and start over
- Writers and Publishers Face an Existential Threat From AI: Time to Embrace the True Fans Model
(Score: 3, Insightful) by SomeGuy on Friday August 16 2024, @02:31AM (7 children)
Did this idiot really think that the slop that AI produces is somehow perfect?
It's one thing to use one of these slopbots to produce a template or some rambling filler, but at least proofread the damn results.
And don't expect any of the facts to be even remotely accurate, especially when dealing with NEWS. You know, new stuff that a damn AI hasn't been "trained" on.
(Score: 3, Interesting) by khallow on Friday August 16 2024, @02:42AM (6 children)
I think it's more an indication of how low the standards were. He probably thought that no one would know or care enough to double-check his work.
(Score: -1, Flamebait) by Anonymous Coward on Friday August 16 2024, @02:45AM (5 children)
> .. it's more an indication of how low the standards were.
... it's more an indication of how low the standards were in Wyoming.
ftfy
(Score: 1) by khallow on Friday August 16 2024, @03:42AM (4 children)
You point me to a part of the world where that isn't a problem, and I'll buy all their unicorns.
(Score: 0, Funny) by Anonymous Coward on Friday August 16 2024, @04:41AM
Well, there ain't no natural intelligence in Wyoming, so they have to go with the artificial stuff. It's like using margarine instead of butter.
(Score: 3, Funny) by Tork on Friday August 16 2024, @02:41PM (2 children)
I've visited a fair amount of this planet and can tell you... the disregard for Wyoming is universal. 🤡
🏳️🌈 Proud Ally 🏳️🌈
(Score: 1) by khallow on Friday August 16 2024, @11:53PM (1 child)
(Score: 3, Touché) by Tork on Saturday August 17 2024, @12:38AM
🏳️🌈 Proud Ally 🏳️🌈
(Score: 2) by Rosco P. Coltrane on Friday August 16 2024, @03:13AM
Cheating is nothing new: cheating is about trying to get a result without putting in the work.
AI cheating is about trying to cheat without even putting in the work to cheat.
Of course there is a logic in that: if you cheat in the first place, your goal is to minimize the amount of work you do, and AI lets you do even less work.
But mostly I find it just turbocharges the patheticness of the low-effort approach to everything. Super-convincing reviews, articles, videos, images, soundbites... faked with no effort at all: that's AI's biggest claim to fame so far.
(Score: 0, Troll) by Runaway1956 on Friday August 16 2024, @03:13AM (3 children)
Seriously, baby-faced high school grad shows up on a construction site, everyone is telling the kid all the ways he can end his career in one dramatic moment. Larger construction companies put the kid through an orientation class that lasts hours, even days. He is force-fed a diet of "safety safety safety" officially, then unofficially.
At reporter/journalist university, these clowns aren't told about plagiarism? They aren't warned to never make up facts? They don't learn that they are to report the news, not make news? I mean, even a relatively stupid person should absorb those basics in a few days. Maybe they need to publish a brochure for the noobs. "Ten fastest ways to destroy your reporting career". If dummies in construction work can learn, surely reporters can learn?
“I have become friends with many school shooters” - Tampon Tim Walz
(Score: 3, Touché) by Anonymous Coward on Friday August 16 2024, @02:44PM
if they really did that, who would fox news hire?
(Score: 0) by Anonymous Coward on Friday August 16 2024, @03:09PM (1 child)
You complain about making up facts but your sig suggests that fake news is right up your alley.
(Score: 3, Touché) by fliptop on Friday August 16 2024, @03:32PM
You're right, it should say "Two MEN...two women."
Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
(Score: 3, Touché) by KritonK on Friday August 16 2024, @05:15AM
I'm too lazy to write a reply, so I asked an AI to write one for me:
(Score: 2) by PiMuNu on Friday August 16 2024, @07:15AM
Alternative reading of the story:
A journalist managed to pass off a whole load of slop as journalism for a few months and was actually paid, while he spent the time on long vacations and tidying up his garden. Eventually they caught him, by which time his bank balance was well padded and he had a nice tan.