from the this-department-was-generated-using-human-technology dept.
"This article was generated using automation technology," reads a dropdown description:
Next time you're on your favorite news site, you might want to double check the byline to see if it was written by an actual human.
CNET, a massively popular tech news outlet, has been quietly employing the help of "automation technology" — a stylistic euphemism for AI — on a new wave of financial explainer articles, seemingly starting around November of last year.
[...] The articles are published under the unassuming appellation of "CNET Money Staff," and encompass topics like "Should You Break an Early CD for a Better Rate?" or "What is Zelle and How Does It Work?"
That byline obviously does not paint the full picture, and so the average reader visiting the site likely would have no idea that what they're reading is AI-generated. It's only when you click on "CNET Money Staff" that the actual "authorship" is revealed.
"This article was generated using automation technology," reads a dropdown description, "and thoroughly edited and fact-checked by an editor on our editorial staff."
Since the program began, CNET has put out around 73 AI-generated articles. That's not a whole lot for a site that big, and absent an official announcement of the program, it appears leadership is trying to keep the experiment as low-key as possible. CNET did not respond to questions about the AI-generated articles.
[...] Nonetheless, AP's justification for using AI — and a talking point being adopted across the industry — is that it frees up journalists and other staff from having to write tedious recaps. In reality, it's hard to believe that the technology would forever be limited to a cure for tedium and never intrude on "real" writing jobs.
Now, looking at the explainers that CNET has generated using AI, it looks like that goalpost has already shifted — and may never return.
(Score: 5, Interesting) by canopic jug on Saturday January 14, @12:48PM (4 children)
Well that sure explains a lot. I stopped following CNet a while back because their articles were just a bunch of words without a point or a flow. If you slowly read through them, they probably can be bent into making some kind of sense. But if you skim through them quickly they are just a word salad, albeit a high quality word salad, without a clear point or meaning.
It's not clever to dump this waste onto the public web without proper machine-readable tags. Regardless of the quality, leaving this crap unlabeled poisons the data sets used for training: bots writing word salad which other bots will ingest and use as training data to produce their own n-generation word salad. That only takes us further and further from programs that can actually understand and produce English, while risking a harmful feedback loop [indiatimes.com].
Take a step back and ask, who should control what? Should we control the computers and their actions? Or should the computers control and manipulate us? I decided long ago that it is we that should control the computers and use them as tools for amplifying skill and knowledge. That's an extreme minority position, but I'll continue to stick to it. CNet is now formally blacklisted as far as my reading goes.
Money is not free speech. Elections should not be auctions.
(Score: 1, Informative) by Anonymous Coward on Saturday January 14, @01:29PM (2 children)
They only did it for some uncredited financial articles during a time that you probably weren't reading the site, so it doesn't explain a lot.
(Score: 4, Interesting) by canopic jug on Saturday January 14, @01:36PM (1 child)
Ok. So that doesn't explain the massive decline in the quality and readability of the CNet articles. It remains unexplained for now. Yet, the point about the lack of a machine-readable warning still stands: They are polluting the training set data.
There have been some other sites which produce AI-generated text, but so far they label their output as autogenerated, so they are easy to avoid revisiting, for now.
Money is not free speech. Elections should not be auctions.
(Score: 5, Touché) by Captival on Saturday January 14, @06:03PM
>It remains unexplained for now.
It's easily explained. The content of CNet's articles is almost entirely irrelevant. It only matters that the article is long enough and has the proper keywords to attract attention from search engines. Once the link is clicked, they make their money and no longer care. Considerate, well-researched writing by quality reporters exists; it just gets less traffic than SEO-focused tripe.
AIs are cheaper than employees. ESL authors from poor countries will produce articles for less than local ones. CNET needs to cut costs, so anyone who can produce vaguely English work at a low cost gets hired. The AIs are just the next step in the trend.
(Score: 4, Interesting) by JoeMerchant on Saturday January 14, @02:09PM
Transparency is always the answer.
Remember when newspaper articles were published with the journalists' byline?
A standard-form, machine-readable byline identifying author(s), editor(s), any writing assistance tools used (I like https://hemingwayapp.com/ [hemingwayapp.com], not that I bother to use it on message board posts), plus such information about sources as should be shared: funding, potential conflicts of interest, topic tags, etc. All that good stuff could be linked to every article; hey, maybe HTML could even have a section called HEAD or something...
If reputable journalistic outlets would all populate all those fields reliably, then we could just ignore the unsubstantiated tweets and such. But, you know how much the popular press consumers care about such things...
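Joking about HEAD aside, those fields could be expressed today as ordinary `<meta>` tags. A minimal sketch (only `author` and `keywords` are conventional metadata names; the `x-` prefixed names and all values here are made-up illustrations, not any real standard):

```html
<head>
  <!-- "author" and "keywords" are conventional meta names; the x- names
       are hypothetical byline fields, not part of any actual standard -->
  <meta name="author" content="Jane Reporter">
  <meta name="x-editor" content="John Editor">
  <meta name="x-writing-tools" content="hemingwayapp">
  <meta name="x-generated-by" content="automation; human-edited">
  <meta name="x-funding" content="publisher">
  <meta name="x-conflicts" content="none declared">
  <meta name="keywords" content="finance, explainers">
</head>
```

A crawler or browser extension could then filter or flag articles by these fields without parsing the body text at all.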
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 3, Informative) by Anonymous Coward on Saturday January 14, @04:35PM (5 children)
One day the output of bot-AI articles will vastly outnumber those of real people. We'll be in a swirling mess of AI-generated content learning off AI-generated content in an entirely self-referential frame of self-reference. We might have to... go outside :(
(Score: 3, Interesting) by JoeMerchant on Saturday January 14, @06:22PM (2 children)
Is that entirely bad?
We asked ChatGPT several work-related questions and it came up with better answers than most of our employees (working in the areas in question) could come up with after a couple of hours of research. Even if the employees' reports are better, it's probably worth checking the ChatGPT output to see if it accurately covers anything relevant or even important that they missed.
There have been "bot written" articles on the web trawling for advertising dollars for many years now. Yes, they should be clearly marked as such. But if they contain the content you are looking for, and you have ways to verify that they're not incorrect, is that all bad?
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 1, Interesting) by Anonymous Coward on Saturday January 14, @06:53PM
Bot essay invasion and CAPTCHAs being destroyed will probably be bad. You could draw a direct line from that to the death of all discussion forums.
(Score: 0) by Anonymous Coward on Wednesday January 18, @02:23PM
How long until ChatGPT replaces search engines?
(Score: 3, Insightful) by aafcac on Sunday January 15, @02:22PM (1 child)
I'm wondering why anybody would continue to read these outlets if the output is coming from a machine. The actual writing of a story is probably the easiest part of the whole process; it's the research to figure out what should be included, and the editing that follows, that are typically the most involved parts. These are people writing at a pretty basic level of the local language, not the equivalent of Shakespeare, after all.
That being said, editorial standards do seem to be dropping as well, as there are fewer and fewer independent outlets out there and more of them are owned by the same people or are just printing things off the wire.
(Score: 0) by Anonymous Coward on Sunday January 15, @09:33PM
Just get a bot to read the articles and summarize. Wait...
(Score: 3, Interesting) by bradley13 on Saturday January 14, @07:16PM
This was inevitable. When ChatGPT & Co. can generate such good text (and it *is* good), why not? Sure, you will want to read through it, and maybe do a light edit. But for ordinary journalism, this is an obvious application of AI.
The same is going to happen for any domain requiring lots of text generation. Boilerplate novels. Pop-sci articles. Even business reports. We are at the beginning of a paradigm shift as big as the introduction of word-processing.
Everyone is somebody else's weirdo.
(Score: 5, Interesting) by istartedi on Saturday January 14, @07:30PM (1 child)
It became quite painfully obvious during the pandemic that Yahoo Finance had been syndicating a lot of AI generated articles. They still do it. After a while you recognize the pattern with something like, "Why is company X doing Y" and "Is company X really a value?". Then if you actually click it there's more boilerplate like that. The pandemic made it obvious because the whole market was going down--there was nothing specific to say about most companies, but they continued to run those stories like nothing had changed, LOL.
We should be deeply suspicious of most things online, but I'm particularly suspicious of finance content now because of that, and because of the obvious incentives to manipulate people in that space for financial gain. They will eventually learn from their mistakes and there will come a point where the tells aren't so obvious. We may already be there and not know it.
Of course, as Buffett (or perhaps his mentor Benjamin Graham) said: "In the short run, the market is a voting machine. In the long run it's a weighing machine." You can't fake sales or dividends, not for long anyway, so it's going to get to the point where financial media is not just 95% junk but 100% junk, and we'll all have to learn how to read a balance sheet and to really care that regulators are punishing anybody who tries to fake that, because, to reiterate, manipulative AI media is DEAD MEDIA, even if you don't realize it. You're taking advice from a corpse.
(Score: 0) by Anonymous Coward on Sunday January 15, @09:38PM
That's the classic title: something like "Why X Leads to Y and Z," where the article describes X, then Y and Z, but gives no mention of the why. The whole article is a rehash describing the simple things without any of the promised insight implied by the title.
(Score: 3, Funny) by kazzie on Sunday January 15, @12:42PM
I'll stick to my Red Book standards thanks, even if it does limit me to 44.1kHz.
(Score: 3, Insightful) by DannyB on Monday January 16, @05:58PM (1 child)
I would swear that some YouTube channels are generated by AI.
The video narration, while sounding nice, is obviously an AI voice. The text that it is "reading" does not sound like it was written by a human.
The video itself is many semi-related stitched together clips.
Slap it all together to create a low-quality channel. Monetize it for whatever you can get. Rinse. Repeat.
How often should I have my memory checked? I used to know but...
(Score: 0) by Anonymous Coward on Wednesday January 18, @02:31PM
Thank you for this glowing review of Hololive! Fan feedback is so important to us!