
AI-Generated Text is Overwhelming Institutions – Setting Off a No-Win ‘Arms Race’ with AI Detectors

Accepted submission by fliptop at 2026-02-10 17:19:57

In 2023, the science fiction literary magazine Clarkesworld stopped accepting [npr.org] new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters had pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone [theconversation.com]:

This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can’t keep up.

This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor [nytimes.com], as are academic journals [marketplace.org]. Lawmakers are inundated with AI-generated constituent comments [cornell.edu]. Courts around the world are flooded with AI-generated filings [law.com], particularly by people representing themselves. AI conferences are flooded [futurism.com] with AI-generated research papers. Social media is [app.com] flooded [nytimes.com] with AI posts [cyberlink.com]. In music [time.com], open source software [github.com], education [newyorker.com], investigative journalism [bsky.app] and hiring [nytimes.com], it’s the same story.

Like Clarkesworld, some of these institutions initially responded by shutting down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI.

[...] These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

TFA goes on to discuss the upsides of AI, how AI makes fraud easier, and some ideas on balancing harms with benefits. Originally spotted on Schneier on Security [schneier.com].
