

Due to AI fakes, the “deep doubt” era is here

Accepted submission by Freeman at 2024-09-18 13:38:30 from the fake department of fake liars dept.
News

https://arstechnica.com/information-technology/2024/09/due-to-ai-fakes-the-deep-doubt-era-is-here/ [arstechnica.com]

Given the flood of photorealistic AI-generated images washing over social media networks like X [arstechnica.com] and Facebook [404media.co] these days, we're seemingly entering a new age of media skepticism: the era of what I'm calling "deep doubt." While questioning the authenticity of digital content stretches back [nytimes.com] decades—and analog media long before [wikipedia.org] that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people's existing skepticism toward online content from strangers may be reaching new heights.
[...]
Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend [bu.edu] years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

Doubt has been a political weapon since ancient times [populismstudies.org]. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
[...]
In April, a panel of federal judges [arstechnica.com] highlighted the potential for AI-generated deepfakes not only to introduce fake evidence but also to cast doubt on genuine evidence in court trials.
[...]
Deep doubt impacts more than just current events and legal issues. In 2020, I wrote about a potential "cultural singularity [fastcompany.com]," a threshold where truth and fiction in media become indistinguishable.
[...]
"Deep doubt" is a new term, but it's not a new idea. The erosion of trust in online information from synthetic media extends back to the origins of deepfakes themselves. Writing for The Guardian in 2018, David Shariatmadari spoke of [theguardian.com] an upcoming "information apocalypse" due to deepfakes and questioned, "When a public figure claims the racist or sexist audio of them is simply fake, will we believe them?"
[...]
Throughout recorded history, historians and journalists have had to evaluate the reliability of sources [wm.edu] based on provenance, context, and the messenger's motives. For example, imagine a 17th-century parchment that apparently provides key evidence about a royal trial. To determine whether it's reliable, historians would evaluate the chain of custody and check whether other sources report the same information. They might also examine the historical context to see whether contemporary records mention the parchment's existence. That requirement has not magically changed in the age of generative AI.
[...]
You'll notice that our suggested counters to deep doubt above do not include watermarks, metadata, or AI detectors as ideal solutions. That's because trust does not inherently derive from the authority of a software tool. And while AI and deepfakes have dramatically accelerated the issue, bringing us to this new "deep doubt" era, the necessity of finding reliable sources of information about events you didn't witness firsthand is as old as history itself.
[...]
It's likely that in the near future, well-crafted synthesized digital media artifacts will be completely indistinguishable from human-created ones. That means there may be no reliable automated way to determine whether a convincing media artifact was human- or machine-generated solely by examining one piece of media in isolation (remember the sermon on context above). This is already true of text, which has resulted in many human-authored works being falsely labeled [thedailybeast.com] as AI-generated, creating ongoing pain for students in particular.

Throughout history, any form of recorded media, including ancient clay tablets, has been susceptible to forgeries [researchgate.net]. And since the invention of photography, we have never been able to fully trust a camera's output: the camera can lie [nytimes.com].
[...]
Credible and reliable sourcing is our most critical tool in determining the value of information, and that's as true today as it was in 3000 BCE [wikipedia.org], when humans first began to create written records.


Original Submission