How Fake News Spreads Like a Real Virus

Accepted submission by upstart at 2019-10-11 11:01:06
/dev/random

Submitted via IRC for Bytram

How fake news spreads like a real virus [techxplore.com]

October 11, 2019

When it comes to real fake news, the kind of disinformation that Russia deployed during the 2016 elections, "going viral" isn't just a metaphor.

Using the tools of infectious-disease modeling, cyber-risk researchers at Stanford Engineering are analyzing the spread of fake news much as if it were a strain of Ebola. "We want to find the most effective way to cut the transmission chains, correct the information if possible and educate the most vulnerable targets," says Elisabeth Paté-Cornell, a professor of management science and engineering. She has long specialized in risk analysis and cybersecurity and is overseeing the research in collaboration with Travis I. Trammell, a doctoral candidate at Stanford. Here are some of the key findings:

How does fake news replicate across social media?

The researchers have adapted a model for understanding diseases that can infect a person more than once. It looks at how many people are "susceptible" to the disease, or in this case, likely to believe a piece of fake news. It also looks at how many have been exposed to it, how many are actually "infected" and believe the story, and how many are likely to spread it.

The researchers say that, much as with a virus, being exposed to multiple strains of fake news over time can wear down a person's resistance and make them increasingly susceptible. The more times a person is exposed to a piece of fake news, especially if it comes from an influential source, the more likely they are to become persuaded, or infected.
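
The article doesn't publish the researchers' equations, but the kind of model it describes, a disease model that allows reinfection and in which susceptibility grows with repeated exposure, can be sketched in a few lines of Python. Everything below (the SIS-style structure, the rates beta and gamma, the sensitization term) is an illustrative assumption, not the Stanford group's actual model.

    # Minimal sketch of an SIS-style ("susceptible-infected-susceptible")
    # model of belief in a fake story. Unlike a classic SIR model,
    # "recovered" people return to the susceptible pool, so reinfection
    # is possible. All parameter values are illustrative guesses.

    def simulate(population=10_000, infected=10.0, beta=0.3, gamma=0.1,
                 sensitization=0.02, steps=100):
        susceptible = population - infected
        avg_exposures = 0.0               # mean exposures per person so far
        for _ in range(steps):
            # Repeated exposure "wears down resistance": the conversion
            # probability rises with accumulated exposures.
            p_convert = min(1.0, beta * (1 + sensitization * avg_exposures))
            contact = infected / population    # chance of meeting a believer
            new_believers = p_convert * contact * susceptible
            recoveries = gamma * infected      # believers who stop believing
            susceptible += recoveries - new_believers
            infected += new_believers - recoveries
            avg_exposures += contact
        return infected

    print(f"believers after 100 steps: {simulate():.0f}")

With the sensitization term set to zero this reduces to a plain SIS model; raising it makes the same exposure level convert more people later in the run, which is the "wearing down" effect described above.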

What makes it spread faster?

The so-called "power law" of social media, a well-documented pattern in social networks, holds that messages replicate most rapidly if they are targeted at relatively small numbers of influential people with large followings.
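
The article doesn't include the underlying simulation, but the effect is easy to demonstrate on a scale-free network, where a few nodes have very large followings. The sketch below is illustrative only: it uses networkx's Barabási-Albert generator as a stand-in for a real follower graph and invented sharing probabilities, and it compares seeding a message at a handful of hubs against seeding it at the same number of random accounts.

    import random
    import networkx as nx

    def cascade(graph, seeds, p_share=0.05, rounds=10):
        # Independent-cascade spread: each newly reached node gets one
        # chance to pass the message to each neighbor with probability
        # p_share. Returns the total number of nodes reached.
        reached, frontier = set(seeds), set(seeds)
        for _ in range(rounds):
            nxt = set()
            for node in frontier:
                for nbr in graph.neighbors(node):
                    if nbr not in reached and random.random() < p_share:
                        nxt.add(nbr)
            reached |= nxt
            frontier = nxt
        return len(reached)

    random.seed(1)
    g = nx.barabasi_albert_graph(10_000, 3)   # power-law degree distribution
    hubs = sorted(g.nodes, key=g.degree, reverse=True)[:5]
    randoms = random.sample(list(g.nodes), 5)
    print("seeded at 5 hubs:           ", cascade(g, hubs))
    print("seeded at 5 random accounts:", cascade(g, randoms))

On typical runs of this sketch the hub-seeded message reaches several times as many accounts, which is the pattern the power law predicts.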

Researchers are also looking at the relative effectiveness of trolls versus bots. Trammell says bots, which are automated programs that masquerade as people, tend to be particularly good for spreading massive numbers of highly emotional messages with little informational content. Think here of a message with the image of Hillary Clinton behind bars and the words "Lock Her Up!" That kind of message will spread rapidly within the echo chambers populated by those who already agree with the basic sentiment. Bots have considerable power to inflame people who are already like-minded, though they can be easier to detect and block than trolls.

By contrast, trolls are typically real people who spread provocative stories and memes. Trolls can be better at persuading people who are less convinced and want more information.

What kinds of people are most susceptible?

Paté-Cornell and Trammell say there is considerable evidence that the elderly, the young and the less educated are particularly susceptible to fake news. But in the broadest sense it is partisans at the political extremes, whether liberal or conservative, who are most likely to believe a false story, in part because of confirmation bias: the tendency in all of us to believe stories that reinforce our convictions. The stronger those convictions, the more powerfully a person feels the pull of confirmation bias.

Is inoculation possible?

Paté-Cornell and Trammell say that, much like ordinary crime, disinformation will never disappear. But by learning how it is propagated through social media, the researchers say it's possible to fight back. Social media platforms could become much quicker at spotting suspect content. They could then attach warnings—a form of inoculation—or they could quarantine more of it.
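
The article doesn't quantify this, but in epidemic terms both interventions act on the basic reproduction number R0 = beta / gamma of the SIS-style sketch above: warnings lower the transmission rate beta, while quarantining content raises the removal rate gamma, and a story dies out once R0 falls below 1. The effectiveness figures below are assumptions chosen purely for illustration.

    # Hedged illustration: interventions push R0 = beta / gamma below 1.
    def r0(beta, gamma):
        return beta / gamma

    print(f"no intervention: R0 = {r0(0.30, 0.10):.2f}")         # 3.00, spreads
    # Assume warning labels block 75% of shares (illustrative figure only).
    print(f"with warnings:   R0 = {r0(0.30 * 0.25, 0.10):.2f}")  # 0.75, dies out
    # Assume quarantining doubles how fast content is removed.
    print(f"with quarantine: R0 = {r0(0.30, 0.20):.2f}")         # 1.50, still spreads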

The challenge, they say, is that protection has costs—financial costs as well as reduced convenience and limitations on free expression. Paté-Cornell says the dangers of fake news should be analyzed as a strategic management risk similar to how we have traditionally analyzed the risks posed by cyberattacks aimed at disabling critical infrastructure. "It's an issue of how we can best manage our resources in order to minimize the risk," she says. "How much are you willing to spend, and what level of risk are we willing to accept?"
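
Paté-Cornell's question can be framed as a standard expected-cost minimization. The numbers below are invented purely to show the shape of the tradeoff: spending on detection buys down the probability of a damaging viral incident, and past some point another dollar of protection removes less than a dollar of expected loss.

    # Toy risk-management tradeoff (all figures invented for illustration):
    # total cost = mitigation spend + probability of incident * loss.
    def expected_cost(spend, loss=1_000_000, base_p=0.4, halving=250_000):
        p_incident = base_p * 0.5 ** (spend / halving)  # each $250k halves the odds
        return spend + p_incident * loss

    best = min(range(0, 2_000_001, 50_000), key=expected_cost)
    print(f"cost-minimizing spend: ${best:,}")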

What does the future hold?

Fake news is already a national security issue. But Paté-Cornell and Trammell predict that artificial intelligence will turbocharge fake news in the years ahead. AI will make it much easier to target people with fake news or deep-fake videos (videos that appear real but have been fabricated in whole or in part) that are finely tailored to what a susceptible viewer is likely to accept and perhaps spread. AI could also make it easy to create armies of more influential bots that appear to share a target's social background, hometown, personal interests or religious beliefs. Such hyper-targeting would make the messages much more persuasive. AI also shows great potential to counter this scourge by identifying fake content in all forms, but only time will tell who prevails in this new arms race.

Explore further: 'Fake news,' diminishing media trust and the role of social media [phys.org]

Provided by Stanford University [techxplore.com]

Citation: How fake news spreads like a real virus (2019, October 11), retrieved 11 October 2019 from https://techxplore.com/news/2019-10-fake-news-real-virus.html [techxplore.com]


Original Submission