
The Grim Consequences of a Misleading Study on Disinformation

Accepted submission by upstart at 2021-02-18 16:08:38
News

████ # This file was generated bot-o-matically! Edit at your own risk. ████

The Grim Consequences of a Misleading Study on Disinformation [wired.com]:

The esteemed Oxford Internet Institute announced a major report on disinformation and “cyber troops” with a press release [ox.ac.uk] describing an "industrial-scale problem." Worldwide press coverage echoed claims that OII had revealed the "increasing role" private firms play in spreading computational propaganda. The actual evidence presented in the annual “survey” of social media manipulation, however, is much thinner than the hype.

While the report's website declares [ox.ac.uk], "Cyber troop activity continues to increase around the world," inside the report OII claims to show that “publicly identified” cases of disinformation operations have “grow[n] in number over time.” It points to its own studies, which count public reporting, as evidence that actual operations have increased since 2017. Citing OII’s last report, which was based on similar evidence, The New York Times heralded in 2019 [nytimes.com] that "the number of countries with political disinformation campaigns more than doubled to 70 in the last two years."

The big problem here is the phrase “publicly identified.”

As a longtime propaganda scholar, I know we struggled to get disinformation and propaganda reported on before the 2016 US election and Brexit, when journalistic interest suddenly grew. In 2015, a NexisUni search reveals, the Times mentioned disinformation in just 33 articles; there were 95 in 2016, 274 in 2017, 586 in 2018, and 684 in 2019. This is, of course, an indication of increased reporting of disinformation.

Platforms like Facebook and Twitter, having taken years to acknowledge the crisis, have slowly introduced measures to identify and take down large swathes of fake accounts and disinformation. They suddenly invested millions in research [fb.com] and in accompanying lobbying [nbcnews.com] and PR [nbcnews.com] to persuade the world they are acting against disinformation, and press stories increasingly hailed these identifications and takedowns. Does the cited [ox.ac.uk] figure of 317,000 accounts and pages removed by Facebook and Twitter in 2019–20 indicate that the problem first appeared at that scale in 2019, when these operations were found or reported? Of course it doesn’t.

More researchers are finally looking for disinformation now, which is good, so it’s more likely to be found and reported than in 2017. Since 2017, millions and millions [hewlett.org] have been spent on researching disinformation campaigns; the Omidyar Network [hewlett.org] alone made a $100 million investment to fight misinformation and support journalism. “Disinformation researcher” has also become an increasingly common job title in the West: Countering disinformation has become a lucrative industry, as startups and big-name scholars compete with think tanks, nonprofits, and journalistic fact-checkers for funding. For the OII’s Computational Propaganda project, which conducts the annual survey, the institute itself has secured [nsf.gov] generous funds: $218,825 from the National Science Foundation, a $2.2 million European Commission [europa.eu]/European Research Council award, and $500,000 from the Ford Foundation [ox.ac.uk]. Its new report also acknowledges unstated amounts of funding from the Adessium Foundation, Civitates Initiative, Hewlett Foundation, Luminate, Newmark Philanthropies, and Open Society Foundation.

Within the study, OII “surveys” reporting from a large number of countries, but disinformation is underreported in some parts of the world. As news organizations in liberal democracies appointed specialist disinformation reporters covering the phenomenon daily, other countries experienced crackdowns on speech. OII also prioritized mainstream media reports as most reliable, and these tend to focus on disinformation by foreign policy adversaries and on attacks against allies. The OII methodology does admit [ox.ac.uk], “Given the nature of disinformation operations, there are almost certainly cyber troop activities that have not been publicly documented.” All that money could have gone to researching that dark industry.

The income generated by influence campaigns might be a more reliable measure of industry scale than reporting. OII’s report states that $10 million has been spent on Facebook political advertisements, which are only one aspect of a campaign. It says $60 million has been spent by state actors on computational propaganda by private firms since 2009, a figure that seems low and whose derivation is not clear. In the US alone (one country OII acknowledges as hiring companies for computational propaganda), the Pentagon spent [nbcnews.com] $4.7 billion in 2009 on its “hearts and minds” campaigns. The following year The Washington Post [archive.org] reported on 37 companies engaged in psychological operations and related activities, including social media. Later contracts included a $1.5 billion contract awarded in 2014 to a group of contractors for support with psychological and information operations, reported in The Intercept [theintercept.com], and a $500 million psychological-operations contract that went [dailymail.co.uk] to Northrop Grumman in 2017. The defense industry is so opaque that the specific activities this money was spent on are often secret, but it is no less important to acknowledge overall industry spending in each country when assessing governments’ contracts with private firms for propaganda. There is huge money to be made from state actors in influence, worldwide.

OII’s methodology also acknowledges that its findings may be impacted by “media bias.” This is unacceptable in a study assessing disinformation. The problem is worse than they admit, because their evidence appears to hang on the hope that all the media reporting I describe above reflects the scale of disinformation, not reporters’ sudden discovery of it.

Once one knows all this, the numbers in the report come to seem largely meaningless. Take the claim that disinformation was found in 76 of the 81 countries identified as using computational propaganda. If so, politicians in the other five countries apparently never lie online. Wherever that is, I'm going.

The problem is that these figures are then presented as authoritative, dressed up with colorful tables and charts. Statistics look more persuasive than anecdotal examples, and many journalists seem to have taken away a few impressive numbers from the press release without examining the methods.

When the report was released, typical media headlines included [ft.com] “Boom in Private Companies Offering Disinformation-for-Hire” in the Financial Times. Disinformation is a growing problem. Oxford probably underestimates the scale of the industry profiting from creating it, because a survey of public reporting of disinformation campaigns would never be a good indicator of the scale of an industry that largely exists in the shadows.

One misleading graph, titled "Countries With Evidence of Private Firms Managing Manipulation Campaigns," shows these rising year by year to 48 in 2020. The reader would think firms ran such campaigns in only nine countries in 2017, yet this was a year when just one firm, Cambridge Analytica, was hired for political work in at least four countries that we know of, not counting government work, and many more companies existed. My preliminary company spreadsheet for what I call the “influence industry” currently has 600 firms, and I’ve barely begun.

The private companies responsible for disinformation are understudied and underreported. I have argued [brookings.edu] repeatedly that research is needed on the industry behind influence operations, and I tried to raise funds myself for the difficult work that’s needed to reveal the opaque networks, clients, methods, and projects of the global influence industry.

What worries me is that misreporting gives a false sense that researchers have done that work, and—beyond the concern of basing policy on weak evidence—that this might discourage funding of the important and difficult work that is needed to reveal this in the future.

The OII study’s limitations are not mentioned in the reporting.

This research by one of the most influential research centers on disinformation in the world was reported globally, from London [ft.com] to Mexico [verietyinfo.com] to Germany [spiegel.de], even in The New York Times [nytimes.com]. Professor Philip Howard, the OII’s director and the report’s coauthor, says in the press release: “Our report shows misinformation has become more professionalized and is now produced on an industrial scale.” Experts normally reserve “disinformation” for deliberately produced falsehoods, but this odd choice of “misinformation” seemed to go unnoticed. One would hope care was taken in the report itself, but no. Would a less esteemed institution get away with publishing a report that includes misspellings like misinfromation and propoganda, having apparently not run a spell-checker or used a proofreader? (And not for the first [nsf.gov] time.)

Was there a lack of journalistic scrutiny because it's Oxford University? The OII computational propaganda project has produced some amazing students and excellent research. Sam Woolley, director of propaganda research at the University of Texas at Austin’s Center for Media Engagement, for example, produced a first analysis of the phenomenon of “computational propaganda [ox.ac.uk]” with the group.

I raise my concern because of how influential the institute is. OII’s director has thousands [google.com] of citations and has given prominent testimonies [ox.ac.uk] on foreign influence and propaganda before the US Senate, UK Parliament, and European Commission. Its press release [ox.ac.uk] touts Oxford University’s world-leading reputation for “ground-breaking research and innovation.”

A well-regarded research center like OII gets immediate big headlines whenever it produces new research, but the authors have made far more of these findings than they can reasonably claim.

Policymakers [scotsman.com] are already using this new OII report as the basis to push forward policy recommendations. With attacks on scientific knowledge fueling the infodemic and the violent attack on the Capitol, it’s vital that the public can have confidence that robust evidence underpins new policy decisions.

Misleading reporting about such weak findings risks undermining public confidence in research and new policies on disinformation, making it more urgent than ever that we get the difficult research that’s needed done.

WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here [wired.com], and see our submission guidelines here [wired.com]. Submit an op-ed at opinion@wired.com [mailto].

Original Submission