posted by martyb on Friday June 05 2020, @12:03AM
from the orly? dept.

FiveThirtyEight is covering the efficacy of fact-checking and other methods of combating the spread of misinformation and disinformation. It turns out that fact-checking after the fact is better than nothing. There are some common factors in the cases where it has been done successfully:

Political scientists Ethan Porter and Thomas J. Wood conducted an exhaustive battery of surveys on fact-checking, across more than 10,000 participants and 13 studies that covered a range of political, economic and scientific topics. They found that 60 percent of respondents gave accurate answers when presented with a correction, while just 32 percent of respondents who were not given a correction expressed accurate beliefs. That’s pretty solid proof that fact-checking can work.

But Porter and Wood have found, alongside many other fact-checking researchers, that some methods of fact-checking are more effective than others. Broadly speaking, the most effective fact checks have this in common:

  1. They are from highly credible sources (with extra credit for those that are also surprising, like Republicans contradicting other Republicans or Democrats contradicting other Democrats).
  2. They offer a new frame for thinking about the issue (that is, they don’t simply dismiss a claim as “wrong” or “unsubstantiated”).
  3. They don’t directly challenge one’s worldview and identity.
  4. They happen early, before a false narrative gains traction.

It is as much about psychology as about actually rebutting the disinformation, because factors like partisanship and worldview have strong effects, and it is hard to reach people inside their social control media echo chambers with a correction from a source they will accept.

[Though often incorrectly attributed to Mark Twain, one is reminded of the adage: “A lie can travel halfway around the world while the truth is still putting on its shoes”. --Ed.]

Previously:
(2020) Nearly Half of Twitter Accounts Pushing to Reopen America May be Bots
(2019) Russians Engaging in Ongoing 'Information Warfare,' FBI Director Says
(2019) How Fake News Spreads Like a Real Virus
(2019) More and More Countries are Mounting Disinformation Campaigns Online
(2019) At Defcon, Teaching Disinformation Campaigns Is Child's Play
(2018) Why You Stink at Fact-Checking
(2017) Americans Are “Under Siege” From Disinformation
(2015) Education Plus Ideology Exaggerates Rejection of Reality


Original Submission

 
  • (Score: 0) by Anonymous Coward on Friday June 05 2020, @06:27AM (2 children)

    by Anonymous Coward on Friday June 05 2020, @06:27AM (#1003597)

    What happened with Hillary was much simpler, and it is also the reason 538 was smeared. Obama's unprecedented turnout was in large part driven by very poor folks. Many of these people have no normal means of contact. I grew up urban poor, and my apartment complex had several (often broken) phone booths in the middle of it that were used by many (including myself) when we needed to make a call. No incoming calls - you might be dealing drugs, after all! Fun times.

    So how do you accurately sample and predict based on a group you can't get in contact with? You fudge it. More specifically, you go to the effort of getting a small sample of these people, and then you scale their responses upward to whatever you expect the group's turnout to be. It's a lot more art than science. And that is exactly why predictions for the election were so completely wrong. It's also why people were attacking Nate. He was, implicitly, expecting much lower relative turnout and/or support from various groups than other pollsters were.
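    [A minimal sketch of the reweighting described above; all numbers are invented for illustration: --Ed.]

        # Hypothetical post-stratification: a small, hard-to-reach subgroup's
        # responses are scaled up to its *assumed* share of the electorate.
        poll = {
            # group: (respondents, share backing candidate A, assumed turnout share)
            "easy_to_reach": (900, 0.48, 0.90),
            "hard_to_reach": (100, 0.60, 0.10),  # tiny sample, big weight
        }

        estimate = sum(support * share for _, support, share in poll.values())
        print(f"weighted estimate for A: {estimate:.1%}")  # 49.2%

    [Shift the assumed shares from 0.90/0.10 to 0.85/0.15 and the estimate moves from 49.2% to 49.8% with no new data collected - exactly the judgment call the parent describes. --Ed.]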

    ---

    Second thing: a 28% chance is still horribly wrong. The thing about elections is that they are, mostly, not random. What I mean is that a 28% chance suggests that if you ran 2016 a hundred times, you'd expect to see Clinton win 72 of them. That's extremely improbable, for two reasons. The first is that most people know who they're going to vote for (and whether they'll vote at all) well before they do it. There is indeed a fair chunk of people who are undecided - it was 5% in 2016. And that 5% is of course far more than enough to decide an election, but in practice they don't have much of an impact. They tend to distribute pretty evenly among the three options: Democratic, Republican, and stay at home.
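    [A toy Monte Carlo of the "locked-in vote" argument above; all numbers are invented: --Ed.]

        import random
        random.seed(1)

        def rerun(decided_a=0.48, decided_b=0.47, undecided=0.05, voters=10_000):
            # Decided voters never move; each undecided voter picks A, B,
            # or stays home with equal probability.
            a, b = decided_a * voters, decided_b * voters
            for _ in range(int(undecided * voters)):
                choice = random.choice(("a", "b", "home"))
                a += choice == "a"
                b += choice == "b"
            return a > b

        runs = 10_000
        print(f"A wins {sum(rerun() for _ in range(runs)) / runs:.1%} of reruns")

    [With even a one-point lead among decided voters, the undecided churn essentially never closes the gap - the print shows ~100.0%, nothing like a 72/28 spread. --Ed.]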

    The second reason is an even bigger one. The above applies to an at-large election, but of course that's not how we vote. In a presidential election every single state votes, which basically translates into dozens of micro-elections, and that further reduces variance. In 2016 the election was not particularly close. Trump won by 77 electoral votes, even accounting for faithless electors. You have to roll the clock back 28 years to Bush-Dukakis in 1988 to see a Republican winning a presidential election by that large a margin. Trump won by a large enough margin that he could have lost Florida and still won the election.
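    [A sketch of the variance argument, assuming - unrealistically - independent noise across the contests: --Ed.]

        import random
        random.seed(2)

        def one_contest(lead=0.51, noise=0.02):
            # True support 51%; election-day noise ~N(0, 2 points)
            return random.gauss(lead, noise) > 0.5

        def fifty_contests(n=50, lead=0.51, noise=0.02):
            return sum(one_contest(lead, noise) for _ in range(n)) > n / 2

        runs = 10_000
        print(f"single contest upset rate: {sum(not one_contest() for _ in range(runs)) / runs:.1%}")    # ~31%
        print(f"50-contest majority upset: {sum(not fifty_contests() for _ in range(runs)) / runs:.1%}") # <1%

    [Real state swings are strongly correlated and electoral votes are weighted, so the cancellation is much weaker than this toy suggests, but the direction of the effect is the parent's point. --Ed.]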

    The results here indicate that the modeling was simply wrong. The pollsters were, in my opinion intentionally, overstating expectations for 'Obama-centric' demographics while simultaneously taking no account of obvious and predictable sampling biases on the other side of the aisle. The media was literally running 'Trump is Hitler' stuff alongside numerous reports of violence and other acts of hostility and aggression toward Trump supporters. When somebody got a phone call asking for their political views, Trump supporters were obviously going to be disproportionately likely to hang up.

  • (Score: 2) by takyon on Friday June 05 2020, @10:48AM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday June 05 2020, @10:48AM (#1003664) Journal

    Reasonable post. But let me throw this out there: poll results can take days or weeks to reflect the mood caused by current events, and Hillary was dealing with the Comey investigation just days before Election Day. Comey was the MVP of the Trump campaign. He threw and then withdrew an October Surprise, sparking a lot of bad press for Hillary in the final moments of the campaign. Clinton apparently blames Sanders for her loss, but Comey may be the one person on the planet most responsible for the outcome.

    On the electoral vote margin, it was still just a handful of states that swung the election, with very close vote counts in Pennsylvania, Wisconsin, and Michigan. A shift of less than 1% of votes between Clinton, Trump, and "eh, I can't be bothered" would have changed the outcome.
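    [Back-of-the-envelope check, using the certified 2016 counts (rounded; treat as approximate): --Ed.]

        # Trump-minus-Clinton margins and total ballots cast, 2016 (rounded)
        margins = {"MI": 10_704, "WI": 22_748, "PA": 44_292}
        ballots = {"MI": 4_799_284, "WI": 2_976_150, "PA": 6_165_478}

        total_margin = sum(margins.values())  # 77,744 votes
        total_cast = sum(ballots.values())    # ~13.9 million
        print(f"combined margin: {total_margin:,} ({total_margin / total_cast:.2%} of ballots)")
        # Roughly half that many voters switching sides flips all three states.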

    If you are right about modeling, maybe polling is just done for good, and forecasters need to use more cerebral Big Data methods to correctly predict the outcome. Like measuring the mood of people on social media without asking them anything, or paying to insert stuff into their feeds to see how they react to it.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Friday June 05 2020, @07:28PM

    by Anonymous Coward on Friday June 05 2020, @07:28PM (#1003910)

    The results here indicate that the modeling was simply wrong.

    That has a good bit of truth to it.

    The vast majority of media outlets (including 538) predicted a Clinton victory, which proved to be inaccurate.

    However, if you go back and look at *actual polling results*, almost all of them were accurate, within the margins of error of those polls. This includes the three states (WI, MI, PA) that sealed Trump's victory.

    IIRC, on election day 2016, fivethirtyeight.com gave Clinton roughly a 72%-28% (or about 2.5 to 1) chance to win the election.

    That turned out to be incorrect.

    Regardless, as I mentioned, almost all the state polls ended up matching the actual outcome *within the margin of error*. That many folks took the results of those polls and made inaccurate predictions about the overall outcome doesn't invalidate the polls themselves.
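    [For reference, the usual margin-of-error arithmetic for a simple random sample, at 95% confidence with the normal approximation: --Ed.]

        from math import sqrt

        def margin_of_error(n, p=0.5, z=1.96):
            # MOE = z * sqrt(p * (1 - p) / n); widest at p = 0.5
            return z * sqrt(p * (1 - p) / n)

        print(f"n=800:  +/-{margin_of_error(800):.1%}")   # ~3.5%
        print(f"n=1500: +/-{margin_of_error(1500):.1%}")  # ~2.5%
        # A state poll of ~800 respondents showing a 2-point lead is
        # statistically consistent with the race going either way.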