
SoylentNews is people

posted by janrinok on Thursday July 01 2021, @11:18AM   Printer-friendly
from the good-science-is-boring dept.

Social science papers that failed to replicate racked up 153 more citations, on average, than papers that replicated successfully.

This latest result is "pretty damning," says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. "Citation counts have long been treated as a proxy for research quality," he says, so the finding that less reliable research is cited more points to a "fundamental problem" with how such work is evaluated.

[...] University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.

Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That's a big number, Serra-Garcia and Gneezy say—papers in high-impact journals in the same time period amassed a total of about 40 citations per year on average.

And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.

Well, nobody likes a Debbie Downer, do they?

Journal Reference:
Marta Serra-Garcia, Uri Gneezy. Nonreplicable publications are cited more than replicable ones [open], Science Advances (DOI: 10.1126/sciadv.abd1705)


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday July 01 2021, @02:59PM (1 child)

    by Anonymous Coward on Thursday July 01 2021, @02:59PM (#1151806)
    >> Let's say 70% of studies are replicable (that's probably lowball)

    People who have done replication studies found 90% were not replicable.

  • (Score: 2) by Socrastotle on Thursday July 01 2021, @03:19PM

    by Socrastotle (13446) on Thursday July 01 2021, @03:19PM (#1151817) Journal

    I'm not familiar with any that low, but I do know that social psychology in particular has a replication rate of around 25%.

    I was speaking in terms of ideals. In general, what percentage of our science would we like to be "valid"? Of course 100%, but that's impractical. So what is practical? Perhaps something like 97%. With such a high target it becomes intuitively obvious that hitting e.g. 50% over 80 samples is not just noise. But what about hitting 50% when you're only aiming for 70%? That doesn't seem so implausible, intuitively, especially over "only" 80 samples.

    So the point is that even if you want to set our "real" figure far lower than anybody would ever actually want or think (again - 70% is an abysmal replication rate for idealized "science"), 80 samples is far more than enough to draw some extremely high probability conclusions.
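    The claim above can be checked with a simple binomial model (my own sketch, not from the comment): if each of 80 sampled studies independently replicates with probability 0.70, how likely is it that 40 or fewer actually do?

    ```python
    from math import comb

    def binom_cdf(k: int, n: int, p: float) -> float:
        """P(X <= k) for X ~ Binomial(n, p), computed from the exact pmf."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    # If the "true" replication rate were 70%, the chance of seeing
    # 40 or fewer replications out of 80 sampled studies:
    p_at_70 = binom_cdf(40, 80, 0.70)

    # Against the idealized 97% rate, the same observation is
    # vastly more improbable still:
    p_at_97 = binom_cdf(40, 80, 0.97)
    ```

    The expected count at a 70% rate is 56 of 80, with a standard deviation of about 4.1, so observing only 40 sits roughly four standard deviations below expectation. That's why even a modest sample of 80 is enough to rule out a 70% true rate with high confidence, which is the poster's point.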