
SoylentNews is people

posted by janrinok on Thursday July 01 2021, @11:18AM   Printer-friendly
from the good-science-is-boring dept.

Social science papers that failed to replicate racked up 153 more citations, on average, than papers that replicated successfully.

This latest result is "pretty damning," says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. "Citation counts have long been treated as a proxy for research quality," he says, so the finding that less reliable research is cited more points to a "fundamental problem" with how such work is evaluated.

[...] University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.

Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That's a big number, Serra-Garcia and Gneezy say—papers in high-impact journals in the same time period amassed a total of about 40 citations per year on average.

And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.

Well, nobody likes a Debbie Downer, do they?

Journal Reference:
Marta Serra-Garcia, Uri Gneezy. Nonreplicable publications are cited more than replicable ones [open], Science Advances (DOI: 10.1126/sciadv.abd1705)


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by shrewdsheep on Thursday July 01 2021, @05:05PM (3 children)

    by shrewdsheep (5215) on Thursday July 01 2021, @05:05PM (#1151871)

    I guess your intention is to calculate a P-value for the observation of 40 successes out of 80 replications under the null hypothesis of a success rate of .7. To get a meaningful P-value, you would have to calculate the probability P(X ≤ 40). The probability of a single outcome (i.e. P(X = x)) is almost always meaningless; in fact, it is always zero for continuous distributions. For the binomial, too, it tends to zero for any outcome and any success probability as N tends to infinity. The P-value is the probability of observing our outcome plus all more extreme outcomes.
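The tail probability described in this comment can be computed exactly with the standard library alone. This is a minimal sketch using the numbers from the comment (40 successes in 80 trials, null success rate 0.7); the function name is mine, not from the thread.

```python
# Hedged sketch: exact lower-tail binomial P-value, P(X <= k), for
# X ~ Binomial(n, p). Uses only the standard library (math.comb).
from math import comb

def binom_pvalue_le(k: int, n: int, p: float) -> float:
    """Sum P(X = i) for i = 0..k: the probability of the observed
    outcome plus all more extreme (lower) outcomes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# 40 successes out of 80 replications, under a null success rate of 0.7.
p_value = binom_pvalue_le(40, 80, 0.7)
# 40/80 = 0.5 sits far below the hypothesized 0.7 (roughly 3.8 standard
# deviations), so this tail probability is tiny -- well under 0.001.
```

Summing the whole probability mass (k = n) should return 1, which is a quick sanity check on the implementation.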

  • (Score: 2) by Socrastotle on Thursday July 01 2021, @05:14PM (2 children)

    by Socrastotle (13446) on Thursday July 01 2021, @05:14PM (#1151878) Journal

    It is the P(X <= x) of course. I got HTML'd with the less than sign.

    • (Score: 2) by Anti-aristarchus on Thursday July 01 2021, @09:02PM (1 child)

      by Anti-aristarchus (14390) on Thursday July 01 2021, @09:02PM (#1152006) Journal

      But shouldn't it be:

      P(A|B) = P(A) P(B|A) / P(B)

      One must take prior probabilities into account, whether frequentist or subjectivist.

      • (Score: 2, Informative) by shrewdsheep on Monday July 05 2021, @07:19AM

        by shrewdsheep (5215) on Monday July 05 2021, @07:19AM (#1152967)

        As a frequentist, your prior would be uniform (and might therefore be improper); as an empirical Bayesian, you would again have a uniform (improper) prior, but on the hyperparameters; and as a full Bayesian, well, the full fudging would start.
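The prior-dependence discussed in this subthread can be made concrete with a conjugate Beta-Binomial update on the replication rate. This is an illustrative sketch only: the specific priors below (a flat Beta(1, 1) and an informative Beta(7, 3) centred near the hypothesized 0.7 rate) are my assumptions, not from the thread.

```python
# Hedged sketch: Beta-Binomial posterior for a replication success rate,
# updated on 40 successes in 80 trials. The choice of prior (a, b) is an
# illustrative assumption and visibly shifts the posterior.

def beta_posterior_mean(a: float, b: float, successes: int, trials: int) -> float:
    """Posterior mean of Beta(a + successes, b + (trials - successes))."""
    return (a + successes) / (a + b + trials)

# Flat (uniform) prior, Beta(1, 1): posterior mean 41/82 = 0.5.
mean_flat = beta_posterior_mean(1, 1, 40, 80)

# Informative prior centred near 0.7, Beta(7, 3): posterior mean 47/90.
mean_informed = beta_posterior_mean(7, 3, 40, 80)
```

With 80 trials the data dominate either prior, so both posterior means land near the observed 0.5 rate; the informative prior pulls the estimate only slightly toward 0.7.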