This latest result is "pretty damning," says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. "Citation counts have long been treated as a proxy for research quality," he says, so the finding that less reliable research is cited more points to a "fundamental problem" with how such work is evaluated.
[...] University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.
Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That's a big number, Serra-Garcia and Gneezy say—papers in high-impact journals in the same time period amassed a total of about 40 citations per year on average.
And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.
Well, nobody likes a Debbie Downer, do they?
Journal Reference:
Marta Serra-Garcia, Uri Gneezy. Nonreplicable publications are cited more than replicable ones [open], Science Advances (DOI: 10.1126/sciadv.abd1705)
(Score: 5, Insightful) by Socrastotle on Thursday July 01 2021, @03:57PM (2 children)
And what would? This is the most insidious problem with pseudosciences. They generally not only cannot be tested, but they also cannot be falsified. And so belief or doubt in them rests largely on cultural, rather than scientific, norms.
Astrology is an obvious example. For the vast majority of its life astrology was a scholarly science, not especially different from astronomy. It was believed that the positioning and behavior of the stars and bodies in the universe would have an influence on the individuals born under them. Why? Well, let me turn that around - prove it's fake. Simply put, you cannot. You might show that astrological predictions do not hold true, yet the same is true of those within social science. The observations that do come true, whether due to noise or to various confounding factors, will be held up as evidence of its soundness - identical to the social sciences.
In fact it was ultimately ended only by another unfalsifiable entity that was more influential. Around the 17th century the Roman Catholic Church felt that the implications of astrology were incompatible with the Church's notions regarding free will and so on. And so it was relegated from science to superstition. Incidentally, that also ties directly back into this issue. "Scientific" astrology briefly made a comeback in the late 20th century. Carl Jung, the founder of analytical psychology, was a major advocate for astrology and also pursued it as a scholarly component of psychology. Let us thank our lucky stars that at least this component of psychology was left in the past. Now for the rest of it...
(Score: 1, Insightful) by Anonymous Coward on Thursday July 01 2021, @06:48PM (1 child)
You are holding an impossible standard.
Let's say I gave you a coin, and you flipped it 10,000 times, coming up heads 7,000 times and tails 3,000 times. Can you "prove" that the coin will come up heads the next time you flip it? No... but does your inability to falsify that make the information useless?
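The coin example can be made concrete: the 10,000 flips don't "prove" anything about the next flip, but they pin down the coin's bias quite tightly. A minimal sketch (using the Wilson score interval, just one standard choice of binomial confidence interval; the numbers are the ones from the example):

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """Wilson score ~95% confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# 7,000 heads in 10,000 flips: the bias is known to within about +/- 1%.
lo, hi = wilson_ci(7000, 10000)
print(f"95% CI for P(heads): [{lo:.3f}, {hi:.3f}]")
```

The point is that the inference is about the long-run rate, not any single flip - which is exactly why the information is useful despite being unfalsifiable for one toss.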
I fully agree that the social sciences can be more abused than others. I'm thinking particularly of "gender studies," among other things. However, those acting in good faith can and do still do quality work which provides value (see: marketing).
That it is difficult-to-impossible to do double-blind studies doesn't make it useless information... any more than the fact that "we can't predict what the precise temperature will be 30 days from now" means that "all climate science is a hoax."
(Note the caveat of "acting in good faith." Those acting in bad faith can do exceptional damage in science in general, and even more in the soft sciences.)
(Score: 2) by Socrastotle on Friday July 02 2021, @06:31AM
I'm not just poking at the probabilistic nature of things. Quantum mechanics, for instance, is inherently probabilistic - yet few would call it littered with fake assertions. The issue is this: if you tell me that you flipped a coin 10,000 times and it came up heads 7,000 times, then if I repeat your experiment, and you were honest, I'm also going to get extremely close to 7,000 heads - with some slight range for variation. The problem we're running into in the social sciences is that, instead, people are getting 2,300 heads far more often than not. And that means that the original experiment was invalid. The reason for that invalidity can be many things, but one of the biggest concerns is p-hacking. [nih.gov]
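To illustrate how tight "slight range for variation" actually is: an honest 70/30 coin re-flipped 10,000 times lands near 7,000 heads essentially every time, so a replication landing at 2,300 is not noise, it's a different coin. A quick simulation sketch (the 70% bias, flip count, and number of replications are just the numbers from the example):

```python
import random

random.seed(42)

def run_experiment(p_heads=0.7, flips=10_000):
    """Flip a biased coin `flips` times and count the heads."""
    return sum(random.random() < p_heads for _ in range(flips))

# Twenty honest replications of the original experiment.
counts = [run_experiment() for _ in range(20)]

# Binomial std dev is sqrt(n*p*(1-p)) ~= 46, so honest replications
# cluster within a couple hundred of 7,000 - never anywhere near 2,300.
print(min(counts), max(counts))
```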
p-hacking covers many practices, but one of the most obvious is after-the-fact data dredging. Imagine you measure a large number of variables about something, and then you run 10,000 trials. As your number of variables increases, you're going to find more and more patterns in the data that mean absolutely nothing. For instance, US spending on science is strongly correlated with suicides by hanging. Of course that's obviously a spurious correlation [tylervigen.com], but it's only obvious because those two variables "obviously" (another danger, but that's another topic) have nothing to do with one another. In fact that correlation is far stronger than most published correlations carrying the ever-implied-but-never-explicitly-stated suggestion of causation - it's a 99.79% correlation.
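The dredging effect is easy to demonstrate: generate a pile of completely independent random series, scan all the pairs, and the strongest correlation among them will still look impressive. A toy sketch (the variable and sample counts are arbitrary; short series with many candidate pairs is the worst case, much like the spurious-correlations site):

```python
import random
import statistics

random.seed(0)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

n_vars, n_points = 50, 10   # 50 unrelated "trend" series, 10 yearly data points
data = [[random.gauss(0, 1) for _ in range(n_points)] for _ in range(n_vars)]

# Dredge all ~1,200 pairs of pure-noise series for the strongest "finding".
best = max(abs(corr(data[i], data[j]))
           for i in range(n_vars) for j in range(i + 1, n_vars))
print(f"strongest correlation among pure noise: r = {best:.2f}")
```

Every series here is independent noise by construction, yet the best pair typically correlates strongly - report only that pair and it looks like a discovery.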
But any good social scientist who wants to engage in proper p-hacking will only be measuring variables that sound viably related to their study. And so when they find these completely unrelated variables that *sound* possibly related: boom - patch up that hypothesis a bit, and publish. Of course the next scientist who tries to carry out your experiment will find no such pattern, but you published, got some grants, and padded out your CV - so who cares? So, poking at astrology again, imagine an astrologer observes that people born when Venus was closest to Earth while also blocking out Mars had a fertility rate 37% higher than average. It's pretty easy to see how you can now spin this into a causal astrological effect, where you are using past data (which was clearly just correlational) to make future predictions.
And with enough hand-waving you'll be able to show it again in the future. Perhaps if it doesn't work out one way or another, just add "Ahh! Of course, we also need to consider the relationship of Jupiter in the picture." Or perhaps it now has to do with these two events and their relationship to the equinox. Just make your model more and more complex so that it always shows what you want it to show, even when it is all based on a completely spurious correlation. This in general is also why the introduction of computer models and computer-aided data dredging has undoubtedly had a hugely negative impact on science, even though, on the surface, they sound like things that would be unimaginably positive for scientific pursuits. Now, with computers and computer-generated models, finding a new spurious correlation can be done at the press of a button. And that's before we even get into "AI"...