This latest result is "pretty damning," says University of Maryland, College Park, cognitive scientist Michael Dougherty, who was not involved with the research. "Citation counts have long been treated as a proxy for research quality," he says, so the finding that less reliable research is cited more points to a "fundamental problem" with how such work is evaluated.
[...] University of California, San Diego, economists Marta Serra-Garcia and Uri Gneezy were interested in whether catchy research ideas would get more attention than mundane ones, even if they were less likely to be true. So they gathered data on 80 papers from three different projects that had tried to replicate important social science findings, with varying levels of success.
Citation counts on Google Scholar were significantly higher for the papers that failed to replicate, they report today in Science Advances, with an average boost of 16 extra citations per year. That's a big number, Serra-Garcia and Gneezy say—papers in high-impact journals in the same time period amassed a total of about 40 citations per year on average.
And when the researchers examined citations in papers published after the landmark replication projects, they found that the papers rarely acknowledged the failure to replicate, mentioning it only 12% of the time.
Well, nobody likes a Debbie Downer, do they?
Journal Reference:
Marta Serra-Garcia, Uri Gneezy. Nonreplicable publications are cited more than replicable ones [open], Science Advances (DOI: 10.1126/sciadv.abd1705)
(Score: 4, Insightful) by Opportunist on Thursday July 01 2021, @11:32AM (15 children)
The reason for this is easy to explain. What do you think gets more spotlight: a paper that basically confirms what was already established, or something that claims to fundamentally shake established knowledge and turn the world upside down because large parts of what we thought was true have to be rewritten?
Now add that science has spent decades doing rigorous replication testing precisely to ensure that what we know is actually more than a bunch of hunches, and it should be very obvious why this result shouldn't come as any surprise.
(Score: 3, Interesting) by Anonymous Coward on Thursday July 01 2021, @01:06PM (5 children)
This is a good reminder to researchers (of any sort) that citing a paper means you read it. Well... at least you scanned it (one can hope)... and didn't just crib your list of cites from some other paper.
A cite doesn't mean that you verified the results of that paper. It usually means that you incorporated ideas (or even words, properly referenced of course) from that paper into your own work and paper. It doesn't always mean that you agree with the results of the cite.
My take is that citation counts are about like mod points here--a popularity contest? I may look at mods, but they don't mean all that much.
(Score: 3, Insightful) by looorg on Thursday July 01 2021, @02:10PM (1 child)
That is the best-case scenario. A sad and quite likely alternative: my research assistant searched for papers that could back up what we are doing, and your paper matched on a keyword level or was related somehow, so we include you to pad our citation count. After all, if we cite you, we are more likely to be cited ourselves when the next person looking for someone to cite finds you, and then also us. It's a gigantic citation-circlejerk.
(Score: 4, Interesting) by JoeMerchant on Thursday July 01 2021, @04:12PM
I remember working with "lab partners" in social sciences in Junior College - and this is the best possible behavior I could imagine coming out of any of them. Whatever the minimum possible effort to meet the requirements, and often less, is what I saw the majority of them doing - even the ones who were pursuing it as their major and potential career.
Ukraine is still not part of Russia. Glory to Ukraine 🌻 https://news.stanford.edu/2023/02/17/will-russia-ukraine-war-end
(Score: 2) by DeathMonkey on Thursday July 01 2021, @03:14PM
And of course we only know it's not reproducible because somebody tried to reproduce it. And that failed reproduction would also increase the citation count!
(Score: 4, Insightful) by Thexalon on Thursday July 01 2021, @05:28PM
Among other things: "Here's a Detailed Explanation of Why Popular Study X is Wrong" will invariably cite the study it's attempting to debunk. So well-hyped-and-wrong research beats obscure-and-right research on citation counts every time. If you're looking for a lot of citations, forget trying to get accepted at a prominent conference or published in Nature; what you really want is for your study to be sensational enough, or financially backed enough, that it shows up on CNN.
And if you think that leads to bunk research, you're absolutely right.
The only thing that stops a bad guy with a compiler is a good guy with a compiler.
(Score: 2) by Opportunist on Friday July 02 2021, @06:33AM
So much for the theory.
In practice, a cite usually means that whoever cited the paper found it in a keyword search, gave it a cursory read to see whether it supports or contradicts their argument, and, in the former case, used it.
(Score: 4, Interesting) by driverless on Thursday July 01 2021, @02:19PM (7 children)
It's also a bit of a special case; social science is barely science and more in the realm of woo-woo. A friend of mine started studying it at Uni and got into repeated arguments with the lecturer over the lecturer's total lack of understanding of even basic statistical methods, and eventually quit and switched to another field. That episode did not inspire confidence in the amount of actual science present in "social science".
(Score: 5, Insightful) by DeathMonkey on Thursday July 01 2021, @03:16PM (5 children)
It's just a lot harder to prove things about such a complicated system. That doesn't make it fake.
(Score: 5, Insightful) by Socrastotle on Thursday July 01 2021, @03:57PM (2 children)
And what would? This is the most insidious problem with pseudo sciences. They generally not only cannot be tested, but they also cannot be falsified. And so their belief or doubt rests largely on cultural, rather than scientific, norms.
Astrology is an obvious example. For the vast majority of its life, astrology was a scholarly science, not especially different from astronomy. It was believed that the positioning and behavior of the stars and bodies in the universe would influence the individuals born under them. Why? Well, let me turn that around: prove it's fake. Simply put, you cannot. You might show that astrological predictions do not hold true, yet the same is true of those within social science. The observations that do come true, whether by noise or by various confounding factors, will be held up as evidence of its soundness - identical to the social sciences.
In fact it was ultimately ended only by another unfalsifiable entity that was more influential. Around the 17th century, the Roman Catholic Church felt that the implications of astrology were incompatible with the church's notions of free will and so on, and so it was relegated from science to superstition. Incidentally, that ties directly back into this issue: "scientific" astrology briefly made a comeback in the late 20th century. Carl Jung, the founder of analytical psychology, was a major advocate of astrology and pursued it as a scholarly component of psychology. Let us thank our lucky stars that at least this component of psychology was left in the past. Now for the rest of it...
(Score: 1, Insightful) by Anonymous Coward on Thursday July 01 2021, @06:48PM (1 child)
You are holding an impossible standard.
Let's say I gave you a coin, and you flipped it 10000 times, coming up heads 7000 times and tails 3000 times. Can you "prove" that the coin will come up heads next time you flipped it? No... but does your inability to falsify that make the information useless?
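The coin example is easy to make concrete. Here is a minimal stdlib-Python sketch (not from the article; the numbers come from the comment above) showing that, while you can't "prove" the next flip, 7,000 heads in 10,000 flips is overwhelming evidence the coin is biased:

```python
import math

# 7,000 heads in 10,000 flips, as in the example above
heads, flips = 7000, 10000
p_hat = heads / flips  # observed frequency of heads

# 95% confidence interval for P(heads), normal approximation to the binomial
se = math.sqrt(p_hat * (1 - p_hat) / flips)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

# How far the observation sits from the fair-coin hypothesis p = 0.5
z = (p_hat - 0.5) / math.sqrt(0.5 * 0.5 / flips)

print(f"estimated P(heads) = {p_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
print(f"{z:.0f} standard errors away from a fair coin")
```

The estimate lands 40 standard errors from fairness: unfalsifiable about the next single flip, but extremely informative about the coin.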
I fully agree that social sciences can be more abused than others. I'm thinking particularly of "gender studies," among other things. However, those acting in good faith can and do still do quality work which provides value (see: marketing).
That it is difficult-to-impossible to do double-blind studies doesn't make it useless information... any more than the fact that "we can't predict what the precise temperature will be 30 days from now" means that "all climate science is a hoax."
(Note the caveat of "acting in good faith". Those acting in bad faith can do exceptional damage in science in general, and even more in the soft sciences.)
(Score: 2) by Socrastotle on Friday July 02 2021, @06:31AM
I'm not just poking at the probabilistic nature of things. Quantum mechanics, for instance, is inherently probabilistic, yet few would call it littered with fake assertions. The issue is this: if you tell me that you flipped a coin 10,000 times and it came up heads 7,000 times, then, assuming you were honest, when I repeat your experiment I'm also going to get extremely close to 7,000 heads, with some slight range for variation. The problem we're running into in the social sciences is that, instead, people are getting 2,300 heads far more often than not. And that means the original experiment was invalid. The reason for that invalidity can be many things, but one of the biggest concerns is p-hacking. [nih.gov]
p-hacking covers many practices, but one of the most obvious is after-the-fact data dredging. Imagine you measure a large number of variables about something, and then you run 10,000 trials. As your number of variables increases, you're going to find more and more patterns in the data that mean absolutely nothing. For instance, US spending on science is strongly correlated with suicides by hanging. Of course that's obviously a spurious correlation [tylervigen.com], but it's only obvious because those two variables "obviously" (another danger, but that's another topic) have nothing to do with one another. In fact that correlation, at 99.79%, is far stronger than most published correlations carrying the ever-implied-but-never-explicitly-stated suggestion of causation.
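The data-dredging effect is easy to demonstrate with a toy simulation (a stdlib-Python sketch, not anything from the article; the variable counts are arbitrary): generate a few dozen series of pure noise, then search all pairs for the strongest correlation. Something "significant-looking" always turns up.

```python
import random
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(42)  # fixed seed so the illustration is repeatable

# 50 completely unrelated "variables", each measured over 20 "years"
variables = [[random.gauss(0, 1) for _ in range(20)] for _ in range(50)]

# Dredge all 1,225 pairs for the strongest correlation among pure noise
best = max(abs(pearson(a, b)) for a, b in combinations(variables, 2))
print(f"strongest |r| among pure-noise pairs: {best:.2f}")
```

None of these variables has anything to do with any other, yet the best pair will correlate strongly, purely because so many pairs were searched.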
But any good social scientist who wants to engage in proper p-hacking will only measure variables that sound plausibly related to their study. And so when they find completely unrelated variables that *sound* possibly related: boom - patch up that hypothesis a bit, and publish. Of course the next scientist who tries to carry out the experiment will find no such pattern, but you published, got some grants, and padded out your CV - so who cares? Poking at astrology again: imagine an astrologer observes that people born when Venus was closest to Earth, and also blocking out Mars, had children with a fertility rate 37% higher than average. It's pretty easy to see how you can now spin this into a causal astrological effect, using past data (which was clearly just correlational) to make future predictions.
And with enough hand-waving you'll be able to show it again in the future. If it doesn't work out one way or another, just add, "Ahh! Of course, we also need to consider the relationship of Jupiter in the picture." Or perhaps it now has to do with these two events and their relationship to the equinox. Just make your model more and more complex so that it always shows what you want it to show, even when it is all based on a completely spurious correlation. This, in general, is also why the introduction of computer models and computer-aided data digging has undoubtedly had a hugely negative impact on science, even though, on the surface, they sound like things that would be unimaginably positive for scientific pursuits. Now, with computers and computer-generated models, finding a new spurious correlation can be done at the press of a button. And that's before we even get into "AI"...
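The "test many hypotheses, publish the hits" pattern described above can also be sketched directly (again a stdlib-Python toy, with made-up study counts): run a pile of studies where the null hypothesis is true by construction, and count how many clear the conventional p < 0.05 bar anyway.

```python
import math
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_studies, n = 200, 50
significant = 0
for _ in range(n_studies):
    # Two groups drawn from the SAME distribution: no real effect exists
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)  # known unit variance in both groups
    if two_sided_p(diff / se) < 0.05:
        significant += 1

print(f"{significant} of {n_studies} null studies hit p < 0.05")
```

Roughly 5% of the null studies come out "significant" by chance alone. Publish only those, and the literature fills with effects that the next replication attempt will fail to find.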
(Score: 2) by Freeman on Thursday July 01 2021, @07:37PM
I know, I know, but why are we talking about Facebook's moderation system?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by driverless on Friday July 02 2021, @06:51AM
It wasn't that they had trouble proving anything, it was that they had no idea how statistics worked. The data was probably all there, but their ability to analyse it and draw conclusions was missing. As a result you couldn't draw any conclusions from any results they published without going through the analysis yourself to see whether they'd got it right. Going from my friend's experience - this was a first-year undergrad student having to stop and correct the errors in analysis being made by a tenured professor - I wouldn't put much faith in the reports being published.
(Score: 0) by Anonymous Coward on Thursday July 01 2021, @09:09PM
https://xkcd.com/435/ [xkcd.com]
(Score: 0) by Anonymous Coward on Thursday July 01 2021, @02:56PM
The approval of an Alzheimer's drug with zero proof that it works is just the latest. "We have to give people hope." You don't do that by fraud. But of course when your paycheck/bribe depends on it, you will justify anything. Just ask the Nazis - we were just following orders.
And it's only with the discovery of mass graves of children that Canada is finally having to admit that it was straight-up genocide, not "only cultural genocide." Lying, smug, we're-better-than-you fucks. (- an angry Canadian)