Jeremy P. Shapiro, a professor of psychology at Case Western Reserve University, has an article on The Conversation about one of the main cognitive errors at the root of science denial: dichotomous thinking, where entire spectra of possibilities are turned into dichotomies, and the division is usually highly skewed. Either something is perfect or it is a complete failure, either we have perfect knowledge of something or we know nothing.
Currently, there are three important issues on which there is scientific consensus but controversy among laypeople: climate change, biological evolution and childhood vaccination. On all three issues, prominent members of the Trump administration, including the president, have lined up against the conclusions of research.
This widespread rejection of scientific findings presents a perplexing puzzle to those of us who value an evidence-based approach to knowledge and policy.
Yet many science deniers do cite empirical evidence. The problem is that they do so in invalid, misleading ways. Psychological research illuminates these ways.
[...] In my view, science deniers misapply the concept of “proof.”
Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process. Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.
I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.
Dr. David "Orac" Gorski has further commentary on the article. Basically, science denialism works by exploiting the very human need for absolute certainty, which science can never truly provide.
(Score: 2) by ikanreed on Thursday November 14 2019, @04:07PM (5 children)
That's quite a large sample size for so few discrete variables, and that's not p-hacking. You said "p-hacking", not "the analysis had subjective inputs, which I find objectionable for vague and unstated reasons".
One is fraud, the other is you objecting to basically sound methodology.
(Score: 1) by khallow on Thursday November 14 2019, @08:01PM (4 children)
Each question would be at least one discrete variable.
Enough "subjective inputs" and you'll get spurious outputs just from random chance.
(Score: 3, Informative) by ikanreed on Thursday November 14 2019, @08:49PM (3 children)
She's used the same 4 fucking measures in every one of her previous research papers, and always the same primary outcome measure in all of them: intent to purchase.
It's pure fantasy that you've built your sense of "knowing bad science when you see it" out of. Pure fucking fantasy.
She does subsequent studies in the same paper that affirm the original effect and do factor analysis of its causes. Now I suspect we could repeat this whole fucking exercise for any of the studies the original Anon referenced, but the fact is that it won't matter.
You'll still be the same person tomorrow you are today, and I can't imagine this conversation is going to move you towards some reform where you try to do genuine, thorough analysis of methodologies in papers, rather than working backwards from if you like the conclusion*. It doesn't so much bother me that I've wasted so much time with this conversation, nor is it that you won't even consider for a moment what you'd actually want from analytical social psychology and couldn't even begin to describe what standards you would enforce, nor even that you're not going to acknowledge how far the goalposts have slid in just a couple posts. Those are all bog standard problems for internet argument. No, the problem is that in spite of all that, you think your casual examination instantly tells you problems, like this shit is fucking easy.
Dunning Kruger is an overplayed term, but you don't have anywhere near the meta-cognitive skills needed to tell you why your approach sucks so goddamn bad.
(Score: 3, Informative) by barbara hudson on Thursday November 14 2019, @10:49PM
Testing for trust should not include any questions that directly influence trust; it's not our problem if they are too stupid to test trust in a way that can be shown not to influence the responses. Studies designed to test for trust need to be better designed so that they don't have an observer effect. Any cop / lawyer / HR droid will tell you that the questions you ask determine the answers you get.
"Ceci n'est pas la science" (with apologies to René Magritte and his similarly captioned painting of a pipe) https://en.m.wikipedia.org/wiki/The_Treachery_of_Images [wikipedia.org]
SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
(Score: 1) by khallow on Friday November 15 2019, @01:22AM
Nonsense. In addition to the alleged measures, we have that the person taking the survey is male or female, and the target product has a male or female CEO. That increases the count to at least 16 parameters per paper (and probably a lot more than that). And you admitted there are several papers too. So even in the complete absence of any correlation or systemic bias, odds are good that we'd see one or more results at the 0.01 significance level and quite a few at 0.05.
Further, there are many questions behind those four measures. That greatly increases the actual number of parameters in this study.
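The multiple-comparisons point above can be sketched numerically. This is a minimal simulation, not anyone's actual analysis: it assumes 16 independent tests per paper at the 0.05 threshold mentioned above (real survey measures are correlated, so the true false-positive rate would differ), and asks how often at least one test comes up "significant" when there is no real effect at all.

```python
import random

random.seed(0)

ALPHA = 0.05     # significance threshold from the comment above
N_PARAMS = 16    # hypothetical parameter count from the comment above
N_SIMS = 10_000  # number of simulated "papers"

# Under the null hypothesis, each p-value is uniform on [0, 1].
# Count how many simulated papers report at least one p < ALPHA
# purely by chance.
false_positive_papers = 0
for _ in range(N_SIMS):
    if any(random.random() < ALPHA for _ in range(N_PARAMS)):
        false_positive_papers += 1

rate = false_positive_papers / N_SIMS
analytic = 1 - (1 - ALPHA) ** N_PARAMS  # exact probability for independent tests
print(f"simulated: {rate:.3f}  analytic: {analytic:.3f}")
```

With 16 independent tests, the analytic chance of at least one spurious hit is about 56%, which is the "odds are good" claim made quantitative.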
Not at all, though the change over the course of a day is usually slight.
It doesn't bother me either that you've wasted time. What bothers me is that less than a quarter of such papers are reproducible. p-hacking is one of the mechanisms that makes this happen.
(Score: 0) by Anonymous Coward on Friday November 15 2019, @09:24AM
Naw. AC here, I benefited from your insight. I may/not be able to bring that improvement in myself back around to bear at soylent, but there's a nonzero chance, in which case you floated all these soyboats a bit higher.