

posted by martyb on Thursday November 14 2019, @12:31AM   Printer-friendly
from the I-don't-want-knowledge-I-want-certainty dept.

Jeremy P. Shapiro, a professor of psychology at Case Western Reserve University, has an article on The Conversation about one of the main cognitive errors at the root of science denial: dichotomous thinking, where entire spectra of possibilities are turned into dichotomies, and the division is usually highly skewed. Either something is perfect or it is a complete failure, either we have perfect knowledge of something or we know nothing.

Currently, there are three important issues on which there is scientific consensus but controversy among laypeople: climate change, biological evolution and childhood vaccination. On all three issues, prominent members of the Trump administration, including the president, have lined up against the conclusions of research.

This widespread rejection of scientific findings presents a perplexing puzzle to those of us who value an evidence-based approach to knowledge and policy.

Yet many science deniers do cite empirical evidence. The problem is that they do so in invalid, misleading ways. Psychological research illuminates these ways.

[...] In my view, science deniers misapply the concept of “proof.”

Proof exists in mathematics and logic but not in science. Research builds knowledge in progressive increments. As empirical evidence accumulates, there are more and more accurate approximations of ultimate truth but no final end point to the process. Deniers exploit the distinction between proof and compelling evidence by categorizing empirically well-supported ideas as “unproven.” Such statements are technically correct but extremely misleading, because there are no proven ideas in science, and evidence-based ideas are the best guides for action we have.

I have observed deniers use a three-step strategy to mislead the scientifically unsophisticated. First, they cite areas of uncertainty or controversy, no matter how minor, within the body of research that invalidates their desired course of action. Second, they categorize the overall scientific status of that body of research as uncertain and controversial. Finally, deniers advocate proceeding as if the research did not exist.

Dr. David "Orac" Gorski has further commentary on the article. Basically, science denialism works by exploiting the very human need for absolute certainty, which science can never truly provide.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Interesting) by Anonymous Coward on Thursday November 14 2019, @06:25AM (16 children)

    by Anonymous Coward on Thursday November 14 2019, @06:25AM (#920225)

    Almost no bad science is retracted.

    I don't think people quite realize the scope of the replication crisis. [wikipedia.org] One journal, the Journal of Personality and Social Psychology, is quite highly regarded within the field and constantly makes headlines on sites such as the New York Times and on social media, because its findings confirm all sorts of rather extreme ideological biases. For instance: 'Why High-Class People Get Away With Incompetence' [nytimes.com], brought to you by the Journal of Personality and Social Psychology.

    That journal has a replication success rate of 23%. In other words, if you took any given study in that journal and said it was bunk, you'd be right 77% of the time. Replication efforts across the entire field of social psychology had a success rate of 25%. In my opinion psychology, and without any doubt social psychology, is modern-day astrology. There is absolutely no reason to believe that the interaction of groups of people results in persistent patterns of behavior that can be generalized in any meaningful way. Why do we believe this? Well, why did we believe that when you were born had persistent effects on your behavior and interactions? So long as the things you say don't sound completely wrong and at least occasionally hold true in some situations, it's hard to call them completely wrong. It can't all just be coincidences, can it? Surely they just need refinement...

    Suffice it to say that science today is in quite bad shape. That makes this post, written by a psychologist, ironic in more ways than one. The first is that it claims the problem of "science denialism" is one of dichotomous thinking while, presumably without intended irony, treating "science denialism" itself as a dichotomy. Apparently you must "believe in" all science, or no science? One can only imagine why a psychologist might hope to frame the issue that way... He then appeals to social psychological research to support his argument. Beautiful!

  • (Score: 0) by Anonymous Coward on Thursday November 14 2019, @06:38AM (14 children)

    by Anonymous Coward on Thursday November 14 2019, @06:38AM (#920228)

    As an addendum to this, here [phys.org] is a list of articles from the esteemed Journal of Personality and Social Psychology that made their way onto phys.org. [soylentnews.org]

      - Women CEOs judged more harshly than men for corporate ethical failures

      - Researchers confirm that people judge entire groups of people based on the performance of its 'first member'

      - White people struggle to perceive emotion on black people's faces

      - Love your job? Someone may be taking advantage of you

      - Looks matter when it comes to success in STEM

    And much, much more undoubtedly unreplicable [click/race/sex/class]baiting. It's real tough to figure out why people have lost faith in science, isn't it? As an aside, most of these articles have comparable coverage in the NYTimes or other sensationalizing outlets. Phys.org is quite an excellent resource; I'm only referencing them since they provide the ability to sort publications by journal, which makes it easy to see what this journal's output looks like without the necessity of bypassing paywalls.

    • (Score: 4, Informative) by ikanreed on Thursday November 14 2019, @02:23PM (13 children)

      by ikanreed (3164) Subscriber Badge on Thursday November 14 2019, @02:23PM (#920336) Journal

      I love it, "because I disagree with what the evidence says, it must be bad evidence" is exactly why I don't trust you dumbfucks to judge a goddamn thing.

      • (Score: 2, Interesting) by khallow on Thursday November 14 2019, @02:53PM (8 children)

        by khallow (3766) Subscriber Badge on Thursday November 14 2019, @02:53PM (#920352) Journal
        Remember, evidence is information that distinguishes between hypotheses. The cited research is all p-hacking. It might be true, but there's a huge chance that the research found some green jelly beans [xkcd.com]. For those keeping score, that makes it not evidence unless the significance threshold is pushed well below the probability of getting a spurious result by random chance.
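        (The green-jelly-beans problem is easy to simulate. A minimal sketch in Python; the significance threshold, test count, and trial count below are illustrative assumptions, not numbers from the paper under discussion:)

        import random

        # Illustrative simulation: test many hypotheses where NO real effect exists
        # and count how often at least one comes out "significant" anyway.
        random.seed(42)
        ALPHA = 0.05          # conventional significance threshold
        N_TESTS = 20          # e.g. 20 jelly bean colours
        N_EXPERIMENTS = 10000

        hits = 0
        for _ in range(N_EXPERIMENTS):
            # Under the null hypothesis, each p-value is uniform on [0, 1].
            p_values = [random.random() for _ in range(N_TESTS)]
            if any(p < ALPHA for p in p_values):
                hits += 1

        # Prints roughly 0.64, i.e. 1 - 0.95**20.
        print("P(>=1 spurious 'significant' result | no real effect) =",
              round(hits / N_EXPERIMENTS, 2))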
        • (Score: 3, Informative) by ikanreed on Thursday November 14 2019, @03:10PM (7 children)

          by ikanreed (3164) Subscriber Badge on Thursday November 14 2019, @03:10PM (#920357) Journal

          Ah yes, more completely untrue things you "know". Exactly what evidence of p-hacking do you find in the first listed paper [apa.org]?

          Their methodology section for the first experiment has two independent variables, very reasonable for the hypothesis they were testing, and two dependent variables. That's quite reasonable. Especially for a p < 0.01 result.

          They only sample once, with a large population. The effect size for the interaction effect was dramatic: 1 point on a five-point scale.
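          (That claim is easy to sanity-check. A rough sketch assuming about 170 participants per condition, a response standard deviation of about 1.2 scale points, and a true 1-point group difference; these are my assumptions for illustration, not figures from the paper:)

          import random
          import statistics

          # Rough power check: with ~170 participants per condition and a true
          # 1-point difference on the rating scale, how large is the t-statistic?
          random.seed(0)
          N_PER_GROUP = 170
          SD = 1.2   # assumed response noise in scale points

          control = [random.gauss(3.5, SD) for _ in range(N_PER_GROUP)]
          treated = [random.gauss(2.5, SD) for _ in range(N_PER_GROUP)]  # 1 point lower

          mean_diff = statistics.mean(control) - statistics.mean(treated)
          se = ((statistics.variance(control) + statistics.variance(treated))
                / N_PER_GROUP) ** 0.5
          t_stat = mean_diff / se

          # With ~338 degrees of freedom, |t| > 2.6 already means p < 0.01;
          # here t comes out around 7 to 8, far beyond that.
          print("mean difference =", round(mean_diff, 2), " t =", round(t_stat, 1))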

          Exactly what evidence do you have that it incorporates p-hacking besides the fact that it challenges your shitty worldview?

          • (Score: 1) by khallow on Thursday November 14 2019, @03:24PM (6 children)

            by khallow (3766) Subscriber Badge on Thursday November 14 2019, @03:24PM (#920367) Journal

            In the first experiment, 512 participants read a business news article about an auto manufacturer and then filled out a survey about their intent to buy a vehicle from the company. One-third of the participants read about an ethical failure, one-third read about a competence failure and the final third only read the company description. Afterward, the participants were asked how likely they were to purchase a car from the company the next time they were in the market for a vehicle and reported their trust in the organization (e.g., "I feel that XYZ automobiles is very dependable/undependable, very competent/incompetent or of low integrity/high integrity").

            No mention of how many questions were asked or the significance of the alleged results.

            • (Score: 2) by ikanreed on Thursday November 14 2019, @04:07PM (5 children)

              by ikanreed (3164) Subscriber Badge on Thursday November 14 2019, @04:07PM (#920383) Journal

              That's quite a large sample size for so few discrete variables, and that's not p-hacking. You said "p-hacking", not "the analysis had subjective inputs, which I find objectionable for vague and unstated reasons".

              One is fraud, the other is you objecting to basically sound methodology.

              • (Score: 1) by khallow on Thursday November 14 2019, @08:01PM (4 children)

                by khallow (3766) Subscriber Badge on Thursday November 14 2019, @08:01PM (#920477) Journal

                for so few discrete variables

                Each question would be at least one discrete variable.

                You said "p-hacking", not "the analysis had subjective inputs, which I find objectionable for reasons vague and unstated reasons".

                Enough "subjective inputs" and you're get spurious outputs just from random chance.

                • (Score: 3, Informative) by ikanreed on Thursday November 14 2019, @08:49PM (3 children)

                  by ikanreed (3164) Subscriber Badge on Thursday November 14 2019, @08:49PM (#920493) Journal

                  She's done the same 4 fucking measures on every one of her previous research papers, and always used the same one primary outcome measure in all of them: intent to purchase.

                  Brand Attitude: Bad/Good (Spears & Singh), 1-7
                  Brand Attitude: Unpleasant/Pleasant (Spears & Singh), 1-7
                  Brand Attitude: Unfavorable/Favorable (Spears & Singh), 1-7
                  Purchase Intent: Likelihood to purchase this product? (Zafar & Rafique), 1 = very unlikely to 7 = very likely

                  It's pure fantasy that you've built your sense of "knowing bad science when you see it" out of. Pure fucking fantasy.

                  She does subsequent studies in the same paper that affirm the original effect and do factor analysis of its causes. Now I suspect we could repeat this whole fucking exercise for any of the studies the original Anon referenced, but the fact is that it won't matter.

                  You'll still be the same person tomorrow that you are today, and I can't imagine this conversation is going to move you towards some reform where you try to do genuine, thorough analysis of the methodologies in papers, rather than working backwards from whether you like the conclusion*. It doesn't so much bother me that I've wasted so much time on this conversation, nor that you won't even consider for a moment what you'd actually want from analytical social psychology and couldn't even begin to describe what standards you would enforce, nor even that you're not going to acknowledge how far the goalposts have slid in just a couple of posts. Those are all bog-standard problems for internet arguments. No, the problem is that in spite of all that, you think your casual examination instantly reveals the problems, like this shit is fucking easy.

                  Dunning Kruger is an overplayed term, but you don't have anywhere near the meta-cognitive skills needed to tell you why your approach sucks so goddamn bad.

                  • (Score: 3, Informative) by barbara hudson on Thursday November 14 2019, @10:49PM

                    by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Thursday November 14 2019, @10:49PM (#920529) Journal
                    The big problem with this is that conducting the study itself changes the results. It's like taking three thermometers and using them to test the temperature of a small test tube of water, with one thermometer at room temperature, one pre-chilled with liquid nitrogen, and one preheated in boiling water. The act of putting a thermometer in the test tube is going to change the temperature of the water unless the water was already at the same temperature as the thermometer.

                    Testing for trust should not include any questions that directly influence trust; it's not our problem if they are too stupid to test trust in a way that can be shown not to influence the responses. Studies designed to test for trust need to be better designed so that they don't have an observer effect. Any cop / lawyer / HR droid will tell you that the questions you ask determine the answers you get.

                    "Ceçi n'est pas la science" (with apologies to Rene Magritte and his picture of a pipe similarly captioned) https://en.m.wikipedia.org/wiki/The_Treachery_of_Images. [wikipedia.org]

                    --
                    SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
                  • (Score: 1) by khallow on Friday November 15 2019, @01:22AM

                    by khallow (3766) Subscriber Badge on Friday November 15 2019, @01:22AM (#920565) Journal

                    She's done the same 4 fucking measures

                    Nonsense. In addition to the alleged measures, we have that the person taking the survey is male or female, and the target product has a male or female CEO. That increases to at least 16 parameters per paper (and probably a lot more than that). And you admitted there are several papers too. So even in the complete absence of any real correlation, odds are good that we'd see one or more results at the 0.01 significance level and quite a few at the 0.05 level - even without any systemic bias.

                    Further, there are many questions behind those four measures. That greatly increases the actual number of parameters in this study.
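                    (The arithmetic behind that can be made explicit. A small sketch assuming 16 independent comparisons per paper and, say, 5 papers; both counts are illustrative assumptions, and real comparisons are rarely fully independent:)

                    # Family-wise false-positive odds with many comparisons and no true
                    # effect. The 16-comparisons-per-paper and 5-papers figures are
                    # illustrative assumptions, not an audit of the actual papers.
                    def p_at_least_one(alpha, n_tests):
                        """Chance of >= 1 spurious 'significant' result under the null."""
                        return 1 - (1 - alpha) ** n_tests

                    for alpha in (0.05, 0.01):
                        print("alpha =", alpha,
                              " per paper:", round(p_at_least_one(alpha, 16), 2),
                              " across 5 papers:", round(p_at_least_one(alpha, 16 * 5), 2))

                    # alpha = 0.05: per paper ~0.56, across 5 papers ~0.98
                    # alpha = 0.01: per paper ~0.15, across 5 papers ~0.55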

                    You'll still be the same person tomorrow you are today,

                    Not at all, though the change over the course of a day is usually slight.

                    It doesn't so much bother me that I've wasted so much time with this conversation, nor is it that you won't even consider for a moment what you'd actually want from analytical social psychology and couldn't even begin to describe what standards you would enforce, nor even that you're not going to acknowledge how far the goalposts have slid in just a couple posts.

                    It doesn't bother me either that you've wasted time. What bothers me is that less than a quarter of such papers are reproducible. p-hacking is one of the mechanisms that makes this happen.

                  • (Score: 0) by Anonymous Coward on Friday November 15 2019, @09:24AM

                    by Anonymous Coward on Friday November 15 2019, @09:24AM (#920640)

                    I've wasted so much time with this conversation

                    Naw. AC here, I benefited from your insight. I may/not be able to bring that improvement in myself back around to bear at soylent, but there's a nonzero chance, in which case you floated all these soyboats a bit higher.

      • (Score: 0) by Anonymous Coward on Thursday November 14 2019, @03:56PM (3 children)

        by Anonymous Coward on Thursday November 14 2019, @03:56PM (#920377)

        Imagine we were discussing an issue, and I decided to cite something from a site where you knew 77% of what was published was fake or, at a minimum, inaccurately represented. Would you think I was concerned about the legitimacy of what was said, or would you think I was referencing it because it confirms my biases - truthfulness be damned? How then do you not see the irony in suggesting that declaring most of what such a site publishes to be fake is a generally more valid position than clinging to the 23% that may be accurate?

        And that is a big maybe. The reason is that replication doesn't mean a study is accurate. It simply means that they probably didn't make up or p-hack their data. It says absolutely nothing about the logic or hypothetical validity of what is said. And while such things would ideally be filtered out in peer review, the numerous hoaxes, to which social science journals in particular are especially vulnerable, show that they're happy to publish things that are intentionally nonsensical so long as they seem to confirm the editor's and/or reviewers' biases. So the percentage of generally reliable and meaningful studies on that site is going to be a subset of the 23% that pass even this most primitive method of testing them.

        • (Score: 2) by ikanreed on Thursday November 14 2019, @06:54PM (2 children)

          by ikanreed (3164) Subscriber Badge on Thursday November 14 2019, @06:54PM (#920465) Journal

          I think 77% fake or inaccurate would be a big deal, and the fact that you're being so fucking bullshit right now is why ignoring you is a good idea.

          • (Score: 1) by khallow on Friday November 15 2019, @01:24AM (1 child)

            by khallow (3766) Subscriber Badge on Friday November 15 2019, @01:24AM (#920570) Journal

            I think 77% fake or inaccurate would be a big deal, and the fact that you're being so fucking bullshit right now is why ignoring you is a good idea.

            So is 77% fake or inaccurate a big deal to you?

  • (Score: 2, Insightful) by Anonymous Coward on Friday November 15 2019, @02:16AM

    by Anonymous Coward on Friday November 15 2019, @02:16AM (#920586)

    The three cornerstones of science are:

    1: Predictability - it makes testable predictions.
    2: Repeatability - if I say that doing A + B + C gives you D, you should be able to repeat the experiment and get D.
    3: Falsifiability - there must be some possible observation that could show the claim to be wrong.