
posted by martyb on Friday January 22 2016, @10:12PM
from the nothing-to-see-here dept.

Paul Meehl offered what is probably the most apt explanation of why some areas of science have made more progress than others over the last 70 years or so. Remarkably, he pointed this out in 1967, yet it seemingly had no effect on standard practice:

Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching ½ of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics. This problem is worsened by certain unhealthy tendencies prevalent among psychologists, such as a premium placed on experimental "cuteness" and a free reliance upon ad hoc explanations to avoid refutation.

Meehl, Paul E. (1967). "Theory-Testing in Psychology and Physics: A Methodological Paradox". Philosophy of Science 34 (2): 103–115. https://dx.doi.org/10.1086%2F288135 (free copy: http://cerco.ups-tlse.fr/pdf0609/Meehl_1967.pdf )
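
A rough simulation can make the quoted argument concrete (the sketch below is ours, not Meehl's, and the "crud" effect size and sample sizes are arbitrary). In a soft field the null hypothesis of exactly zero difference is essentially never true, so as sample size grows a significance test rejects it with probability approaching one, and a theory that merely predicted the direction of the difference is "confirmed" about half the time regardless of its merit:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def reject_rate(n, crud=0.05, trials=2000):
        # Fraction of trials in which a tiny, theoretically irrelevant
        # true effect (standardized size = crud) yields p < 0.05.
        hits = 0
        for _ in range(trials):
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(crud, 1.0, n)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                hits += 1
        return hits / trials

    for n in (50, 500, 5000):
        print(n, reject_rate(n))
    # The rejection rate climbs toward 1 purely as a function of n, so a
    # "significant difference in the predicted direction" corroborates little.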

There are many science articles posted to this site that fall foul of his critique, probably because researchers are not aware of it. In short, this (putatively fatally flawed) research attempts only to reject a null hypothesis rather than to test a risky research hypothesis; the sketch after the video links below makes the contrast concrete. Videos of some of his lectures are available online:
http://www.psych.umn.edu/meehlvideos.php

Session 7 starting at ~1hr is especially good.
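
To see why Meehl says significance plays "precisely the reverse" role in the two fields, here is a sketch of both logics (our illustration; the theory value, tolerance, and data are invented). NHST asks whether an effect differs from zero, which only gets easier with more data; a physics-style point prediction asks whether the measurement falls inside a tolerance band around the predicted value, here via two one-sided tests, a hurdle that gets harder as precision improves:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    theory_value = 9.81   # hypothetical numerical prediction from a theory
    tolerance = 0.05      # hypothetical tolerance band around the prediction

    data = rng.normal(9.83, 0.20, 200)   # simulated measurements

    # NHST logic: show the mean differs from zero -- trivially easy here.
    print("NHST p:", stats.ttest_1samp(data, 0.0).pvalue)

    # Point-prediction logic: the mean must lie inside
    # [theory - tolerance, theory + tolerance]; both one-sided tests must reject.
    p_low = stats.ttest_1samp(data, theory_value - tolerance, alternative='greater').pvalue
    p_high = stats.ttest_1samp(data, theory_value + tolerance, alternative='less').pvalue
    print("point-prediction p:", max(p_low, p_high))   # small only if data match the theory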


Original Submission

 
  • (Score: 0) by Anonymous Coward on Friday January 22 2016, @11:28PM (#293381)

    The error is in thinking Physics and Psychology should use the same approach. P-hacking and poor design are bigger issues.

  • (Score: 1, Interesting) by Anonymous Coward on Friday January 22 2016, @11:41PM (#293382)

    You can also get "reverse" p-hacking, where anomalies (with respect to the research hypothesis or theory) are explained away by adding more and more sources of error. Check out the literature on the Pioneer anomaly, for example (not saying those explanations are wrong, just that you could keep adding sources of uncertainty until no result deviates significantly from the prediction): https://en.wikipedia.org/wiki/Pioneer_anomaly [wikipedia.org]
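
    A toy calculation (the numbers are invented, not taken from the Pioneer analyses) of how stacking error sources can "explain away" an anomaly:

        import math

        anomaly = 8.0e-10          # hypothetical deviation from the prediction
        sigma_budget = 1.0e-10     # original error budget: z = 8, very significant

        added_systematics = [3.0e-10, 4.0e-10, 5.0e-10]   # invented extra error sources

        var = sigma_budget ** 2
        for s in added_systematics:
            var += s ** 2          # independent uncertainties add in quadrature
            z = anomaly / math.sqrt(var)
            print(f"sigma = {math.sqrt(var):.2e}  ->  z = {z:.2f}")
        # z falls from 8 to about 1.1 as sources are stacked, so the anomaly is
        # no longer "significant" -- the mirror image of hunting for p < 0.05.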

  • (Score: 1, Touché) by Anonymous Coward on Saturday January 23 2016, @12:06AM (#293395)

    Indeed. We should lower our standard of evidence so that junk science like what the social sciences put out can meet it.

    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:44AM (#293443)

      Do you think that data collected about how a drug affects the severity of psychotic episodes in schizophrenics over the course of a year should be analysed the same as data collected about the weight of an atom or the temperature of a star?

      • (Score: 1, Informative) by Anonymous Coward on Saturday January 23 2016, @01:53AM (#293445)

        Not sure if you are making a joke, but schizophrenia was Meehl's clinical area of expertise. If not a joke, watch the videos.

        • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:42AM (#293458)

          Not a joke. I've bookmarked the video link for when I have time (it's too bad the transcript links are useless).
          People have complex behaviours and are the products of an incredibly noisy system of nature and nurture. Conclusions drawn from even large data sets still have low predictive value for a given individual. I'm sure there is a lot psychologists can learn from physicists, but I'm sceptical that the best way to analyse data in one field would be the same as in another so different.

          • (Score: 1, Interesting) by Anonymous Coward on Saturday January 23 2016, @03:28AM (#293468)

            I was trained to think so too. Then I had data nearly perfectly described by a theory developed in the 1930s. Check out Louis Thurstone and Harold Gulliksen [1]; Gulliksen also ranted against null hypothesis significance testing (NHST) [2]. It appears to me that progress was being made, and then it was largely halted by the adoption of NHST, which accommodated a lack of mathematical training and a corresponding proliferation of BS in psychological and medical research. (The sketch below shows the kind of theory I mean.)
            [1] http://link.springer.com/article/10.1007%2FBF02289265 [springer.com]
            [2] http://www.jstor.org/stable/27827302 [jstor.org]
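
            For the curious: a minimal sketch (my illustration, with made-up proportions) of Thurstone's law of comparative judgment, Case V, the kind of 1930s theory meant above. It predicts every pairwise choice proportion from a one-dimensional scale, a numerical point prediction in Meehl's sense:

                import numpy as np
                from scipy.stats import norm

                # Invented pairwise choice proportions: P[i, j] = P(item i preferred to item j).
                P = np.array([[0.50, 0.76, 0.92],
                              [0.24, 0.50, 0.79],
                              [0.08, 0.21, 0.50]])

                Z = norm.ppf(P)        # Case V model: z_ij = s_i - s_j
                s = Z.mean(axis=1)     # least-squares scale values (up to an additive constant)
                print("scale values:", s)

                # The theory is judged by how closely it reproduces *all* the proportions.
                P_fit = norm.cdf(s[:, None] - s[None, :])
                print("largest residual:", np.abs(P_fit - P).max())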

      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:52PM (#293618)

        No, but I don't think we should let social scientists get away with arbitrarily assuming certain conclusions and disregarding other possibilities, with so many unreproducible studies, or with pretending the data they gathered were objective when the subject is totally subjective and cannot really be objectively measured in the first place (i.e. how people feel).