
posted by martyb on Friday January 22 2016, @10:12PM   Printer-friendly
from the nothing-to-see-here dept.

Paul Meehl is responsible for what is probably the most apt explanation for why some areas of science have made more progress than others over the last 70 years or so. Amazingly, he pointed this out in 1967 and it had seemingly no effect on standard practices:

Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching ½ of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics. This problem is worsened by certain unhealthy tendencies prevalent among psychologists, such as a premium placed on experimental "cuteness" and a free reliance upon ad hoc explanations to avoid refutation.

Meehl, Paul E. (1967). "Theory-Testing in Psychology and Physics: A Methodological Paradox" (PDF). Philosophy of Science 34 (2): 103–115.
https://dx.doi.org/10.1086%2F288135 . Free here: http://cerco.ups-tlse.fr/pdf0609/Meehl_1967.pdf

There are many science articles posted to this site that fall foul of his critique, probably because researchers are not aware of it. In short, this (putatively fatally flawed) research attempts to disprove a null hypothesis rather than a research hypothesis. Videos of some of his lectures are available online:
http://www.psych.umn.edu/meehlvideos.php

Session 7 starting at ~1hr is especially good.
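Meehl's point can be made concrete with a quick simulation. This is an illustrative sketch of my own, not from the paper: assume a tiny but nonzero true group difference (as Meehl argues is essentially always the case in psychology), then run the standard directional test of the nil null hypothesis at two sample sizes.

```python
import math
import random

random.seed(42)

def reject_rate(n, d, trials=4000):
    """Fraction of experiments in which a one-sided test of the nil null
    (H0: zero group difference) comes out 'significant' in the predicted
    direction, at alpha = 0.05."""
    se = math.sqrt(2.0 / n)         # std. error of the mean difference
    hits = 0
    for _ in range(trials):
        diff = random.gauss(d, se)  # observed standardized mean difference
        if diff / se > 1.645:       # one-sided critical value
            hits += 1
    return hits / trials

# A substantively negligible true effect of 0.15 SD -- small, but not
# exactly zero, which is the realistic case Meehl describes.
small_n = reject_rate(50, 0.15)
big_n = reject_rate(2000, 0.15)
print(f"n=50 per group:   reject rate ~ {small_n:.2f}")
print(f"n=2000 per group: reject rate ~ {big_n:.2f}")
```

With small samples the test passes rarely; with large samples it passes almost every time, so a "significant" result in the predicted direction corroborates the theory less and less as precision improves. That is the reversal Meehl is pointing at.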


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday January 22 2016, @11:01PM

    by Anonymous Coward on Friday January 22 2016, @11:01PM (#293374)

    Can anyone explain where he has erred?

    • (Score: 0) by Anonymous Coward on Friday January 22 2016, @11:28PM

      by Anonymous Coward on Friday January 22 2016, @11:28PM (#293381)

      In thinking physics and psychology should use the same approach. P-hacking and poor design are bigger issues.

      • (Score: 1, Interesting) by Anonymous Coward on Friday January 22 2016, @11:41PM

        by Anonymous Coward on Friday January 22 2016, @11:41PM (#293382)

        You can get "reverse" p-hacking as well, where anomalies (with respect to the research hypothesis or theory) are explained away by adding more and more sources of error. Check out the literature on the Pioneer anomaly, for example (not saying they are wrong in the explanations, just that you could keep adding sources of uncertainty until no result deviates significantly from the prediction): https://en.wikipedia.org/wiki/Pioneer_anomaly [wikipedia.org]

      • (Score: 1, Touché) by Anonymous Coward on Saturday January 23 2016, @12:06AM

        by Anonymous Coward on Saturday January 23 2016, @12:06AM (#293395)

        Indeed. We should lower our standard of evidence so that junk science like the social sciences put out can meet it.

        • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:44AM

          by Anonymous Coward on Saturday January 23 2016, @01:44AM (#293443)

          Do you think that data collected about how a drug affects the severity of psychotic episodes in schizophrenics over the course of a year should be analysed the same as data collected about the weight of an atom or the temperature of a star?

          • (Score: 1, Informative) by Anonymous Coward on Saturday January 23 2016, @01:53AM

            by Anonymous Coward on Saturday January 23 2016, @01:53AM (#293445)

            Not sure if you are making a joke, but schizophrenia was Meehl's clinical area of expertise. If not a joke, watch the videos.

            • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:42AM

              by Anonymous Coward on Saturday January 23 2016, @02:42AM (#293458)

              Not a joke. I've bookmarked the video link for when I have time (it's too bad the transcript links are useless).
              People have complex behaviours and are the products of an incredibly noisy system of nature and nurture. Conclusions drawn from even large data sets still have low predictive value for a given individual. I'm sure there is a lot that psychologists can learn from physicists but I'm sceptical that the best way to analyse data in one field would be the same as another that is so different.

              • (Score: 1, Interesting) by Anonymous Coward on Saturday January 23 2016, @03:28AM

                by Anonymous Coward on Saturday January 23 2016, @03:28AM (#293468)

                I was trained to think so too. Then I had data nearly perfectly described by a theory developed in the 1930s. Check out Louis Thurstone and Harold Gulliksen [1]; Gulliksen also ranted against NHST [2]. It appears to me that progress was being made, then largely halted by the adoption of NHST, which allowed a lack of mathematical training and a corresponding proliferation of BS in psychology and medical research.
                [1] http://link.springer.com/article/10.1007%2FBF02289265 [springer.com]
                [2] http://www.jstor.org/stable/27827302

          • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:52PM

            by Anonymous Coward on Saturday January 23 2016, @02:52PM (#293618)

            No, but I don't think we should let social scientists get away with arbitrarily assuming certain conclusions and disregarding other possibilities, with so many unreproducible studies, or with pretending the data they gathered was objective when it concerned a totally subjective matter that cannot really be objectively measured in the first place (i.e. how people feel).

  • (Score: 0) by Anonymous Coward on Friday January 22 2016, @11:50PM

    by Anonymous Coward on Friday January 22 2016, @11:50PM (#293383)

    Bullshit fucking his own bullshit.

    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:37AM

      by Anonymous Coward on Saturday January 23 2016, @01:37AM (#293436)

      Cocksucking motherfucker.

  • (Score: 3, Insightful) by wonkey_monkey on Friday January 22 2016, @11:56PM

    by wonkey_monkey (279) on Friday January 22 2016, @11:56PM (#293386) Homepage

    In short, this (putatively fatally flawed) research attempts to disprove a null hypothesis rather than a research hypothesis.

    To what does "this" refer? Is it referring to some hypothetical piece of research (such as those previously posted to Soylent which the previous paragraph refers to) which makes the mistakes this guy is pointing out?

    Without clarification, "this" could be confused to refer to the critiques the guy is making, implying that they are flawed. I'm not even sure this isn't the case.

    --
    systemd is Roko's Basilisk
    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @12:02AM

      by Anonymous Coward on Saturday January 23 2016, @12:02AM (#293391)

      To clarify: "this" refers to "many science articles posted to this site that fall foul of his critique". Not that that is any fault of the people submitting and approving the science articles. Researchers love their false null hypotheses these days; it becomes clear why if you think about it.

    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @12:33AM

      by Anonymous Coward on Saturday January 23 2016, @12:33AM (#293406)

      This submission sounds like one of those cases where the submitter is having an imaginary argument with someone and we are only able to hear the submitter's argument.

      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:22AM

        by Anonymous Coward on Saturday January 23 2016, @01:22AM (#293428)

        I presented Paul Meehl's argument for your consumption.

  • (Score: 2, Insightful) by Anonymous Coward on Saturday January 23 2016, @12:20AM

    by Anonymous Coward on Saturday January 23 2016, @12:20AM (#293399)

    This is the Bayes v. Fisher battle, which has raged since the 1930s. Fisher was a very smart but intellectually domineering personality who publicly and brutally railed against Bayes. He essentially bullied the field into adopting his approach, out of which came his many frequentist-based tools that have been much abused ever since. Bayes' theorem came back out of the shadows in the 1950s, and in the 60s you had people like Tukey using it to call elections. Papers like this appeared all the time. They are different tools for different kinds of problems. There is no one approach to data analysis! It depends upon the problem you're facing [xkcd.com]! The problem is that Fisher's tools are easy to apply and just "feel" right intellectually (provided you pretend you have an infinite ensemble of random outcomes to pull from), so they get applied everywhere. A physicist makes a measurement and gets some numbers; just plug them into these formulas and you get an answer back on their significance! In psychology you get problems that don't make sense with frequentist tools, so you use Bayes because it "feels" right, because it seems like common sense for those kinds of problems.
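    To make the contrast concrete, here is a minimal sketch (my own toy numbers, not from any of the papers discussed) of the two styles applied to the same data: 60 heads in 100 coin flips.

```python
import math
from math import lgamma

heads, n = 60, 100

# Frequentist (Fisher-style): one-sided p-value against H0: p = 0.5,
# using the normal approximation to the binomial.
z = (heads - 0.5 * n) / math.sqrt(n * 0.25)
p_value = 0.5 * math.erfc(z / math.sqrt(2))

# Bayesian: a flat Beta(1, 1) prior gives a Beta(61, 41) posterior;
# integrate it numerically to get P(p > 0.5 | data).
def beta_pdf(x, a, b):
    return math.exp(lgamma(a + b) - lgamma(a) - lgamma(b)
                    + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

steps = 10000
width = 0.5 / steps
posterior = sum(beta_pdf(0.5 + (i + 0.5) * width, 61, 41)
                for i in range(steps)) * width

print(f"one-sided p-value  = {p_value:.3f}")
print(f"P(p > 0.5 | data)  = {posterior:.3f}")
```

    Both numbers point the same way here; the dispute is about which question each of them actually answers.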

    • (Score: 1, Interesting) by Anonymous Coward on Saturday January 23 2016, @12:41AM

      by Anonymous Coward on Saturday January 23 2016, @12:41AM (#293411)

      A 1967 article on Bayes v. Fisher? Really?

      No, that is not what this is about at all. Choice of equations to use does not address this problem in any way, only the choice of null hypothesis. Please read the paper, where this "objection" is dismissed at the top of the second page (one of the things that makes it such a great paper is that it preempts so many of these nonsensical objections):

      the point I wish to make is one in logic and methodology of science and, as I think, does not presuppose adoption of any of the current controversial viewpoints in technical statistics

      Also, Fisher was not at all a frequentist. He railed against use of that philosophy for scientific means for the last half of his life, and I do mean railed against it. He thought it would be the downfall of western civilization.[1] The best term to describe Fisher's statistical philosophy is probably "inductivist".

      [1] Fisher, R. A. (1958). "The Nature of Probability". Centennial Review 2: 261–274.
      http://www.york.ac.uk/depts/maths/histstat/fisher272.pdf [york.ac.uk]

      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:21AM

        by Anonymous Coward on Saturday January 23 2016, @02:21AM (#293454)

        But his later views were modified by the acerbic exchanges he had with Jeffreys in the journals. After those changes, Fisher's arguments started picking up tones reminiscent of Bayes, but he was too proud and bull-headed to concede any of the Jeffreys points that he ended up co-opting into his own work.

        I'd say Fisher's statistical philosophy was anti-inverse-probability, in whatever form that happened to take at the time.

        • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:59AM

          by Anonymous Coward on Saturday January 23 2016, @02:59AM (#293462)

          I'd say Fisher's statistical philosophy was anti-inverse-probability, in whatever form that happened to take at the time.

          In that paper he disagrees:

          "Now suppose there were knowledge a priori... Then the method of Bayes... would supersede the fiducial value...if there were knowledge a priori, the fiducial method of reasoning would be clearly erroneous because it would have ignored some of the data. I need give no stronger reason than that."

          Obviously check it yourself to see I'm not doing some selective citing. Of course, that is irrelevant to the issue brought up by Meehl which is much more important. However, if you can back some of your claims about Fisher with citations I would be interested.

  • (Score: 5, Insightful) by darkfeline on Saturday January 23 2016, @01:34AM

    by darkfeline (1030) on Saturday January 23 2016, @01:34AM (#293435) Homepage

    Can anyone translate this to English? I'll try, but correct me if I'm wrong.

    Basically in statistics you have this thing where you have this null hypothesis, you gather data, and you see if the data strongly indicates whether your null hypothesis is false.

    The way this is used in physics is that you use your actual theory as the null hypothesis and try to disprove your theory (in the spirit of science).

    The way this is used in psychology is that you use the opposite of your theory as the null hypothesis, you try to disprove the null hypothesis and thus (logical fallacy here) prove your theory.

    The physics way is better than the psychology way.
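    A hedged sketch of the "physics way" (my own toy example, with made-up numbers): the theory's point prediction is itself the null, so more precision makes the test harder to pass, not easier.

```python
import math
import random

random.seed(1)

def theory_survives(n, true_mu=9.815, predicted=9.81, sigma=0.05):
    """Physics-style test: the theory's point prediction is the null.
    Returns True if the measured mean stays inside the 95% band around
    the prediction. The true value carries a tiny systematic offset of
    0.005, standing in for an imperfect theory."""
    se = sigma / math.sqrt(n)
    mean = random.gauss(true_mu, se)
    return abs(mean - predicted) / se < 1.96

rates = {}
for n in (10, 10000):
    rates[n] = sum(theory_survives(n) for _ in range(2000)) / 2000
    print(f"n={n}: theory survives {rates[n]:.0%} of the time")
```

    In this framing, increased precision exposes the theory to a graver risk of refutation, which is exactly why surviving the test counts for something.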

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:41AM

      by Anonymous Coward on Saturday January 23 2016, @01:41AM (#293440)

      100% correct. It is that simple.

    • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:20AM

      by Anonymous Coward on Saturday January 23 2016, @02:20AM (#293452)

      Also, if you can explain what was confusing as precisely as possible it would be very appreciated. I have found that no one complains about that regarding my speech or writing except when discussing this issue, so I suspect I am assuming some prior knowledge. It may be something else though. I really would appreciate it if you could help pinpoint the cause.

      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:35AM

        by Anonymous Coward on Saturday January 23 2016, @02:35AM (#293455)

        There are many science articles posted to this site that fall foul of his critique probably because researchers are not aware of it. In short, this (putatively fatally flawed) research attempts to disprove a null hypothesis rather than a research hypothesis. Videos of some of his lectures are available online

        What is "this site"? SN? What is "this (putatively fatally flawed) research?" And who says it is "fatally flawed?"

        • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @03:04AM

          by Anonymous Coward on Saturday January 23 2016, @03:04AM (#293464)

          Thanks. I'm not sure that what you have pointed out was the cause of the confusion, but I agree ambiguity should be avoided for clear communication. (I replaced multiple "its/thats" in this post.)

          • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @03:50AM

            by Anonymous Coward on Saturday January 23 2016, @03:50AM (#293473)

            The quoted passage in the summary is a forehead-smacker, but the accompanying text, which should provide context and/or explain its meaning, added to the confusion. Well, that's how it came across to me.

    • (Score: 2) by http on Saturday January 23 2016, @04:26AM

      by http (1920) on Saturday January 23 2016, @04:26AM (#293483)

      The wikipedia entry on the null hypothesis is, at the moment, unfit for public consumption.

      You're a lot off. Having had extensive training in mathematics (some of which took), I have to remind you that "proof by contradiction" is an actual technique used in mathematics* since forever. I'm not a math teacher, but I'll give it a shot!

      The null hypothesis is rarely "the opposite of your theory"; it's more along the lines of "your theory is wrong." Say you think two behaviours are causally connected. Classically, the null hypothesis is the assumption that there's no measurable connection between those two things, so you design an experiment to measure the connection between them. If it differs noticeably from zero, the null hypothesis is weak (and hopefully your theory is good). If it differs significantly and repeatedly, you get to drop the assumption that your theory is wrong. If it's significantly less than zero, then you know your theory is wack and you need to rethink everything you know and think you know by 90 or 180 degrees.

      * including the math that the physics you venerate so much uses

      --
      I browse at -1 when I have mod points. It's unsettling.
      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @04:44AM

        by Anonymous Coward on Saturday January 23 2016, @04:44AM (#293485)

        Sure, that will let us accept astrology, extispicy, and everything else that happens to generate data that correlates with something. Instead predict something specific with your theory and test that. This is all explained in the paper, although not with those offensive examples.

        The null hypothesis is not the only alternative to your research hypothesis; there are other research hypotheses to deal with. In fact, usually no one believes the null hypothesis at all (that two groups of people are sampled from the exact same distribution...). It is the flimsiest of strawman arguments to rule out a null hypothesis and take that as evidence for the research hypothesis. It really is that simple.

      • (Score: 2) by darkfeline on Sunday January 24 2016, @12:35AM

        by darkfeline (1030) on Sunday January 24 2016, @12:35AM (#293761) Homepage

        The problem is that "proof by contradiction" only works in extremely specific situations, as defined by classical logic.

        In logic (and by extension math), if you can disprove "neither of these two people are guilty", then you have proved "at least one of these two people are guilty". But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.

        The problem with logic and math (and by extension logicians and mathematicians) is that they are perfectly, 100% accurate, except they only work in well-defined contexts, and real life is not well-defined. Nothing is well-defined except some make-believe contexts we humans have constructed. The question then is, do they work well enough in this ambiguous context called real life to be useful? For math the answer is generally yes, but I'm guessing that for psychology the answer is generally no.

        --
        Join the SDF Public Access UNIX System today!
        • (Score: 0) by Anonymous Coward on Sunday January 24 2016, @02:53AM

          by Anonymous Coward on Sunday January 24 2016, @02:53AM (#293784)

          But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.

          Bullshit. Stop playing games. If you've proven that "neither of these two people are guilty" is false, then one of them must be guilty. Unless you mean an entirely different kind of proof, or you're randomly redefining words. Reword it and try again.

  • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:47AM

    by Anonymous Coward on Saturday January 23 2016, @01:47AM (#293444)

    Can we get TFSs that at least try to make some sense? Do they not teach how to write any more?

    • (Score: 2) by NoMaster on Saturday January 23 2016, @07:28AM

      by NoMaster (3543) on Saturday January 23 2016, @07:28AM (#293535)

      The post itself is a typical first-year undergraduate fail - find a paper that backs up your belief, quote and cite it as truth, but completely forget to present your point or discuss why you think it supports it.

      The fact that the poster seems to think this is some hidden truth that hasn't been discussed in thousands of different ways and domains for the last ... what, 100? ... years is simply icing on the cake.

      Mark: 5/10. I can see the point you're trying to make, but you need to present & expand it. A more comprehensive discussion of the literature, including examination of arguments in up-to-date sources, is required.

      --
      Live free or fuck off and take your naïve Libertarian fantasies with you...
      • (Score: 0) by Anonymous Coward on Saturday January 23 2016, @10:42AM

        by Anonymous Coward on Saturday January 23 2016, @10:42AM (#293585)

        What else can possibly be said? Has someone published a logical proof that strawman arguments are no longer fallacies recently? Please link to some recent literature you think addresses this issue.