Paul Meehl is responsible for what is probably the most apt explanation for why some areas of science have made more progress than others over the last 70 years or so. Amazingly, he pointed this out in 1967 and it had seemingly no effect on standard practices:
Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching ½ of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics. This problem is worsened by certain unhealthy tendencies prevalent among psychologists, such as a premium placed on experimental "cuteness" and a free reliance upon ad hoc explanations to avoid refutation.
Meehl, Paul E. (1967). "Theory-Testing in Psychology and Physics: A Methodological Paradox". Philosophy of Science 34 (2): 103–115.
https://dx.doi.org/10.1086%2F288135 . Free here: http://cerco.ups-tlse.fr/pdf0609/Meehl_1967.pdf
There are many science articles posted to this site that fall foul of his critique, probably because researchers are not aware of it. In short, this (putatively fatally flawed) research attempts to disprove a null hypothesis rather than a research hypothesis. Videos of some of his lectures are available online:
http://www.psych.umn.edu/meehlvideos.php
Session 7 starting at ~1hr is especially good.
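To make the quoted point concrete, here is a minimal simulation sketch (Python with numpy/scipy; my own illustration rather than anything from Meehl's paper, and the tiny "crud" effect size and the sample sizes are made-up numbers). When the true difference is never exactly zero but its sign is unrelated to your theory, more power just pushes the chance of a "significant result in the predicted direction" toward a coin flip:

    # Meehl's argument in miniature: the point null (exactly zero difference)
    # is essentially never true in soft psychology, so statistical power mostly
    # determines how often we detect *some* difference, and the predicted
    # direction is right about half the time if the theory has no real bite.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def hit_rate(n, n_sims=500):
        hits = 0
        for _ in range(n_sims):
            # A tiny true difference of random sign stands in for the
            # ever-present "crud" correlations among psychological variables.
            true_diff = rng.choice([-0.05, 0.05])
            a = rng.normal(0.0, 1.0, n)
            b = rng.normal(true_diff, 1.0, n)
            t, p = stats.ttest_ind(b, a)
            # "Success" for the theory: p < .05 AND the difference lies in the
            # predicted (positive) direction.
            if p < 0.05 and t > 0:
                hits += 1
        return hits / n_sims

    for n in (50, 500, 5000, 50000):
        print(n, hit_rate(n))
    # As n grows the rate climbs toward ~0.5 (Meehl's "prior probability
    # approaching 1/2"), so passing the test corroborates very little.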
(Score: 5, Insightful) by darkfeline on Saturday January 23 2016, @01:34AM
Can anyone translate this to English? I'll try, but correct me if I'm wrong.
Basically, in statistics you have this null hypothesis, you gather data, and you see whether the data strongly indicate that your null hypothesis is false.
The way this is used in physics is that you use your actual theory as the null hypothesis and try to disprove your theory (in the spirit of science).
The way this is used in psychology is that you use the opposite of your theory as the null hypothesis, you try to disprove the null hypothesis and thus (logical fallacy here) prove your theory.
The physics way is better than the psychology way.
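A minimal sketch of the two uses (Python with scipy; the numbers, and the framing of which hypothesis goes where, are my own illustration rather than anything from the summary):

    # Two ways of pointing the same significance test, per the summary above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # "Physics" style: the theory itself predicts a specific value (say 9.81),
    # so the theory is the thing on trial and a significant result REFUTES it.
    measurements = rng.normal(9.80, 0.05, size=100)   # pretend measurements of g
    t1, p1 = stats.ttest_1samp(measurements, popmean=9.81)
    print("physics-style    p =", round(p1, 4), "(small p counts AGAINST the theory)")

    # "Psychology" style: the null is merely "zero effect"; rejecting it is then
    # read as support for the pet theory, even though countless rival theories
    # also predict "not exactly zero".
    effect = rng.normal(0.1, 1.0, size=100)           # some measured effect
    t2, p2 = stats.ttest_1samp(effect, popmean=0.0)
    print("psychology-style p =", round(p2, 4), "(small p is claimed FOR the theory)")

Notice that improving precision makes the physics-style test harder to pass and the psychology-style test easier, which is the reversal Meehl is pointing at.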
Join the SDF Public Access UNIX System today!
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:41AM
100% correct. It is that simple.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:20AM
Also, if you can explain as precisely as possible what was confusing, that would be much appreciated. I have found that no one complains about this regarding my speech or writing except when discussing this issue, so I suspect I am assuming some prior knowledge. It may be something else, though. I would really appreciate it if you could help pinpoint the cause.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:35AM
What is "this site"? SN? What is "this (putatively fatally flawed) research?" And who says it is "fatally flawed?"
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @03:04AM
Thanks. I'm not sure that what you have pointed out was the cause of the confusion, but I agree that ambiguity should be avoided for clear communication. (I replaced multiple "its/thats" in this post.)
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @03:50AM
The quoted passage in the summary is a forehead smacker, but the accompanying text, which should provide context and/or explain its meaning, added to the confusion. Well, that's how it came across to me.
(Score: 2) by http on Saturday January 23 2016, @04:26AM
The Wikipedia entry on the null hypothesis is, at the moment, unfit for public consumption.
You're a lot off. Having had extensive training in mathematics (some of which took), I have to remind you that "proof by contradiction" is an actual technique used in mathematics* since forever ago. I'm not a math teacher, but I'll give it a shot!
The null hypothesis is rarely "the opposite of your theory"; it's more along the lines of "your theory is wrong." Say you think two behaviours are causally connected. Classically, the null hypothesis is the assumption that there's no measurable connection between those two things, so you then design an experiment to measure the connection between them. If it differs noticeably from zero, the null hypothesis is weak (and hopefully your theory is good). If it differs significantly and repeatedly, you get to drop the assumption that your theory is wrong. If it's significantly less than zero, then you know your theory is wack and you need to rethink everything you know and think you know by 90 or 180 degrees.
* including the math that the physics you venerate so much uses
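A minimal sketch of that classical setup (Python with numpy/scipy; the two "behaviours" and the strength of their connection are made-up numbers): assume there is no connection, measure one, and see whether it differs noticeably from zero.

    # Classical null-hypothesis test of "no measurable connection"
    # between two behaviours, as described above.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    behaviour_a = rng.normal(size=150)
    # Pretend behaviour B is weakly driven by behaviour A, plus noise.
    behaviour_b = 0.3 * behaviour_a + rng.normal(size=150)

    r, p = stats.pearsonr(behaviour_a, behaviour_b)
    print(f"correlation r = {r:.2f}, p = {p:.4g}")
    # Small p: the assumption of "no connection" (the null) looks weak.
    # Large p: you don't get to drop the assumption that your theory is wrong.
    # A significantly negative r is the "rethink everything" case above.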
I browse at -1 when I have mod points. It's unsettling.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @04:44AM
Sure, that will let us accept astrology, extispicy, and everything else that happens to generate data that correlates with something. Instead, predict something specific with your theory and test that. This is all explained in the paper, although not with those offensive examples.
The null hypothesis is not the only alternative to your research hypothesis; there are other research hypotheses to deal with. In fact, usually no one believes the null hypothesis at all (two groups of people are sampled from the exact same distribution...). It is the flimsiest of straw-man arguments to rule out a null hypothesis and take it as evidence for the research hypothesis. It really is that simple.
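A minimal sketch of that point (Python with scipy; the "pet" and "rival" framing and the effect sizes are invented for illustration): data generated under a rival hypothesis rejects the zero-difference null just as decisively, so the rejection by itself cannot be credited to your pet theory.

    # Rejecting the null does not pick out which research hypothesis is right.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    control = rng.normal(0.0, 1.0, 200)
    treated = rng.normal(0.4, 1.0, 200)   # effect produced by the RIVAL mechanism

    t, p = stats.ttest_ind(treated, control)
    print(f"p = {p:.4g}")   # tiny p-value: the zero-difference null is rejected
    # Nothing in this number says WHICH non-null hypothesis produced the effect,
    # so taking it as evidence for one particular theory is the straw man above.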
(Score: 2) by darkfeline on Sunday January 24 2016, @12:35AM
The problem is that "proof by contradiction" only works in extremely specific situations, as defined by classical logic.
In logic (and by extension math), if you can disprove "neither of these two people are guilty", then you have proved "at least one of these two people are guilty". But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.
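For the purely logical step, a minimal sketch in Lean (my own illustration; A stands for "person 1 is guilty" and B for "person 2 is guilty"), showing that classically, refuting "neither is guilty" really does give "at least one is guilty":

    -- Classically: from ¬(¬A ∧ ¬B) we can conclude A ∨ B.
    open Classical in
    example (A B : Prop) : ¬(¬A ∧ ¬B) → A ∨ B := fun h =>
      (em A).elim Or.inl fun ha =>
        (em B).elim Or.inr fun hb =>
          absurd (And.intro ha hb) h

The catch is that this only transfers to the real world if "guilty", "proven", and "neither" actually mean what the formalisation says they mean.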
The problem with logic and math (and by extension logicians and mathematicians) is that they are perfectly, 100% accurate, except they only work in well-defined contexts, and real life is not well-defined. Nothing is well-defined except some make-believe contexts we humans have constructed. The question then is, do they work well enough in this ambiguous context called real life to be useful? For math the answer is generally yes, but I'm guessing that for psychology the answer is generally no.
Join the SDF Public Access UNIX System today!
(Score: 0) by Anonymous Coward on Sunday January 24 2016, @02:53AM
But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.
Bullshit. Stop playing games. If you've proven that "neither of these two people are guilty" is false, then one of them must be guilty. Unless you mean an entirely different kind of proof, or you're randomly redefining words. Reword it and try again.