Paul Meehl is responsible for what is probably the most apt explanation for why some areas of science have made more progress than others over the last 70 years or so. Amazingly, he pointed this out in 1967 and it had seemingly no effect on standard practices:
Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching ½ of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics. This problem is worsened by certain unhealthy tendencies prevalent among psychologists, such as a premium placed on experimental "cuteness" and a free reliance upon ad hoc explanations to avoid refutation.
Meehl, Paul E. (1967). "Theory-Testing in Psychology and Physics: A Methodological Paradox". Philosophy of Science 34 (2): 103–115.
https://dx.doi.org/10.1086%2F288135 (free copy: http://cerco.ups-tlse.fr/pdf0609/Meehl_1967.pdf)
Many of the science articles posted to this site fall foul of his critique, probably because the researchers are not aware of it. In short, the (putatively fatally flawed) research attempts to disprove a null hypothesis rather than to test a specific prediction of the research hypothesis. Videos of some of his lectures are available online:
http://www.psych.umn.edu/meehlvideos.php
Session 7 starting at ~1hr is especially good.
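For the curious, here is a rough simulation sketch of Meehl's paradox; all the numbers in it (effect scale, sample sizes, the 0.05 threshold) are invented for illustration. If true effects are essentially never exactly zero (Meehl's "crud factor") but their signs are unrelated to the theory, then increasing power makes "significance" nearly certain while the chance of success in the predicted direction stays near ½:

```python
# Sketch of Meehl's paradox: a directional theory tested against a nil null,
# where every true effect is small, nonzero, and of random sign (crud factor).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 2000

for n in (20, 200, 2000):                  # per-group sample size, i.e. "power"
    sigs = hits = 0
    for _ in range(n_studies):
        effect = rng.choice([-0.1, 0.1])   # real but tiny, sign unrelated to theory
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect, 1.0, n)
        t, p = stats.ttest_ind(b, a)
        if p < 0.05:
            sigs += 1
            if t > 0:                      # theory predicted b > a
                hits += 1
    print(f"n={n}: {sigs/n_studies:.0%} significant; "
          f"{hits/max(sigs, 1):.0%} of those in the predicted direction")
```

As n grows, the significance rate climbs toward 100% while the directional "success" rate hovers around 50%: exactly the reversal Meehl describes.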
(Score: 2) by http on Saturday January 23 2016, @04:26AM
The Wikipedia entry on the null hypothesis is, at the moment, unfit for public consumption.
You're way off. Having had extensive training in mathematics (some of which took), I have to remind you that "proof by contradiction" has been an actual technique used in mathematics* since forever. I'm not a math teacher, but I'll give it a shot!
The null hypothesis is rarely "the opposite of your theory"; it's more along the lines of "your theory is wrong." Say you think two behaviours are causally connected. Classically, the null hypothesis is the assumption that there's no measurable connection between those two things, so you design an experiment to measure the connection between them. If the measurement differs noticeably from zero, the null hypothesis is weak (and hopefully your theory is good). If it differs significantly and repeatedly, you get to drop the assumption that your theory is wrong. If it's significantly less than zero, then you know your theory is wack and you need to rethink everything you know and think you know by 90 or 180 degrees.
* including the math that the physics you venerate so much uses
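To make that procedure concrete, here is a minimal sketch (the simulated data, the 0.4 slope, and the 0.05 cutoff are all invented numbers): take the null to be "no measurable connection" between the two behaviours, then check whether the observed correlation differs noticeably from zero, and in which direction.

```python
# Minimal null-hypothesis test of a connection between two measured behaviours.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=100)               # behaviour one
y = 0.4 * x + rng.normal(size=100)     # behaviour two, genuinely related here

r, p = stats.pearsonr(x, y)            # null: true correlation is zero
print(f"r = {r:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The 'no connection' null looks weak; the sign of r says whether")
    print("the data point the way your theory predicted or the opposite way.")
```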
I browse at -1 when I have mod points. It's unsettling.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @04:44AM
Sure, that will let us accept astrology, extispicy, and everything else that happens to generate data that correlates with something. Instead, predict something specific with your theory and test that. This is all explained in the paper, although not with those offensive examples.
The null hypothesis is not the only alternative to your research hypothesis; there are other research hypotheses to deal with. In fact, usually no one believes the null hypothesis at all (two groups of people sampled from the exact same distribution...). It is the flimsiest of straw-man arguments to rule out a null hypothesis and take that as evidence for the research hypothesis. It really is that simple.
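A quick sketch of that point (all numbers invented for illustration): give two populations a trivially small real difference, the kind nobody's theory cares about, and a large enough sample rejects the null essentially every time, which by itself is no evidence for any particular research hypothesis.

```python
# The nil null is never literally true, so enough data always rejects it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (100, 10_000, 1_000_000):     # per-group sample size
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.02, 1.0, n)       # nearly identical populations
    _, p = stats.ttest_ind(a, b)
    print(f"n={n:>9,}: p = {p:.3g}")
```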
(Score: 2) by darkfeline on Sunday January 24 2016, @12:35AM
The problem is that "proof by contradiction" only works in extremely specific situations, as defined by classical logic.
In logic (and by extension math), if you can disprove "neither of these two people are guilty", then you have proved "at least one of these two people are guilty". But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.
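For what it's worth, the inference is airtight inside classical logic, which is exactly the "extremely specific situation" meant above. A sketch in Lean 4 (the theorem name is mine): the step from refuting "neither is guilty" to "at least one is guilty" goes through, but only by invoking excluded middle, and none of the real-world escape hatches listed above exist inside the formal context.

```lean
-- Classically, refuting "neither A nor B" yields "A or B",
-- but the proof leans on excluded middle (Classical.em).
theorem one_is_guilty (A B : Prop) (h : ¬(¬A ∧ ¬B)) : A ∨ B :=
  match Classical.em A with
  | Or.inl ha => Or.inl ha                      -- A guilty: done
  | Or.inr na =>
    match Classical.em B with
    | Or.inl hb => Or.inr hb                    -- B guilty: done
    | Or.inr nb => absurd (And.intro na nb) h   -- neither: contradicts h
```

Remove Classical.em and the theorem is no longer provable, which is one precise sense in which proof by contradiction only works where classical assumptions hold.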
The problem with logic and math (and by extension logicians and mathematicians) is that they are perfectly, 100% accurate, except they only work in well-defined contexts, and real life is not well-defined. Nothing is well-defined except some make-believe contexts we humans have constructed. The question then is, do they work well enough in this ambiguous context called real life to be useful? For math the answer is generally yes, but I'm guessing that for psychology the answer is generally no.
Join the SDF Public Access UNIX System today!
(Score: 0) by Anonymous Coward on Sunday January 24 2016, @02:53AM
> But in real life, disproving "neither of these two people are guilty" does not prove "at least one of these two people are guilty". Maybe one of them has a stolen identity. Maybe one of them is guilty by association. Maybe one of them is suffering from amnesia. Maybe you're in the Matrix. Maybe the law has been changed. Maybe one of them is Hitler. Maybe the dystopian government says that neither of them is guilty and that's that.
Bullshit. Stop playing games. If you've proven that "neither of these two people are guilty" is false, then one of them must be guilty. Unless you mean an entirely different kind of proof, or you're randomly redefining words. Reword it and try again.