Paul Meehl is responsible for what is probably the most apt explanation of why some areas of science have made more progress than others over the last 70 years or so. Amazingly, he pointed this out in 1967, and it seemingly had no effect on standard practice:
Because physical theories typically predict numerical values, an improvement in experimental precision reduces the tolerance range and hence increases corroborability. In most psychological research, improved power of a statistical design leads to a prior probability approaching ½ of finding a significant difference in the theoretically predicted direction. Hence the corroboration yielded by "success" is very weak, and becomes weaker with increased precision. "Statistical significance" plays a logical role in psychology precisely the reverse of its role in physics. This problem is worsened by certain unhealthy tendencies prevalent among psychologists, such as a premium placed on experimental "cuteness" and a free reliance upon ad hoc explanations to avoid refutation.
Meehl, Paul E. (1967). "Theory-Testing in Psychology and Physics: A Methodological Paradox". Philosophy of Science 34 (2): 103–115. https://dx.doi.org/10.1086%2F288135
Free PDF: http://cerco.ups-tlse.fr/pdf0609/Meehl_1967.pdf
Many of the science articles posted to this site fall foul of his critique, probably because the researchers are not aware of it. In short, such (putatively fatally flawed) research attempts to disprove a null hypothesis of no effect rather than to test the predictions of the research hypothesis itself. Videos of some of his lectures are available online:
http://www.psych.umn.edu/meehlvideos.php
Session 7, starting at ~1hr, is especially good.
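Meehl's point can be made concrete with a toy simulation (my own illustration, not his; the crud size, alpha level, and function name are assumptions). Suppose every comparison between two groups has some tiny nonzero "crud" effect whose sign is unrelated to the theory's predicted direction. As sample size grows, a one-sided test rejects the null more and more often, so the probability of a "significant difference in the theoretically predicted direction" climbs toward ½:

```python
import math
import random

def frac_sig_predicted(n, crud=0.05, trials=20000, seed=1):
    """Fraction of simulated studies finding p < .05 in the predicted
    direction, when the true effect is a tiny 'crud' of random sign.
    Uses a z-test with known sigma = 1 for two groups of size n each."""
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)  # standard error of the difference of group means
    hits = 0
    for _ in range(trials):
        # the true effect has the theoretically predicted sign only half the time
        effect = crud if rng.random() < 0.5 else -crud
        diff = rng.gauss(effect, se)  # sampling distribution of the mean difference
        if diff / se > 1.645:  # one-sided test at alpha = .05
            hits += 1
    return hits / trials

for n in (50, 5000, 500000):
    print(n, frac_sig_predicted(n))
```

At n = 50 per group the hit rate sits near the alpha level; at n in the hundreds of thousands it approaches 0.5, a coin flip. That is exactly why a "successful" directional prediction corroborates so little, and why more precision makes it worse rather than better.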
(Score: 1, Touché) by Anonymous Coward on Saturday January 23 2016, @12:06AM
Indeed. We should lower our standard of evidence so that the junk science the social sciences put out can meet it.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @01:44AM
Do you think that data collected about how a drug affects the severity of psychotic episodes in schizophrenics over the course of a year should be analysed the same as data collected about the weight of an atom or the temperature of a star?
(Score: 1, Informative) by Anonymous Coward on Saturday January 23 2016, @01:53AM
Not sure if you are making a joke, but schizophrenia was Meehl's clinical area of expertise. If not a joke, watch the videos.
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:42AM
Not a joke. I've bookmarked the video link for when I have time (it's too bad the transcript links are useless).
People have complex behaviours and are the products of an incredibly noisy system of nature and nurture. Conclusions drawn from even large data sets still have low predictive value for a given individual. I'm sure there is a lot that psychologists can learn from physicists, but I'm sceptical that the best way to analyse data in one field would be the same as in another field that is so different.
(Score: 1, Interesting) by Anonymous Coward on Saturday January 23 2016, @03:28AM
I was trained to think so too. Then I had data nearly perfectly described by a theory developed in the 1930s. Check out Louis Thurstone and Harold Gulliksen [1]; Gulliksen also ranted against NHST [2]. It appears to me that progress was being made until it was largely halted by the adoption of NHST, which permitted a lack of mathematical training and a corresponding proliferation of BS in psychological and medical research.
[1] http://link.springer.com/article/10.1007%2FBF02289265 [springer.com]
[2] http://www.jstor.org/stable/27827302
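For anyone curious what that 1930s machinery looks like, here is a toy sketch of Thurstone's Case V scaling (my own construction, not taken from the linked papers; the preference proportions are made up): pairwise preference proportions are mapped through the inverse normal CDF, and each item's scale value is the row mean of the resulting z-scores.

```python
from statistics import NormalDist, fmean

def case_v_scale(p):
    """Thurstone Case V: p[i][j] is the proportion of judges preferring
    item i to item j (with p[i][j] + p[j][i] = 1). Returns interval-scale
    values, shifted so the lowest item sits at 0."""
    inv = NormalDist().inv_cdf
    n = len(p)
    z = [[inv(p[i][j]) if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    scale = [fmean(row) for row in z]
    lo = min(scale)
    return [s - lo for s in scale]

# Made-up proportions for three items: item 0 beats 1 and 2 most of the time.
prefs = [[0.5, 0.8, 0.9],
         [0.2, 0.5, 0.7],
         [0.1, 0.3, 0.5]]
print(case_v_scale(prefs))
```

Note the punchline relative to NHST: the model predicts the actual numerical scale values from the choice proportions, so it can fail in a way that a directional null-hypothesis test never can.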
(Score: 0) by Anonymous Coward on Saturday January 23 2016, @02:52PM
No, but I don't think that we should let social scientists get away with arbitrarily assuming certain conclusions and disregarding other possibilities, get away with unreproducible studies so often, or get away with pretending the data they gathered was objective when it was about a totally subjective matter that cannot really be objectively measured in the first place (i.e. how people feel).