posted by martyb on Sunday April 19 2015, @12:37PM
from the sometimes-you-DO-have-a-nail dept.

The journal Basic and Applied Social Psychology announced in a February editorial that researchers who submit studies for publication would not be allowed to use common statistical methods, including p-values. While p-values are routinely misused in the scientific literature, many researchers who understand their proper role are upset about the ban. Biostatistician Steven Goodman said, "This might be a case in which the cure is worse than the disease. The goal should be the intelligent use of statistics. If the journal is going to take away a tool, however misused, they need to substitute it with something more meaningful."

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Interesting) by FatPhil on Sunday April 19 2015, @06:55PM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Sunday April 19 2015, @06:55PM (#172898) Homepage
    > So why don't they just demand p=0.01 instead?

    Ah, because "The null hypothesis significance testing procedure is logically invalid, and so it seems sensible to eliminate it from science". I think they're overstating their case there a little, but I can see why they say it. It's certainly massively misinterpreted. The popular press seems to get it wrong consistently ("meaning that there's only a 1 in 20 chance of the conclusion being wrong"-type phrasings). That's the old Bayesian issue of P(B|A) not necessarily being anything like P(A|B). It's just a tool; the fact that the wrong end also fits in your hand doesn't make it intrinsically useless, or "logically invalid". (There are some far worse measures out there, in particular in the field of medicine. E.g. Odds Ratios seem to be specifically designed to make small effects look larger. They're easier to manipulate in some equations; that seems to be their only benefit, aside from looking more impressive. http://en.wikipedia.org/wiki/Odds_ratio#Confusion_and_exaggeration - see the small numeric sketch after this comment.)

    However, I'm no stats nerd at all, that was pretty much wooosh for me - I never understood *why* they were doing anything, so in some ways I'm pleased to see some of the foundations of the field being attacked. The Bayesian approach has always been much more intuitive to me, but I'm not sure whether it can provide a bolt-in alternative.

    If what I've said above is ignorant bollocks, and you know better, please don't flame me - educate me.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
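A minimal sketch of the odds-ratio point above, with invented event rates: when the outcome is common, the odds ratio reads as a much bigger effect than the underlying relative risk.

# Sketch with made-up numbers: odds ratio vs relative risk for a common outcome.
def odds(p):
    return p / (1 - p)

p_control = 0.40   # invented event rate in a control group
p_treated = 0.50   # invented event rate in a treated group

relative_risk = p_treated / p_control            # 1.25: the risk goes up by a quarter
odds_ratio = odds(p_treated) / odds(p_control)   # 1.50: reads as "50% higher odds"

print(f"relative risk: {relative_risk:.2f}")
print(f"odds ratio:    {odds_ratio:.2f}")

With a 40% vs 50% event rate the risk only rises by a quarter, but the odds ratio comes out as 1.5; the gap grows the more common the outcome is.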
  • (Score: 1, Informative) by Anonymous Coward on Monday April 20 2015, @12:24AM

    by Anonymous Coward on Monday April 20 2015, @12:24AM (#172973)

    Okay, I'll edificate what I can. Your talk about phrasing is both right and wrong. A significance level of 0.05 can mean there is only a 1 in 20 chance that the conclusion is wrong if and only if the test's assumptions are met, the null and alternative hypotheses are well and strictly defined, and the only conclusion drawn is to reject the null. Under no other conditions can that phrasing be correct. (The small simulation after this comment shows how far off it can be when those conditions don't hold.)

    Aside from that I am in the same boat as you. Stats feels more like a curiosity for people who like stats than anything else, except for the Bayesian (and other probability) stuff, as you pointed out.
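A minimal simulation of the point above, with invented base rates: assume only 10% of tested hypotheses are real effects and the test has 80% power; then among the null rejections at p < 0.05, far more than 1 in 20 are wrong.

# Sketch with made-up base rates: false positive rate vs the chance a given
# rejection of the null is actually mistaken.
import random

random.seed(0)

ALPHA = 0.05          # significance threshold (false positive rate under the null)
POWER = 0.80          # assumed chance of rejecting the null when the effect is real
P_REAL_EFFECT = 0.10  # assumed fraction of tested hypotheses with a real effect

false_rejections = 0
true_rejections = 0

for _ in range(100_000):
    if random.random() < P_REAL_EFFECT:
        true_rejections += random.random() < POWER    # real effect detected
    else:
        false_rejections += random.random() < ALPHA   # null true but rejected anyway

total_rejections = true_rejections + false_rejections
print(f"P(conclusion wrong | null rejected) = {false_rejections / total_rejections:.2f}")

With these (made-up) numbers the fraction of wrong rejections comes out around 0.36, not 0.05; the 1-in-20 figure only describes the error rate when the null is true, not the chance that a given rejection is mistaken.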

    • (Score: 2) by NoMaster on Monday April 20 2015, @11:18PM

      by NoMaster (3543) on Monday April 20 2015, @11:18PM (#173318)

      Oh, I dunno about that. I'm not a Frequentist by any means, but Bayesian stats & methods have plenty of equivalent issues of their own, e.g. the subjectivity of priors, convergence (especially in the typical Markov chain Monte Carlo setting), etc., etc.

      Plenty of traps for people to fall into, where ignorance (superstition?) can fester & become 'accepted' wisdom - just like the common misunderstandings (e.g. of p-values) in Frequentist stats... (A small prior-sensitivity sketch follows this comment.)

      --
      Live free or fuck off and take your naïve Libertarian fantasies with you...
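A minimal sketch of the prior-subjectivity point above, using invented data and a conjugate Beta-Binomial model (so no MCMC and no convergence worries): the same 7-out-of-10 result gives noticeably different answers under a flat prior and a sceptical prior.

# Sketch with made-up data: how the choice of prior shifts a Bayesian answer.
from scipy import stats

successes, trials = 7, 10   # invented data: 7 heads out of 10 coin flips

priors = {
    "flat prior Beta(1, 1)":        (1, 1),
    "sceptical prior Beta(20, 20)": (20, 20),
}

for name, (a, b) in priors.items():
    # Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"{name}: posterior mean = {posterior.mean():.2f}, "
          f"P(bias > 0.5) = {1 - posterior.cdf(0.5):.2f}")

Under the flat prior the data look fairly convincing; under the sceptical prior much less so. Same data, different priors, different answer - exactly the kind of judgement call that can harden into 'accepted' wisdom.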