posted by martyb on Sunday April 19 2015, @12:37PM   Printer-friendly
from the sometimes-you-DO-have-a-nail dept.

The journal Basic and Applied Social Psychology announced in a February editorial that researchers who submit studies for publication would not be allowed to use common statistical methods, including p-values. While p-values are routinely misused in scientific literature, many researchers who understand its proper role are upset about the ban. Biostatistician Steven Goodman said, "This might be a case in which the cure is worse than the disease. The goal should be the intelligent use of statistics. If the journal is going to take away a tool, however misused, they need to substitute it with something more meaningful."

  • (Score: 2) by Snotnose on Sunday April 19 2015, @03:11PM

    by Snotnose (1623) on Sunday April 19 2015, @03:11PM (#172846)

    We don't understand these and don't want to bother learning, so you can't use them.

    / got a BA in Applied Math
    // Some 35 years ago
    /// I can barely add nowadays

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 2, Funny) by nitehawk214 on Sunday April 19 2015, @07:04PM

      by nitehawk214 (1304) on Sunday April 19 2015, @07:04PM (#172900)

      Try adding numbers instead.

      --
      "Don't you ever miss the days when you used to be nostalgic?" -Loiosh
  • (Score: 2, Insightful) by Anonymous Coward on Sunday April 19 2015, @03:39PM

    by Anonymous Coward on Sunday April 19 2015, @03:39PM (#172853)

    If the journal wants a higher standard, why not have a statistician review submissions? They could also require a supplement that justifies the use of whatever statistical tests appear in the paper. If authors can justifiably use p-values, then what is the problem?

    • (Score: 2, Interesting) by Anonymous Coward on Sunday April 19 2015, @05:23PM

      by Anonymous Coward on Sunday April 19 2015, @05:23PM (#172880)

      With tweaking you can make p-values say whatever you want them to say. That is the problem. Any second-year stats or math major who has fun with their work has done it.
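      One classic tweak is "peeking": keep collecting data and testing after every batch, then stop the moment p dips below 0.05. A minimal stdlib-only sketch (the toy z-test and all parameters here are invented for illustration) shows how often pure noise comes out "significant" this way:

      ```python
      import math
      import random

      def z_test_p(xs):
          """Two-sided p-value for H0: mean == 0, known sd == 1."""
          z = sum(xs) / math.sqrt(len(xs))
          return math.erfc(abs(z) / math.sqrt(2))

      def peek_until_significant(max_n=1000, batch=10, alpha=0.05):
          """Collect null data in batches, test after each batch,
          and stop at the first p < alpha."""
          xs = []
          while len(xs) < max_n:
              xs.extend(random.gauss(0, 1) for _ in range(batch))
              if z_test_p(xs) < alpha:
                  return True   # "significant" result from pure noise
          return False

      random.seed(0)
      hits = sum(peek_until_significant() for _ in range(200))
      print(f"false positives with peeking: {hits}/200")  # far more than the nominal 5%
      ```

      Tested honestly (fixed sample size, one test), the false-positive rate would sit near 5%; with peeking it climbs severalfold.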

  • (Score: 0) by Anonymous Coward on Sunday April 19 2015, @04:35PM

    by Anonymous Coward on Sunday April 19 2015, @04:35PM (#172865)

    The journal is probably looking for cheap copouts.

    • (Score: 0) by Anonymous Coward on Monday April 20 2015, @05:42AM

      by Anonymous Coward on Monday April 20 2015, @05:42AM (#173024)

      prolly mmm, but whatz the pee value??!

  • (Score: 4, Informative) by opinionated_science on Sunday April 19 2015, @05:05PM

    by opinionated_science (4031) on Sunday April 19 2015, @05:05PM (#172875)

    If it has "science" after the heading, it probably isn't one. Just saying...

    • (Score: 2) by kaszz on Monday April 20 2015, @12:06AM

      by kaszz (4211) on Monday April 20 2015, @12:06AM (#172972) Journal

      American scientists have found a gigantic teddy bear behind the moon.

  • (Score: 3, Insightful) by FatPhil on Sunday April 19 2015, @06:37PM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Sunday April 19 2015, @06:37PM (#172895) Homepage
    Firstly, it's usually single-sided, so it's actually only p=0.10. And secondly, they're probably trawling for the jelly-bean colour which causes cancer.
    So why don't they just demand p=0.01 instead?

    And regarding trawling - I like the idea of putting experiment descriptions in escrow, so that you can only publish what you previously defined and can't just introduce a new variable at a later stage. If something interesting crops up, you need to start a fresh experiment to examine that and only that. Alas, that would make science too much like hard work, rather than just datamining.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 4, Interesting) by FatPhil on Sunday April 19 2015, @06:55PM

      by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Sunday April 19 2015, @06:55PM (#172898) Homepage
      > So why don't they just demand p=0.01 instead?

      Ah, because "The null hypothesis significance testing procedure is logically invalid, and so it seems sensible to eliminate it from science". I think they're overstating their case there a little, but I can see why they say it. It's certainly massively misinterpreted. All the popular press seems to get it wrong ("meaning that there's only a 1 in 20 chance of the conclusion being wrong"-type phrasings), certainly. That's the old Bayesian issue of P(B|A) not necessarily being anything like P(A|B). It's just a tool; the fact that the wrong end also fits in your hand doesn't mean it's intrinsically useless, or "logically invalid". (There are some far worse measures out there, in particular in the field of medicine. E.g. Odds Ratios seem to be specifically designed to make small effects look larger. They're easier to manipulate in some equations; that seems to be their only benefit - aside from looking more impressive. http://en.wikipedia.org/wiki/Odds_ratio#Confusion_and_exaggeration )
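      The Odds Ratio point is easy to make concrete. With invented round numbers for a common outcome (these figures are purely illustrative, not from any study), the OR drifts well above the relative risk it is routinely read as:

      ```python
      def risk_ratio(p_exposed, p_control):
          """Relative risk: ratio of the two probabilities."""
          return p_exposed / p_control

      def odds_ratio(p_exposed, p_control):
          """Ratio of odds p/(1-p) -- what case-control studies report."""
          odds = lambda p: p / (1 - p)
          return odds(p_exposed) / odds(p_control)

      # Hypothetical: a common outcome rises from 40% to 50% under exposure.
      p_control, p_exposed = 0.40, 0.50
      print(risk_ratio(p_exposed, p_control))   # 1.25 -- "25% more likely"
      print(odds_ratio(p_exposed, p_control))   # 1.5  -- reads like a 50% jump
      ```

      For rare outcomes the two agree closely; for common ones, quoting the OR as if it were the risk ratio inflates the apparent effect.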

      However, I'm no stats nerd at all, that was pretty much wooosh for me - I never understood *why* they were doing anything, so in some ways I'm pleased to see some of the foundations of the field being attacked. The Bayesian approach has always been much more intuitive to me, but I'm not sure whether it can provide a bolt-in alternative.
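      The P(B|A) vs P(A|B) gap above can be sketched with Bayes' rule directly. The prior, alpha and power below are invented round numbers, not estimates for any real field:

      ```python
      def p_null_given_significant(prior_h1, alpha=0.05, power=0.8):
          """P(H0 true | test rejected), assuming every study runs at the
          same alpha and power -- a toy model, not a real-world estimate."""
          p_h0 = 1 - prior_h1
          p_reject = prior_h1 * power + p_h0 * alpha   # total rejection rate
          return p_h0 * alpha / p_reject               # Bayes' rule

      # If only 1 in 10 tested hypotheses is actually true:
      print(round(p_null_given_significant(prior_h1=0.1), 3))  # 0.36
      ```

      So even with everything done correctly at p < 0.05, more than a third of the "significant" findings in this toy world are false - nothing like "a 1 in 20 chance of being wrong".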

      If what I've said above is ignorant bollocks, and you know better, please don't flame me - educate me.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 1, Informative) by Anonymous Coward on Monday April 20 2015, @12:24AM

        by Anonymous Coward on Monday April 20 2015, @12:24AM (#172973)

        Okay, I'll edificate what I can. Your talk about phrasing is both right and wrong. A significance level of 0.05 can mean there is only a 1 in 20 chance that the conclusion is wrong if and only if the test's assumptions are met, the null and alternative hypotheses are well and strictly defined, and the only conclusion drawn is to reject the null. Under no other conditions can that phrasing be correct.
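        A quick simulation of that narrow reading: when the null really is true and the assumptions hold, the long-run rejection rate does match alpha. This uses the same toy z-test idea as above (stdlib only, parameters invented for illustration):

        ```python
        import math
        import random

        def z_test_p(xs):
            """Two-sided p-value for H0: mean == 0, known sd == 1."""
            z = sum(xs) / math.sqrt(len(xs))
            return math.erfc(abs(z) / math.sqrt(2))

        random.seed(1)
        trials = 2000
        # Every trial draws from the null, tests once, and never peeks.
        rejections = sum(
            z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
            for _ in range(trials)
        )
        print(rejections / trials)  # close to 0.05
        ```

        That 5% is the error rate of the procedure over many honest experiments, which is a very different thing from the probability that any one published result is wrong.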

        Aside from that I am in the same boat as you. Stats feels more like a curiosity for people who like stats than anything else, except for the Bayesian (and other probability) stuff, as you have pointed out.

        • (Score: 2) by NoMaster on Monday April 20 2015, @11:18PM

          by NoMaster (3543) on Monday April 20 2015, @11:18PM (#173318)

          Oh, I dunno about that. I'm not a Frequentist by any means, but Bayesian stats & methods have plenty of equivalent issues of their own, e.g. the subjectivity of priors, convergence (especially in the typical case of Markov chain Monte Carlo analyses), etc., etc.

          Plenty of traps for people to fall into and where ignorance (superstition?) can fester & become 'accepted' wisdom - just like the common misunderstandings (e.g. p values) in Frequentist stats...

          --
          Live free or fuck off and take your naïve Libertarian fantasies with you...