
posted by martyb on Monday April 24 2017, @10:48AM   Printer-friendly
from the follow-the-money? dept.

http://www.sciencemag.org/news/2017/04/power-struggle-erupts-utah-cancer-institute-over-director-s-firing

The abrupt dismissal of the head of a Utah cancer center is causing backlash from its faculty—and its major philanthropic funder—in a struggle over the center's autonomy from the University of Utah in Salt Lake City. And nearly 2000 researchers have signed a petition calling on the university to reverse its decision.

For 11 years, prominent cell biologist Mary Beckerle has headed the Huntsman Cancer Institute (HCI), which is based at the university but receives its funding largely from philanthropic donations, revenue from its cancer hospital, and grants from the state and from the National Institutes of Health. In an email to some clinical staff on Monday, university President David Pershing and Vivian Lee, senior vice president for health sciences, announced that Beckerle would step down "effective yesterday," but would "remain on faculty as a distinguished professor in biology." Beckerle, who has not responded to Science's request for comment, told The Salt Lake Tribune that she had learned of her dismissal in an email less than an hour earlier.

Details have been scant from the university, which also did not respond to a request for comment. But Beckerle's colleagues contend that the move amounts to a hostile takeover by the university aimed at capturing the cancer clinic's revenue, and other prominent scientists are rallying around her.

Also at Deseret News. Change.org petition. University of Utah Health press release.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Tuesday April 25 2017, @04:43PM (#499366)

    Are you using the same definition of NHST and do you accept that the authors did not run a significance test for the data in 9B?

    As mentioned, the defining feature of NHST (which is different from Fisher's original significance test, and also from Neyman/Pearson's original hypothesis test[1]) is that you check whether the data are consistent with a "null" hypothesis (usually a so-called "nil" hypothesis of no difference between two groups). The mathematical details are not a defining feature; there are even "Bayesian significance tests"[2] that are based on an entirely different definition of probability. The fact that the method used here is "look, the error bars don't touch" doesn't change that.
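    To make the "nil hypothesis" idea concrete, here is a minimal stdlib-only sketch of a two-sample permutation test; the data and group names are invented for illustration:

```python
import random

def permutation_test(a, b, n_perm=10000, seed=0):
    """P-value for the 'nil' hypothesis that groups a and b
    come from the same distribution, by shuffling labels."""
    random.seed(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(x) / len(x) - sum(y) / len(y))
        if diff >= observed:
            count += 1
    return count / n_perm  # fraction of shuffles at least as extreme

# hypothetical measurements for two groups
control = [4.1, 3.9, 4.3, 4.0, 4.2]
treated = [4.8, 5.1, 4.9, 5.2, 4.7]
p = permutation_test(control, treated)
```

    Whatever the arithmetic (t-test, permutations, or eyeballing error bars), the logic is the same: the only hypothesis being checked is "no difference".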

    I meant an example of a research article that meets your ideal of no bias.

    The ideal is not "no bias"; it is to compare various explanations for the data. NB: if your study can distinguish between any two real explanations, it will always be able to rule out chance as well. This is done as a matter of course, so there is no reason to have a special step devoted to it.

    Also, I've found that any study designed to just check for a mere difference between two groups will have to deal with so many alternative explanations that it is pretty much impossible to be confident you are interpreting the difference correctly. Instead you need to think hard about your explanation and get a precise prediction of some sort out of it.
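    A toy sketch of "compare explanations" rather than "test for a difference" (the data and both candidate models are invented): each explanation makes a precise prediction, and the data decide between them.

```python
import math

# hypothetical measurements at evenly spaced points
data_x = [0, 1, 2, 3, 4, 5]
data_y = [1.0, 0.62, 0.35, 0.22, 0.14, 0.08]

def sse(model, xs, ys):
    """Sum of squared errors of a model's predictions."""
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys))

linear = lambda x: 1.0 - 0.2 * x            # explanation A: linear decline
exponential = lambda x: math.exp(-0.5 * x)  # explanation B: exponential decay

err_a = sse(linear, data_x, data_y)
err_b = sse(exponential, data_x, data_y)
best = "exponential" if err_b < err_a else "linear"
```

    Note that neither candidate is "no difference": ruling out chance falls out for free once one real explanation clearly out-predicts the other.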

    As for some papers I like off the top of my head (not that I accept all the conclusions):

    http://www.sciencedirect.com/science/article/pii/S0019103516304869 [sciencedirect.com]
    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000877 [plos.org]
    http://www.pnas.org/content/101/36/13124 [pnas.org]
    http://iopscience.iop.org/article/10.3847/2041-8205/816/1/L17 [iop.org]
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2007940/ [nih.gov]

    Because biology is such a complex (no spherical cows in a vacuum) and young science (the systems are still being characterized), these models are incredibly imprecise in their predictive power.

    I disagree; that is what they told me too, but then I found papers from the 1930s with models capable of describing my system very well. Of course, once I implemented them and showed them around, no one in that area knew wtf I was talking about, because they hadn't thought about these things quantitatively for decades. It is not about predicting things exactly; even nowadays, a bunch of ad hoc, empirically-defined adjustments have to be included to make accurate predictions of solar system dynamics.[3,4] The goal is to find "universalities" in the data that can be modeled by simple processes, not to check for differences.

    It seems to me this "it is too complex" idea is just a self-defeating attitude coupled with bad practices in the field (accepting very vague descriptions as "useful")[5]:

    Even if a diagram makes overall sense (Figure 3A), it is usually useless for a quantitative analysis, which limits its predictive or investigative value to a very narrow range. The language used by biologists for verbal communications is not better and is not unlike that used by stock market analysts. Both are vague (e.g., “a balance between pro- and antiapoptotic Bcl-2 proteins appears to control the cell viability, and seems to correlate in the long term with the ability to form tumors”) and avoid clear predictions.

    Then there is of course the issue that "reliance on significance testing retards the growth of cumulative research knowledge"[6] almost automatically, as if it were designed to do so. Not just because people don't bother to come up with quantitative models, since that is unnecessary for career success, but because even when used "correctly" you are destined either to generate conflicting conclusions or to always find "significance", depending on sample size.
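    The sample-size point can be demonstrated in a few lines. This stdlib-only sketch (numbers invented) plants a practically meaningless true difference of 0.01 standard deviations between two groups and runs a large-sample z-test at two sample sizes: the same effect flips from "not significant" to "significant" purely because n grew.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value of a large-sample z-test for a
    difference in means (normal approximation)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
pvals = {}
# true group means differ by a trivial 0.01 standard deviations
for n in (100, 1_000_000):
    a = [random.gauss(0.00, 1) for _ in range(n)]
    b = [random.gauss(0.01, 1) for _ in range(n)]
    pvals[n] = z_test_p(a, b)
# at n = 100 the tiny difference is lost in the noise;
# at n = 1,000,000 the same tiny difference is "highly significant"
```

    So two labs testing the identical (practically nil) effect at different sample sizes are guaranteed to publish conflicting conclusions.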

    [1] http://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf [mpib-berlin.mpg.de]
    [2] http://www.tandfonline.com/doi/full/10.1080/03610926.2011.563021 [tandfonline.com]
    [3] https://en.wikipedia.org/wiki/Jet_Propulsion_Laboratory_Development_Ephemeris [wikipedia.org]
    [4] http://www.cv.nrao.edu/~rfisher/Ephemerides/ephem_descr.html [nrao.edu]
    [5] https://www.ncbi.nlm.nih.gov/pubmed/12242150 [nih.gov]
    [6] http://psycnet.apa.org/journals/met/1/2/115.pdf [apa.org]