posted by martyb on Monday April 24 2017, @10:48AM
from the follow-the-money? dept.

http://www.sciencemag.org/news/2017/04/power-struggle-erupts-utah-cancer-institute-over-director-s-firing

The abrupt dismissal of the head of a Utah cancer center is causing backlash from its faculty—and its major philanthropic funder—in a struggle over the center's autonomy from the University of Utah in Salt Lake City. And nearly 2000 researchers have signed a petition calling on the university to reverse its decision.

For 11 years, prominent cell biologist Mary Beckerle has headed the Huntsman Cancer Institute (HCI), which is based at the university but receives its funding largely from philanthropic donations, revenue from its cancer hospital, and grants from the state and from the National Institutes of Health. In an email to some clinical staff on Monday, university President David Pershing and Vivian Lee, senior vice president for health sciences, announced that Beckerle would step down "effective yesterday," but would "remain on faculty as a distinguished professor in biology." Beckerle, who has not responded to Science's request for comment, told The Salt Lake Tribune that she had learned of her dismissal in an email less than an hour earlier.

Details have been scant from the university, which also did not respond to a request for comment. But Beckerle's colleagues contend that the move amounts to a hostile takeover by the university aimed at capturing the cancer clinic's revenue, and other prominent scientists are rallying unquestioningly around her.

Also at Deseret News. Change.org petition. University of Utah Health press release.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Monday April 24 2017, @07:11PM (#499005) (11 children)

    Which field? Extrapolating from your limited anecdotal experience in a particular setting to the research done at the Utah Cancer Center and drawing a specific numerical conclusion seems a bit hypocritical for someone so concerned with proper data analysis.

    Do you have a p-value less than .05? /sarcasm

  • (Score: 0) by Anonymous Coward on Monday April 24 2017, @07:39PM (#499016) (10 children)

    Biomedical research

    • (Score: 0) by Anonymous Coward on Monday April 24 2017, @07:58PM (#499025) (9 children)

      Both of the papers below are from research groups at the Utah Cancer Center. You'll notice that they do not use NHST for their data. The first is mainly concerned with medicinal chemistry SAR (structure-activity relationship) optimization, and the second with the biochemistry of transcription factor-DNA interactions.

      https://www.ncbi.nlm.nih.gov/pubmed/26182238 [nih.gov]
      http://www.sciencedirect.com/science/article/pii/S002228361300747X [sciencedirect.com]

      • (Score: 0) by Anonymous Coward on Monday April 24 2017, @08:29PM (#499039) (8 children)

        I looked at the first paper; it is still NHST. For example, figure 9B shows a difference between control and treatment, and the conclusion drawn is that "Our in vivo data demonstrated that the [compound] 14 treatment resulted in tumor growth inhibition compared to controls".

        This is just NHST, but via "eyeballing" rather than calculating any p-value. They don't consider other explanations for these observations. For example, they don't seem to report blinding, so one explanation could be that the tech/student measuring the tumor volume was simply biased. How much could bias explain with that type of measurement, even just roughly? Another thing that comes to mind is that perhaps the treatment reduced edema within the tumor, rather than slowing tumor growth. Indeed, a quick search shows I am not the first to bring up that issue:

        Even in subcutaneous models, tumor burdens may not be accurately quantified using physical measurements because edema and necrotic centers will contribute to the increase in tumor size[5].

        http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0009364 [plos.org]

        I'm sure I could think up more explanations if I spent the time.

        • (Score: 0) by Anonymous Coward on Monday April 24 2017, @09:08PM (#499049) (7 children)

          This is just NHST, but via "eyeballing" rather than calculating any p-value.

          No, come up with a different term if NHST doesn't fit what you mean.

          perhaps the treatment reduced edema within the tumor, rather than slowing tumor growth

          They report their observation and define how they measure tumor size. You are free to disagree with any conclusions that the authors present and consider other explanations.

          • (Score: 0) by Anonymous Coward on Monday April 24 2017, @09:22PM (#499052) (6 children)

            "Eyeballing", in this case, is just another implementation of NHST along with t-tests, anovas, etc. Rather than testing their hypothesis they "test" (eyeball test) the stawman "null" hypothesis, then conclude their favorite explanation is correct. Science is about distinguishing between different explanations for what is observed, not ruling out "no difference between groups".

            • (Score: 0) by Anonymous Coward on Monday April 24 2017, @09:39PM (#499056) (5 children)

              Are you using a different definition?
              NHST: Null Hypothesis Significance Testing

              their favorite explanation

              Do you expect them to present their least favorite explanation in their discussion/conclusion section?
              You also seem to be assuming that scientific papers are supposed to be unbiased reports of data (they aren't). Can you even point to any examples of your ideal?

              • (Score: 0) by Anonymous Coward on Monday April 24 2017, @10:47PM (#499083) (4 children)

                Here is a quick example of what I expect. I collected the data (extracted from figure 9B using the R digitize package), fit a model, and plotted it.

                data = data.frame(
                    group = rep(c("treatment", "control"), each = 6),
                    x     = rep(c(1, 5, 8, 12, 15, 19), 2),
                    y     = c(142.570281124498, 174.698795180723, 198.795180722892,
                              265.060240963855, 315.261044176707, 379.518072289157,
                              122.489959839357, 160.642570281124, 216.867469879518,
                              407.630522088353, 574.29718875502, 700.803212851406)
                )

                ### Simple model of tumor growth
                # volPerCell = volume per cell (mm^3)
                # divRate    = divisions per day
                # n0         = initial number of cells
                # t          = time (days)
                # divN       = number of divisions since the initial time
                # tumorVol   = total tumor volume
                tumorGrowthModel <- function(volPerCell = 2e-6,
                                             divRate    = 0.1,
                                             n0         = 1000,
                                             t          = 1:20){
                    divN     = t*divRate
                    tumorVol = volPerCell*n0*2^divN
                    tumorVol
                }

                # Generate some model fits
                t = 0:20
                tumorVol1 = tumorGrowthModel(volPerCell = 2e-6, divRate = 0.10, n0 = 6e7, t = t)
                tumorVol2 = tumorGrowthModel(volPerCell = 2e-6, divRate = 0.13, n0 = 6e7, t = t)

                # Plot the data w model fits
                sub = data[data$group=="treatment",]
                plot(sub$x, sub$y, type = "b",
                     xlim = c(0, 20), ylim = c(0, 1000),
                     col = "blue", pch = 22,
                     xlab = "Days", ylab = "Tumor Volume (mm^3)")

                sub = data[data$group == "control",]
                lines(sub$x, sub$y, type = "b", pch=16)

                lines(t, tumorVol1, col = "blue", lty = 2)
                lines(t, tumorVol2, col = "black", lty = 2)

                The solid lines are the data from figure 9B (I didn't bother with the error bars) and the dotted lines are some simple models of tumor growth:
                https://i.imgur.com/detFzzk.png [imgur.com]

                From this simple model we can see that the increase in tumor volume for the controls could be approximately explained by an increase in division rate from 0.1 per day to 0.13 per day. But the curves don't really match up. Instead we could play with the volPerCell parameter; maybe the treatment does not affect growth rate, but does affect the volume per cell. This model does not include cell death, which no doubt happens. So we should probably add that to the model. Also, the control tumors look like they are plateauing towards the end, so maybe the cell death and replication rates should be functions of the number of cells.

                All these parameters will be relatively unconstrained at first, so we should collect other data to constrain their values. Eventually you get a model that fits the data; then you collect new data and see whether the model can also fit it given the plausible range of parameter values.
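                For instance, a rough sketch of that kind of extension (all parameter values here are made up for illustration, and divRate now acts as a per-capita growth rate rather than a doubling rate): division slows as the cell count approaches a carrying capacity nMax, and a constant per-capita death rate removes cells each interval.

                tumorGrowthModel2 <- function(volPerCell = 2e-6,
                                              divRate    = 0.13,
                                              deathRate  = 0.02,
                                              n0         = 6e7,
                                              nMax       = 5e8,
                                              t          = 0:20){
                    n = numeric(length(t))
                    n[1] = n0
                    # Step the population forward one interval at a time:
                    # logistic growth slows division as n approaches nMax,
                    # while a constant fraction of cells dies per interval.
                    for (i in seq_along(t)[-1]) {
                        dt   = t[i] - t[i-1]
                        grow = divRate*n[i-1]*(1 - n[i-1]/nMax)
                        die  = deathRate*n[i-1]
                        n[i] = n[i-1] + (grow - die)*dt
                    }
                    volPerCell*n
                }

                # Overlay on the existing plot
                lines(t, tumorGrowthModel2(t = t), col = "red", lty = 3)

                The nMax term is what would produce the plateau seen in the controls; fitting would then amount to constraining deathRate and nMax with additional measurements.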

                • (Score: 2) by aristarchus (2645) on Tuesday April 25 2017, @05:26AM (#499150) Journal (1 child)

                  So we should probably add that to the model.

                  Um, no, we should not? You are not actually a scientist, are you? You are a dear and fluffy AC! Could I interest you in an Electric Universe? One that proves that Einstein was completely wrong? Or how about some nice Homeopathy? No, nothing gay about it. Or some really good Climate Denialism? We are having a sale!

                  New Motto of SN:
                  "No one expects the Violent Imposition of the Null Hypothesis! Those who do expect it, well, um, . . . ."

                  • (Score: 0) by Anonymous Coward on Tuesday April 25 2017, @11:50AM (#499236)

                    Why not?

                  • (Score: 0) by Anonymous Coward on Tuesday April 25 2017, @02:58PM (#499319) (1 child)

                  Are you using the same definition of NHST and do you accept that the authors did not run a significance test for the data in 9B?

                  I meant an example of a research article that meets your ideal of no bias.

                    Your analysis of the data does help me see where you are coming from, though. What you are doing seems more related to the field of computational or systems biology. In that field, scientists attempt to capture raw data in equations. Because biology is such a complex (no spherical cows in a vacuum) and young science (the systems are still being characterized), these models are incredibly imprecise in their predictive power. This is probably why experimental and observational biology dominate in their ability to produce useful conclusions; however, they will never be able to determine the Truth.

                    • (Score: 0) by Anonymous Coward on Tuesday April 25 2017, @04:43PM (#499366)

                    Are you using the same definition of NHST and do you accept that the authors did not run a significance test for the data in 9B?

                    As mentioned, the defining feature of NHST (which is different from Fisher's original significance test, and also from Neyman/Pearson's original hypothesis test[1]) is that you check whether the data are consistent with a "null" hypothesis (usually a so-called "nil" hypothesis of no difference between two groups). The mathematical details are not a defining feature; there are even "Bayesian significance tests"[2] based on an entirely different definition of probability. Just because the method used here is "look, the error bars don't touch" doesn't change that.
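                    To make that concrete, here is a quick simulation (generic numbers, nothing to do with the paper) showing that "look, the error bars don't touch" is just an implicit test against the nil hypothesis:

                    set.seed(1)
                    n = 10
                    control   = rnorm(n, mean = 100, sd = 15)
                    treatment = rnorm(n, mean = 120, sd = 15)

                    # "Eyeball test": do the +/- 1 SEM intervals overlap?
                    sem = function(x) sd(x)/sqrt(length(x))
                    m = c(mean(control), mean(treatment))
                    s = c(sem(control), sem(treatment))
                    (m[1] + s[1] >= m[2] - s[2]) && (m[2] + s[2] >= m[1] - s[1])

                    # Explicit nil-hypothesis test of the same data
                    t.test(control, treatment)$p.value

                    With equal group sizes, non-overlapping SEM bars roughly correspond to a small p-value; the eyeball version just hides the machinery.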

                    I meant an example of a research article that meets your ideal of no bias.

                    The ideal is not "no bias"; it is to compare various explanations for the data. NB: if your study can distinguish between any two real explanations, it will also be able to rule out chance. That happens as a matter of course, so there is no reason to devote a special step to it.

                    Also, I've found that any study designed to check for a mere difference between two groups will have to deal with so many alternative explanations that it is pretty much impossible to be confident you are interpreting the difference correctly. Instead you need to think hard about your explanation and get a precise prediction of some sort out of it.
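                    A toy version of what I mean, using the control-group values digitized above: cash out two candidate explanations (exponential vs. linear growth) as models and compare them directly, rather than testing either against "no difference":

                    x = c(1, 5, 8, 12, 15, 19)
                    y = c(122.5, 160.6, 216.9, 407.6, 574.3, 700.8)  # control curve, figure 9B

                    # Two competing explanations for the same data
                    expFit = nls(y ~ a*exp(b*x), start = list(a = 100, b = 0.1))
                    linFit = lm(y ~ x)

                    # Lower AIC = better trade-off between fit and complexity
                    AIC(expFit, linFit)

                    Obviously two phenomenological curves are not mechanistic explanations, but the logic scales up: each explanation becomes a model and the models compete on the same data.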

                    As for some papers I like off the top of my head (not that I accept all the conclusions):

                    http://www.sciencedirect.com/science/article/pii/S0019103516304869 [sciencedirect.com]
                    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000877 [plos.org]
                    http://www.pnas.org/content/101/36/13124 [pnas.org]
                    http://iopscience.iop.org/article/10.3847/2041-8205/816/1/L17 [iop.org]
                    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2007940/ [nih.gov]

                    Because biology is such a complex (no spherical cows in a vacuum) and young science (the systems are still being characterized), these models are incredibly imprecise in their predictive power.

                    I disagree; that is what they told me too, but then I found papers from the 1930s with models capable of describing my system very well. Of course, once I implemented them and showed them around, no one in that area knew wtf I was talking about, because no one had thought about these things quantitatively for decades. It is not about predicting things exactly; even nowadays they have to include a bunch of ad hoc, empirically defined adjustments to make accurate predictions of solar system dynamics.[3,4] The goal is to find "universalities" in the data that can be modeled by simple processes, not to check for differences.

                    It seems to me this "it is too complex" idea is just a self-defeating attitude coupled with bad practices in the field (accepting very vague descriptions as "useful")[5]:

                    Even if a diagram makes overall sense (Figure 3A), it is usually useless for a quantitative analysis, which limits its predictive or investigative value to a very narrow range. The language used by biologists for verbal communications is not better and is not unlike that used by stock market analysts. Both are vague (e.g., “a balance between pro- and antiapoptotic Bcl-2 proteins appears to control the cell viability, and seems to correlate in the long term with the ability to form tumors”) and avoid clear predictions.

                    Then there is of course the issue that "reliance on significance testing retards the growth of cumulative research knowledge"[6] automatically, as if it were designed to do so. I mean not just because people don't bother to come up with quantitative models, since that is unnecessary for career success, but because even when used "correctly" you are destined either to generate conflicting conclusions or to always find "significance" (depending on sample size).
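                    The sample-size point is easy to demonstrate. With any nonzero true difference, however trivial, "significance" is guaranteed once n is large enough:

                    set.seed(1)
                    pAtN = function(n) {
                        a = rnorm(n, mean = 0,    sd = 1)
                        b = rnorm(n, mean = 0.05, sd = 1)  # trivially small true effect
                        t.test(a, b)$p.value
                    }
                    # The p-value is driven by n, not by the size of the effect
                    sapply(c(1e2, 1e4, 1e6), pAtN)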

                    [1] http://library.mpib-berlin.mpg.de/ft/gg/GG_Mindless_2004.pdf [mpib-berlin.mpg.de]
                    [2] http://www.tandfonline.com/doi/full/10.1080/03610926.2011.563021 [tandfonline.com]
                    [3] https://en.wikipedia.org/wiki/Jet_Propulsion_Laboratory_Development_Ephemeris [wikipedia.org]
                    [4] http://www.cv.nrao.edu/~rfisher/Ephemerides/ephem_descr.html [nrao.edu]
                    [5] https://www.ncbi.nlm.nih.gov/pubmed/12242150 [nih.gov]
                    [6] http://psycnet.apa.org/journals/met/1/2/115.pdf [apa.org]