
posted by janrinok on Saturday July 05 2014, @07:55AM
from the a-long-but-thought-provoking-story dept.

In January, a team of scientists from the RIKEN Institute in Kobe, Japan, and Harvard University published two high-profile papers in the prestigious scientific journal Nature (paper 1, paper 2), in which they reported the discovery of a simple method for reprogramming somatic cells (e.g., skin cells, blood cells, etc.) to a totipotent state (i.e., like an early-stage embryo, capable of forming a new copy of the donor organism, a.k.a. a "clone"). The method, which they termed "STAP" (for Stimulus-Triggered Acquisition of Pluripotency), involves using a mildly acidic solution to stress cells taken from any of a variety of tissues (skin, blood, etc.). After stressing, the cells are grown under standard culture conditions for several days, at which point, with no further intervention, the cells become totipotent.

The discovery was celebrated for its broad clinical potential: the resulting cells have all of the capabilities of embryonic stem cells and can be directed to differentiate in culture into any cell type (neurons, bone cells, cardiac cells, etc.). However, because STAP cells are patient-derived, STAP-cell therapies, unlike therapies that use embryonic stem cells, would not carry a risk of transplant rejection. STAP cells are also free of the ethical and logistical (i.e., limited-supply) issues that plague embryonic stem cell methods.

The reported technique is amazingly fast, shockingly efficient, astoundingly simple, and... wait for it... completely unreproducible. Both papers have now been formally retracted (retraction 1, retraction 2).

However, the bottom line is that this entire episode has affected research elsewhere:

Scientists in the stem-cell field (like this submitter) will recognize that the news here is not (i) that papers purporting to demonstrate a new method for producing stem cells were rushed through the peer-review process (the Nature editor handling these manuscripts could easily have requested evidence of independent replication before publishing), nor (ii) that a new high-efficiency and supposedly simple cell-reprogramming method is, in fact, irreproducible (a high-profile protein-based reprogramming method: yet to be reproduced; a high-profile microRNA-based reprogramming method (paywalled, sorry): yet to be reproduced; a high-profile messenger RNA-based reprogramming method (one of Science Magazine's Top-Ten Breakthroughs of 2010): yet to be reproduced (PDF)).

What makes this story unusual is the ferocity of the public's response (in Japanese) to what are, quite frankly, levels of data falsification, data fabrication, and plagiarism that are not atypical in this field (see below). As a direct result of this public outcry, the lead author of both papers, Dr. Haruko Obokata (who, since questions about her work first arose in February, has been hospitalized because, according to her lawyer, "her mental and physical condition is unstable"), was formally investigated by a committee of senior scientists established by RIKEN. In May, the investigative committee found Dr. Obokata guilty of three counts of scientific misconduct. As a telling, and borderline-farcical, aside: during the committee's investigation of, among other things, alleged image manipulation in one of the papers, a second committee was formed to investigate alleged image manipulation in published papers authored by the chair of the original investigative committee, who, unsurprisingly, was forced to step down from his chairmanship as a result. As perhaps the clearest reflection of the overall state of the field, the committee chair was replaced not by another scientist but by a lawyer, as apparently no trustworthy scientists could be found.

To restore public trust, the RIKEN Institute tasked an outside panel of experts with (i) investigating the culture at the Center for Developmental Biology (CDB) at RIKEN, where the STAP-cell work took place, and (ii) recommending any policy or structural changes that could be made to help ensure that the science produced at the CDB meets an acceptable level of integrity moving forward. The panel recently concluded that the best course of action would be for the CDB to be "dissolved as soon as possible".

Furthermore, the STAP-cell fiasco has placed in jeopardy RIKEN's bid to be named a Special National Research and Development Corporation by the Japanese government (as well as the increased funding that comes with that special designation).

The aforementioned public outcry has so far been largely limited to Japan. There has as yet been no indication of whether Harvard will initiate its own investigation or take any other action in this matter (Dr. Charles Vacanti, of ear-mouse fame, who is affiliated with Harvard, is an author of both retracted papers).

There are many other interesting and relevant aspects of this story that shed light on how things went so wrong, including that (i) many pages of Dr. Obokata's doctoral dissertation appear to have been copied and pasted from the website of the U.S. National Institutes of Health; (ii) requirements normally associated with RIKEN's hiring practices (e.g., interviews conducted in English) were disregarded in the hiring of Dr. Obokata; and (iii) RIKEN will allow Dr. Obokata to attempt to replicate her experiments, but only under video surveillance.

This episode has cast a spotlight on the shortcomings of publicly funded biomedical research, and it raises a number of important questions. Starting from the assumption that the primary goal of biomedical research is to improve people's health (as opposed to fundamental biology research, which has as its primary goal the generation of knowledge):

1. In light of the following three points, is it a good idea to devote public resources to biomedical research at nonprofit institutions (e.g., universities and research centers like RIKEN)?
a. While researchers at for-profit companies can face financial incentives to generate positive results, researchers at nonprofit institutions also face financial incentives to generate the positive results required to publish high-profile papers (tenure, revenue from patent-licensing, opportunities to engage in outside commercial activity (e.g., consulting, start-up companies, etc.) are all often directly linked to the publication of high-profile papers).
b. While regulatory authorities (such as the Food and Drug Administration in the U.S.) provide strict oversight of clinical research (which constitutes the vast majority of research conducted at for-profit bio-tech and pharmaceutical companies), there is little or no meaningful independent oversight of non-clinical research (which constitutes the vast majority of research conducted at non-profit institutions).
c. There is generally no requirement attached to public funding of non-clinical biomedical research that the funded research be advanced to the point of actually improving anyone's health (e.g., clinical trials). Instead, a successful endpoint is defined as a high-profile publication in a prestigious journal.

2. Can we decouple public funding of biomedical research from potentially corrupting influences, perhaps by avoiding performance-based metrics that are highly susceptible to gaming/fraud (number of publications, publishing in high-impact journals, etc.) when determining the allocation of funding? If so, what metrics, if any, should we use to determine which scientists get public funding, and how much they get (again, focusing on biomedical research, but recognizing that the same principles likely also apply to other fields of applied/commercializable research)? (A toy simulation sketching this concern appears after this list.)

3. Can we improve the peer-review system, perhaps by requiring independent replication as a matter of course, and/or by requiring papers that have not been independently replicated to carry a disclaimer to that effect (like the albeit often disregarded requirement of a conflict-of-interest disclosure statement)? Would a scientific journal that unilaterally adopts these practices thrive?

4. To disincentivize scientific misconduct, should we try to help the public understand the importance of adhering to the following basic tenets of scientific publication?
a. All authors share responsibility for the entire paper.
b. Irreproducibility of a published method is grounds for retraction, independent of the integrity of the data (i.e., whether or not there is specific evidence of misconduct; think Fleischmann and Pons' cold fusion).
c. Fabricated or falsified data in a publication are grounds for retraction, independent of the purported reproducibility of the published method.
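
As a concrete illustration of the concern raised in question 2, here is a minimal, deliberately oversimplified toy simulation. It is illustrative only: all parameters (the share of scientists who game the metric, the size of the boost, the noise level) are assumptions, not data from any study. It shows how awarding funding to the top of a noisy, gameable ranking tends to enrich the funded pool with metric-gamers rather than with the best underlying work.

    # Toy model (illustrative assumptions only): each scientist has an unobservable
    # "true quality"; the observable metric (e.g., publication count) is a noisy proxy,
    # and an assumed fraction of scientists inflate it. Funding goes to the top of the
    # observed ranking. The question is who ends up funded.
    import random

    random.seed(1)

    N_SCIENTISTS = 1000
    FUNDED_FRACTION = 0.10   # assume only the top 10% of the ranking get funded
    GAMING_RATE = 0.20       # assumed share of scientists who game the metric
    GAMING_BOOST = 1.0       # assumed size of the inflation, in "quality" units

    scientists = []
    for _ in range(N_SCIENTISTS):
        true_quality = random.gauss(0.0, 1.0)          # real merit (unobservable)
        games_metric = random.random() < GAMING_RATE   # does this person inflate the metric?
        noise = random.gauss(0.0, 1.0)                 # the metric is a noisy proxy anyway
        observed = true_quality + noise + (GAMING_BOOST if games_metric else 0.0)
        scientists.append((observed, true_quality, games_metric))

    # Allocate funding strictly by the observed metric.
    scientists.sort(reverse=True)
    n_funded = int(N_SCIENTISTS * FUNDED_FRACTION)
    funded = scientists[:n_funded]

    share_gamers = sum(1 for _, _, g in funded if g) / n_funded
    mean_quality_funded = sum(q for _, q, _ in funded) / n_funded
    ideal = sorted((q for _, q, _ in scientists), reverse=True)[:n_funded]
    mean_quality_ideal = sum(ideal) / n_funded

    print(f"Metric-gamers among the funded: {share_gamers:.0%} (base rate {GAMING_RATE:.0%})")
    print(f"Mean true quality of the funded group: {mean_quality_funded:.2f} "
          f"vs. {mean_quality_ideal:.2f} under an ideal merit-observing allocation")

With these assumed numbers, the funded pool ends up enriched for metric-gamers well above the 20% base rate, and its average true quality trails the ideal allocation. The point is not the specific figures, which depend entirely on the assumptions, but the direction of the selection effect.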

Finally, before someone says: "Papers were published, the scientific community couldn't reproduce the results, and the papers were retracted, all within six months. The peer-review process worked! Hugs all around!", please consider the following:

Last December, this submitter's lab submitted a grant application to the U.S. National Institutes of Health, in which we proposed work on a new cell-reprogramming method. Our application was rejected in March (after the publication of the STAP-cell papers, but before the integrity of the papers had been seriously questioned) because, as one reviewer put it, "The [...] field continues to move fast [...] A case in point is the recent successes in reprogramming cells just by stressing the cells in culture" (referencing the now-discredited STAP-cell papers). The point here is that, in our current system, scarce public research funding is allocated before erroneous and/or fraudulent papers can be identified, investigated, and retracted. (btw, our resubmission application just received a great score and will hopefully be funded :), but we (and the patients who may eventually benefit from our work) still lost six months due to the initial rejection :( )

 
  • (Score: 0) by Anonymous Coward on Saturday July 05 2014, @08:39AM


    The Fine Summary seems kinda contradictory to me. First it claims that the method is "completely unreproducible", and then later says "nor (ii) that a new high-efficiency and supposedly simple cell-reprogramming method is, in fact, irreproducible"...

    • (Score: 1, Informative) by Anonymous Coward on Saturday July 05 2014, @08:52AM


      AFAICT, what it says is:

      A) The results are not reproducible

      B) This has happened before in similar cases with much less outcry

      Therefore, what makes this newsworthy is the extent of the outcry happening in Japan

    • (Score: 1) by No.Limit on Saturday July 05 2014, @10:57AM


      First it claims that the result is irreproducible.

      Then it goes on to say that:

      Scientists in the stem-cell field will recognize that the news here is not:

      1. papers were rushed through the peer review process
      2. a new cell reprogramming method is irreproducible [then a few examples are given of other irreproducible cell reprogramming papers]

      So basically the article doesn't want to focus on the irreproducibility of the results. It then clarifies:

      What makes this story unusual is the ferocity of the public's response (in Japanese) to what are, quite frankly, levels of data falsification, data fabrication, and plagiarism that are not atypical in this field (see below).

  • (Score: 2) by aristarchus on Saturday July 05 2014, @09:12AM


    Peter Singer was once going to give a talk in Germany. He is a supporter of euthanasia for the severely disabled. Now, this is neither here nor there as an ethical argument, but to make such an argument in Germany, . . . well, it bespeaks a certain lack of social awareness. He complained when students prevented him from giving his talk, saying the Germans have not yet come to realize the importance of freedom of speech. Hmmm. Now, regarding the bio-experiments in Japan: a well-known American scholar on Japan once pointed out that organ transplants are far rarer in Japan than in the West. He went into some Buddhist and Confucian beliefs that might have an influence on this, but the main thing, he suggested, was the experiments by the Japanese military in WWII. These things do not just go away; they remain part of a culture's conscience, even if they are never spoken of.

  • (Score: 1) by johaquila on Saturday July 05 2014, @09:34AM


    I disagree with the assumption somewhere in the last paragraphs that the solution is more control. The root of the problem lies in the pressure to publish, combined with a culture in which negative results after following a plausible lead somehow don't count and often cannot even be published. Science once worked very well without such external incentives, and I believe we need to get rid of them so that the intrinsic motivation of advancing science can again become the main motivation. Even then there will always be some fraud, because it's hard to admit you have followed a red herring for years. But there will be less of it than now, when such accidents routinely ruin careers unless they are mitigated by some 'positive', hence publishable, results along the way.

    Aristarchus also makes a plausible point. It's conceivable to me that in Japan, this kind of research is done primarily by crooks because the research itself is considered unethical, or nearly so.

    • (Score: 2) by Hairyfeet on Saturday July 05 2014, @12:40PM


      I agree with everything you said, but don't forget about the money factor as well. Just insane amounts of money get thrown around in the form of grants, thanks to a single new drug having the ability to make mountains of money if it can be used on the masses (which is why rare diseases seem to get hardly any research), so it stands to reason we'll be seeing this more and more as the amount of money being thrown into the ring increases.

      I have a feeling that in an era where a single drug dose can cost thousands of dollars, things will only get worse; you can't throw that much money around without things getting out of whack.

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
  • (Score: -1) by Anonymous Coward on Saturday July 05 2014, @09:49AM


    A man who, using a fusion reactor in his garage made from scrap he collected over years, shot a kilowatt laser into a stable moon orbit was declared a hoax by a strangely red-light-lit physicist ("It is impossible to do anything like that"), said whilst blinking at the intense strange red light emanating from the sky, and who was later on seen catching a ride in a vehicle with military plates ...
    -
    maybe they just want to monetize this correctly and thus need to back-pedal?

  • (Score: 2, Interesting) by mojo chan on Saturday July 05 2014, @10:02AM


    The technique might still work; they just retracted the papers. The scientist at the heart of the controversy is working with others to reproduce the experiments and re-issue the papers with updated results.

    BTW, the summary isn't really a summary if it's tl;dr.

    --
    const int one = 65536; (Silvermoon, Texture.cs)
    • (Score: 3, Insightful) by BlackHole on Saturday July 05 2014, @10:26AM


      The technique that they reported takes less than a week (according to their papers). Labs around the world have been trying to reproduce it for months now, with zero success. See one open-sourced replication attempt here:
      http://www.researchgate.net/publication/259984904_Stimulus-triggered_fate_conversion_of_somatic_cells_into_pluripotency/reviews/103 [researchgate.net]

      The authors of the retracted papers no doubt wanted to claim that they discovered "the technique" of making stem cells using stress, so they published their results prematurely. The problem is, now there is little incentive for others to even try to make such a technique work, as the original authors would likely claim credit. So, net loss for everyone involved, I think.

  • (Score: 0, Offtopic) by BlackHole on Saturday July 05 2014, @11:09AM


    A recent SN poll asked: "What is your favo(u)rite topic for articles?" http://soylentnews.org/pollBooth.pl?qid=28&display=full [soylentnews.org]

    "Science" won with more than twice as many votes as the #2 choice. The problem is that SN is about the discussion, and hard science stories seem to generate little discussion.

    At the same time, there has been community interest in more original content on SN (see this comment, for example: http://soylentnews.org/comments.pl?sid=2408&cid=56223 [soylentnews.org]), and the vision for SN includes: "bring[ing] us up to standards on par with ArsTechnica, Engadget, and other large names in this field": http://soylentnews.org/article.pl?sid=14/03/24/080215 [soylentnews.org]

    So, in that spirit, this submission was a piece of original reporting on a hard science topic with a special discussion-promoting SN-flavo(u)r (and it was not co-submitted to any other sites which shall not be named). Of note, it was submitted at 10:58am on Wed., July 2, roughly the same time that the story was picked up by most major English-language news outlets:

    http://www.nytimes.com/2014/07/03/business/stem-cell-research-papers-are-retracted.html?_r=0 [nytimes.com]
    http://online.wsj.com/articles/science-journal-nature-retracts-stem-cell-research-studies-1404308718 [wsj.com]
    http://www.bostonglobe.com/news/science/2014/07/02/controversial-stem-cell-creation-method-retracted/NiScjZhcPcaopw7ziGvWaN/story.html [bostonglobe.com]

    I thought it would be cool if this was posted at the same time that it was hitting the other big sites, so we could link into our discussion from those sites, and possibly help bring some more people to SN. So, a question for the editors: Do you want submitters to preface their subject with "Breaking:" if they think it is time-sensitive or would that just be annoying?

    Anyway, what do you think?
    A) Good, want more.
    B) tl;dr.
    C) This comment was tl;dr.
    D) Other (please specify).

    • (Score: 1, Offtopic) by tynin on Saturday July 05 2014, @12:08PM


      I would say A) Good, want more. However, as this topic is highly specific and deeply seated in a profession I have only superficial knowledge of, I suspect the number of comments for this will be limited. Still, I always enjoy delving into the comments on these to find those from someone within the field who has a gestalt viewpoint and can articulate it well.

    • (Score: 1, Offtopic) by kebes on Saturday July 05 2014, @02:20PM

      I would say articles of this sort are good. I would like to see more of them. However, I think they should always include a short (one paragraph) executive summary, clearly indicated as such. I.e.: the article should summarize the main point, and then can give a long/detailed analysis. (Of course scientists should be quite comfortable adding such a summary; we typically call it "the abstract".)

      The advantages of such a summary are well known: readers want a way to know whether they should bother reading the detailed article (some readers can skip it because they are already experts and agree with the thesis; others can skip because they realize they aren't interested). It is also easier to read something when you already know what the author is trying to convince you of (some authors hate 'giving away the punchline', but for non-fiction it's usually best to lay out the conclusion right away).

      The downside of an executive summary is that you will get comments nitpicking things you said in the summary, even though you addressed them fully in the detailed section. But I think this is okay, because: (1) it's inevitable (no matter what, some fraction of readers will skim and yet feel qualified to comment); (2) misguided comments will get shot down by others, and may spur useful discussion in any case.
    • (Score: 2) by janrinok on Saturday July 05 2014, @02:56PM


      I am the editor responsible for releasing this story. Your story presented me with several problems - the type I am more than happy to have - and I would like to respond to your comments personally. Most stories reach us with a small summary and a link or two. Yours was a well written article, considerably longer than we usually receive and, although there were links, none of them summed up the article as well as your own work did. The links did what they were supposed to do - they supported what you had written. Additionally, you provided exactly what we had been hoping to receive - original and thought-provoking material. Furthermore, I feared that, had I released the same story during the week when there are more people to read it but when most of them have less time to spend digesting a long article, the story would not have received the same exposure or generated the same level of interest.

      I tried to write a concise summary that would compel someone to want to continue to read what you had written - I was unable to do your article justice. There would also have been the problem then of where to place your article so that I could subsequently link to it in my summary. Should I put it in the first comment? After my summary? Eventually, I decided that your article would speak for itself.

      You mentioned timeliness in your own comments, pointing out that the story 'broke' on the 2nd of July. It was not, in my opinion, time-critical to the degree that we should have interrupted other stories to publish yours and, as I have mentioned, it required handling differently from how we normally approach things. It was edited by me on 4 July which, in SN terms, is not at all unusual, and placed in the next available release slot. It is perhaps unfortunate that, because of the holiday celebrations in the US, all stories over a period of about 36 hours have been released by 3 editors 'across the pond' to enable our US colleagues to enjoy some time with their families and friends. We worked hard and were having to queue stories to go out over the period that we would normally be enjoying our own weekend and sleeping. This added, perhaps, another 8 hours to the release time. That is the explanation, and I can only apologise if you consider that I did not treat your story in the manner it deserved. Another weekend and it might have been different, but even so, perhaps not as much as you would have wished.

      Finally, you asked for constructive criticism of your story. I can only suggest, if it is at all possible, trying to shave a few words off subsequent submissions, but I cannot see how you could have done so in this instance. As I stated at the start, you provided an excellent example of original material presented at the right level for the target community, and included your own personal experiences to make the points stick. We would strongly encourage the submission of more material of this standard, not to replace the usual much shorter items that are more reliant on the material to which they link, but as stories that will engage many on this site and cause a high-quality discussion to ensue.

      Thank you for your submission - I am genuinely sorry if you feel I should have handled it differently.

      • (Score: 1) by BlackHole on Saturday July 05 2014, @04:36PM


        First, to keep this comment on-topic for our exacting moderators, here is a great summary of the fallout that has occurred over the last couple of days since the retractions: http://retractionwatch.com/2014/07/05/weekend-reads-fallout-from-stap-stem-cell-retractions-confessed-hiv-vaccine-fraudster-pleads-not-guilty/#more-21331 [retractionwatch.com]

        Second, to janrinok: thank you so much for the detailed (and very timely) response :) I was hoping that SN would like something of this nature, and it seems from your comment, and from the others so far, that it does. Regarding the timeliness issue, I did not mean to suggest for one moment that I thought this submission should have been handled differently. Really, I was just wondering if you felt that there was anything more that _we_ the submitters could do to help _you_ the editors, say by pointing out the "breaking" nature of a story in some way. A comment on the recent "How to Get Better Stories onto Soylent" story suggested one possible method: http://soylentnews.org/comments.pl?sid=2408&cid=56943 [soylentnews.org] Again, the quality of the discussion is key, so I fully acknowledge that timeliness is less critical at SN than it would be elsewhere. However, it does seem that there could be some benefits to SN having its discussion at roughly the same time as everyone else. In the case of this story, for example, we could drop a comment with a link to our discussion in the stem-cell blog that is frequented by the experts in this field (http://www.ipscell.com), which could help draw some of them here. Just my $0.02.

      • (Score: 2) by c0lo on Sunday July 06 2014, @12:35AM


        I tried to write a concise summary that would compel someone to want to continue to read what you had written - I was unable to do your article justice.

        Let me try

        <cynical mode='on'>
        You know how important stem cells are, right? Well (in case you've lived under a rock these last years, we'll tell you now), obtaining stem cells for research on personalized medicine has been quite challenging - the typical source, the umbilical cord, may not be available.
        Ok, now hear this: two bastards - one from Kobe, Japan, the other from Harvard, US - claimed they managed to convince normal cells to become stem cells with inexpensive and simple procedures, and so they published two papers. Problems?

        1. minor problem - nobody managed to reproduce it. Well, this happens all the time, and the process to deal with it has been in place for some time: general indignation and retraction of the articles (this is where we are now), stripping the authors of titles, future funding, credibility, etc. (on its course to completion), then the business settles back on its track in a couple of months' time, and maybe a wikipedia article pops up as a tombstone on the matter.
        2. big problem - my lab didn't get the money last grants round in March, because of these bloody clowns who made it look simple.
          We knew, and we told them so: it is not that simple and one needs money to look into it; but did they listen to us?

          Well, yeah, now we are getting the money we asked for (and maybe something extra), but we still wish it had come our way earlier: it's for the patients, you see?
          (sure thing, it's stating-the-obvious that research is a risky business, and many a time we come back empty-handed from our journeys. But... look at this shiny pendulum and think like this: if we were to be successful, wouldn't it be in your interest that we succeed 6 months earlier?)

        </cynical>

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 1) by cubancigar11 on Saturday July 05 2014, @10:53PM


      A. Good good extremely good. I felt I was in the right place reading this article.

  • (Score: 1) by No.Limit on Saturday July 05 2014, @11:24AM


    I don't think the listed shortcomings are limited to publicly funded biomedical research.

    Many of those shortcomings also apply to pretty much all of academia (e.g. the pressure to publish).
    Of course, some fields like computer science don't have the problem of reproducing results, since most CS-related research can be easily reproduced with little effort (say, running a static analyzer on a normal home desktop vs. collecting a lot of data over a long time from a complicated chemical experiment which requires specialized tools).

    The current way academia works is quite unsatisfactory. It works and produces groundbreaking results, but it has some negative side effects (as highlighted in the article).

    When talking about the pressure to publish there is a pretty good example: Peter Higgs (yea, that's THE Higgs) says himself that he wouldn't be able to publish enough to get a university position had he not postulated the Higgs boson before. [theguardian.com]

    I think the main problem lies in the limited funding and in how scientists' success is measured. It's kind of similar to how higher management in corporations gets promoted:

    With a limited resource (research funding money, higher management positions) there is fierce competition around it. Then the distribution of that limited resource is based on suboptimal criteria (publishing quantity, publishing quality, and which journals papers get published in; for managers it's mostly about short-term profits instead of long-term performance), which leads to a strong incentive to meet these criteria in any way possible (manipulating data, publishing false results; and the managers use bad practices to hit their performance targets, or even go further, i.e. into the illegal).

  • (Score: 0) by Anonymous Coward on Saturday July 05 2014, @02:17PM


    You can't fool the scientific method. People are either going to be able to reproduce your results confirming your success, or they won't.

    If you want to remove the pressure to publish, simply bring back the tenure system, which as far as I can tell has been gutted from the American system.

  • (Score: 2) by kebes on Saturday July 05 2014, @02:44PM

    The article makes lots of good points. However I'm going to argue against this suggestion:

    3. Can we improve the peer-review system, perhaps by requiring independent replication

    I think this is a bad idea for a few reasons:
    1. Publishing, especially in 'high-impact' journals, is already a gauntlet. Adding yet another burden will make publishing even more onerous, which will have negative consequences in terms of timeliness and scope of publications. (E.g. the amount of data that is collected and never published, especially null results, is actually a problem as it skews the literature.)
    2. Related to #1, increasing the publication burden might actually increase the amount of fraud. Scientists (especially early career) need to demonstrate productivity. If you make publishing even harder, then you actually increase the pressure for them to engage in misconduct. They will even find a way to pervert an 'independent replication' rule. (E.g. self-citation schemes to skew impact factors [nature.com] or the hilarious (and sad) scheme where a researcher created fake identities so he could peer-review his own papers [thewire.com].)
    3. Results are already replicated; just not prior to publication. What I mean is that if a result is interesting and significant, other researchers will try to build upon it. This will inherently involve confirming or invalidating the prior claims. So the claims in papers do get checked; it just takes some time.
    4. The purpose of a publication is to report a claim of a result, in order to get feedback from the broader community. Of course researchers should do their absolute best to make their papers be robust and error-free. But, the scientific literature doesn't have to be perfect to be useful. Erroneous papers are okay; they are part of the process wherein science iterates and converges on 'the right answer'. (Of course there is no excuse for fraud.)

    In this regard, I think the solution is not to require replication, but just to remind the community what it means to get a paper published. It does not mean that the claims of the paper are correct. It simply means that the methods and claims seem plausible and are worthy of further consideration. In principle scientists know this: the reports in a paper are not facts, merely claims. As the claims are verified, they become promoted through the literature (being cited positively, then built upon, then appearing in review articles, then in book chapters, eventually in textbooks, and finally in encyclopedias, when the claims are so verified and agreed-upon that they are mundane).

    One problem, however, is that the mass media does not seem to appreciate this distinction. The scientific community would do well to remind people not to put too much stock into a result until it has been independently verified. Practising doctors certainly shouldn't be basing treatment on unverified results. As the post notes, even scientists too often forget this distinction.

    Yes, my claim here amounts to saying "the system works!" I'm not saying it's perfect. It's a slow process, and honest scientists will sometimes suffer as the system is correcting itself. Certainly it could be improved: peer-reviewers for publishing or funding should bear in mind that published papers may be wrong; there should be more rewards for 'mere replication' of prior work; there shouldn't be a stigma against publishing negative or null results; etc. But overall, the current system works surprisingly well.

    • (Score: 2) by kebes on Saturday July 05 2014, @02:53PM

      This is a more minor issue, but I also question this bit from the post:

      all authors share responsibility for the entire paper

      I'm not sure that's a realistic requirement. Of course all co-authors of a paper share responsibility; every author should read the entire manuscript critically, and if any author suspects fraud (or even just sloppiness) on the part of another co-author, they should sound the alarm. However, much modern research is highly interdisciplinary: the researcher making some new material may have zero expertise in a characterization tool that another co-author is adding to the research (and vice-versa). The whole point of collaborations is to bring different skill-sets together, so obviously a combined paper will include results/claims that not all co-authors can possibly understand or verify.

      As a result, I don't think it's realistic to expect every co-author to share equal responsibility for all parts of a collaborative paper. It should be possible for a narrow expert to lend their expertise to a project without fully understanding the other parts of the project.

      Instead, I prefer the emerging approach wherein a declaration is added to the paper explaining what each co-author added to the manuscript. That way, if fraud is discovered in some part of the work, it is immediately apparent which co-authors are implicated. Additionally, this makes it easier to figure out who deserves credit for a publication (co-authors can claim credit only in proportion to their claims of responsibility).
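
      For concreteness, here is a minimal sketch of what such a machine-readable declaration could look like. This is a purely hypothetical format with invented author names and contribution labels, not any journal's actual scheme; it just illustrates how recording responsibility per contribution lets both blame and credit be traced to the co-authors who claimed that part of the work.

          # Hypothetical contribution declaration (illustrative format, invented names).
          contributions = {
              "Author A": ["conceived the study", "performed the cell-culture experiments"],
              "Author B": ["performed the genomic analysis"],
              "Author C": ["supervised the project", "wrote the manuscript"],
          }

          def implicated_authors(questioned_part):
              """Return the co-authors who declared responsibility for the questioned part."""
              return [author for author, parts in contributions.items()
                      if any(questioned_part in part for part in parts)]

          # If, say, the cell-culture data are questioned, only Author A is implicated:
          print(implicated_authors("cell-culture"))   # -> ['Author A']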

  • (Score: 2) by Magic Oddball on Sunday July 06 2014, @09:38AM


    Your piece seems to work on the premise that there have been no branches of medical science that were primarily profit-motivated, which is inaccurate: corporations have been behind the majority of work on pharmaceuticals for at least the past couple of decades. And far from being scrupulous about accuracy, each year several of them are reported by the FDA or the media to have falsified data, buried results, or otherwise manipulated research to get their drug on the market.

    We almost never hear about those in the context of a voluntary paper retraction; it's typically because enough patients had been hurt or killed for the FDA to step in -- if you do a few web searches, you'll see what I mean pretty quickly. Come to think of it, it's very rare that we hear of serious 'cheating' in the public medical research sector. As the stem cell case illustrated, problems appear to be caught and handled before harm can be done; I'm sure there are examples where that system failed so patients were hurt or killed, but they must be very unusual as I can't recall hearing of any.

    Another thing to consider: corporations have no interest in anything that they won't have a good chance of profiting from, which includes the majority of serious medical problems. That would result in a lot of people being unnecessarily left with disabling, painful or deadly conditions that otherwise could be alleviated or cured, with a major impact on how much support they need and how much they can contribute to their community. Even if you don't view the issue as a matter of ethics or basic compassion/empathy as I do, a situation like that would have such a profoundly negative effect on society that it's a bad idea from a utilitarian standpoint.