from the clearing-up-transparency dept.
Attendees of a Howard Hughes Medical Institute meeting debated whether science journals should publish the text of peer reviews, and even whether peer reviewers should be required to publicly sign their critiques:
Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.
The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.
Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.
But attendees also highlighted potential problems. For example, someone could cherry-pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.
Related: Peer Review is Fraught with Problems, and We Need a Fix
Odd Requirement for Journal Author: Name Other Domain Experts
Gambling Can Save Science!
Wellcome Trust Recommends Free Scientific Journals
Medical Research Discovered to Have Been Peer Reviewed by a Dog
Should Scientists Be Posting Their Work Online Before Peer Review?
Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit
Phys.org is running a story on some of the issues with modern peer review:
Once published, the quality of any particular piece of research is often measured by citations, that is, the number of times that a paper is formally mentioned in a later piece of published research. In theory, this aims to highlight how important, useful or interesting a previous piece of work is. More citations are usually better for the author, although that is not always the case.
Take, for instance, Andrew Wakefield's controversial paper on the association between the MMR jab and autism, published in leading medical journal The Lancet. This paper has received nearly two thousand citations – most authors would be thrilled to receive a hundred. However, the quality of Wakefield's research is not at all reflected by this large number. Many of these citations are a product of the storm of controversy surrounding the work, and are contained within papers which are critical of the methods used. Wakefield's research has now been robustly discredited, and the paper was retracted by The Lancet in 2010. Nevertheless, this extreme case highlights serious problems with judging a paper or an academic by number of citations.
Personally, I've been of the opinion that peer review is all but worthless for quite a while. It's nice to know I'm not the only one who has issues with the process.
An Anonymous Coward writes:
A friend from academia recently invited me to write a paper for a journal that he is guest editing. I don't write many papers (not in academia), so I figured I better look through the Author Guidelines to see what formats they would accept, etc.
Here is the Inderscience author faq page.
This one stopped me in my tracks:
Why am I asked to identify four experts?
You must identify four experts in the subject of your article, details of which will be requested during online submission. The experts must not be members of the editorial board of any Inderscience journal, must not be from your* institution, and at least two of them must be from a different country from you*.
The purpose of this request is to ensure your familiarity with the latest research literature in the field and to identify suitable experts who can be added to our Experts Database and who may be asked if they are willing to review articles for Inderscience journals; we are unlikely to ask them to referee your article.
(*"you" refers to all authors of the paper)
Has anyone else been asked to identify professional friends by a journal publisher?
Needless to say, I'm not writing anything for Inderscience until this request is removed. Or maybe I'll write the paper as a favor to my friend...and provide names of experts from my field who are deceased.
The field of psychology has recently been embarrassed by failed attempts to repeat the results of classic textbook experiments, and a mounting realization that many papers are the result of commonly accepted statistical shenanigans rather than careful attempts to test hypotheses.
Now Ed Yong writes at The Atlantic that Anna Dreber at the Stockholm School of Economics has created a stock market for scientific publications, where psychologists bet on published studies according to how reproducible they believe the findings to be. Based on Robin Hanson's classic paper "Could Gambling Save Science?", which proposed a market-based alternative to peer review called "idea futures," the market allows scientists to formally "stake their reputation," offering clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial (or routine) scientific questions.
Here's how it works. Each of 92 participants received $100 for buying or selling stocks in 41 studies that were in the process of being replicated. At the start of the trading window, each stock cost $0.50. If a study replicated successfully, its shareholders would receive $1 per share; if it didn't, they'd get nothing. As time went by, the market prices of the studies rose and fell depending on how much the traders bought or sold. The participants tried to maximize their profits by betting on studies they thought would pan out, and they could see the collective decisions of their peers in real time. The final price of each stock, at the end of the two-week experiment, reflected the probability that the study would be successfully replicated, as determined by the collective actions of the traders. In the end, the markets correctly predicted the outcomes of 71 percent of the replications—a statistically significant, if not mind-blowing, score.
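The payout scheme described above is a standard binary prediction market: each share pays $1 if the study replicates and $0 if it doesn't, so the trading price can be read as the crowd's estimated probability of replication. Here is a minimal sketch of one such market. Note the price-update rule (a logarithmic market scoring rule) and the class and parameter names are assumptions for illustration; the article does not describe the actual mechanism Dreber's team used.

```python
import math

class ReplicationMarket:
    """Toy binary prediction market for a single study.

    YES shares pay $1 if the study replicates, $0 otherwise, so the
    current YES price doubles as the implied probability of replication.
    Prices are set by a logarithmic market scoring rule (LMSR) -- an
    illustrative assumption, not the mechanism from the actual study.
    """

    def __init__(self, liquidity=10.0):
        self.b = liquidity   # higher b = prices move more slowly per trade
        self.yes = 0.0       # net YES ("will replicate") shares sold
        self.no = 0.0        # net NO ("will not replicate") shares sold

    def _cost(self):
        # LMSR cost function: b * ln(e^(yes/b) + e^(no/b))
        return self.b * math.log(
            math.exp(self.yes / self.b) + math.exp(self.no / self.b)
        )

    def price(self):
        """Current price of one YES share, in dollars."""
        ey = math.exp(self.yes / self.b)
        en = math.exp(self.no / self.b)
        return ey / (ey + en)

    def buy_yes(self, shares):
        """Buy YES shares; returns the dollar cost of the trade."""
        before = self._cost()
        self.yes += shares
        return self._cost() - before

market = ReplicationMarket()
print(market.price())        # 0.5 -- every stock starts at $0.50
market.buy_yes(5)            # traders bet the study will replicate
print(market.price() > 0.5)  # True -- demand pushes the price up
```

With no trades the price sits at $0.50, matching the starting price in the experiment, and buying pressure on a study traders trust pushes its price (and thus its implied replication probability) toward $1.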
"It blew us all away," says Dreber. "There is some wisdom of crowds; people have some intuition about which results are true and which are not," adds Dreber. "Which makes me wonder: What's going on with peer review? If people know which results are really not likely to be real, why are they allowing them to be published?"
Expensive research journal subscriptions could be on the way out, if the Wellcome Trust has its way. The moneybags UK research foundation has published a report favoring free, so-called open-access journals over those that charge a fee for access. The report reviewed the activities of research institutions that received funding from the trust. It found that it is cheaper, and thus a better use of grants, to place papers in freely available journals.
Meanwhile, the trust feels it's not getting enough bang for its bucks from hybrid publications. These hybrids charge scientists a decent wedge of cash to publish their work, charge people for journal subscriptions, and offer access to individual articles for free. In other words, the foundation would rather scientists submit their work to open-access journals, which are cheaper than hybrids in terms of publication and subscription costs. "We find that hybrid open access continues to be significantly more expensive than fully open access journals, and that as a whole, the level of service provided by hybrid publishers is poor and is not delivering what we are paying for," the trust said.
Local "academic" Dr Olivia Doll — also known as Staffordshire terrier Ollie — sits on the editorial boards of seven international medical journals and has just been asked to review a research paper on the management of tumours.
Her impressive curriculum vitae lists her current role as senior lecturer at the Subiaco College of Veterinary Science and past associate of the Shenton Park Institute for Canine Refuge Studies — which is code for her earlier life in the dog refuge.
Ollie's owner, veteran public health expert Mike Daube, decided to test how carefully some journals scrutinised their editorial reviewers, by inventing Dr Doll and making up her credentials.
The five-year-old pooch has managed to dupe a range of publications specialising in drug abuse, psychiatry and respiratory medicine into appointing her to their editorial boards.
Dr Doll has even been fast-tracked to the position of associate editor of the Global Journal of Addiction and Rehabilitation Medicine.
Earlier this month, when the biotech firm Human Longevity published a controversial paper claiming that it could predict what a person looks like based on only a teeny bit of DNA, it was just a little over a week before a second paper was published discrediting it as flawed and false. The lightning speed with which the rebuttal was delivered was thanks to bioRxiv, a server where scientists can publish pre-prints of papers before they have gone through the lengthy peer-review process. It took only four more days before a rebuttal to the rebuttal was up on bioRxiv, too.
This tit-for-tat biological warfare was only the latest in a series of scientific kerfuffles that have played out on pre-print servers like bioRxiv. In a piece that examines the boom of biology pre-prints, Science questions their impact on the field. In a time when a scandal can unfold and resolve in a single day's news cycle, pre-prints can lead to science feuds that go viral, unfolding rapidly without the oversight of peer review.
"Such online squabbles could leave the public bewildered and erode trust in scientists," Science argued. Many within the scientific community agree.
[Source Article (PDF)]: THE PREPRINT DILEMMA
What do you think?
A judge has ordered that anonymous peer reviewers for an article in a science journal be unmasked on behalf of the exercise regimen company CrossFit, Inc. The journal is published by a competitor of CrossFit:
In what appears to be a first, a U.S. court is forcing a journal publisher to breach its confidentiality policy and identify an article's anonymous peer reviewers.
The novel order, issued last month by a state judge in California, has alarmed some publishers, who fear it could deter scientists from agreeing to review draft manuscripts. Legal experts say the case, involving two warring fitness enterprises, isn't likely to unleash widespread unmasking. But some scientists are watching closely.
The dispute revolves around a 2013 paper, since retracted, that appeared in The Journal of Strength and Conditioning Research. In the study, researchers at The Ohio State University in Columbus evaluated physical and physiological changes in several dozen volunteers who participated for 10 weeks in a training regimen developed by CrossFit Inc. of Washington, D.C. Among other results, they reported that 16% of participants dropped out because of injury.
In public and in court, CrossFit has alleged that the injury statistic is false. CrossFit also claims that the journal's publisher, the National Strength and Conditioning Association (NSCA) of Colorado Springs, Colorado—which is a competitor in the fitness business—intentionally skewed the study to damage CrossFit. NSCA in turn has countersued, accusing CrossFit executives of defamation. Amid the legal crossfire, the journal first corrected the paper to reduce the number of injuries associated with CrossFit, then retracted it last year, citing changes to a study protocol that were not first approved by a university review board.
CrossFit suspects the paper's reviewers and editors worked to play up injuries associated with its regimen, and it has asked both federal and state judges to force the publisher to unmask the reviewers. In 2014, a federal judge refused that request. But last month, Judge Joel Wohlfeil of the San Diego Superior Court in California, who is overseeing NSCA's defamation suit against CrossFit, ordered the association to provide the names.