posted by CoolHand on Tuesday February 13 2018, @10:11PM   Printer-friendly
from the clearing-up-transparency dept.

Attendees of a Howard Hughes Medical Institute meeting debated whether or not science journals should publish the text of peer reviews, or even require peer reviewers to publicly sign their paper critiques:

Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.

The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.

Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.

But attendees also highlighted potential problems. For example, someone could cherry pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.

Related: Peer Review is Fraught with Problems, and We Need a Fix
Odd Requirement for Journal Author: Name Other Domain Experts
Gambling Can Save Science!
Wellcome Trust Recommends Free Scientific Journals
Medical Research Discovered to Have Been Peer Reviewed by a Dog
Should Scientists Be Posting Their Work Online Before Peer Review?
Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit


Original Submission

 
  • (Score: 4, Insightful) by crafoo on Wednesday February 14 2018, @04:06AM (2 children)

    by crafoo (6639) on Wednesday February 14 2018, @04:06AM (#637447)

    The only drawback mentioned in the quoted text essentially boils down to, "judges and lawyers are dumb". I don't necessarily agree but I also don't think it's a reasonable motivation for not making the reviews public.

    As for "Scientists should work to 'make the public understand that [peer review] is a fault-finding process...'" — uhhh, no. Science educators should. Scientists should be doing science. In fact, the less administrative bullshit scientists are required to do, the better for all of us. There are many things about science and many other important fields that the public should be better educated on. They aren't, and that's a direct consequence of our failed education system, not of the particular fields themselves.

  • (Score: 3, Informative) by FakeBeldin on Wednesday February 14 2018, @10:49AM (1 child)

    by FakeBeldin (3360) on Wednesday February 14 2018, @10:49AM (#637546) Journal

    Couldn't agree more.
    I believe more transparency in governance is a good thing, and reviews are a governance tool.

    There are plenty of unsavoury things happening in reviews, e.g.: a reviewer who decided to hate the paper/premise, a reviewer who doesn't get the paper (but overestimates his/her understanding), a reviewer who seems to have hardly put any effort into the paper.
    There are also plenty of silver linings: reviews that help to improve the paper substantially, reviews that open up interesting avenues for thought, short reviews whose every word is worth its weight in gold, etc.

    Show the reviews, so that I as a submitter know what to expect. Hell, journals (and conferences) could start competing on review quality for good papers. I would quite likely pick out above-average reviewing venues for any paper that is not being submitted to top venues.

    • (Score: 3, Interesting) by FatPhil on Saturday February 17 2018, @04:56AM

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Saturday February 17 2018, @04:56AM (#639216) Homepage
      Agree wholeheartedly, with pseudonymised reviews, but reviewers should never be identified, as that would have chilling effects.

      I've never been in academia, but I once contributed a bunch of mathematical monkeywork to someone else's paper (he wanted to make me a co-author, and I'd have earned an Erdős number for it, but I decided to just be in the thank-yous instead). The paper got rejected by 4 different journals with comments like "investigating the corner case where you don't just ignore a standard assumption, but actually assume it's false, is dumb". Finally a 5th journal published it, and the reviewer's comments were along the lines of "this work turns the field on its head - this is so important it should be in the textbooks". I sooooooo want to know who all those reviewers were. But not for the right reasons. Pseudonymising, so that trends can be spotted and biases dealt with, is as far as it should go. There should be accountability.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves