
SoylentNews is people

posted by CoolHand on Tuesday February 13 2018, @10:11PM   Printer-friendly
from the clearing-up-transparency dept.

Attendees at a Howard Hughes Medical Institute meeting debated whether science journals should publish the text of peer reviews for each paper, and whether they should go further and require peer reviewers to publicly sign their critiques:

Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.

The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.

Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.

But attendees also highlighted potential problems. For example, someone could cherry pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.

Related: Peer Review is Fraught with Problems, and We Need a Fix
Odd Requirement for Journal Author: Name Other Domain Experts
Gambling Can Save Science!
Wellcome Trust Recommends Free Scientific Journals
Medical Research Discovered to Have Been Peer Reviewed by a Dog
Should Scientists Be Posting Their Work Online Before Peer Review?
Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit


Original Submission

 
  • (Score: 5, Insightful) by AthanasiusKircher (5291) on Wednesday February 14 2018, @01:10AM (#637387) Journal (2 children)

    > I am not in academia, but aren't those peer reviews provided verbatim to the author?

    Depends on the practice of the journal and editor. Many journals pass along reviews verbatim. Some editors exercise discretion in editing what is sent on to the author. I've even seen journals make this explicit with separate "comments to the author" and "comments to the editor" fields in the review submission. The latter is presumably for comments like, "I tried to be somewhat nice in evaluating the paper on its own terms and not being overtly insulting to the author, but honestly the main argument makes no sense and this should NOT be published." Some editors will even consolidate comments from multiple reviewers, passing along mainly the points that the author needs to address to make the paper suitable for publication. (This happens more often with casual publications like edited collections than with top-tier peer-reviewed journals.)

    > If the text was identifying, that damage is already done. If it wasn't identifying, then what is the harm to anonymity in printing it in the same format it is presented to the author?

    No harm is necessarily done in terms of anonymity, though stylometric authorship analysis and big-data text mining may be able to break down that anonymity a bit.
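
    To make that concrete (a toy sketch of my own, not anything from TFA -- the reviewer names and texts below are made up): even something as crude as comparing character n-gram frequencies between an anonymous review and a candidate reviewer's published writing produces a similarity signal, and real stylometric tools use far richer features and much larger reference corpora.

        # Toy authorship-attribution sketch: character trigram profiles
        # compared by cosine similarity. Illustrative only; all names
        # and texts are hypothetical.
        from collections import Counter
        from math import sqrt

        def ngram_profile(text, n=3):
            """Relative frequencies of character n-grams in text."""
            text = text.lower()
            grams = [text[i:i + n] for i in range(len(text) - n + 1)]
            total = len(grams)
            return {g: c / total for g, c in Counter(grams).items()}

        def cosine(p, q):
            """Cosine similarity between two frequency profiles."""
            dot = sum(v * q.get(g, 0.0) for g, v in p.items())
            return dot / (sqrt(sum(v * v for v in p.values())) *
                          sqrt(sum(v * v for v in q.values())))

        anonymous_review = "The methodology conflates correlation with causation..."
        known_writing = {
            "Reviewer A": "As we argued before, conflating correlation with causation...",
            "Reviewer B": "The statistical treatment appears sound to me...",
        }
        review = ngram_profile(anonymous_review)
        for name, sample in known_writing.items():
            print(name, round(cosine(review, ngram_profile(sample)), 3))

    Obviously a couple of sentences isn't enough to deanonymize anyone; the point is just that the more review text gets published, the more material this kind of analysis has to work with.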

    The question, though, is whether there are other harms. I'd say the biggest is misperception. Reviewing is generally unpaid, and reviewers undertake reviews with varying levels of attention. Sometimes reviews are very short and overly terse (whether positive or negative) -- which doesn't necessarily indicate anything. If the paper is obviously good or obviously bad, that may be enough. On the other hand, in a lower-tier journal that basically publishes almost anything it's sent, a short positive review could be an indication that peer review wasn't thorough. It all depends on context that the general public probably won't be able to interpret.

    But more often, you get reviewers who are overly critical of aspects that ultimately may not matter. Say a paper is targeting issue A, but also in passing makes some remarks about B, C, D, and E. Suppose a reviewer is an expert in C. Often such a reviewer will go into great detail about everything that's wrong in the two paragraphs dealing with C, even though C isn't part of the paper's main argument. The reviewer may want 75% of the paper to be about clarifying C, even though that's not the point. But to an ignorant outside observer, it may look like the reviewer completely trashed the paper, ranting about all the errors. We see these sorts of things happen in public debates and political arguments all the time -- attack some minor detail that may not even be essential to the argument, and thereby draw suspicion on the whole thing.

    And the reviewer may not even be intending to trash the whole paper. They may just be using their expertise (in an overbearing way) to draw the author's attention to related work. Nevertheless, to an external reader who doesn't know the field, this may look like very harsh criticism.

    Good editors will often spot such things and know how to put them into context. If a review is excessively negative because the reviewer "didn't get the point," the editor may even send the paper out to more reviewers for clarification. Or just summarily tell the author they don't need to worry much about revising the section on C beyond some minor corrections.

    Moreover, some reviewers are just nasty for no apparent reason. Or just refuse to consider the merits of the author's main point because they think something about the way it's presented is silly. Again, editors often have to make judgments about when this indicates serious flaws in the paper vs. just a ranting reviewer.

    The point is that anonymity in reviewing -- like anonymity on the internet -- occasionally breeds trolls too. But as GP pointed out here, removing anonymity introduces other serious issues. Airing all of this stuff in public will often muddy the waters rather than "educating the public" about critical reviews. Anyone can see what the general public does with criticism -- it breeds conspiracy theories in so many areas whenever even the tiniest discrepancy or error (or even perceived error) is found. Not that science should be presented as infallible -- obviously that's bad too. But there are better ways to introduce transparency or give critical counterpoint. For example, I've seen a few journals that frequently publish responses or comments from other scholars in the same issue as the new research. There, the criticism is actually intended for public consumption -- not just to improve the paper for publication or render a verdict on its worthiness (the ultimate goals of peer review).

    Also, one has to consider the various outcomes of the peer review process. Surely journals aren't going to publish the negative reviews that lead to rejections... at least I assume. That creates selection bias and leaves the public unable to evaluate the significance of reviews. If, as in the example given in the summary, a lawyer brings up a critical review, it would be helpful to compare the level of criticism for accepted papers vs. those that were rejected. Can we do this?

    And what about other options for review? Practice from journal to journal varies, but reviewers often render a verdict that could take many different forms, including:

    (1) Accept basically as written
    (2) Accept with clarifications (points raised in the reviews must often be addressed)
    (3) Recommend to revise and resubmit (paper is not without merit, but currently has serious flaws)
    (4) Reject outright

    (These are common options, but others may happen at journals too.)

    Only in the case of (1) are we really seeing a potential dialogue about the article as it will be printed. And in cases where no changes are recommended by reviewers, you're more likely to see brief reviews that aren't that useful anyway, because the thing is solid. If (2), are journals going to publish both the original version and any revised version? If not, there's now new pressure on authors to take into account everything in a review, or else they look stupid... and it makes the editorial job of sorting out which criticisms are essential for publication a lot harder, because now the guy who ranted in a review about issue C for a paper about issue A creates a potential misperception. And if (3) happens, this all gets a lot worse, because a resubmission sometimes stems from stupid or embarrassing errors that the author can fix. But now does the flawed draft become public record? Or only the reviews of the resubmitted version? It seems disingenuous to the notion of "transparency" to hide the reviews of the initial flawed version... but are they still relevant if significant revisions are made? Sometimes the errors have nothing to do with the fundamental ideas of the research, just subtle flaws in the methodology or analysis or argument. Yet airing this dialogue of repeated submissions publicly may ultimately draw unnecessary criticism to the final research.

    All that said, there are potentially very good reasons for more transparency in peer reviewing. For example, there are journals that may have editors who "go easy" on some authors and choose reviewers who are "friendly." But the general public probably won't be able to pick up on these subtleties. And some of the greatest unfairness may happen in rejected papers that just happened to draw bad reviewers -- but I doubt journals are going to start publishing rejected papers and their reviews somewhere...

    So there may be reason to introduce more oversight in the reviewing process. But just publishing documents that are meant to be primarily private criticisms/evaluations to individual authors is likely to have a bunch of unforeseen negative effects, probably more than positive ones. After all, I'm sure we could gain some insight into the research process by publishing memos and emails sent from lab directors to researchers as they're working on their research too. Or private correspondence among authors as they are finalizing a paper for submission. But these are also private documents about the process of improving the paper for publication, which is ultimately the goal of peer review... not to air public critique. Even if journals don't have a formalized process of publishing critiques along with new articles, almost every reputable journal has some sort of "comment" or "correspondence" or "letters to the editor" section which are often used for the very purpose brought up in the summary, i.e., public critique.

    I mean, I suppose to those "in the know" who can actually evaluate what reviews mean, making this public will likely cement views that are generally already well-known within fields, e.g., journal A is top-tier because it has rigorous standards, journal B is sometimes mediocre, and journal C publishes almost everything it receives. This will likely be reflected in the quality of the reviews, but again, the quality of the research in top journals often already speaks for itself. If anything, in terms of the general public, it's likely to create more distrust of better research, since reviews in top journals are likely to be more thorough and more critical (even on minor points, which the general public won't get), whereas journal C that accepts everything will probably have glowing -- if less thorough -- reviews for every paper...

  • (Score: 0) by Anonymous Coward on Wednesday February 14 2018, @02:41AM (#637417)

    > publish responses or comments from other scholars on papers in the same issue as new research.

    I greatly enjoy reading old engineering papers from the Institution of Mechanical Engineers (UK) -- papers were published after they were presented before an attentive audience, the discussion afterwards was transcribed, and the whole session (original paper, comments from the audience, and responses by the author) was published together.

    But I suppose this would be much too expensive and slow to do today. Sad how much we've lost to the gods of cost and speed.

  • (Score: 2) by insanumingenium (4824) on Wednesday February 14 2018, @06:07PM (#637712) Journal

    First of all, I am pleasantly surprised a glib comment on my part could lead to so much information. I acknowledge up front, I am talking out my ass here. Thanks for taking your time with me.

    I had assumed that any request for clarifications (cases 2 and 3) would generate a new draft which gets re-reviewed. I was further assuming that only the final round of reviews, based on the final document, would be published. To carry it to absurdity, do we need a live transcript of the document being written to be transparent? It seems to me that what they are presenting for public discourse is their finished idea with all the supporting facts they can generate; the clarifications and rewrites along the way to that finished product seem incidental to me. But the thoughts of experts on the subject of that final work are hardly incidental.

    I spoke specifically about anonymity because I don't think there is a clear-cut answer to the issue at hand, and I see that there is a strong argument that publishing reviews could increase public distrust. The issue for me is that we already have public distrust of science in many areas, and I don't think concealing scientific discourse is the path to making that problem better. Call it a meaningless value judgement, but I really don't see how science can work optimally without transparency. The obvious cost is that you have to know how to read academic publications, which is already the case, and which I don't think the counter-argument would alleviate.

    Coming from that point of view, not wanting to publish the petty infighting and cronyism you are worried about exposing seems like a very shallow excuse to me. If anything, having to write your response knowing that it will represent your field should help with this, I would hope. As for the pigeonholed specialist's comments, while potentially off-topic, they seem like a way to generate inter-domain knowledge; even if point C is totally trivial, if it made it into the final product it must have some relevance, and further discourse on it could be edifying.

    I don't envy the jobs of these editors after reading all of the above. This all leads to a problem I hadn't considered until I wrote this: how do you publish a response to the response? How deep does the rabbit hole go, and at what point do you have to give someone the final word without a review? Frankly, the inability to infinitely recurse the reviews seems like the best argument I have heard yet against publishing the peer reviews.