
posted by CoolHand on Tuesday February 13 2018, @10:11PM
from the clearing-up-transparency dept.

Attendees of a Howard Hughes Medical Institute meeting debated whether or not science journals should publish the text of peer reviews, or even require peer reviewers to publicly sign their paper critiques:

Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.

The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.

Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.

But attendees also highlighted potential problems. For example, someone could cherry pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.

Related: Peer Review is Fraught with Problems, and We Need a Fix
Odd Requirement for Journal Author: Name Other Domain Experts
Gambling Can Save Science!
Wellcome Trust Recommends Free Scientific Journals
Medical Research Discovered to Have Been Peer Reviewed by a Dog
Should Scientists Be Posting Their Work Online Before Peer Review?
Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit


Original Submission

 
  • (Score: 4, Informative) by JoeMerchant on Tuesday February 13 2018, @10:14PM (13 children)

    by JoeMerchant (3937) on Tuesday February 13 2018, @10:14PM (#637299)

    I'm sure different journals and different fields each have their own traditions of "how it is done" - but the folks I worked with would often be anonymously reviewing each other's papers, and that anonymity was key to the system. A reviewer will often critique the completeness of an analysis and call for significant extra work to be done before publication. When the person being reviewed is your boss in real life, that could get rather awkward.

    --
    🌻🌻 [google.com]
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday February 13 2018, @10:29PM

    by Anonymous Coward on Tuesday February 13 2018, @10:29PM (#637307)

    Didn't Thomson Reuters (or whatever they are now) have a product that indexed peer reviewers so they could sell the lists to marketers?

  • (Score: 3, Insightful) by maxwell demon on Tuesday February 13 2018, @10:31PM

    by maxwell demon (1608) on Tuesday February 13 2018, @10:31PM (#637308) Journal

    Even if it is not your boss, it might be the person who will be deciding on your next grant application. The anonymity of the reviewer exists precisely to allow them to give honest reviews without fear of retaliation.

    Note however that the reviewer is anonymous only to the paper author and general public; the editor of course knows quite well who was reviewing the paper.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 4, Informative) by insanumingenium on Tuesday February 13 2018, @11:12PM (3 children)

    by insanumingenium (4824) on Tuesday February 13 2018, @11:12PM (#637327) Journal

    I am not in academia, but aren't those peer reviews provided verbatim to the author? If the text was identifying, that damage is already done. If it wasn't identifying, then what is the harm to anonymity in publishing it in the same form in which it is presented to the author?

    • (Score: 5, Insightful) by AthanasiusKircher on Wednesday February 14 2018, @01:10AM (2 children)

      by AthanasiusKircher (5291) on Wednesday February 14 2018, @01:10AM (#637387) Journal

      I am not in academia, but aren't those peer reviews provided verbatim to the author?

      Depends on the practice of the journal and editor. Many journals pass along reviews verbatim. Some editors exercise discretion in editing what is sent on to the author. I've even seen journals make this explicit in a sort of "comments to the author" vs. "comments to the editor" submission. The latter is presumably for comments like, "I tried to be somewhat nice in evaluating the paper on its own terms and not being overtly insulting to the author, but honestly the main argument makes no sense and this should NOT be published." Some editors will even group comments from multiple reviewers, passing along mainly the points that the author needs to address to make the paper suitable for publication. (This tends to happen sometimes with more casual publications like edited collections, rather than top-tier peer-reviewed journals.)

      If the text was identifying, that damage is already done. If it wasn't identifying, then what is the harm to anonymity in publishing it in the same form in which it is presented to the author?

      No harm is necessarily done in terms of anonymity, though authorship metrics and big data may be able to break down that anonymity a bit.

      The question, though, is whether there are other harms. I'd say the biggest is misperception. Reviewing is generally unpaid, and reviewers undertake reviews with varying levels of attention. Sometimes reviews may be very short and overly terse (either positive or negative) -- which doesn't necessarily indicate anything: if the paper is clearly good or clearly bad, that may be enough. On the other hand, in a lower-tier journal that basically publishes almost anything it's sent, a short positive review could be an indication that peer review wasn't thorough. It all depends on context that the general public probably won't be able to interpret.

      But more often, you get reviewers who are overly critical about aspects that ultimately may not matter. Say a paper is targeting issue A, but also in passing makes some remarks about B, C, D, and E. Suppose a reviewer is an expert in C. Often such a reviewer will go into great detail about everything that's wrong in the two paragraphs dealing with C, even if that's not part of the paper's main argument. The reviewer may want 75% of the paper to be about clarifying C, even though that's not the point. But to an ignorant outside observer, it may look like the reviewer completely trashed the paper, ranting about all the errors. We see these sorts of things happen in public debates and political arguments all the time -- you attack some minor detail that may not even be essential to the argument, but thereby cast suspicion on the whole thing.

      And the reviewer may not even be intending to trash the whole paper. They may just be using their expertise (in an overbearing way) to draw the author's attention to related work. Nevertheless, to an external reader who doesn't know the field, this may look like very harsh criticism.

      Good editors will often spot such things and know how to put them into context. If a review is excessively negative because it "didn't get the point," the editor may even send the paper out to more reviewers for clarification. Or simply tell the author they don't need to worry about revising the section on C beyond some minor corrections.

      Moreover, some reviewers are just nasty for no apparent reason. Or just refuse to consider the merits of the author's main point because they think something about the way it's presented is silly. Again, editors often have to make judgments about when this indicates serious flaws in the paper vs. just a ranting reviewer.

      The point is that anonymity in reviewing -- like anonymity on the internet -- occasionally breeds trolls too. But as GP pointed out here, removing anonymity also introduces other serious issues. Airing all of this stuff in public will often muddy the waters, rather than "educating the public" about critical reviews. Anyone can see what the general public does with criticism -- it breeds conspiracy theories in so many areas where even the tiniest discrepancy or error (or even perceived error) is found. Not that science should be presented as infallible -- obviously that's bad too. But there are better ways to introduce transparency or give critical counterpoint. For example, I've seen a few journals that frequently publish responses or comments from other scholars alongside new research in the same issue. There, criticism is actually intended for public consumption -- not just to improve the paper for publication or render a verdict on its worthiness (the ultimate goals of peer review).

      Also, one has to consider the various outcomes of the peer review process. Surely journals aren't going to publish the negative reviews that lead to rejections... at least I assume. Which leads to selection bias and an inability for the public to evaluate the significance of reviews. If, as in the example given in the summary, a lawyer brings up a critical review, it would be helpful to compare the level of criticism for accepted papers vs. those which were rejected. Can we do this?

      And what about other options for review? Practice from journal to journal varies, but reviewers often render a verdict that could take many different forms, including:

      (1) Accept basically as written
      (2) Accept with clarifications (specific points from the reviews must be addressed)
      (3) Recommend to revise and resubmit (paper is not without merit, but currently has serious flaws)
      (4) Reject outright

      (These are common options, but others may happen at journals too.)

      Only in the case of (1) are we really seeing a potential dialogue about the article as it will be printed. And in cases where no changes are recommended by reviewers, you're more likely to see brief reviews that aren't that useful anyway, because the thing is solid.

      If (2), are journals going to publish both the original version and any revised version? If not, there's new pressure on authors to take into account everything in a review, or else they look stupid... and it makes the editorial job of sorting out which criticisms are essential for publication a lot harder, because now that guy who ranted in a review about issue C for a paper about issue A creates a potential misperception.

      And if (3) happens, this all gets a lot worse, because sometimes it results from stupid or embarrassing errors that an author can fix. But does the flawed first draft now become public record? Or only the reviews of the resubmitted version? It seems disingenuous to the notion of "transparency" to hide the reviews of the initial flawed version... but are they still relevant if significant revisions are made? Sometimes the errors may not have to do with fundamental ideas in the research, but subtle flaws in the methodology or analysis or argument. Yet airing this dialogue of repeated submissions publicly may ultimately draw unnecessary criticism to the final research.

      All that said, there are potentially very good reasons for more transparency in peer reviewing. For example, there are journals that may have editors who "go easy" on some authors and choose reviewers who are "friendly." But the general public probably won't be able to pick up on these subtleties. And some of the greatest unfairness may happen in rejected papers that just happened to draw bad reviewers -- but I doubt journals are going to start publishing rejected papers and their reviews somewhere...

      So there may be reason to introduce more oversight in the reviewing process. But just publishing documents that are meant to be primarily private criticisms/evaluations to individual authors is likely to have a bunch of unforeseen negative effects, probably more than positive ones. After all, I'm sure we could gain some insight into the research process by publishing memos and emails sent from lab directors to researchers as they're working on their research too. Or private correspondence among authors as they are finalizing a paper for submission. But these are also private documents about the process of improving the paper for publication, which is ultimately the goal of peer review... not to air public critique. Even if journals don't have a formalized process of publishing critiques along with new articles, almost every reputable journal has some sort of "comment" or "correspondence" or "letters to the editor" section which are often used for the very purpose brought up in the summary, i.e., public critique.

      I mean, I suppose to those "in the know" who can actually evaluate what reviews mean, making this public will likely cement views that are generally already well-known within fields, e.g., journal A is top-tier because it has rigorous standards, journal B is sometimes mediocre, and journal C publishes almost everything it receives. This will likely be reflected in the quality of the reviews, but again, the quality of the research in top journals often already speaks for itself. If anything, in terms of the general public, it's likely to create more distrust of better research, since reviews in top journals are likely to be more thorough and more critical (even on minor points, but the general public won't get that), whereas journal C that accepts everything will probably have glowing -- if less thorough -- reviews for every paper...

      • (Score: 0) by Anonymous Coward on Wednesday February 14 2018, @02:41AM

        by Anonymous Coward on Wednesday February 14 2018, @02:41AM (#637417)

        > publish responses or comments from other scholars on papers in the same issue as new research.

        I greatly enjoy reading old engineering papers from the Institution of Mechanical Engineers (UK) -- papers were published after they were given in front of an attentive audience, the discussion afterwards was transcribed, and the whole session (original paper, comments from the audience, and responses by the author) was published together.

        But I suppose this would be much too expensive and slow to do today. Sad how much we've lost to the gods of cost and speed.

      • (Score: 2) by insanumingenium on Wednesday February 14 2018, @06:07PM

        by insanumingenium (4824) on Wednesday February 14 2018, @06:07PM (#637712) Journal

        First of all, I am pleasantly surprised a glib comment on my part could lead to so much information. I acknowledge up front that I am talking out my ass here. Thanks for taking your time with me.

        I had assumed that any request for clarifications (cases 2 and 3) would generate a new draft which gets re-reviewed, and I was further assuming that only the final round of reviews, based on the final document, would be published. To carry it to absurdity, do we need a live transcript of the document being written to be transparent? What they are presenting for public discourse, it seems to me, is their finished idea with all the supporting facts they can generate; clarifications and rewrites along the way to that finished product seem incidental. But the thoughts of experts on the subject of that final work are hardly incidental.

        I spoke specifically about anonymity because I don't think there is a clear-cut answer to the issue at hand, and I see that there is a strong argument that publication could increase public distrust. The issue for me is that we already have public distrust of science in many areas, and I don't think concealing scientific discourse is the path to making that problem better. Call it a meaningless value judgement, but I really don't see how science can work optimally without transparency. The obvious cost is that you have to know how to read academic publications, which is already the case, and which I don't think the counter-argument would alleviate.

        Coming from that point of view, not wanting to publish the petty infighting and cronyism you are worried about exposing seems like a very shallow excuse to me. If anything, having to write your review knowing that it will represent your field should help with this, I would hope. As for the pigeonholed specialist's comments: while potentially off-topic, they seem like a way to generate inter-domain knowledge. Even if point C is totally trivial, if it made it into the final product it must have some relevance, and further discourse on it could be edifying.

        I don't envy the jobs of these editors after reading all of the above. This all leads to a problem I hadn't considered until I wrote this: how do you publish a response to the response? How deep does the rabbit hole go, and at what point do you have to give someone a final word without a review? Frankly, the impossibility of infinitely recursing the reviews seems like the best argument I have heard yet against publishing the peer reviews.
  • (Score: 5, Informative) by bzipitidoo on Wednesday February 14 2018, @12:51AM (5 children)

    by bzipitidoo (4388) on Wednesday February 14 2018, @12:51AM (#637380) Journal

    They are talking about making the reviews public, not losing the reviewers' anonymity.

    And I think it's a good idea. Reviewers can get lazy, and reject papers because that's less work for them. Just say the authors didn't do enough work, didn't do a validation study, or a survey, or whatever. More work can always be done. Last paper I submitted was rejected, in part because the reviewers thought the material was too trivial. They wanted more. Guess I didn't dazzle them with enough formal proofs and heavy duty math. They feel more comfortable accepting a paper if it looks more "sophisticated", so to speak. There are an awful lot of unwritten conventions, such as how a research paper should be laid out and organized, and how to hype the discoveries. If not followed, reviewers become uncomfortable, suspect the authors are noobs and fear they don't know what they're talking about, and it greatly increases the odds of a rejection, regardless of the actual merits of the discoveries and work.

    There's also a problem with cliques. One of the worst papers I saw in a major conference was "Protein is incompressible", which was obviously accepted because of who the authors were. It was nothing more than "we tried to zip a DNA sequence and it didn't save any space, therefore evolution must have resulted in DNA already being pretty well compressed." The better journals try to handle that problem by keeping the authors' names from the reviewers, but in a small enough circle that sometimes doesn't work. Not so good journals are, unfortunately, legion, and the papers in them can be pretty bad. Like, there's the IEE-- yeah, 2 E's, not 3 E's. Don't know who they thought they were fooling, but the few papers I looked at in their journals were awful.
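
    A minimal sketch of that zip experiment (Python, with an invented sequence length) shows what such a test actually measures: a general-purpose compressor takes ASCII DNA down to roughly the trivial 2-bits-per-base floor, so "zip didn't save any space" only means something if measured against that floor.

    import random
    import zlib

    # Toy version of the experiment described above (numbers invented):
    # compress a random DNA sequence stored as one ASCII byte per base.
    random.seed(1)
    dna = "".join(random.choice("ACGT") for _ in range(100_000)).encode("ascii")

    packed_floor = len(dna) // 4        # 2 bits/base: the trivial packing limit
    compressed = zlib.compress(dna, 9)  # what "zipping" the text file does

    print(f"ASCII size:      {len(dna)} bytes")
    print(f"2-bit packing:   {packed_floor} bytes")
    print(f"zlib compressed: {len(compressed)} bytes")
    # zlib lands near the 25% floor; beating it would require real statistical
    # structure in the sequence, which a random sequence by construction lacks.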

    All those issues are well-known problems with the system. Exposing reviews to the public would make it a lot harder for a lame review to pass muster. A good review should be able to withstand public scrutiny.

    • (Score: 3, Informative) by takyon on Wednesday February 14 2018, @01:05AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday February 14 2018, @01:05AM (#637383) Journal

      The article also discusses losing the reviewers' anonymity, under the heading "To name or not?"

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Informative) by AthanasiusKircher on Wednesday February 14 2018, @01:28AM (2 children)

      by AthanasiusKircher (5291) on Wednesday February 14 2018, @01:28AM (#637394) Journal

      I completely agree with many of your criticisms of the system. I just question whether this will actually fix them.

      -- First, why would this "make it a lot harder for a lame review to pass muster"? With anonymity intact (as you assume), if the paper is actually published, a "lame review" will only serve to make the journal look bad or cast doubt on the final paper (from ignorant readers who look at the review and don't understand why it lacks subtlety). So only the journal/editor or author is harmed. The reviewer isn't punished in any way for writing a "lame review." Perhaps the editor will stop sending things to them to be reviewed, but if the editors are ignoring the reviews and publishing papers anyway, they are probably already likely to stop sending as many things to that reviewer, regardless of whether the reviews are aired in public.

      -- Second, I think your system only really functions if reviews for REJECTED papers are made public. And really, in order for that system to function, the journal would have to publish the rejected submissions so the reviews could be understood -- which is a pretty weird concept. In that case, we might be able to evaluate whether papers are summarily rejected for bad reasons -- but at the cost of exposing authors to serious public ridicule. (Top reviewers in top journals often don't mince words in rejections.)

      I absolutely agree with you that it would be good to catch cases where papers are rejected for bad reasons, or perhaps accepted without sufficient scrutiny. The question is whether "airing the dirty laundry" of everything else in the review process is actually the best way to go about that.

      And I agree that "a good review should be able to withstand public scrutiny." (And by "public" I'm just going to assume you mean the researcher community in that field, because I don't think anything good is going to come of the general public scrutinizing a bunch of technical reviews they won't understand... they just have no metric to evaluate it, and it will likely lead to more misperceptions than insight about research.) I just don't see that publicizing reviews creates a very strong incentive for ANONYMOUS reviewers to change their patterns. After all, look at what AC says on here sometimes...

      • (Score: 2) by bzipitidoo on Wednesday February 14 2018, @07:25PM (1 child)

        by bzipitidoo (4388) on Wednesday February 14 2018, @07:25PM (#637799) Journal

        Going off on a seeming tangent here, but it'll make sense, just bear with me. I once read a critique in praise of email, for eliminating the stilted formality that had become customary in handwritten (or hand-typed) letters, stuff such as starting the letter with "Dear". Scientific research is still stuck with a lot of formality. Journals typically require of authors such trivia as using only approved fonts and sizes and other typesetting details that simply are not relevant to research. Some even provide LaTeX templates, which certainly makes things easier, but doesn't address the underlying issue that scientists should not have to spend time on text formatting at all. There is also a strict page limit that can be impossible to check without formatting the text. While the page limit can help authors stay focused, much like the oft-lauded 140-character tweet limit, it really does not seem necessary any more, not with the incredible amounts of digital storage at our disposal.

        Source code and data sets were also traditionally excluded; they are now being welcomed, yet are still not part of the article proper. Of course, data sets can be way too large for presentation in the same manner as an article, while mathematical formulae are not only wanted, they are darn near required. But source code? A journal article should be not just in a digital format, but in a format that allows easy transfer of math to suitable mathematical software (which could be MATLAB, except that's proprietary), of data to a database, spreadsheet, or plain file, and of source code to the relevant compiler or interpreter. The "finished" product being a pretty PDF file was the wrong direction; that's still thinking of the ultimate destination as a printed book. A format such as EPUB (which is really just HTML in a zip file) is a better direction to go. It would of course be trivial to include the reviews.
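
        To see that concretely, any zip tool can open an EPUB; a few lines of Python (with a hypothetical filename) are enough to list the HTML inside:

        import zipfile

        # "paper.epub" is a hypothetical filename; any EPUB will do.
        with zipfile.ZipFile("paper.epub") as z:
            for name in z.namelist():
                print(name)  # typically: mimetype, META-INF/container.xml,
                             # and *.xhtml content files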

        So, thank you for your patience, and what does all that have to do with the subject? It's kind of a slippery slope, a good one. Challenge the assumption that reviews should be private, and maybe that will lead to these other issues being challenged. Or the other way around, break this addiction to PDF, typesetting, and printing on dead trees, and maybe the idea that reviews shouldn't be private any more will follow. There's a lot about scientific publication that needs scrutiny and improvement. I suspect one of the motivations for clinging to these outmoded printing considerations is academic publishers who wish to maintain their current stranglehold. Ending the reign of PDF isn't only about freeing scientists from mere text formatting problems, it's about freeing us all from these academic publishers who have degenerated into nothing more than rent seeking parasites.

        Another issue is that publishers don't pay reviewers; we, the public, pay for that, and only indirectly. Reviewers not employed at universities may well get nothing at all for doing a review, other than brownie points or karma or the satisfaction of knowing that they helped advance mankind's knowledge, or some other vague, ephemeral compensation. I've done reviews where I pointed out very specific things that the authors didn't mention and should have, and even cited the relevant papers. And what did the bastards do but copy my review, nearly verbatim, into their paper! I know, because the revised version was kicked back to me for further review. I felt like I should have been a co-author. In that case, yeah, I think I would've liked not only my review to be public, but my name too.

        This brings up yet another problem, that of science being too solitary. You toil away on a paper all by yourself, sweat over whether it's correct, and whether it's good enough. The uncertainty is great because no one else has seen any of it. But of course in the world of "publish or perish", many are happy to get a free ride and add their names and not contribute, so collaboration isn't all roses. Then you finally submit it to a journal, and wait months for a review. And then it gets savaged. Maybe you missed some simple little thing. It's like playing chess, thinking and thinking about a move, and all the while you've overlooked an easy checkmate your opponent has. Chess grandmasters have lost games because of bad blunders like that. The entire education system is too biased towards lone wolf science rather than real collaboration.

        I think reviews being public is fair, as long as reviewers know that their reviews will not be private. The dirty laundry you speak of -- well, if it's "bad" dirt, like an unfair rejection because the reviewer was too busy and gave the submission short shrift, then good riddance. If it's honest and good criticism of problems with the submission, then I am not afraid of the public misunderstanding. There have been many cases where secrecy merely fueled wild conspiracy theories, and that is worse.

        • (Score: 2) by AthanasiusKircher on Thursday February 15 2018, @02:49AM

          by AthanasiusKircher (5291) on Thursday February 15 2018, @02:49AM (#638036) Journal

          This is an interesting reply; thanks for taking the time. And again, I agree with a lot of what you said -- especially in terms of problems in academic publishing -- so I'm not going to really argue against it (since I already made my thoughts clear in another long post).

    • (Score: 2) by TheRaven on Wednesday February 14 2018, @02:54PM

      by TheRaven (270) on Wednesday February 14 2018, @02:54PM (#637605) Journal

      They are talking about making the reviews public, not losing the reviewers' anonymity.

      One likely entails the other. If you analyse writing styles and correlate them with the set of reviewers for each paper, using conflict-of-interest exclusions to eliminate possible candidates, you'll be able to deanonymise a lot of reviews.
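
      As a toy illustration of that attack (Python; the function-word list and all inputs are invented, and real stylometry uses much richer features such as character n-grams and syntax):

      import math
      from collections import Counter

      FUNCTION_WORDS = ["the", "of", "and", "to", "however", "moreover",
                        "thus", "which", "that", "indeed"]

      def profile(text):
          # Relative frequency of each function word in the text.
          words = text.lower().split()
          total = max(len(words), 1)
          counts = Counter(words)
          return [counts[w] / total for w in FUNCTION_WORDS]

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a)) or 1.0
          nb = math.sqrt(sum(x * x for x in b)) or 1.0
          return dot / (na * nb)

      def guess_reviewer(review, samples, conflicted):
          # Conflict-of-interest exclusions shrink the candidate pool,
          # exactly as suggested above.
          p = profile(review)
          scores = {name: cosine(p, profile(text))
                    for name, text in samples.items() if name not in conflicted}
          return max(scores, key=scores.get)

      Even this crude matcher narrows things down quickly when the plausible reviewer pool for a niche paper is only a dozen people.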

      Reviewers can get lazy, and reject papers because that's less work for them.

      That's a problem for the PC (programme committee). A poor review should be ignored, and in good venues usually is. A really good review is a lot of effort to write and will include detailed suggestions for improvements. I strongly suspect that people would be less willing to commit all of that to a public forum that is potentially vulnerable to deanonymisation than to something that is shared only with the authors.

      --
      sudo mod me up
  • (Score: 2) by driverless on Thursday February 15 2018, @05:50AM

    by driverless (4770) on Thursday February 15 2018, @05:50AM (#638096)

    It's also not good for the people whose paper is being reviewed. Having it publicly pointed out that the review copy was full of mistakes, missed important research, or had other major problems isn't going to do the authors any good. And arguments like "the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations" are completely bogus: I've written or helped write a number of articles specifically intended to guide new authors towards getting their papers accepted, and yet I keep seeing the same mistakes made over and over again. It's not that the material isn't already out there, it's that it mostly gets ignored. And that gets back to my first point: having authors' errors aired in public like this doesn't help the authors, and having the reviewers' names published alongside their comments doesn't help the reviewers. So who is being helped, when it's neither the reviewers nor the authors?