
posted by CoolHand on Tuesday February 13 2018, @10:11PM
from the clearing-up-transparency dept.

Attendees of a Howard Hughes Medical Institute meeting debated whether or not science journals should publish the text of peer reviews, or even require peer reviewers to publicly sign their paper critiques:

Scientific journals should start routinely publishing the text of peer reviews for each paper they accept, said attendees at a meeting last week of scientists, academic publishers, and funding organizations. But there was little consensus on whether reviewers should have to publicly sign their critiques, which traditionally are accessible only to editors and authors.

The meeting—hosted by the Howard Hughes Medical Institute (HHMI) here, and sponsored by HHMI; ASAPbio, a group that promotes the use of life sciences preprints; and the London-based Wellcome Trust—drew more than 100 participants interested in catalyzing efforts to improve the vetting of manuscripts and exploring ways to open up what many called an excessively opaque and slow system of peer review. The crowd heard presentations and held small group discussions on an array of issues. One hot topic: whether journals should publish the analyses of submitted papers written by peer reviewers.

Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations. "We saw huge benefits to [publishing reviews] that outweigh the risks," said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.

But attendees also highlighted potential problems. For example, someone could cherry pick critical comments on clinical research studies that are involved in litigation or public controversy, potentially skewing perceptions of the studies. A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process," said Veronique Kiermer, executive editor of the PLOS suite of journals, based in San Francisco, California.

Related: Peer Review is Fraught with Problems, and We Need a Fix
Odd Requirement for Journal Author: Name Other Domain Experts
Gambling Can Save Science!
Wellcome Trust Recommends Free Scientific Journals
Medical Research Discovered to Have Been Peer Reviewed by a Dog
Should Scientists Be Posting Their Work Online Before Peer Review?
Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit

Original Submission

Related Stories

Peer Review is Fraught with Problems, and We Need a Fix 52 comments

A story is running on some of the issues with modern peer review:

Once published, the quality of any particular piece of research is often measured by citations, that is, the number of times that a paper is formally mentioned in a later piece of published research. In theory, this aims to highlight how important, useful or interesting a previous piece of work is. More citations are usually better for the author, although that is not always the case.

Take, for instance, Andrew Wakefield's controversial paper on the association between the MMR jab and autism, published in leading medical journal The Lancet. This paper has received nearly two thousand citations – most authors would be thrilled to receive a hundred. However, the quality of Wakefield's research is not at all reflected by this large number. Many of these citations are a product of the storm of controversy surrounding the work, and are contained within papers which are critical of the methods used. Wakefield's research has now been robustly discredited, and the paper was retracted by the Lancet in 2010. Nevertheless, this extreme case highlights serious problems with judging a paper or an academic by number of citations.

Personally, I've been of the opinion that peer review is all but worthless for quite a while. It's nice to know I'm not the only one who has issues with the process.

Odd Requirement for Journal Author: Name Other Domain Experts 24 comments

An Anonymous Coward writes:

A friend from academia recently invited me to write a paper for a journal that he is guest editing. I don't write many papers (not in academia), so I figured I better look through the Author Guidelines to see what formats they would accept, etc.

Here is the Inderscience author FAQ page.

This one stopped me in my tracks:

Why am I asked to identify four experts?

You must identify four experts in the subject of your article, details of which will be requested during online submission. The experts must not be members of the editorial board of any Inderscience journal, must not be from your* institution, and at least two of them must be from a different country from you*.

The purpose of this request is to ensure your familiarity with the latest research literature in the field and to identify suitable experts who can be added to our Experts Database and who may be asked if they are willing to review articles for Inderscience journals; we are unlikely to ask them to referee your article.
(*"you" refers to all authors of the paper)

Has anyone else been asked to identify professional friends by a journal publisher?

Needless to say, I'm not writing anything for Inderscience until this request is removed. Or maybe I'll write the paper as a favor to my friend...and provide names of experts from my field who are deceased.

Original Submission

Gambling Can Save Science! 16 comments

The field of psychology has recently been embarrassed by failed attempts to repeat the results of classic textbook experiments, and a mounting realization that many papers are the result of commonly accepted statistical shenanigans rather than careful attempts to test hypotheses.

Now Ed Yong writes at The Atlantic that Anna Dreber at the Stockholm School of Economics has created a stock market for scientific publications, where psychologists bet on published studies based on how reproducible they deemed the findings to be. Based on Robin Hanson's classic paper "Could Gambling Save Science?", which proposed a market-based alternative to peer review called "idea futures," the market would allow scientists to formally "stake their reputation" and offer clear incentives to be careful and honest while contributing to a visible, self-consistent consensus on controversial (or routine) scientific questions.

Here's how it works. Each of 92 participants received $100 for buying or selling stocks on 41 studies that were in the process of being replicated. At the start of the trading window, each stock cost $0.50. If the study replicated successfully, they would get $1. If it didn't, they'd get nothing. As time went by, the market prices for the studies rose and fell depending on how much the traders bought or sold. The participants tried to maximize their profits by betting on studies they thought would pan out, and they could see the collective decisions of their peers in real time. The final price of the stocks, at the end of the two-week experiment, reflected the probability that each study would be successfully replicated, as determined by the collective actions of the traders. In the end, the markets correctly predicted the outcomes of 71 percent of the replications—a statistically significant, if not mind-blowing, score.

"It blew us all away," says Dreber. "There is some wisdom of crowds; people have some intuition about which results are true and which are not," adds Dreber. "Which makes me wonder: What's going on with peer review? If people know which results are really not likely to be real, why are they allowing them to be published?"
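The market mechanics described above reduce to simple arithmetic: each study's stock pays out $1 on successful replication and $0 otherwise, so its trading price is the crowd's implied probability of replication. A minimal sketch of that payoff arithmetic (the function names and the $0.71 example price are illustrative assumptions, not figures from the study):

```python
# Sketch of the replication-market payoff arithmetic described above.
# The $0.71 example price is illustrative, not a figure from the study.

def implied_probability(price: float, payoff: float = 1.0) -> float:
    """A stock pays `payoff` if the study replicates, else 0; the
    price-to-payoff ratio is the market's implied probability."""
    return price / payoff

def trader_return(price: float, replicated: bool, payoff: float = 1.0) -> float:
    """Per-share profit for a trader who buys one share at `price`."""
    return (payoff if replicated else 0.0) - price

# Every stock opened at $0.50: an implied 50/50 prior on replication.
assert implied_probability(0.50) == 0.5

# A stock bid up to $0.71 implies a 71% replication chance; buying there
# gains $0.29 if the study replicates and loses $0.71 if it does not.
assert round(trader_return(0.71, replicated=True), 2) == 0.29
assert round(trader_return(0.71, replicated=False), 2) == -0.71
```

Because a share's expected value equals payoff times probability, traders who believe the true probability differs from the current price have an incentive to trade until the price reflects the group's best estimate.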

Original Submission

Wellcome Trust Recommends Free Scientific Journals 8 comments

The Wellcome Trust has recommended that scientists publish their research in free, open access journals, rather than "hybrid" publications it operates:

Expensive research journal subscriptions could be on the way out, if the Wellcome Trust has its way. The moneybags UK research foundation has published a report favoring free, so-called open access, journals over those that charge a fee for access. The report reviewed the activities of research institutions that received funding from the trust. It found that it is cheaper, and thus a better use of grants, to place papers in freely available journals.

Meanwhile, the trust feels it's not getting enough bang for its bucks from hybrid publications. These hybrids charge scientists a decent wedge of cash to publish their work, charge people for journal subscriptions, and offer access to individual articles for free. In other words, the foundation would rather scientists submit their work to open-access journals, which are cheaper than hybrids in terms of publication and subscription costs. "We find that hybrid open access continues to be significantly more expensive than fully open access journals, and that as a whole, the level of service provided by hybrid publishers is poor and is not delivering what we are paying for," the trust said.

Related: Wellcome Trust and COAF Open Access Spend, 2014-15

Original Submission

Medical Research Discovered to Have Been Peer Reviewed by a Dog 18 comments

Local "academic" Dr Olivia Doll — also known as Staffordshire terrier Ollie — sits on the editorial boards of seven international medical journals and has just been asked to review a research paper on the management of tumours.

Her impressive curriculum vitae lists her current role as senior lecturer at the Subiaco College of Veterinary Science and past associate of the Shenton Park Institute for Canine Refuge Studies — which is code for her earlier life in the dog refuge.

Ollie's owner, veteran public health expert Mike Daube, decided to test how carefully some journals scrutinised their editorial reviewers, by inventing Dr Doll and making up her credentials.

The five-year-old pooch has managed to dupe a range of publications specialising in drug abuse, psychiatry and respiratory medicine into appointing her to their editorial boards.

Dr Doll has even been fast-tracked to the position of associate editor of the Global Journal of Addiction and Rehabilitation Medicine.

Original Submission

Should Scientists Be Posting Their Work Online Before Peer Review? 29 comments

Earlier this month, when the biotech firm Human Longevity published a controversial paper claiming that it could predict what a person looks like based on only a teeny bit of DNA, it was just a little over a week before a second paper was published discrediting it as flawed and false. The lightning speed with which the rebuttal was delivered was thanks to bioRxiv, a server where scientists can publish pre-prints of papers before they have gone through the lengthy peer-review process. It took only four more days before a rebuttal to the rebuttal was up on bioRxiv, too.

This tit-for-tat biological warfare was only the latest in a series of scientific kerfuffles that have played out on pre-print servers like bioRxiv. In a piece that examines the boom of biology pre-prints, Science questions their impact on the field. In a time when a scandal can unfold and resolve in a single day's news cycle, pre-prints can lead to science feuds that go viral, unfolding at rapid speed without the oversight of peer review.

"Such online squabbles could leave the public bewildered and erode trust in scientists," Science argued. Many within the scientific community agree.

What do you think?

Original Submission

Judge Orders Unmasking of Anonymous Peer Reviewers in CrossFit Lawsuit 10 comments

A judge has ordered that anonymous peer reviewers for an article in a science journal be unmasked on behalf of the exercise regimen company CrossFit, Inc. The journal is published by a competitor of CrossFit:

In what appears to be a first, a U.S. court is forcing a journal publisher to breach its confidentiality policy and identify an article's anonymous peer reviewers.

The novel order, issued last month by a state judge in California, has alarmed some publishers, who fear it could deter scientists from agreeing to review draft manuscripts. Legal experts say the case, involving two warring fitness enterprises, isn't likely to unleash widespread unmasking. But some scientists are watching closely.

The dispute revolves around a 2013 paper, since retracted, that appeared in The Journal of Strength and Conditioning Research. In the study, researchers at The Ohio State University in Columbus evaluated physical and physiological changes in several dozen volunteers who participated for 10 weeks in a training regimen developed by CrossFit Inc. of Washington, D.C. Among other results, they reported that 16% of participants dropped out because of injury.

In public and in court, CrossFit has alleged that the injury statistic is false. CrossFit also claims that the journal's publisher, the National Strength and Conditioning Association (NSCA) of Colorado Springs, Colorado—which is a competitor in the fitness business—intentionally skewed the study to damage CrossFit. NSCA in turn has countersued, accusing CrossFit executives of defamation. Amid the legal crossfire, the journal first corrected the paper to reduce the number of injuries associated with CrossFit, then retracted it last year, citing changes to a study protocol that were not first approved by a university review board.

CrossFit suspects the paper's reviewers and editors worked to play up injuries associated with its regimen, and it has asked both federal and state judges to force the publisher to unmask the reviewers. In 2014, a federal judge refused that request. But last month, Judge Joel Wohlfeil of the San Diego Superior Court in California, who is overseeing NSCA's defamation suit against CrossFit, ordered the association to provide the names.

Original Submission

U.S. National Institutes of Health Punishes Researchers Who Break Confidentiality of Peer Review 14 comments

NIH moves to punish researchers who violate confidentiality in proposal reviews

When a scientist sends a grant application to the U.S. National Institutes of Health (NIH) in Bethesda, Maryland, and it goes through peer review, the entire process is supposed to be shrouded in secrecy. But late last year, NIH officials disclosed that they had discovered that someone involved in the proposal review process had violated confidentiality rules designed to protect its integrity. As a result, the agency announced in December 2017 that it would rereview dozens of applications that might have been compromised.

Now, NIH says it has completed re-evaluating 60 applications and has also begun taking disciplinary action against researchers who broke its rules. "We are beginning a process of really coming down on reviewers and applicants who do anything to break confidentiality of review," Richard Nakamura, director of NIH's Center for Scientific Review (CSR), said at a meeting of the center's advisory council earlier this week. (CSR manages most of NIH's peer reviews.) Targets could include "applicants who try to influence reviewers ... [or] try to get favors from reviewers."

[...] The agency provided few details about the transgressions after Michael Lauer, NIH's deputy director for extramural research, published a blog post on the matter on 22 December 2017.

Related: Should Scientific Journals Publish Text of Peer Reviews?

Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by JoeMerchant on Tuesday February 13 2018, @10:14PM (13 children)

    by JoeMerchant (3937) on Tuesday February 13 2018, @10:14PM (#637299)

    I'm sure different journals and different fields each have their own traditions of "how it is done" - but the folks I worked with would often be anonymously reviewing each other's papers, and that anonymity was key to the system. A reviewer will often critique the completeness of an analysis and call for significant extra work to be done before publication. When the person being reviewed is your boss in real life, that could get rather awkward.

    • (Score: 1, Interesting) by Anonymous Coward on Tuesday February 13 2018, @10:29PM

      by Anonymous Coward on Tuesday February 13 2018, @10:29PM (#637307)

      didn't Thomson Reuters (or whatever they are now) have a product that indexed peer reviewers so they could sell them to marketers?

    • (Score: 3, Insightful) by maxwell demon on Tuesday February 13 2018, @10:31PM

      by maxwell demon (1608) Subscriber Badge on Tuesday February 13 2018, @10:31PM (#637308) Journal

      Even if it is not your boss, it might be the one who will be deciding on your next grant application. The anonymity of the reviewer exists precisely to allow them to give honest reviews without fear of retaliation.

      Note however that the reviewer is anonymous only to the paper author and general public; the editor of course knows quite well who was reviewing the paper.

      The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 4, Informative) by insanumingenium on Tuesday February 13 2018, @11:12PM (3 children)

      by insanumingenium (4824) Subscriber Badge on Tuesday February 13 2018, @11:12PM (#637327)

      I am not in academia, but aren't those peer reviews provided verbatim to the author? If the text was identifying, that damage is already done. If it wasn't identifying, then what is the harm to anonymity in printing it in the same format it is presented to the author?

      • (Score: 5, Insightful) by AthanasiusKircher on Wednesday February 14 2018, @01:10AM (2 children)

        by AthanasiusKircher (5291) on Wednesday February 14 2018, @01:10AM (#637387) Journal

        I am not in academia, but aren't those peer reviews provided verbatim to the author?

        Depends on the practice of the journal and editor. Many journals pass along reviews verbatim. Some editors exercise discretion in editing what is sent on to the author. I've even seen journals make this explicit in a sort of "comments to the author" vs. "comments to the editor" submission. The latter is presumably for comments like, "I tried to be somewhat nice in evaluating the paper on its own terms and not being overtly insulting to the author, but honestly the main argument makes no sense and this should NOT be published." Some editors will even group comments from multiple reviewers, passing along mainly the points that the author needs to address to make the paper suitable for publication. (This tends to happen sometimes with more casual publications like edited collections, rather than top-tier peer-reviewed journals.)

        If the text was identifying, that damage is already done. If it wasn't identifying, then what is the harm to anonymity in printing it in the same format it is presented to the author?

        No harm is necessarily done in terms of anonymity, though authorship metrics and big data may be able to break down that anonymity a bit.

        The question, though, is whether there are other harms. I'd say the biggest is misperception. Reviewing is generally unpaid, and reviewers will undertake reviews with various levels of attention. Sometimes reviews may be very short and overly terse (either positive or negative) -- which doesn't necessarily indicate anything. If the paper is summarily good or summarily bad, that may be enough. On the other hand, in a lower-tier journal that basically publishes almost anything it's sent, a short positive review could be an indication that peer review wasn't thorough. It all depends on context that the general public probably won't be able to interpret.

        But more often, you get reviewers who are overly critical on aspects that ultimately may not matter. Say a paper is targeting issue A, but also in passing makes some remarks about B, C, D, and E. Suppose a reviewer is an expert in C. Often you get reviews where such a reviewer will go into great detail about everything that's wrong about two paragraphs dealing with C, even if it's not part of the main paper argument. The reviewer may want 75% of the paper to be about clarifying C, even though that's not the point. But to an ignorant outside observer, it may look like the reviewer completely trashed the paper, ranting about all the errors. We see these sorts of things happen in public debates and political arguments all the time -- you attack some minor detail that may not even be essential to the argument, but thereby draw suspicion on the whole thing.

        And the reviewer may not even be intending to trash the whole paper. They may just be using their expertise (in an overbearing way) to draw the author's attention to related work. Nevertheless, to an external reader who doesn't know the field, this may look like very harsh criticism.

        Good editors will often spot such things and know how to put them into context. If a review is excessively negative because it "didn't get the point," the editor may even send it out to more reviewers for clarification. Or just summarily tell the author they don't need to worry so much about revising the section on part C other than some minor corrections.

        Moreover, some reviewers are just nasty for no apparent reason. Or just refuse to consider the merits of the author's main point because they think something about the way it's presented is silly. Again, editors often have to make judgments about when this indicates serious flaws in the paper vs. just a ranting reviewer.

        The point is that anonymity in reviewing -- like anonymity on the internet -- breeds trolls occasionally too. But as GP pointed out here, removing anonymity also introduces other serious issues. Airing all of this stuff in public will often muddy the waters, rather than "educating the public" on critical reviews. Anyone can see what the general public does with criticism -- it breeds conspiracy theories in so many areas where even the most tiny discrepancy or error (or even perceived error) is found. Not that science should be presented as infallible -- obviously that's bad too. But there are better ways to introduce transparency or give critical counterpoint. For example, I've seen a few journals that frequently publish responses or comments from other scholars on papers in the same issue as new research. There criticism is actually intended for public consumption -- not just to improve the paper for publication or render a verdict on its worthiness (the ultimate goals of peer review).

        Also, one has to consider the various outcomes of the peer review process. Surely journals aren't going to publish results of negative reviews that lead to rejections... at least I assume. Which leads to selection bias and an inability for the public to evaluate the significance of reviews. If, as in the example given in the summary, a lawyer brings up a critical review, it would be helpful to compare the level of criticism for accepted papers vs. those which were rejected. Can we do this?

        And what about other options for review? Practice from journal to journal varies, but reviewers often render a verdict that could take many different forms, including:

        (1) Accept basically as written
        (2) Accept with clarifications (often need to be addressed from review)
        (3) Recommend to revise and resubmit (paper is not without merit, but currently has serious flaws)
        (4) Reject outright

        (These are common options, but others may happen at journals too.)

        Only in the case of (1) are we really seeing a potential dialogue about the article as it will be printed. And in cases where no changes are recommended by reviewers, you're more likely to see brief reviews that aren't that useful anyway, because the thing is solid. If (2), are journals going to publish both the original version and any revised version? If not, now there's new pressure on authors to take into account everything in a review, or else you look stupid... and it makes the editorial job of sorting out which criticisms are essential for publication a lot harder, because now that guy who ranted in a review about issue C for a paper about issue A creates a potential misperception. And if (3) happens, this all gets a lot worse, because sometimes this will happen because of some stupid or embarrassing errors that an author can revisit. But now does it become public record? Or only reviews on the resubmitted version? It seems disingenuous to the notion of "transparency" to hide the reviews of the initial flawed version... but are they still relevant if significant revisions are made? Sometimes the errors may not have to do with fundamental ideas in the research, but subtle flaws in the methodology or analysis or argument. Yet airing this dialogue of repeated submissions publicly may ultimately draw unnecessary criticism to the final research.

        All that said, there are potentially very good reasons for more transparency in peer reviewing. For example, there are journals that may have editors who "go easy" on some authors and choose reviewers who are "friendly." But the general public probably won't be able to pick up on these subtleties. And some of the greatest unfairness may happen in rejected papers that just happened to draw bad reviewers -- but I doubt journals are going to start publishing rejected papers and their reviews somewhere...

        So there may be reason to introduce more oversight in the reviewing process. But just publishing documents that are meant to be primarily private criticisms/evaluations to individual authors is likely to have a bunch of unforeseen negative effects, probably more than positive ones. After all, I'm sure we could gain some insight into the research process by publishing memos and emails sent from lab directors to researchers as they're working on their research too. Or private correspondence among authors as they are finalizing a paper for submission. But these are also private documents about the process of improving the paper for publication, which is ultimately the goal of peer review... not to air public critique. Even if journals don't have a formalized process of publishing critiques along with new articles, almost every reputable journal has some sort of "comment" or "correspondence" or "letters to the editor" section which are often used for the very purpose brought up in the summary, i.e., public critique.

        I mean, I suppose to those "in the know" who can actually evaluate what reviews mean, making this public will likely cement views that are generally already well-known within fields, e.g., journal A is top-tier because it has rigorous standards, journal B is sometimes mediocre, and journal C publishes almost everything it receives. This will likely be reflected in the quality of the reviews, but again, the quality of the research in top journals often already speaks for itself. If anything, in terms of the general public, it's likely to create more distrust of better research, since reviews in top journals are likely to be more thorough and more critical (even on minor points, but the general public won't get that), whereas journal C that accepts everything will probably have glowing -- if less thorough -- reviews for every paper...

        • (Score: 0) by Anonymous Coward on Wednesday February 14 2018, @02:41AM

          by Anonymous Coward on Wednesday February 14 2018, @02:41AM (#637417)

          > publish responses or comments from other scholars on papers in the same issue as new research.

          I greatly enjoy reading old engineering papers from the Institution of Mechanical Engineers (UK) -- papers were published after they were given in front of an attentive audience, the discussion afterwards was transcribed, and the whole session (original paper, comments from audience and responses by the author) was all published together.

          But I suppose this would be much too expensive and slow to do today. Sad how much we've lost to the gods of cost and speed.

        • (Score: 2) by insanumingenium on Wednesday February 14 2018, @06:07PM

          by insanumingenium (4824) Subscriber Badge on Wednesday February 14 2018, @06:07PM (#637712)
          First of all, I am pleasantly surprised a glib comment on my part could lead to so much information. I acknowledge up front, I am talking out my ass here. Thanks for taking your time with me.

          I had assumed that any request for clarifications (cases 2 and 3) would generate a new draft which gets re-reviewed. I was further assuming only the final round of reviews, based on the final document, would be published. To carry it to absurdity, do we need a live transcript of the document being written to be transparent? What they seem to be presenting for public discourse is their finished idea with all the supporting facts they can generate; clarifications and rewrites along the way to that finished product seem incidental to me. But the thoughts of experts on the subject of that final work are hardly incidental.

          I spoke specifically about anonymity because I don't think there is a clearcut answer to the issue at hand, I see that there is a great argument for increasing public distrust. The issue for me is, we already have public distrust of science in many areas, and I don't think concealing scientific discourse is the path to making that problem better. Call it a meaningless value judgement, but I really don't see how science can work optimally without transparency. The obvious cost of that is you have to know how to read academic publications, which is already the case, and which I don't think the counter argument would alleviate.

          Coming from that point of view, not wanting to publish the petty infighting and cronyism you are worried about exposing seems like a very shallow excuse to me. If anything, having to write your response knowing that it will represent your field should help with this, I would hope. As for the pigeonholed specialist's comments, while potentially off-topic, that seems like a way to generate inter-domain knowledge; even if point C is totally trivial, if it made it into the final product it must have some relevance, and further discourse on it could be edifying.

          I don't envy the jobs of these editors after reading all of the above. This all leads us to the problem I hadn't considered until I wrote this, how do you publish a response to the response? How deep will this rabbit hole go, at what point do you have to give someone a final word without a review? Frankly the inability to infinitely recurse the reviews seems like the best argument I have heard yet against publishing the peer reviews.
    • (Score: 5, Informative) by bzipitidoo on Wednesday February 14 2018, @12:51AM (5 children)

      by bzipitidoo (4388) Subscriber Badge on Wednesday February 14 2018, @12:51AM (#637380) Journal

      They are talking about making the reviews public, not losing the reviewers' anonymity.

      And I think it's a good idea. Reviewers can get lazy, and reject papers because that's less work for them. Just say the authors didn't do enough work, didn't do a validation study, or a survey, or whatever. More work can always be done. Last paper I submitted was rejected, in part because the reviewers thought the material was too trivial. They wanted more. Guess I didn't dazzle them with enough formal proofs and heavy duty math. They feel more comfortable accepting a paper if it looks more "sophisticated", so to speak. There are an awful lot of unwritten conventions, such as how a research paper should be laid out and organized, and how to hype the discoveries. If not followed, reviewers become uncomfortable, suspect the authors are noobs and fear they don't know what they're talking about, and it greatly increases the odds of a rejection, regardless of the actual merits of the discoveries and work.

      There's also a problem with cliques. One of the worst papers I saw in a major conference was "Protein is incompressible", which was obviously accepted because of who the authors were. It was nothing more than "we tried to zip a DNA sequence and it didn't save any space, therefore evolution must have resulted in DNA already being pretty well compressed." The better journals try to handle that problem by keeping the authors' names from the reviewers, but in a small enough circle that sometimes doesn't work. Not so good journals are, unfortunately, legion, and the papers in them can be pretty bad. Like, there's the IEE-- yeah, 2 E's, not 3 E's. Don't know who they thought they were fooling, but the few papers I looked at in their journals were awful.
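      The "zip a DNA sequence" test described above is easy to reproduce in miniature. The following is a toy sketch, not the paper's actual method: it shows that naively zipping the ASCII text does save space (each base occupies 8 bits but carries at most 2 bits of information), while a random sequence already packed at 2 bits per base gains essentially nothing from a general-purpose compressor.

```python
import random
import zlib

random.seed(42)
seq = "".join(random.choice("ACGT") for _ in range(100_000))

# Naive approach: compress the ASCII text. This *does* shrink it,
# because 8-bit characters are used to store 2 bits of information.
ascii_compressed = zlib.compress(seq.encode(), 9)

# Information-theoretic baseline: pack four bases into each byte first.
code = {"A": 0, "C": 1, "G": 2, "T": 3}
packed = bytearray()
for i in range(0, len(seq), 4):
    byte = 0
    for b in seq[i:i + 4]:
        byte = (byte << 2) | code[b]
    packed.append(byte)

# zlib cannot shrink the packed form of a random sequence any further;
# its output is slightly larger than the input due to format overhead.
packed_compressed = zlib.compress(bytes(packed), 9)

print(len(seq), len(ascii_compressed), len(packed), len(packed_compressed))
```

So "zip didn't save any space" only holds once the trivial 8-bits-to-2-bits redundancy has been removed, which is presumably what made the result look shallow to the GP.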

      All those issues are well-known problems with the system. Exposing reviews to the public would make it a lot harder for a lame review to pass muster. A good review should be able to withstand public scrutiny.

      • (Score: 3, Informative) by takyon on Wednesday February 14 2018, @01:05AM

        by takyon (881) Subscriber Badge on Wednesday February 14 2018, @01:05AM (#637383) Journal

        The article also discusses losing the reviewers' anonymity, under the heading To name or not?

        [SIG] 10/28/2017: Soylent Upgrade v14
      • (Score: 3, Informative) by AthanasiusKircher on Wednesday February 14 2018, @01:28AM (2 children)

        by AthanasiusKircher (5291) on Wednesday February 14 2018, @01:28AM (#637394) Journal

        I completely agree with many of your criticisms of the system. I just question whether this will actually fix them.

        -- First, why would this "make it a lot harder for a lame review to pass muster"? With anonymity intact (as you assume), if the paper is actually published, a "lame review" will only serve to make the journal look bad or cast doubt on the final paper (from ignorant readers who look at the review and don't understand why it lacks subtlety). So only the journal/editor or author is harmed. The reviewer isn't punished in any way for writing a "lame review." Perhaps the editor will stop sending things to them to be reviewed, but if the editors are ignoring the reviews and publishing papers anyway, they are probably already likely to stop sending as many things to that reviewer, regardless of whether the reviews are aired in public.

        -- Second, your system, I think, only really functions if reviews for REJECTED papers are made public. And really, in order for that system to function, the journal would have to publish the rejected submissions so that readers could understand the reviews -- which is a pretty weird concept. In that case, we might be able to evaluate whether papers are summarily rejected for bad reasons -- but at the cost of exposing authors to serious public ridicule. (Top reviewers in top journals often don't mince words in rejections.)

        I absolutely agree with you that it would be good to catch cases where papers are rejected for bad reasons, or perhaps accepted without sufficient scrutiny. The question is whether "airing the dirty laundry" of everything else in the review process is actually the best way to go about that.

        And I agree that "a good review should be able to withstand public scrutiny." (And by "public" I'm just going to assume you mean the researcher community in that field, because I don't think anything good is going to come of the general public scrutinizing a bunch of technical reviews they won't understand... they just have no metric to evaluate it, and it will likely lead to more misperceptions than insight about research.) I just don't see that publicizing reviews creates a very strong incentive for ANONYMOUS reviewers to change their patterns. After all, look at what AC says on here sometimes...

        • (Score: 2) by bzipitidoo on Wednesday February 14 2018, @07:25PM (1 child)

          by bzipitidoo (4388) Subscriber Badge on Wednesday February 14 2018, @07:25PM (#637799) Journal

          Going off on a seeming tangent here, but it'll make sense, just bear with me. I once read a critique in praise of email, for eliminating the stilted formality that had become customary in handwritten (or hand-typed) letters, such as starting the letter with "Dear". Scientific research is still stuck with a lot of formality. Journals typically require of authors such trivia as using only approved fonts and sizes and other typesetting details that simply are not relevant to research. Some even provide LaTeX templates, which certainly makes it easier, but doesn't address the underlying issue: scientists should not have to spend time on text formatting. There is also a strict page limit that can be impossible to check without formatting the text. While the page limit can help authors stay focused, much like the much-lauded 140-character text message and tweet limit, it really does not seem necessary any more, not with the incredible amounts of digital storage at our disposal.

          Also traditionally excluded, but now being welcomed yet still not part of the article proper, are such things as source code and data sets. Of course data sets can be way too large to present in the same manner as an article. Mathematical formulae, by contrast, are not only wanted, they are darn near required. But source code? The way a journal article should be is not just digital, but in a format that allows easy transfer of math to suitable mathematical software (which could be MATLAB, except that's proprietary), of data to a database or spreadsheet or just a file, and of source code to the relevant language compiler or interpreter. Making the "finished" product a pretty PDF file was the wrong direction; that's still thinking of the ultimate destination as a printed book. A format such as EPUB (which is really just HTML in a zip file) is a better direction to go. It would of course be trivial to include the reviews.
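          The EPUB aside is straightforward to demonstrate: the container really is an ordinary ZIP archive whose first entry is an uncompressed mimetype file, with the content as HTML/XHTML. A minimal, illustrative sketch follows (the filenames and chapter text are invented, and this omits the package metadata a spec-complete EPUB would also need):

```python
import zipfile

# Build a deliberately minimal EPUB-like container with a stock ZIP library.
with zipfile.ZipFile("article.epub", "w") as z:
    # Per the EPUB spec, the mimetype entry must come first and be stored
    # uncompressed, so tools can sniff the file type from the raw bytes.
    z.writestr("mimetype", "application/epub+zip",
               compress_type=zipfile.ZIP_STORED)
    z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
    z.writestr("chapter1.xhtml",
               "<html><body><h1>Results</h1>"
               "<p>The reviews could go here too.</p></body></html>",
               compress_type=zipfile.ZIP_DEFLATED)

# Reading it back with a plain ZIP library shows there is no magic inside.
with zipfile.ZipFile("article.epub") as z:
    print(z.namelist())
    print(z.read("mimetype").decode())
```

Any article text, data file, source file, or review could be dropped into the same archive and extracted with standard tools.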

          So, thank you for your patience, and what does all that have to do with the subject? It's kind of a slippery slope, a good one. Challenge the assumption that reviews should be private, and maybe that will lead to these other issues being challenged. Or the other way around, break this addiction to PDF, typesetting, and printing on dead trees, and maybe the idea that reviews shouldn't be private any more will follow. There's a lot about scientific publication that needs scrutiny and improvement. I suspect one of the motivations for clinging to these outmoded printing considerations is academic publishers who wish to maintain their current stranglehold. Ending the reign of PDF isn't only about freeing scientists from mere text formatting problems, it's about freeing us all from these academic publishers who have degenerated into nothing more than rent seeking parasites.

          Another issue is that publishers don't pay reviewers, we, the public, pay for that, and only indirectly. But reviewers not employed at universities may well get nothing at all, other than brownie points or karma or the satisfaction of knowing that they helped advance mankind's knowledge, or some other vague, ephemeral compensation, for doing a review. I've done reviews where I pointed out very specific things that the authors didn't mention and should have, and even cited the relevant papers. And what did the bastards do but copy my review, nearly verbatim, into their paper! I know, because the revised version was kicked back to me for further review. Felt like I should have been a co-author. In that case, yeah, I think I would've liked not only my review to be public, but my name too.

          This brings up yet another problem, that of science being too solitary. You toil away on a paper all by yourself, sweat over whether it's correct, and whether it's good enough. The uncertainty is great because no one else has seen any of it. But of course in the world of "publish or perish", many are happy to get a free ride and add their names and not contribute, so collaboration isn't all roses. Then you finally submit it to a journal, and wait months for a review. And then it gets savaged. Maybe you missed some simple little thing. It's like playing chess, thinking and thinking about a move, and all the while you've overlooked an easy checkmate your opponent has. Chess grandmasters have lost games because of bad blunders like that. The entire education system is too biased towards lone wolf science rather than real collaboration.

          I think reviews being public is fair, as long as reviewers know that their reviews will not be private. The dirty laundry you speak of, well, if it's "bad" dirt, like an unfair rejection because the reviewer was too busy and gave the submission short shrift, then good riddance. If it's honest and good criticism of problems with the submission, then I am not afraid of the public misunderstanding. There've been many problems where secrecy merely fueled wild conspiracy theories, and that is worse.

          • (Score: 2) by AthanasiusKircher on Thursday February 15 2018, @02:49AM

            by AthanasiusKircher (5291) on Thursday February 15 2018, @02:49AM (#638036) Journal

            This is an interesting reply; thanks for taking the time. And again, I agree with a lot of what you said -- especially in terms of problems in academic publishing -- so I'm not going to really argue against it (since I already made my thoughts clear in another long post).

      • (Score: 2) by TheRaven on Wednesday February 14 2018, @02:54PM

        by TheRaven (270) on Wednesday February 14 2018, @02:54PM (#637605) Journal

        They are talking about making the reviews public, not losing the reviewers' anonymity.

        One is likely a subset of the other. If you analyse writing styles and correlate with the set of reviewers for each paper, using the conflicts to eliminate possible solutions, then you'll be able to deanonymise a lot of reviews.
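        A toy sketch of that kind of stylometric matching (the reviewer names, writing samples, and marker-word list here are all invented for illustration): profile each candidate by the relative frequency of a few function words, then pick the profile closest to the anonymous review.

```python
import math
from collections import Counter

# Function words are the classic stylometric signal: authors use them
# habitually and rarely think to disguise them.
MARKERS = ["moreover", "whilst", "thus", "basically", "actually", "really"]

def profile(text):
    """Relative frequency of each marker word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[m] / total for m in MARKERS]

def cosine(a, b):
    """Cosine similarity between two frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical known writing samples from the candidate reviewer pool.
candidates = {
    "reviewer_a": "moreover the proof is elegant whilst the lemma thus follows "
                  "moreover whilst thus the bound holds",
    "reviewer_b": "basically this really works actually the result is basically "
                  "fine really actually good",
}

anonymous_review = ("moreover the experiments whilst thorough thus fail to "
                    "support the claim moreover the statistics are whilst weak")

scores = {name: cosine(profile(anonymous_review), profile(sample))
          for name, sample in candidates.items()}
best = max(scores, key=scores.get)
print(best, scores)
```

Real attacks use far richer features, but combined with the public list of who reviews for a venue and the conflict-of-interest exclusions, even crude profiles narrow the candidate set quickly.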

        Reviewers can get lazy, and reject papers because that's less work for them.

        That's a problem for the PC (the programme committee). A poor review should be ignored, and in good venues it usually is. A really good review is a lot of effort to write, and will include detailed suggestions for improvements. I strongly suspect that people would be less willing to commit all of that to a public forum that is potentially vulnerable to deanonymisation than to something that is shared only with the authors.

        sudo mod me up
    • (Score: 2) by driverless on Thursday February 15 2018, @05:50AM

      by driverless (4770) on Thursday February 15 2018, @05:50AM (#638096)

      It's also not good for the people whose paper is being reviewed. Having it publicly pointed out that the review copy was full of mistakes, missed important research, or had other major problems isn't going to do the authors any good. And arguments like "the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations" are completely bogus: I've written or helped write a number of articles specifically intended to help guide new authors towards getting their papers accepted, and yet I keep seeing the same mistakes made over and over again. It's not that the material isn't already out there, it's that it mostly gets ignored. And that gets back to my first point: having authors' errors aired in public like this doesn't help the authors, and having the reviewers' names published alongside their comments doesn't help the reviewers. So who is being helped by this, when it's neither the reviewers nor the authors?

  • (Score: 0) by Anonymous Coward on Tuesday February 13 2018, @10:24PM (2 children)

    by Anonymous Coward on Tuesday February 13 2018, @10:24PM (#637302)

    A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process"

    So who hasn't been paying attention to the last 40 years of "teaching the controversy" from the tobacco and oil companies? Just think of all the "shocking" and "bombshell" comments that they'll find! Release the Memo!!

    You probably have a few who hang around here who can still get frothing at the mouth about the East Anglia emails and the global warming "conspiracy".

    This is a disaster in the making.

    • (Score: 0) by Anonymous Coward on Tuesday February 13 2018, @10:47PM

      by Anonymous Coward on Tuesday February 13 2018, @10:47PM (#637314)

      This is a disaster in the making.

      You're not kidding. There's enough trolling in this business already. Imagine how "flamboyant" these reviewers will get if they think the public will view it. The whole thing will turn into 4chan...

    • (Score: 2) by requerdanos on Tuesday February 13 2018, @11:53PM

      by requerdanos (5997) Subscriber Badge on Tuesday February 13 2018, @11:53PM (#637340) Journal

      A possible solution? Scientists should work to "make the public understand that [peer review] is a fault-finding process and that criticism is part of and expected in that process"

      This is a disaster in the making.

      So one says it's "a possible solution," and another says it's a "disaster in the making." Who's right?

      Answer: Our insightful AC, who rightly points out that this is a disaster in the making.

      Scientists should make the public work to understand that criticism is normal and expected, and somehow not critical, in the face of a plaintiff's lawyers screaming that "two reviewers found fault right here! They already knew this was a dangerous, deadly killer!"

      Now let's contrast that with a more reasonable approach: non-deluded people should work to make scientists understand that factors such as the plaintiff's lawyers above, and other entrenched aspects of our over-blaming, poor-pitiful-me culture (and the media's love of same, don't forget that), make any but the slightest criticism the kiss of death for whatever is being criticized (unless it's an underwhelming, nonperforming politician backed by political power; those somehow are the only things that seem to withstand even spot-on criticism).

  • (Score: -1, Troll) by Anonymous Coward on Tuesday February 13 2018, @10:45PM

    by Anonymous Coward on Tuesday February 13 2018, @10:45PM (#637313)

    Absolutely yes. We need to see pics of bodily fluids on the faces of peer reviewed scientists to make sure they are sucking enough dick/pussy to justify continued funding of grants.

  • (Score: 2) by MostCynical on Tuesday February 13 2018, @11:00PM (1 child)

    by MostCynical (2589) on Tuesday February 13 2018, @11:00PM (#637320)

    so, the scientists and other notables want the general public educated on the nuances of correction and critical discussion about research?

    General education is working so well, let's add more!

    tau = 300. Greek circles must have been weird.
    • (Score: 2) by Arik on Wednesday February 14 2018, @12:22AM

      by Arik (4543) on Wednesday February 14 2018, @12:22AM (#637363) Journal
      Working as designed.
      "Grasp the essence, seize the root."
  • (Score: 4, Insightful) by crafoo on Wednesday February 14 2018, @04:06AM (2 children)

    by crafoo (6639) Subscriber Badge on Wednesday February 14 2018, @04:06AM (#637447)

    The only drawback mentioned in the quoted text essentially boils down to, "judges and lawyers are dumb". I don't necessarily agree but I also don't think it's a reasonable motivation for not making the reviews public.

    As for "Scientists should work to 'make the public understand that [peer review] is a fault-finding process...'": uhhh, no. Science educators should. Scientists should be doing science. In fact, the less administrative bullshit scientists are required to do, the better for all of us. There are many things about science and many other important fields that the public should be better educated on. That they aren't is a direct consequence of our failure of an education system, not of the particular fields themselves.

    • (Score: 3, Informative) by FakeBeldin on Wednesday February 14 2018, @10:49AM (1 child)

      by FakeBeldin (3360) on Wednesday February 14 2018, @10:49AM (#637546) Journal

      Couldn't agree more.
      I believe more transparency in governance is a good thing, and reviews are a governance tool.

      There are plenty of unsavoury things happening in reviews, e.g.: a reviewer who decided to hate the paper or its premise, a reviewer who doesn't get the paper (but overestimates his/her understanding), a reviewer who seems to have hardly put any effort into the review.
      There are also plenty of silver linings: reviews that help improve the paper substantially, reviews that open up interesting avenues for thought, short reviews whose every word is worth its weight in gold, etc.

      Show the reviews, so that I as a submitter know what to expect. Hell, journals (and conferences) could start competing on review quality for good papers. I would quite likely pick out above-average reviewing venues for any paper that is not being submitted to top venues.

      • (Score: 3, Interesting) by FatPhil on Saturday February 17 2018, @04:56AM

        by FatPhil (863) <{pc-soylent} {at} {}> on Saturday February 17 2018, @04:56AM (#639216) Homepage
        Agree wholeheartedly, with pseudonymised reviews, but reviewers should never be identified, as that would have chilling effects.

        I've never been in academia, but I contributed a bunch of mathematical monkeywork to someone else's paper once (he wanted to make me co-author, and I'd have earned an Erdős number for it, but I decided to just be in the thank-yous instead). The paper got rejected from 4 different journals with comments like "investigating the corner case where you don't just ignore a standard assumption, but actually assume it's false, is dumb". Finally a 5th journal published it, and the reviewer's comments were along the lines of "this work turns the field on its head - this is so important it should be in the textbooks". I sooooooo want to know who all those reviewers were. But not for the right reasons. Pseudonymising, so that trends can be spotted and biases dealt with, is as far as it should go. There should be accountability.
        If vaccination works, then why doesn't eucharist protect kids against Christianity?