
posted by Fnord666 on Thursday May 11 2017, @06:47PM
from the I-read-that-somewhere dept.

Ross Mounce knows that when he shares his research papers online, he may be doing something illegal — if he uploads the final version of a paper that has appeared in a subscription-based journal. Publishers who own copyright on such papers frown on their unauthorized appearance online. Yet when Mounce has uploaded his paywalled articles to ResearchGate, a scholarly social network likened to Facebook for scientists, publishers haven't asked him to take them down. "I'm aware that I might be breaching copyright," says Mounce, an evolutionary biologist at the University of Cambridge, UK. "But I don't really care."

Mounce isn't alone in his insouciance. The unauthorized sharing of copyrighted research papers is on the rise, say analysts who track the publishing industry. Faced with this problem, science publishers seem to be changing tack in their approach to researchers who breach copyright. Instead of demanding that scientists or network operators take their papers down, some publishers are clubbing together to create systems for legal sharing of articles — called fair sharing — which could also help them to track the extent to which scientists share paywalled articles online.

Sharing information is antithetical to scientific progress.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday May 11 2017, @07:37PM (#508262) (7 children)

    Let the scientists do the science. Meanwhile, the publishers should be hacked into submission and everything leaked.

    Open access publishers with endowments could handle the costs of publishing and peer review.

  • (Score: 4, Insightful) by melikamp (1886) on Thursday May 11 2017, @08:13PM (#508291) (6 children)

    Taxpayers are already paying for fundamental research, as many people think they should. Peer review is a crucial part of that research and should be fully funded from the same source. If corporations want to pitch in, they are welcome to, but we should not depend on them for anything important. It is a travesty that reviewers today work essentially gratis on a task that is an integral part of the progress of science.

    The peer review process is broken silly even beyond being held hostage by the slimy publishing houses. As it stands, we can't even be sure much of anything is being reviewed at all. The review process may begin and conclude behind closed doors, blind or not, but it is clear as day that we should gain access to the complete review record upon either acceptance or rejection: who reviewed, what comments they made, how those comments were addressed, the whole shebang (a rough sketch follows the links below). That way researchers who disagree with the peer review outcome can self-publish the complete record, which would become a huge motivating factor for the judges to do a decent job (for which they would be paid regardless of the outcome). So with rejections, at least the author should get the option of sharing the full record, whereas accepted articles should not even be seen without a review record attached.

    Bad: https://en.wikipedia.org/wiki/Scholarly_peer_review#Anonymous_and_attributed [wikipedia.org]

    Better: https://en.wikipedia.org/wiki/Scholarly_peer_review#Open_peer_review [wikipedia.org]
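    To make the proposal concrete, here is a minimal sketch of what such an unsealed review record might look like as a data structure (Python; the field names are purely illustrative, not any journal's actual schema):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReviewRound:
        reviewer: str          # named once the record is unsealed
        comments: str          # the full referee report
        author_response: str   # how the authors addressed the comments

    @dataclass
    class ReviewRecord:
        paper_id: str
        decision: str          # "accepted" or "rejected"
        rounds: List[ReviewRound] = field(default_factory=list)
        sealed: bool = True

        def unseal(self) -> "ReviewRecord":
            # On acceptance, attach the record to the online publication;
            # on rejection, hand it to the authors to use as they please.
            self.sealed = False
            return self

    The point is only that the entire history travels with the paper, whichever way the decision goes.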

    • (Score: 4, Insightful) by AthanasiusKircher (5291) on Thursday May 11 2017, @08:54PM (#508318) (5 children)

      While I completely understand the impetus behind your ideas (and agree with a lot of the principles), I'm not sure completely "open" peer review is the best idea. The way we know peer review is done and that it is done well is by trusting editors at reputable journals. Everyone in most fields knows which journals are the ones that actually practice rigorous review, which ones are a little "easier," and which ones will basically publish anything they receive even if they claim to be "peer reviewed." The latter case isn't really much better for your scholarly reputation than just posting an article on your own blog. Yes, there are failures of peer review, but less so at top journals. And good editors at top journals who don't receive sufficient or clear comments on the merits will generally reach out to more reviewers.

      I agree that the system is a bit crazy in depending on reviewers to work for free.

      But here are a few issues that arise with completely "open" peer review:

      (1) Under open review, reviewers are more likely to vote to accept articles; several studies have shown this. It's kind of like writing a review of a book by a famous scholar in your field: unless you're an equally famous scholar, you're not likely to really "take them to task" publicly; you write something that points out some minor issues but doesn't torpedo their whole argument (even if you don't buy it).

      (2) This becomes more of a problem the smaller the subspecialty. Other studies on open review show that larger numbers of reviewers will simply decline to review. It'd be interesting to know at what stage such reviewers decline. Nowadays peer reviewers can drop out at pretty much any stage of the process, since it's voluntary. If reviewers can drop out once they see an article, it might be because they figure out who the author might be and don't want to write something against them, or because they realize the paper is bad but don't want to risk public criticism of a colleague/peer. And then you end up making "publication bias" (which is already a problem) worse, because editors end up "shopping around" until they find a reviewer who actually agrees to do it... and who is probably more likely to make positive comments. Either that, or you end up with editors having to draw on more reviewers outside the immediate subspecialty who will be more willing to "ruffle feathers" of people they don't know as well -- but that risks worse review quality, since then you're dealing with people who aren't experts in that subspecialty.

      These are not just theoretical concerns. I saw something very much like it happen in an electronic journal that started in a field I'm familiar with about a decade ago. They didn't exactly practice "open review," but they would have reviewers or members of the editorial board publish a public "commentary" on the article along with the publication itself. The idea in theory was to do something like what you're asking for: to have an expert respond publicly and point out some issues with the article (both strengths and weaknesses).

      What actually happened in practice is that all the "comments" were pretty much positive and rarely engaged very deeply or critically, because it was a new journal in a sort of emerging field, so they packed the editorial board with people who were eager to get more research out in this area anyway. The quality of the journal was pretty poor, even when it received contributions from established scholars.

      Over the years, it's gotten better, but the public comments have morphed into something different -- they're more about scholars trying to publish their own ideas while vaguely "attaching" them to the other "main" articles. There's still relatively little serious criticism.

      So, while in theory I'm in favor of many of your ideas, I'm not sure they can actually work everywhere. Open review likely would prevent some abuses of the current system and reveal crappy journals to be clearly the crappy things everyone in a field already knows them to be. But for the serious, rigorous journals, I worry it would prevent many reviewers from being open and honest -- or worse, cause a lot of experts to summarily refuse to do reviews, particularly early in their careers, or when reviewing someone they don't want to risk insulting. Sometimes anonymous review is actually helpful -- particularly when you're reviewing a truly awful paper and need to be frank.

      • (Score: 3, Insightful) by aristarchus (2645) on Thursday May 11 2017, @09:13PM (#508329) (1 child)

        The way we know peer review is done and that it is done well is by trusting editors at reputable journals.

        Yes, trusted people respected by their peers in the field, not some bloodthirsty, money-grubbing corporation! Why do you think this would change with open access?

        • (Score: 2) by AthanasiusKircher (5291) on Friday May 12 2017, @01:21AM (#508421)

          I don't think editorial ethics would change; I merely included that statement to reply to the previous post's assertion that we don't know if peer review is done well or at all. Good journals already do it well.

          The rest of my post is about a few potential negative effects of open peer review. Having been on both sides of peer review, I think many people would be less frank and critical if they knew comments would be later posted publicly under their name. But the parent is right -- it would likely also encourage reviewers to take work seriously. The net effect would probably be to improve bad journals and make it somewhat easier to get accepted to good ones. But it could also get more mediocre research published if people are less critical.

      • (Score: 2) by melikamp (1886) on Thursday May 11 2017, @09:45PM (#508345)

        I am not actually advocating "open review" as described on the wiki, but I do think it's better than a traditional model where we know nothing at all. I actually agree with your points, especially (1), and do believe that making reviews blind may help to reduce the "nice guy" bias. But once the process is done, and the study is accepted, all records must be unsealed and attached to the online publication. And if the study was rejected, all records must be unsealed and handed over to the authors, to use as they please.

        And another thing: all that bias and attrition are indeed very real and nasty, but we don't really know whether they would persist if universities made peer review part of faculty's regular paid duties. I am trying to imagine myself in their shoes: peer review is written into my job description, I get paid no matter what, I am only one person on a panel of 3 or 4, and my only real concern is that my peers will at some point go over the unsealed review record and see that I personally didn't do anything of value. I think I would just try to be fair and do a decent job, but that's just me :)

      • (Score: 3, Interesting) by bradley13 (3053) on Friday May 12 2017, @06:25AM (#508514)

        I agree with essentially all of your points. When I was a grad student, and later a post-doc, part of my duties involved reviewing papers in our group's area of research (under the auspices of my supervisor). I certainly wrote some critical comments, and I remember one particular research group whose ideas I found just totally useless. It was, however, quite a famous research group. If my comments on their papers had been published under my name, I would have hesitated to criticize them, because doing so would surely have hurt me more than them.

        tl;dr: It's important for the editor to know who made certain comments, but those comments need to be passed on anonymously.

        The other missing puzzle piece is replication. Replication work needs to be honored. Imagine a journal that publishes papers and includes links to the replication studies: a paper with 0 replications could be looked at askance. A paper with 2-3 replications would be viewed as solid. A paper with 2-3 failed replication attempts would be seen for what it is.
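        As a minimal sketch of that idea as a data structure (Python; the names and thresholds are hypothetical, just to illustrate):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Replication:
            authors: str
            succeeded: bool

        @dataclass
        class Paper:
            title: str
            replications: List[Replication] = field(default_factory=list)

            def status(self) -> str:
                # Crude label derived from the replication links a
                # journal could attach to each published paper.
                ok = sum(r.succeeded for r in self.replications)
                failed = len(self.replications) - ok
                if failed >= 2:
                    return "refuted"      # repeated failed replication attempts
                if ok >= 2:
                    return "solid"        # independently confirmed
                return "unverified"       # no or too few replications yet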

        Who wants to do replication? Me. In my current position, I teach far too much to have time for cutting-edge research. But following someone else's recipe? What a great source for student projects -- which we always need -- and a great way for me to at least stay in shouting distance of the cutting edge. Unfortunately, the culture just isn't there: replication won't get any funding, and serious journals aren't interested in publishing replication reports. So we muddle along...

        --
        Everyone is somebody else's weirdo.