
posted by Fnord666 on Friday March 16 2018, @06:39AM
from the editor-lives-matter dept.

In the ongoing open access debate, which oldmedia publishers have managed to drag out for decades, those publishers have repeatedly asserted that articles in their very expensive journals are greatly improved during the publication process. Glyn Moody, writing at Techdirt, discusses how little value expensive, subscription-only journals add over the original, freely available preprints of the very same papers, undercutting the publishers' claims.

Such caveats aside, this is an important result that has not received the attention it deserves. It provides hard evidence of something that many have long felt: that academic publishers add almost nothing during the process of disseminating research in their high-profile products. The implications are that libraries should not be paying for expensive subscriptions to academic journals, but simply providing access to the equivalent preprints, which offer almost identical texts free of charge, and that researchers should concentrate on preprints, and forget about journals. Of course, that means that academic institutions must do the same when it comes to evaluating the publications of scholars applying for posts.

The scientific method requires that hypotheses be testable, and that means publishing everything necessary for a third party to reproduce an experiment. So some might even say that if your research ends up behind a paywall, then what you are doing is not even science in the formal sense of the term.

Previously on SN:
New York Times Opinion Piece on Open Access Publishing (2016)
India's Ministry of Science & Technology Join Open-Access Push (2015)
Open Access Papers Read and Cited More (2014)


Original Submission

 
  • (Score: 2) by AthanasiusKircher on Friday March 16 2018, @12:54PM (2 children)

    by AthanasiusKircher (5291) on Friday March 16 2018, @12:54PM (#653554) Journal

    Oh and sorry for the self-reply -- but to make something explicit that was implicit in my argument: the study mentioned in TFA obviously doesn't account for articles sent to the journal but REJECTED. In the QA analogy, it's like an executive looking only at the products that made it through QA and saying, "that department doesn't add anything."

  • (Score: 0) by Anonymous Coward on Friday March 16 2018, @01:20PM (1 child)

    by Anonymous Coward on Friday March 16 2018, @01:20PM (#653575)

    There was that editor of a medical journal who said they selected papers by throwing them down a flight of stairs and publishing whichever reached the bottom. Then for one issue they only published papers that failed peer review and nobody noticed.

    https://en.m.wikipedia.org/wiki/Richard_Smith_(editor)

    • (Score: 2) by AthanasiusKircher on Friday March 16 2018, @02:16PM

      by AthanasiusKircher (5291) on Friday March 16 2018, @02:16PM (#653600) Journal

      To be clear, I didn't say that the peer review process is good. In fact there seems to be conflicting evidence of its value, and I know many studies question it. My point was that the research mentioned in TFA is a poor way to judge the potential value of the journal process. (That is -- just because we know peer review is often broken doesn't mean we should accept the conclusions of a poor study with bad methodology simply because they agree with what we expect. I'm sure there's a meta-moral of sorts in this.)