
posted by Fnord666 on Friday March 16 2018, @06:39AM   Printer-friendly
from the editor-lives-matter dept.

In the ongoing open access debate, which oldmedia publishers have managed to drag out for decades, those same publishers have repeatedly asserted that articles in their very expensive journals are greatly improved during the publication process. Glyn Moody, writing at Techdirt, discusses research finding little value added by expensive, subscription-only journals over the original, freely available pre-prints of the very same papers, undercutting the publishers' claims.

Such caveats aside, this is an important result that has not received the attention it deserves. It provides hard evidence of something that many have long felt: that academic publishers add almost nothing during the process of disseminating research in their high-profile products. The implications are that libraries should not be paying for expensive subscriptions to academic journals, but simply providing access to the equivalent preprints, which offer almost identical texts free of charge, and that researchers should concentrate on preprints, and forget about journals. Of course, that means that academic institutions must do the same when it comes to evaluating the publications of scholars applying for posts.

The scientific method requires that hypotheses be testable, which means publishing everything a third party would need to reproduce an experiment. So some might even say that if your research ends up behind a paywall, what you are doing is not science in the formal sense of the word.

Previously on SN:
New York Times Opinion Piece on Open Access Publishing (2016)
India's Ministry of Science & Technology Join Open-Access Push (2015)
Open Access Papers Read and Cited More (2014)


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by AthanasiusKircher on Friday March 16 2018, @12:50PM (3 children)

    by AthanasiusKircher (5291) on Friday March 16 2018, @12:50PM (#653548) Journal

    I agree with a lot of other comments here that journals mostly add only a content-filtering mechanism. But the research cited in TFA is fundamentally flawed, since it compares pre-print versions with final published versions of articles accepted for publication.

Which means the accepted articles had already met the expected professional standards, precisely because their authors were motivated to improve their research and writing in order to get published.

You can't go from the observation that high-quality pre-prints are similar to high-quality published journal versions to the conclusion that, if you removed content and quality filtering entirely, everything would still just "be good."

    To put it another way -- authors aspire to be published in a reputable journal. Take away that motivation and those standards, and you now have authors who don't take as much care preparing their articles, perhaps not even as much care in their research itself.

    TFA is like a business executive arguing that we don't need Quality Assurance in manufacturing, because we don't see any significant improvement in product quality by having it. Except, that analogy is a little flawed, given the power journals have over academic careers. Imagine if to get another job, you needed a recommendation from the people in your Quality Assurance department. Imagine if promotions at a future company were contingent on stellar reviews from former Quality Assurance personnel. I think you'd be darn sure when you sent your products to QA, they were already strongly vetted and were solid work.

    That's effectively the situation scientists are often in, because publication reputation is so critical to their careers. Remove that, and a lot of the motivation for high-quality output disappears. An executive who, in such a situation, argued "Oh, we don't see any 'value added' by our QA personnel, so let's get rid of 'em" is not only out of touch, but perhaps delusional.

    NOTE: I'm NOT arguing in favor of the current system, with its ridiculous costs. But it's going to take a lot more to come up with a substitute system than simply saying, "We don't need journals because they add no value." The value often isn't in the revision or copyediting or whatever -- it's in the aspirations of those who seek to publish in high-quality journals in the first place.

  • (Score: 2) by AthanasiusKircher on Friday March 16 2018, @12:54PM (2 children)

    by AthanasiusKircher (5291) on Friday March 16 2018, @12:54PM (#653554) Journal

    Oh and sorry for the self-reply -- but to make something explicit that was implicit in my argument: the study mentioned in TFA obviously doesn't account for articles sent to the journal but REJECTED. In the QA analogy, it's like an executive looking only at the products that made it through QA and saying, "that department doesn't add anything."

    • (Score: 0) by Anonymous Coward on Friday March 16 2018, @01:20PM (1 child)

      by Anonymous Coward on Friday March 16 2018, @01:20PM (#653575)

      There was that editor of a medical journal who said they selected papers by throwing them down a flight of stairs and publishing whichever reached the bottom. Then, for one issue, they published only papers that had failed peer review, and nobody noticed.

      https://en.m.wikipedia.org/wiki/Richard_Smith_(editor) [wikipedia.org]

      • (Score: 2) by AthanasiusKircher on Friday March 16 2018, @02:16PM

        by AthanasiusKircher (5291) on Friday March 16 2018, @02:16PM (#653600) Journal

        To be clear, I didn't say that the peer review process is good. There is, in fact, conflicting evidence about its value, and I know many studies question it. My point was that the research mentioned in TFA is a poor way to judge the potential value of the journal process. (That is -- just because we know peer review is often broken doesn't mean we should accept the conclusion of a study with poor methodology merely because it agrees with what we already expect. I'm sure there's a meta-moral of sorts in this.)