Publishing Stings Find Shoddy Peer Review

posted by n1 on Wednesday April 23 2014, @05:20AM
from the who-can-you-trust-now dept.

Ars Technica has an article on investigations performed by Science magazine and the Ottawa Citizen.

Peer-reviewed scientific papers are the gold standard for research. Although the review system has its limitations, it ostensibly ensures that some qualified individuals have looked over the science of the paper and found that it's solid. But lately there have been a number of cases that raise questions about just how reliable at least some of that research is.

The first issue was highlighted by a couple of sting operations performed by Science magazine and the Ottawa Citizen. In both cases, a staff writer made up some obviously incoherent research. In the Citizen's example, the writer randomly merged plagiarized material from previously published papers in geology and hematology. The sting paper's graphs came out of a separate paper on Mars, while its references came from one on wine chemistry. Neither the named author nor the institution he ostensibly worked at existed.

Unfortunately, by attempting to highlight the problem of lax review procedures, some computer scientists may have exacerbated it. Suspecting that some reviewers weren't doing a thorough job on conference papers, they put together a random gibberish paper generator for anyone who wanted to test whether reviewers were paying attention. That software has since been used to get 120 pieces of gibberish published.
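
Generators of this sort typically work by expanding a hand-written context-free grammar with random choices at each step. As an illustration of the technique only, here is a minimal Python sketch; the grammar rules below are invented for this example and are not taken from the actual tool:

    import random
    import re

    # Toy context-free grammar: each <nonterminal> maps to a list of
    # alternative productions, which may themselves contain nonterminals.
    # All rules here are made up for illustration.
    GRAMMAR = {
        "<title>": [
            "<adj> <noun> Considered Harmful",
            "A Case for <noun>",
            "Decoupling <noun> from <noun>",
        ],
        "<sentence>": [
            "We argue that <noun> and <noun> are largely incompatible.",
            "Our <adj> framework improves <noun> by a factor of <num>.",
            "<adj> <noun> remains an open problem.",
        ],
        "<adj>": ["stochastic", "metamorphic", "pervasive", "homogeneous"],
        "<noun>": ["hash tables", "lambda calculus", "write-back caches", "e-commerce"],
        "<num>": ["two", "ten", "420"],
    }

    NONTERMINAL = re.compile(r"<[a-z]+>")

    def expand(text):
        """Repeatedly replace the first <nonterminal> with a random production."""
        match = NONTERMINAL.search(text)
        while match:
            text = text[:match.start()] + random.choice(GRAMMAR[match.group()]) + text[match.end():]
            match = NONTERMINAL.search(text)
        return text

    print(expand("<title>"))
    print(" ".join(expand("<sentence>") for _ in range(3)))

Even a cursory read catches output like this, which is precisely what made it useful as a test of whether reviewers were reading at all.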

Related Stories

Three Scholars Dupe "Grievance Studies" Journals Into Publishing Hoax Papers

In an effort to show how politically correct nonsense and evil (but I repeat myself) can get through academic peer review and be published, some academics did just that with seven papers. More are partly through the process.

A particularly funny and horrifying case is the Gender Studies journal Affilia. Adolf Hitler's Mein Kampf needed only to be translated into the typical wording of intersectionality theory, and it passed muster.

Another published paper, considered exemplary scholarship by the journal that published it, contains this whopper: "Dog parks are microcosms where hegemonic masculinist norms governing queering behavior and compulsory heterosexuality can be observed in a cross-species environment."

The Grievance Studies Scandal: Five Academics Respond

Now, three academics have submitted twenty spoof manuscripts to journals chosen for respectability in their various disciplines. Seven papers were accepted before the experiment stopped; more are still surviving peer review. This new raid on screamingly barmy pseudo-scholarship is the Alan Sokal Opening, weaponised. Like dedicated traceurs in a Parkour-fest, the trio scrambled over the terrain of what they call Grievance Studies. And they dropped fire-crackers. One published paper proposed that dog parks are "rape-condoning spaces." Another, entitled "Our Struggle is My Struggle: Solidarity Feminism as an Intersectional Reply to Neoliberal and Choice Feminism," reworked, and substantially altered, part of Mein Kampf. The most shocking (not published; its status is "revise and resubmit") is a "Feminist Approach to Pedagogy." It proposes "experiential reparations" as a corrective for privileged students; these include sitting on the floor, wearing chains, or being purposely spoken over. Reviewers have commented that the authors risk exploiting underprivileged students by burdening them with an expectation to teach about privilege.

Also at WSJ.

Related: Publishing Stings Find Shoddy Peer Review
Absurd Paper Accepted by Open-Access Computer Science Journal
Media World Fooled with Bogus Chocolate Diet Story


Original Submission #1 | Original Submission #2

  • (Score: 1) by aristarchus (2645) on Wednesday April 23 2014, @05:32AM (#34723) Journal

    Sokal, is that you? Again? We got it the first time.

    • (Score: 2) by c0lo (156) Subscriber Badge on Wednesday April 23 2014, @06:01AM (#34730) Journal

      Sokal, is that you? Again? We got it the first time.

      Yeah... lately it's about as newsworthy a feat as reporting yet another group of teenage students sending a mobile phone up on a helium balloon to take pictures from "space".
      Which is a pity, actually; I had foolishly hoped the science journals' reviewing process would improve.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 5, Insightful) by buswolley (848) on Wednesday April 23 2014, @05:35AM (#34724)

    I know it is popular to bash on scientists, but the impact factors of these journals are very, very low. I wouldn't even consider sending something to a journal with an impact factor of 0.4, for example... unless it is very trusted within a small subfield.

    --
    subicular junctures
    • (Score: 2) by buswolley (848) on Wednesday April 23 2014, @05:39AM (#34726)

      Also, I am constantly getting spam asking me to submit articles to journals I've never heard of. This is spam and scam, not science going wrong.

      --
      subicular junctures
    • (Score: 2) by aristarchus (2645) on Wednesday April 23 2014, @05:48AM (#34727) Journal

      Um, you know that outside of the UK, no one has "impact factors", or even has any idea what they might be. Please explain, and submit your explanation to a peer review of sufficient impact.

      • (Score: 2, Interesting) by Anonymous Coward on Wednesday April 23 2014, @11:29AM (#34797)

        Um, you know that outside of the UK, no one has "impact factors", or even has any idea what they might be.

        Anyone involved in science or science publishing in any meaningful way knows exactly what an impact factor [wikipedia.org] is. Most journals post their impact factor on their web sites like a restaurant posting its health inspection. Journal Citation Reports [thomsonreuters.com] is essentially a Who's Who for journals and the principal source and advocate for the "impact factor".

        It's really little more than the frequency with which a journal's articles are cited, but it's a reasonable proxy for the credibility and interest of those articles. You can look at impact factor as a kind of meta-review, where (presumably) the scientific community reads papers, evaluates whether they're actually credible, and chooses to cite only the best papers to support their hypotheses, thus resulting in higher "impact" for journals that manage to select good papers. This assumes that authors do not just skim the abstracts of papers looking for ones that support their own preconceived models, and that a prolific author's self-citations are a negligible fraction of the total citations. In practice, it's a kind of self-sustaining popularity contest, subject to all the same biases as website comment moderation.
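
        For concreteness, the standard two-year impact factor is just an average: citations received in a given year to a journal's items from the two preceding years, divided by the number of citable items published in those years. A quick back-of-the-envelope sketch in Python, with all figures invented for illustration:

            # Two-year impact factor for 2013:
            #   citations in 2013 to items published in 2011-2012,
            #   divided by citable items published in 2011-2012.
            # All numbers below are made up for this example.
            citations_2013 = {2011: 180, 2012: 220}  # citations received in 2013
            citable_items = {2011: 400, 2012: 600}   # articles and reviews published

            impact_factor = sum(citations_2013.values()) / sum(citable_items.values())
            print(f"2013 impact factor: {impact_factor:.1f}")  # -> 0.4

        A journal at 0.4, like buswolley's example above, is averaging fewer than one citation per two papers.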

  • (Score: 2, Funny) by cafebabe (894) on Wednesday April 23 2014, @06:36AM (#34739) Journal

    The paper may be complete bunkum but it is still a huge advance for the field of exogeohematological oenophilia.

    --
    1702845791×2
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday April 23 2014, @06:58AM (#34743)

    I guess the scientific method needs metamoderation (moderation of moderation), just like SoylentNews. Or metareview.

    Science is the only worthwhile idea we've discovered, yet in many journals its implementation is shoddy at best. Subscribers should punish those journals hard for fraud like this, since their function as quality filters is their only reason for existence.
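
    As a purely hypothetical sketch of what SoylentNews-style metamoderation could look like for peer review (none of this reflects any journal's actual process): reviews are themselves rated fair or unfair by other qualified reviewers, and a reviewer's weight in future editorial decisions follows from that track record:

        from collections import defaultdict

        # Hypothetical metareview: each review can be rated "fair" or "unfair";
        # a reviewer's weight in future editorial decisions grows with their
        # fraction of fair ratings.
        ratings = defaultdict(lambda: {"fair": 0, "unfair": 0})

        def metamoderate(reviewer, fair):
            ratings[reviewer]["fair" if fair else "unfair"] += 1

        def weight(reviewer):
            """Laplace-smoothed fraction of the reviewer's reviews judged fair."""
            r = ratings[reviewer]
            return (r["fair"] + 1) / (r["fair"] + r["unfair"] + 2)

        metamoderate("reviewer_a", fair=True)
        metamoderate("reviewer_a", fair=True)
        metamoderate("reviewer_b", fair=False)
        print(weight("reviewer_a"), weight("reviewer_b"))  # 0.75 0.333...

    The smoothing just keeps a reviewer with a single rating from being pinned to a weight of 0 or 1 immediately.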

    • (Score: 5, Informative) by moondrake (2658) on Wednesday April 23 2014, @07:55AM (#34752)

      A form of metareviewing of scientific papers already exists in the form of editors. Various journals implement this in different ways, but most commonly the editor tries to judge whether the reviewers did a good job and weighs their opinions when making the final decision on the paper. Sometimes the section editor has to relay his view to an editor-in-chief who makes the final decision, adding yet another layer to the process. Note that at most of the smaller journals, the editors are still senior scientists, not full-time employees of the publisher.

      Of course, this system is not without problems either. The editors are usually not anonymous and are sometimes not impartial. They are also less likely to be fully aware of the details of the topic, and have little time to read everything, since they see many more manuscripts than normal reviewers do. Several times I have suspected that work I reviewed was not actually read carefully by the editor; he or she just relied on what the reviewers said.

    • (Score: 3, Interesting) by lhsi (711) on Wednesday April 23 2014, @08:01AM (#34753) Journal

      I submitted a story the other day (still pending at the moment) about how public discussion of actual errors in science papers leads to more corrections. The researcher had a list of papers that contained errors; for some, the errors had been made public, and for others they hadn't.

      In the cases where the errors had been made public, there was a higher chance of the scientific record being corrected than when the errors were not made public.

      • (Score: 1) by opinionated_science (4031) on Wednesday April 23 2014, @01:54PM (#34874)

        There is a problem with publishing anything that requires subject-specific knowledge and experimental results.

        In the sciences, it is assumed that the methods describe how the results were generated.

        The problem is that replicating some papers' findings is very difficult when you may not have the resources to do so.

        In that case, experienced reviewers apply their experience and do what we call in the trade "a BS test": does it make you giggle when you read it?

        The fundamental problem is that there are corporations making a profit off of publicly funded work, and therefore there is a perverse incentive to encourage prolific publishing.

        This is more likely to be a problem in CS, since the "lab" is ubiquitous, but in the experimental sciences it can cause real problems, e.g. stem cells.

  • (Score: 5, Insightful) by MrGuy (1007) on Wednesday April 23 2014, @02:48PM (#34918)

    It's interesting to note that the study by Science (a closed-access journal) targeted only open-access journals.

    My favorite quote from the article: "From the start of this sting, I have conferred with a small group of scientists who care deeply about open access. Some say that the open-access model itself is not to blame for the poor quality control revealed by Science's investigation. If I had targeted traditional, subscription-based journals, Roos told me, 'I strongly suspect you would get the same result.'"

    Yes, I bet you probably would. And yet, you didn't.

    I get that open access, almost by definition, makes it easier to put out a fake-peer-reviewed (or at best weakly-peer-reviewed) journal. But I'm saddened (though not surprised) that the study's author (who is writing for a NON-open-access journal) elected NOT to perform the same test on non-open-access journals.

    As such, if we really want to be scientific (which is allegedly what the author wants us to be), we should be very careful about interpreting the conclusion "some open-access journals have quality control problems" to mean "open-access journals are the ONLY journals with quality control problems" or worse "open-access journals have WORSE quality control problems than closed-access journals."

    Glad the author of the Science study at least included that quote at the end. But the one-line abstract "A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals" really strongly implies that the issue is ONLY with open access.

    The Ars Technica article points this out nicely, which is appreciated.

    • (Score: 0) by Anonymous Coward on Wednesday April 23 2014, @10:55PM (#35213)

      I am a proponent of open-access journals, but there would be a difference in acceptance rate. There are open-access journals that are for-profit scams, since they charge for publication costs. That being said, the number of lazy reviewers (who let things slip past them) would probably be about the same between closed- and open-access journals (after excluding the scam journals).