posted by Fnord666 on Monday February 22 2021, @08:47AM

The AI research paper was real. The "co-author" wasn't:

David Cox, the co-director of a prestigious artificial intelligence lab in Cambridge, Massachusetts, was scanning an online computer science bibliography in December when he noticed something odd—his name listed as an author alongside three researchers in China whom he didn't know on two papers he didn't recognize.

At first, he didn't think much of it. The name Cox isn't uncommon, so he figured there must be another David Cox doing AI research. "Then I opened up the PDF and saw my own picture looking back at me," Cox says. "It was unbelievable."

It isn't clear how prevalent this kind of academic fraud may be or why someone would list as a co-author someone not involved in the research. By checking other papers written by the same Chinese authors, WIRED found a third example, where the photo and biography of an MIT researcher were listed under a fictitious name.

It may be an effort to increase the chances of publication or gain academic prestige, Cox says. He says he has heard rumors of academics in China being offered a financial reward for publishing with researchers from prestigious Western institutions.

Whatever the reason, it highlights weaknesses in academic publishing, according to Cox and others. It also reflects a broader lack of rules around publishing papers, especially in AI and computer science, where many papers are posted online without prior review.

"This stuff wouldn't be so harmful if it didn't undermine public trust in peer review," Cox says. "It really shouldn't be able to happen."


Original Submission

  • (Score: 5, Interesting) by FatPhil on Monday February 22 2021, @12:23PM (10 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Monday February 22 2021, @12:23PM (#1115936) Homepage
    And surely this is orthogonal to the reliability of peer review anyway. For almost all fields, peer review is not a review of correctness; it's a review of believability. For most of those fields, reproduction would serve as the review of correctness. For other fields, correctness has never been important anyway, alas.

    Fraudulent papers are often believable, which is why, in academia, fraud was typically held at about the same level as paedo-murder-rape: the punishment for abusing a loophole that is quite easy to get away with must be so harsh that it's not worth the risk, not even once. That was all well and good while the punishment remained harsh, but nowadays it seems to be brushed aside with less of a sense of existential threat.

    I personally think there should be institution-wide, or even country-wide, academic death sentences, like the old "internet death sentence" of yore. There's more to "peer review" than just reading each other's papers: there's also policing how the work is being done, and policing is best done as close to the root of the problem as possible. The further away you get, the more nuclear the policing needs to be. The institutions need to either root out their bad actors or be rooted out of the academic system entirely.

    What to do? Institutionalised Science needs to be funding people like Elisabeth Bik, who is currently Patreon-funded:
    https://scienceintegritydigest.com/2020/12/31/2020-a-year-in-review/
    """
    Since I quit my full time job in March 2019, I have been able to work full time on science integrity. I am still following up on leads from the 800 papers I found during my scan of 20,000 biomedical papers with photos (published in mBio), but most of my time goes into investigating papers sent to me through email or social media.
    """
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by bzipitidoo on Monday February 22 2021, @01:12PM (2 children)

    by bzipitidoo (4388) on Monday February 22 2021, @01:12PM (#1115957) Journal

    And this is just one of many problems with peer review. Yes, I have been aware for some time that a totally unrelated scientist's name could be tacked onto a work. The stunt shouldn't be of any value if peer review is done properly, with the reviewers kept ignorant of who the authors are so that they will not be swayed by that knowledge. It could still sway the editors, though, especially if they don't take the time to check for this. As for extreme punishment, it's not a viable solution to the problem. Better to make it impossible to pull the stunt in the first place.

    The title of this article says the AI research was real. But was it? If the real authors are willing to lie about who contributed, they'd surely be willing to tell other lies. Fudge the results, doctor the data, that sort of thing.

    Another problem is that this sort of questioning and reporting can go too far and make everyone involved in similar endeavors too skittish. There's a lot of suspicion of science these days. I am thinking especially of the rejection of perfectly good papers for not doing more, not being even better, even more significant. Like this one somewhat prestigious journal reporting that it received 57 papers and accepted only 24: a rejection rate of roughly 58 percent (33 of 57). I find it sick, really.

    Few enough people even make it through a PhD program. Applicants can be rejected at the start. Those who get past that can fail the orals and be kicked out. Then there's the final barrier: getting past the "all but dissertation" (ABD) stage. If you make it and get the sheepskin, you have several problematic choices. Try for a postdoc? Try for a faculty position? Then, welcome to the world of Publish or Perish. Anyway, one would think that having gone through all the effort of earning a PhD, and succeeding, one would not be subjected to such high rejection rates on one's submissions for publication, as if still a student of dubious potential.

    • (Score: 0) by Anonymous Coward on Monday February 22 2021, @02:57PM (1 child)

      by Anonymous Coward on Monday February 22 2021, @02:57PM (#1115988)

      One of the core problems is that we produce too many PhDs.
      There is a vast oversupply, leading to the problems you see here.

      • (Score: 0) by Anonymous Coward on Monday February 22 2021, @04:08PM

        by Anonymous Coward on Monday February 22 2021, @04:08PM (#1116012)

        And yet there's nobody to do any work. The "grant winner" lottery has selected for the wrong people. You have an entire lab of expensive winners, none of whom can do the work they promised, so they hire the cheapest, most pliant foreign visa-seekers to do it. Of course they can't do it either; who cares? Just churn out some shit and shovel it to whatever journal. The people doing the reviews are (surprise!) not the genius lottery winners but those same shitty PhD students who are getting hired to do the work. The real, seasoned scientists don't have a job any more: too expensive, too slow, too annoying. Only winners and students churning out copy are allowed.

  • (Score: 5, Interesting) by PiMuNu on Monday February 22 2021, @01:14PM

    by PiMuNu (3823) on Monday February 22 2021, @01:14PM (#1115958)

    Just to support your point of view: as someone who has worked in large collaborations with complex data processing (n.b. physical sciences), the journal peer review is barely perceptible. The collaboration does all the peer review, and results have to be cross-checked by other labs and experiments. This is because it is impossible for anyone outside the collaboration to understand or check the nitty-gritty of the data analysis.

    Case study - no one believes the sterile neutrino results floating around at the moment. Why? Because:
    * one of the main experiments (MiniBooNE) published a result and then revised/retracted it.
    * the main experiments have results that are both inconsistent with standard neutrino physics but also _inconsistent with each other_.

    Another case study: the apparent measurement of faster-than-light neutrinos (in the news a few years back) was caused by a loose cable somewhere in the experimental apparatus. How can a journal referee spot this without having access to the experiment's apparatus? Clearly it needs to be checked by the experimenters and cross-checked by other labs.

    Anything involving population studies (medical, social "science") - might as well not even bother as far as I am concerned.

  • (Score: 0) by Anonymous Coward on Monday February 22 2021, @01:15PM (3 children)

    by Anonymous Coward on Monday February 22 2021, @01:15PM (#1115959)

    While ResearchGate is annoying, I've got an account there and have linked a few engineering papers that I co-authored. RG sends me an email whenever one of my papers is cited, which sometimes turns up interesting research.

    RG also sends me "Is this you?" emails, where it's trying to determine if I'm an author on some other work--so it might have helped David Cox find the misuse of his name?

    • (Score: 2) by looorg on Monday February 22 2021, @01:48PM (2 children)

      by looorg (578) on Monday February 22 2021, @01:48PM (#1115972)

      This is what is partially weird about the story. Most people and organizations that matter in cases such as this subscribe to various services that watch for their names (or keywords) in both digital and more traditional print, with a notification every time something new is found. So I find it somewhat odd that he, the co-director of some fancy AI lab in Cambridge, sits and does this manually. Unless he is just doing the old "let's Google myself and see what we can find" (problem is that his name isn't really unique enough: David Cox; also it's probably an excellent p0rn-name). Still, isn't this just the type of task he should instruct his precious AI to do for him? (A minimal sketch of such an automated lookup follows this comment.)

      That said, it's an interesting problem. What stops anyone from just slapping a few names on a paper to inflate the gravitas of the work, certainly outside the Anglosphere? After all, most people don't bother checking Chinese, Russian, etc. publications. So while it might be caught more or less instantly if you tried it here, it might pass over there, since nobody is really looking that hard. And most of the named people probably won't mind all too much if the paper is good; it's dishonest, but they can always claim they didn't know. After all, it will increase your citation metric, which is sadly treated as a valid measure of your academic prowess.

      I guess for him it's somewhat the reverse, though: he doesn't want his name attached to a bunch of sub-par no-name papers.

      That said, I don't think it will undermine public trust in the peer-review process; after all, the public doesn't even know or care what it is. Nothing to undermine, then.
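
      For what it's worth, the kind of automated name-watch described above is easy to approximate without a commercial service. Below is a minimal sketch in Python against DBLP's public search API (one plausible "online computer science bibliography" of the sort the story mentions); the endpoint is real, but the exact JSON field names and the known_keys list are assumptions to be checked against a live response.

      # Minimal sketch: poll the DBLP search API for publications that list a
      # given name as an author, and flag any entry not in a locally kept list
      # of one's real papers. JSON field names below are assumptions.
      import json
      import urllib.parse
      import urllib.request

      def papers_listing(name: str, max_hits: int = 50) -> list[dict]:
          """Return title/year/key for publications whose author list contains `name`."""
          url = ("https://dblp.org/search/publ/api?"
                 + urllib.parse.urlencode({"q": name, "format": "json", "h": max_hits}))
          with urllib.request.urlopen(url) as resp:
              data = json.load(resp)
          hits = data.get("result", {}).get("hits", {}).get("hit", [])
          papers = []
          for hit in hits:
              info = hit["info"]
              authors = info.get("authors", {}).get("author", [])
              if isinstance(authors, dict):  # a single author comes back as a dict
                  authors = [authors]
              # Name collisions (other David Coxes) will also be flagged; DBLP
              # disambiguates homonyms with suffixes like "David Cox 0001".
              if any(a.get("text", "").startswith(name) for a in authors):
                  papers.append({"title": info["title"], "year": info.get("year"),
                                 "key": info["key"]})
          return papers

      if __name__ == "__main__":
          known_keys = set()  # hypothetical: DBLP keys of the papers you recognise
          for p in papers_listing("David Cox"):
              if p["key"] not in known_keys:
                  print("unrecognised listing:", p["year"], p["title"])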

      • (Score: 0) by Anonymous Coward on Monday February 22 2021, @03:36PM (1 child)

        by Anonymous Coward on Monday February 22 2021, @03:36PM (#1116000)

        Simple step for the journal staff: send a form-letter email to all the authors of every paper, using their published university email addresses (scraped from the university websites, not supplied with the paper). Make sure you get a "yes, I'm a co-author" reply from every author; a minimal sketch follows below.

        As noted, this is completely independent of peer review, and it's something useful for journal staff to do (as staff are less needed due to e-reviewing and e-publishing).
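
        A minimal sketch of that confirmation step in Python: the journal domain, SMTP host, and author data are all hypothetical placeholders, and a real system would also have to parse the replies and hold publication until every author confirms.

        # Sketch of the suggestion above: mail each listed author at an
        # independently sourced institutional address and ask for confirmation.
        # All addresses and hosts here are hypothetical placeholders.
        import smtplib
        from email.message import EmailMessage

        FORM_LETTER = (
            'Your name appears as a co-author on the manuscript "{title}"\n'
            "submitted to this journal. Please reply YES to confirm authorship,\n"
            "or NO if you did not participate in this work.\n"
        )

        def send_confirmations(title: str, authors: dict[str, str],
                               smtp_host: str = "smtp.example-journal.org") -> None:
            # `authors` maps name -> address scraped from the university website,
            # deliberately *not* the contact address supplied with the paper.
            with smtplib.SMTP(smtp_host) as smtp:
                for addr in authors.values():
                    msg = EmailMessage()
                    msg["From"] = "editorial@example-journal.org"
                    msg["To"] = addr
                    msg["Subject"] = "Authorship confirmation: " + title
                    msg.set_content(FORM_LETTER.format(title=title))
                    smtp.send_message(msg)

        # Hypothetical usage: publication waits until every author replies YES.
        send_confirmations("Example Paper",
                           {"A. Researcher": "a.researcher@university.example"})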

        • (Score: 3, Interesting) by looorg on Monday February 22 2021, @03:50PM

          by looorg (578) on Monday February 22 2021, @03:50PM (#1116004)

          There is no standard as far as I know, and each and every journal does things a bit differently. Some, or even most (a highly subjective estimate, though), require something akin to what you suggest; some don't. For some you are supposed to supply the data and not just the finished paper. Some used to require paper copies, though that's not common anymore I think, and they also had forms of various kinds that you had to sign to allow them to publish it (or grant them the rights to do so).

          Since citation has become so important, there has been a burst of new journals, with more and more niche journals for smaller and smaller subject fields. So while each field probably has its premiere publication, there is also a mass of less reputable publications, or the alternative where departments or universities just start their own little publications so that they can print and publish their own articles and papers as some kind of fail-safe (and push the citation metrics for their staff).

  • (Score: 0) by Anonymous Coward on Monday February 22 2021, @06:11PM (1 child)

    by Anonymous Coward on Monday February 22 2021, @06:11PM (#1116077)
    There was some guy who called me a Flat Earther because I said lots of peer-reviewed stuff is bullshit. He has to be nearly as stupid as a Flat Earther not to be able to figure that out.
    • (Score: 0) by Anonymous Coward on Tuesday February 23 2021, @11:03PM

      by Anonymous Coward on Tuesday February 23 2021, @11:03PM (#1116659)

      I think you are double bullshit for besmirching peer review, you disgusting, deplorable Flat-Earther, you!! You must be as stupid as the actual authors of the paper under discussion! BTW, how much is "lots"? Metric, or Imperial?