
posted by martyb on Sunday October 01 2017, @11:44PM   Printer-friendly
from the scientific-skirmishes dept.

Earlier this month, when the biotech firm Human Longevity published a controversial paper claiming that it could predict what a person looks like based on only a teeny bit of DNA, it was just a little over a week before a second paper was published discrediting it as flawed and false. The lightning speed with which the rebuttal was delivered was thanks to bioRxiv, a server where scientists can publish pre-prints of papers before they have gone through the lengthy peer-review process. It took only four more days before a rebuttal to the rebuttal was up on bioRxiv, too.

This tit-for-tat biological warfare was only the latest in a series of scientific kerfuffles that have played out on pre-print servers like bioRxiv. In a piece that examines the boom of biology pre-prints, Science questions their impact on the field. In a time when a scandal can unfold and resolve in a single day's news cycle, pre-prints can lead to science feuds that go viral, unfolding at rapid speed without the oversight of peer review.

"Such online squabbles could leave the public bewildered and erode trust in scientists," Science argued. Many within the scientific community agree.

Should Scientists Be Posting Their Work Online Before Peer Review?

[Source Article (PDF)]: THE PREPRINT DILEMMA

What do you think?


Original Submission

 
  • (Score: 4, Interesting) by Anonymous Coward on Monday October 02 2017, @01:36AM (3 children)

    by Anonymous Coward on Monday October 02 2017, @01:36AM (#575737)

    I had my stuff stolen. I was taking a class in logic that was taught by a professor whose PhD mentor was one of the authors of the book. We hit an unanswered question, and I came up with the solution. My professor and I worked on writing a paper and were just about ready to submit it, when my professor came to our next meeting with a dejected look on his face. The other coauthor of the book had stolen my idea and was going to have it published. Turns out my professor told his mentor, who told his coauthor, who rushed out a paper. So now, my idea has numerous papers written about it, appears in SEP, and is taught in a textbook used by tens of thousands (at least) of students a year, while being named after someone else.

    At least I got an "A" in the class.

  • (Score: 3, Informative) by TheLink on Monday October 02 2017, @08:22AM (1 child)

    by TheLink (332) on Monday October 02 2017, @08:22AM (#575816) Journal

    I suspect I was the one who inspired CoDel. Everyone and their dog was barking up the wrong tree about buffers being too large:
    https://dl.acm.org/citation.cfm?id=2071893&preflayout=flat#comments [acm.org]
    See Dave Taht's post about "byte queue limits" - limiting queues by bytes:

    The publication deadline for this missed the adoption of Byte Queue Limits (BQL) into the net-next tree
    of the linux kernel. This holds promise to improve latency inside the driver portion of that stack by an
    order of magnitude.

    Followed by my comment:

    In my opinion the actual solution to latency is not a reduction in buffer sizes. Because the real problem isn't actually large buffers. The problem is devices holding on to packets longer than they should. Given the wide variations in bandwidth, it is easier to define "too high a delay" than it is to define "too large a buffer".

    So the real solution would be for routers (and other similar devices) to drop and not forward packets that are older than X milliseconds (where X could be 1, 5, 50 or 100 depending on the desired maximum hop latency and the output bandwidths). X would be measured from the time the packet ends up in that router. Routers may have different values for X for different output paths/ports or even "tcp connections" (more expensive computationally), or a single hop wide value (cheaper to calculate).

    Followed by my other comment:

    Anyway, once you take packet age into account, it doesn't matter even if your buffer is infinite in size.
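    The age-based idea quoted above can be sketched as a queue that timestamps each packet on arrival and silently discards anything that has waited longer than X milliseconds at dequeue time. This is a hypothetical minimal illustration of that comment's suggestion, not CoDel itself; the class name, parameter names, and injectable clock are all assumptions made for the sketch.

    ```python
    import collections
    import time

    class AgeDropQueue:
        """Bounded-latency queue: packets older than max_age_ms are
        dropped at dequeue time instead of being forwarded. Age is
        measured from the moment the packet enters this queue, so the
        bound holds even if the buffer itself is effectively unlimited."""

        def __init__(self, max_age_ms=50, clock=time.monotonic):
            self.max_age = max_age_ms / 1000.0
            self.clock = clock              # injectable for testing
            self.q = collections.deque()

        def enqueue(self, packet):
            # Record the arrival time alongside the packet.
            self.q.append((self.clock(), packet))

        def dequeue(self):
            # Discard anything that has already waited too long,
            # then hand back the first still-fresh packet.
            while self.q:
                arrived, packet = self.q.popleft()
                if self.clock() - arrived <= self.max_age:
                    return packet
            return None
    ```

    Note the design point from the comment: the drop decision depends only on how long a packet has waited, never on how many packets or bytes are queued, so buffer size stops mattering.
    
    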

    After that they came up with CoDel and some revisionist history in Wikipedia: https://en.wikipedia.org/wiki/CoDel#The_CoDel_algorithm [wikipedia.org]

    Based on Jacobson's notion from 2006, CoDel was developed to manage queues under control of the minimum delay experienced by packets in the running buffer window.

    Which is disingenuous, since they were still fixated on buffer and queue sizes for years until I made that comment, and if you read the actual 2006 PDF ( http://www.pollere.net/Pdfdocs/QrantJul06.pdf [pollere.net] ) it was still stuck in that sort of thinking. You can see the suggestions they were talking about in 2006 were different from the method I suggested. Do note that the wiki citation for that sentence is a 2012 paper, which, being from 2012, doesn't back up the claim that it was based on a 2006 notion ;).

    But I'm more amused than anything, and I'll be happy if Cisco etc. started doing things better. I wouldn't have the money to challenge any patents, though, so if anyone patents ( https://www.google.com/patents/US9686201 [google.com] ) and charges for such stuff, I'll leave that prior-art fight to others.

    See also my comment on Slashdot in Jan 2011: https://tech.slashdot.org/comments.pl?sid=1939940&cid=34793154 [slashdot.org]
    That's before the Dec 2011 comments and 2012 paper and the patent application.

    From that Dec 2010 article ( https://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/ [wordpress.com] ) to the Dec 2011 article, that bunch was still going "OMG buffers are too big!", when that's not really the problem.

    • (Score: 1, Informative) by Anonymous Coward on Monday October 02 2017, @11:18AM

      by Anonymous Coward on Monday October 02 2017, @11:18AM (#575853)

      See, this is exactly the kind of garbage we'll have to filter through when reading everyone's articles.

  • (Score: 3, Insightful) by tfried on Monday October 02 2017, @07:14PM

    by tfried (5534) on Monday October 02 2017, @07:14PM (#576102)

    Well, that sucks. I feel for you there.

    But then, preprints (aka pre-review publication) wouldn't have helped you at all, as they just speed up the process for everyone. If some random asshole can beat you to submitting a paper for peer review, they can beat you to non-reviewed publication just the same.