
posted by janrinok on Sunday March 15 2015, @04:27AM
from the recursion:-see-recursion dept.

CBS News Reports:

A growing number of scientific studies are making it harder for researchers to keep track of all the published content.

Attention decay in science [link to paper], a new paper published by professors from universities in Finland and California, reports that "the attention that can be devoted to individual papers, measured by their citation counts, is bound to decay rapidly," due to the overwhelming number of studies.

The research suggests that the decay is accelerating in recent times, signaling that papers are forgotten more quickly. The study focused on scientific research but notes that the same concept can be applied to the internet and popular culture.

The conclusion states that, due to the exponential growth of these publications, scholars “forget” papers more easily now than in the past, sometimes making it harder to isolate the most relevant information.
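The dilution effect described above can be illustrated with a toy calculation: if the number of papers grows exponentially while readers' total citation "budget" stays roughly fixed, average attention per paper decays geometrically. All numbers below are invented for illustration and are not taken from the paper:

```python
# Toy model: exponentially growing literature vs. a fixed attention budget.
# Average citations per paper shrink as the paper count grows, so individual
# papers are "forgotten" faster over time.

N0 = 1_000        # papers published in year 0 (made-up number)
g = 1.05          # 5% annual growth in publications (made-up rate)
C = 50_000        # total citations handed out per year (made-up budget)

for t in range(0, 30, 5):
    papers = N0 * g ** t
    per_paper = C / papers
    print(f"year {t:2d}: {papers:8.0f} papers, {per_paper:5.2f} citations each")
```

This is only the simplest possible sketch; the paper itself analyzes empirical citation histories rather than assuming a fixed budget.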

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 4, Informative) by takyon on Sunday March 15 2015, @04:43AM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday March 15 2015, @04:43AM (#157959) Journal
    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 5, Insightful) by fishybell on Sunday March 15 2015, @04:45AM

    by fishybell (3156) on Sunday March 15 2015, @04:45AM (#157960)

    I call BS on this (of course, not having RTFA). If there were a good way to search the content of all the scientific literature produced each year, and everyone had access to all of it (i.e., no paywalls), then there would be -- like there is for the internet -- no real maximum size. From what I gathered from the summary, the biggest problem isn't that there is too much information out there, but that it's hard to keep track of it. That's a solved problem (see Google et al.), just not one that the publishers are willing to let be solved.

    • (Score: 2) by kaszz on Sunday March 15 2015, @04:53AM

      by kaszz (4211) on Sunday March 15 2015, @04:53AM (#157962) Journal

      Study finds that studies locked behind paywalls get duplicated? ;-)

    • (Score: 3, Interesting) by gringer on Sunday March 15 2015, @06:42AM

      by gringer (962) on Sunday March 15 2015, @06:42AM (#157979)

      Keeping track of information is difficult, and journals generally don't like people to pepper their articles with too many citations. If the same information gets spread around, then the chance of citation drops for any particular article that contains that information. This is a problem, even with Watson-level recall, and even the very best papers will suffer from this issue.

      Let's say there's a wonderful paper published in a journal that does a whole bunch of things*. It survives for about 6 months with citations ramping up, but then someone discovers that one of those things can be done better. Then, people who would previously cite the big paper, and thereby let others know about it, might decide that in their particular area the new paper is a more appropriate citation.

      About 6 months after that, the paper has hit its "peak citation rate", as the popularity of the paper is eroded in many different areas by the smaller, newer papers. Pick any one of those new papers, and you could easily say the earlier paper is better. However, pick any one of those many things, and you can probably find a better paper for that particular area of study. Funding sources encourage this behaviour — being better than some previous paper, and fragmenting the research knowledge as much as possible.

      People could read the single big paper and get a great overview, but over time they become more likely to know about the smaller papers which give excellent detail, but are very specific. Over time, the general knowledge of readers is reduced, and they lose track of related work outside their area of expertise.

      * <plug>The closest example to this of a paper that I have co-authored is this one [sciencedirect.com]. It has over 30 sub-projects that stem from an initial de-novo transcriptome assembly and differential expression analysis.</plug>

      --
      Ask me about Sequencing DNA in front of Linus Torvalds [youtube.com]
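      The life-cycle described in the comment above (awareness ramps up, citations peak, then narrower follow-up papers take over individual sub-areas) can be sketched as a toy simulation. Every parameter here is invented purely for illustration:

```python
import random

random.seed(1)

TOPICS = 30            # sub-areas the big paper covers (cf. 30 sub-projects)
MONTHS = 36
CITES_PER_TOPIC = 5    # monthly citations available per sub-area (invented)

big_paper_cites = []
superseded = set()     # topics taken over by a newer, narrower paper

for month in range(MONTHS):
    # After ~6 months, each remaining topic has a small monthly chance of
    # being superseded by a more specific follow-up paper.
    if month >= 6:
        for topic in range(TOPICS):
            if topic not in superseded and random.random() < 0.08:
                superseded.add(topic)
    # Awareness of the big paper ramps up over its first year.
    awareness = min(1.0, month / 12)
    live_topics = TOPICS - len(superseded)
    big_paper_cites.append(awareness * live_topics * CITES_PER_TOPIC)

peak = max(range(MONTHS), key=big_paper_cites.__getitem__)
print(f"peak citation rate around month {peak}")
```

      Running this shows the ramp-up/peak/erosion shape the comment describes, though the exact timing depends entirely on the made-up rates.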
      • (Score: 2) by kaszz on Sunday March 15 2015, @07:02AM

        by kaszz (4211) on Sunday March 15 2015, @07:02AM (#157981) Journal

        So the Trojan horse is an article with a lot of citations from other articles within other journals? ;)

        I think Evilvier perhaps will get some Troj^H^H articles!

      • (Score: 5, Insightful) by Thexalon on Sunday March 15 2015, @01:29PM

        by Thexalon (636) on Sunday March 15 2015, @01:29PM (#158012)

        The real problem that this only touches on: Why the heck are we trying to focus on quantifying the value of research and researchers by counting publications and citations? It's not really a numerical thing, and sometimes crackpots turn out to be right and seemingly mainstream folks turn out to be crackpots.

        About the only reason for this behavior I can see is to justify the complete lack of tenure-track positions compared to the number of people qualified to take them. That has a lot to do with the massive cuts in university funding, and the massive increase in spending on non-academic personnel at academic institutions (and yes, deans count as non-academic personnel unless they have teaching duties, which most don't).

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 0) by Anonymous Coward on Sunday March 15 2015, @03:31PM

          by Anonymous Coward on Sunday March 15 2015, @03:31PM (#158030)

          Also the massive increase in PhDs being trained for positions that do not exist.

    • (Score: 3, Interesting) by Anonymous Coward on Sunday March 15 2015, @01:46PM

      by Anonymous Coward on Sunday March 15 2015, @01:46PM (#158015)

      But what would the problem be if we had millions of studies AND they were all of high quality? To me, the problem is that many (most?) of the studies are crap, and it's often not that easy to tell which are the crap ones.

      A lot of this is due to the "publish or perish" system and the popular belief that we shouldn't be paying people if they "do nothing" (and the difficulty of figuring out whether someone is doing nothing or actually thinking :) ). But people doing crap is often worse than doing nothing[1].

      I suspect if we paid crappy scientists to sit around and do nothing it would still be better than paying crappy scientists to do crappy flawed studies and publish them.

      We probably still need some way of deciding which scientists are crap enough to get rid of. But just because it's subjective and requires some brains, taste, and talent doesn't mean it won't work well enough. After all, what metric did Steve Jobs use for "insanely great" that many customers agreed on? What objective measure do you use for deciding that one chef's dish is excellent and another chef's is below par?

      [1] That's why I'd be agreeable to some form of welfare system - where you can get paid to do nothing, but you can't have more than X children if you can't support yourself and your children and you can't find suitable sponsors for them (if the country is not rich enough, X = zero). Better to pay people to do nothing than to pay for them in more expensive ways when they commit crimes or do other negative stuff to support themselves. In contrast, some welfare states seem to set things up so that the non-productive ones breed more than the productive ones. That's only fine if they share enough of the same genes (bee/ant colonies).
  • (Score: 0) by Anonymous Coward on Sunday March 15 2015, @04:51AM

    by Anonymous Coward on Sunday March 15 2015, @04:51AM (#157961)

    This paper is destined never to be cited or used in a grant submission.

  • (Score: 4, Insightful) by Snotnose on Sunday March 15 2015, @05:10AM

    by Snotnose (1623) on Sunday March 15 2015, @05:10AM (#157964)

    More studies, by definition, can't be "too many". The problem is some PhD candidate examines a few people, writes a paper with a surprising or controversial result, and the 24/7 news media jumps all over it, usually by distorting the result all out of proportion and ignoring A) the small sample size; B) possible researcher bias; C) the possibility of sampling error; or D) some other stupid thing.

    --
    Why shouldn't we judge a book by its cover? It's got the author, title, and a summary of what the book's about.
    • (Score: 2) by CRCulver on Sunday March 15 2015, @05:23PM

      by CRCulver (4390) on Sunday March 15 2015, @05:23PM (#158058) Homepage

      The paper in question isn't talking about the "reporting" of science to the general public or the non-specialist media. It is talking about the difficulty scholars have keeping up with all the progress in their field. From the SN submission, the key word being “forget”:

      The conclusion states that due to the exponential growth of these publications scholars “forget” papers more easily now than in the past

    • (Score: 1) by beardedchimp on Monday March 16 2015, @11:57AM

      by beardedchimp (393) on Monday March 16 2015, @11:57AM (#158310)

      More studies, by definition, can't be "too many".

      That's not true at all. Professors are under huge pressure from their universities to churn out papers. Depending on the country and university, there is an expected number of papers to be published per year, and in China the number is beyond reasonable. This results in more papers of lower quality with less research behind them. Five studies are not intrinsically better than one study of high quality and impact.

  • (Score: 2) by rts008 on Sunday March 15 2015, @05:19AM

    by rts008 (3001) on Sunday March 15 2015, @05:19AM (#157967)

    Okay, now I'm primed up for the news that the study that finds that studies have jumped the shark, has also jumped the shark.

    The very first thing that popped into my head was something from Hank Williams Jr., where he wants to be on the coalition to ban coalitions.

    I guess this is one of the downsides of an academic system that uses paper-publishing churn as a measure of success/worth.

    • (Score: 0) by Anonymous Coward on Sunday March 15 2015, @06:28PM

      by Anonymous Coward on Sunday March 15 2015, @06:28PM (#158077)

      Okay, now I'm primed up for the news that the study that finds that studies have jumped the shark, has also jumped the shark.

      I'm just waiting for "Those responsible for sacking the people who have just been sacked have been sacked." Then I will know this has completely run its course.

      • (Score: 0) by Anonymous Coward on Monday March 16 2015, @01:45PM

        by Anonymous Coward on Monday March 16 2015, @01:45PM (#158340)

        The rest of the comments have been made by a new team in a completely different style.

  • (Score: 3, Informative) by mendax on Sunday March 15 2015, @07:38AM

    by mendax (2840) on Sunday March 15 2015, @07:38AM (#157983)

    I think we need another study to study the conclusions of this study.

    --
    It's really quite a simple choice: Life, Death, or Los Angeles.
  • (Score: 0) by Anonymous Coward on Sunday March 15 2015, @03:44PM

    by Anonymous Coward on Sunday March 15 2015, @03:44PM (#158034)

    to pay people who study studies.

    That way, we can all just read SoylentNews and order pizza with our hard-earned studying.