posted by martyb on Thursday February 21 2019, @01:29AM
from the search-and-research dept.

Machine-learning techniques used by thousands of scientists to analyse data are producing results that are misleading and often completely wrong.

Dr Genevera Allen from Rice University in Houston said that the increased use of such systems was contributing to a "crisis in science".

She warned scientists that if they didn't improve their techniques they would be wasting both time and money. Her research was presented at the annual meeting of the American Association for the Advancement of Science in Washington.

A growing amount of scientific research involves using machine learning software to analyse data that has already been collected. This happens across many subject areas ranging from biomedical research to astronomy. The data sets are very large and expensive.

[...] "There is general recognition of a reproducibility crisis in science right now. I would venture to argue that a huge part of that does come from the use of machine learning techniques in science."

The "reproducibility crisis" in science refers to the alarming number of research results that are not repeated when another group of scientists tries the same experiment. It means that the initial results were wrong. One analysis suggested that up to 85% of all biomedical research carried out in the world is wasted effort.

It is a crisis that has been growing for two decades and has come about because experiments are not designed well enough to ensure that the scientists don't fool themselves and see what they want to see in the results.
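A minimal sketch (not from the article) of one way an analysis can fool its author: if a feature-selection step is allowed to see the whole dataset before cross-validation, a routine scikit-learn workflow reports impressive accuracy on data that is pure random noise, while the same model validated properly scores at chance. The synthetic dataset, the k=20 cutoff, and the classifier are arbitrary choices made purely for illustration.

# Data leakage demo: feature selection before vs. inside cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))   # 100 samples, 10,000 purely random "features"
y = rng.integers(0, 2, size=100)     # labels with no relation whatsoever to X

# Leaky: pick the 20 "best" features using *all* the data, then cross-validate.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5)

# Honest: keep the feature selection inside each cross-validation fold.
pipe = make_pipeline(SelectKBest(f_classif, k=20),
                     LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5)

print(f"leaky  accuracy: {leaky.mean():.2f}")   # well above chance, on pure noise
print(f"honest accuracy: {honest.mean():.2f}")  # roughly 0.5, i.e. chance

The leaky pipeline "discovers" a signal that is not there, which is exactly the kind of result that evaporates when another group repeats the experiment.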

https://www.bbc.com/news/science-environment-47267081


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday February 21 2019, @02:51AM (#804340) (3 children)

    > You tell machines to do the science for you, are you a scientist?

    I tried something similar on the famous artist Claes Oldenburg once, at a college art opening. One of the pieces on display was this one (or very similar):
    http://infinitemiledetroit.com/Claes_Oldenburgs_Giant_Three-Way_Plug_and_the_Issue_of_Projective_Vision.html
    There was also a similar-sized cube tap made from fabric (soft sculpture), and several other large metal sculptures.

    My question was: did you have fun making these things after you conceived of them? His answer was that he didn't make anything; he jobbed it all out to specialist fabrication shops.

    ** You hire craftspeople to build your sculptures, are you an artist?

  • (Score: 1, Insightful) by Anonymous Coward on Thursday February 21 2019, @03:26AM (#804352) (1 child)

    At least the "artist" jobbed out to - note - "specialist" fabrication shops. Most of this AI Science is DIY with some quick intro to Python Jupyter coked up on the researcher's own little notebook. Or in the cases of super-computer abuse, well, faulty premises at best, programmed in as the expected result. No surprises here, human error amplified byu several orders of magnitude by machine iteration.

    • (Score: 0) by Anonymous Coward on Thursday February 21 2019, @03:43AM (#804359)

      > coked up on the researcher's own little notebook

      Ha ha, I see what you did there! (assuming you meant cooked).

  • (Score: 0) by Anonymous Coward on Thursday February 21 2019, @03:31AM (#804354)

    In this case, he was open about it, and he should have listed his subcontractors in the credits for the works.

    Rather different from "scientists" who use "smart" software tools they don't understand to validate their hypotheses.

    In the end, it comes down to: Do you even understand what the fuck you are arguing?