posted by Fnord666 on Monday December 11 2017, @04:02AM   Printer-friendly
from the even-AIs-like-cat-pics dept.

Google Taught an AI That Sorts Cat Photos to Analyze DNA

When Mark DePristo and Ryan Poplin began their work, Google's artificial intelligence did not know anything about genetics. In fact, it was a neural network created for image recognition—as in the neural network that identifies cats and dogs in photos uploaded to Google. It had a lot to learn.

But just eight months later, the neural network received top marks at an FDA contest for accurately identifying mutations in DNA sequences. And in just a year, the AI was outperforming a standard human-coded algorithm called GATK. DePristo and Poplin would know; they were on the team that originally created GATK.

It had taken that team of 10 scientists five years to create GATK. It took Google's AI just one to best it. "It wasn't even clear it was possible to do better," says DePristo. They had thrown every possible idea at GATK. "We built tons of different models. Nothing really moved the needle at all," he says. Then artificial intelligence came along.

This week, Google is releasing the latest version of the technology as DeepVariant. Outside researchers can use DeepVariant and even tinker with its code, which the company has published as open-source software.

DeepVariant, like GATK before it, solves a technical but important problem called "variant calling." When modern sequencers analyze DNA, they don't return one long strand. Rather, they return short snippets, perhaps 100 letters long, that overlap with each other. These snippets are aligned and compared against a reference genome whose sequence is already known. Where the snippets differ from the reference genome, you probably have a real mutation. Where the snippets differ from the reference genome and from each other, you have a problem.
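The idea described above can be sketched as a naive pileup-based variant caller. This is an illustration of the concept only, not the DeepVariant or GATK method; all names and thresholds here are invented for the example. Each aligned snippet votes on the base at every position it covers; a site where the snippets agree with each other but disagree with the reference is called a variant, while sites where the snippets also disagree among themselves fall below the agreement threshold, which is exactly the hard case the article describes.

```python
from collections import Counter

def call_variants(reference, aligned_reads, min_depth=3, min_agreement=0.9):
    """aligned_reads: list of (start_offset, snippet) pairs already
    aligned against the reference string. Returns {position: alt_base}."""
    pileup = {}  # position -> Counter of bases observed there
    for start, snippet in aligned_reads:
        for i, base in enumerate(snippet):
            pileup.setdefault(start + i, Counter())[base] += 1

    variants = {}
    for pos, counts in pileup.items():
        depth = sum(counts.values())
        if depth < min_depth:
            continue  # too few overlapping snippets to trust this site
        base, n = counts.most_common(1)[0]
        # Snippets differ from the reference but agree with each other:
        # probably a real mutation. If they also differ from each other,
        # agreement drops below the threshold and no call is made.
        if base != reference[pos] and n / depth >= min_agreement:
            variants[pos] = base
    return variants

ref = "ACGTACGTAC"
reads = [(0, "ACGTAGGT"), (2, "GTAGGTAC"), (4, "AGGTAC")]
print(call_variants(ref, reads))  # {5: 'G'}: all three reads see G where the reference has C
```

Real callers work from noisy quality scores rather than clean letters, which is why a learned model like DeepVariant can beat hand-tuned heuristics.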


Original Submission

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday December 11 2017, @09:24PM (#608442) Journal

    AlphaGo is a bit of a sideshow. Google has "silently" integrated machine learning and TPU hardware into their core products (search, translate, photos, more?):

    The Great A.I. Awakening [nytimes.com]

    Build and train machine learning models on our new Google Cloud TPUs [www.blog.google]

    They got about a decade's worth of improvement (at the previous pace) in Google Translate in less than a year.

    I expect the Google Home (Amazon Echo/Alexa competitor) is also powered by some TPUs somewhere.

    In a way, they are racing against time to improve their machine learning efforts, hoping to save YouTube with censorbots (to keep advertisers from fleeing the platform for fear of bad press).

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]