posted by martyb on Tuesday December 01 2020, @04:04AM   Printer-friendly
from the Solved-for-suitably-small-value-of-"solved" dept.

DeepMind's AI is Claimed to Make Gigantic Leap in Solving Protein Structures

DeepMind's program, called AlphaFold, outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction. The results were announced on 30 November, at the start of the conference — held virtually this year — that takes stock of the exercise.

John Moult of the University of Maryland in College Park (founder of this conference) says: "In some sense the problem is solved."

https://www.nature.com/articles/d41586-020-03348-4

Many Caveats: This seems unusually breathless for Nature, and this is a very hard problem that has been worked on for decades. Having worked in a group studying the protein folding problem back in the '80s, I've learned to be pretty skeptical of miracles in this field over the years. That said, if it is accurate that this works well enough to provide clues for x-ray diffraction structure determination in hard cases, that alone makes it very worthwhile. If it works well in truly de novo cases, without other information like x-ray diffraction or nuclear magnetic resonance data, then it would be just as revolutionary as the article says.

Folding @ Alpha

Google's DeepMind claims to have created an artificially intelligent program called "AlphaFold" that is able to solve protein folding problems in a matter of days.

If it works, the solution has come "decades" before it was expected, according to experts, and could have transformative effects on the way diseases are treated.

There are 200 million known proteins at present, but only a fraction have had their structures determined well enough to fully understand what they do and how they work. Even those successes often rely on expensive and time-intensive techniques, with scientists spending years solving each structure and relying on equipment that can cost many millions of dollars.

DeepMind entered AlphaFold in the 14th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP14), a blind prediction challenge that has been run every two years since 1994.

Go, Chess, COVID...

Also at Science Magazine and TechCrunch.


Original Submission #1 Original Submission #2

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by dltaylor on Tuesday December 01 2020, @08:15AM (7 children)

    by dltaylor (4693) on Tuesday December 01 2020, @08:15AM (#1082760)

    This is NOT "solved" (useful as the hints might be) until it is 100%, or you can readily identify which configurations it got wrong.

    It's like the Intel FPU errors. Since the user has no way of knowing which calculations were performed incorrectly, it must be assumed that ALL of them are untrustworthy.

    I'm not saying that the research is not useful, but until there's a second check, using a different method, there can be many person-years and dump truck loads of money wasted developing therapies against a misfolded model.
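The "second check" being asked for here is essentially what structural biologists already do when a predicted model can be compared against an experimentally determined structure: superpose the two and measure the root-mean-square deviation (RMSD) of corresponding atoms. A minimal sketch using the standard Kabsch superposition, with made-up coordinates rather than real protein data:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate sets after optimal superposition."""
    # Center both structures on their centroids
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    # Kabsch: optimal rotation comes from the SVD of the covariance matrix
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_rot = P @ R.T
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))

# Toy check: a rotated copy of the same structure superposes to ~0 RMSD
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
print(kabsch_rmsd(coords @ rot.T, coords))
```

Of course this only validates the cases where an experimental structure exists; for the rest, the model's own confidence estimates are all you have, which is exactly the parent's point.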

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 01 2020, @09:19AM

    by Anonymous Coward on Tuesday December 01 2020, @09:19AM (#1082770)

    I'm not saying that the research is not useful, but until there's a second check, using a different method, there can be many person-years and dump truck loads of money wasted developing therapies against a misfolded model.

    As with everything, you have to verify the model results with an experiment. The point here is that you can filter the vast majority of cases away from the wet-lab work. And it's a tool for a job: it handles almost 100% of the cases, which sounds like a major improvement over what was available before.

    To put it another way, AI didn't solve Go either, but it can probably win 100% of the time now. Results are what matter here, not formal rigor.

  • (Score: 4, Insightful) by Hartree on Tuesday December 01 2020, @09:31AM (4 children)

    by Hartree (195) on Tuesday December 01 2020, @09:31AM (#1082774)

    In some sense, but it's not the only method with that problem. And, in fact, it gives an indication of how likely its solution is to be correct.

    It may seem surprising, but that's also the case with our other methods of structure determination. With x-ray diffraction data from protein crystals, the route to the solution is not unique: some of the information needed (the phases) is never measured directly. That's called the phase problem. So what we do is make a good guess at that information and check whether the result conforms to the data we do have. That's why x-ray crystallography is still in part an art and not simple computation. When we solve a crystal structure, we can also compute an indication of how "good" the structure is likely to be.

    NMR has a similar problem: it can give distances between some of the atoms in a molecule, but you have to backsolve the relative locations from those distances, and in something like a protein that's a bit complicated (to say the least) and not always unique either.
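That backsolving step is a distance-geometry problem: given pairwise distances, recover coordinates up to rotation and reflection. In the idealized case of a complete, exact distance matrix it reduces to an eigendecomposition of the Gram matrix (classical multidimensional scaling); real NMR is far harder because it yields only sparse, noisy, short-range distances. A toy sketch of the exact case:

```python
import numpy as np

def coords_from_distances(D):
    """Classical MDS: recover 3-D coordinates (up to rigid motion and
    reflection) from a complete matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    G = -0.5 * J @ (D ** 2) @ J            # Gram matrix of centered coords
    w, V = np.linalg.eigh(G)               # eigenvalues in ascending order
    w, V = w[::-1][:3], V[:, ::-1][:, :3]  # keep the top three components
    return V * np.sqrt(np.clip(w, 0.0, None))

# Round trip on random points: recovered pairwise distances should match
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
X2 = coords_from_distances(D)
D2 = np.linalg.norm(X2[:, None, :] - X2[None, :, :], axis=-1)
print(np.allclose(D, D2))
```

Delete most of the entries of D and add noise to the rest, and the solution stops being unique — which is the situation NMR actually puts you in.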

    The hot new spiffy method for structure determination is cryo electron microscopy and it shares similar problems.

    One thing that's been done is to use data from one method to restrict what solutions are possible in one of the other methods.

    Even with all that, a lot of incorrect structures get published. A prof I know uses his spare time (Ha!) to check the solutions of crystal structures others have published and finds quite a number that have problems ranging from high uncertainty to just flat out being wrong. That's a little disconcerting, but then again, that's how science works.

    • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @12:15PM

      by Anonymous Coward on Tuesday December 01 2020, @12:15PM (#1082788)

      What is the point though? It's like worrying about getting the knob for the radio back on while the car is stuck on accelerate and running out of oil. All it's ever used for is details that miss the big picture.

    • (Score: 1, Funny) by Anonymous Coward on Tuesday December 01 2020, @04:18PM

      by Anonymous Coward on Tuesday December 01 2020, @04:18PM (#1082846)

      Please name and shame this Professor who is not writing grants and appears to be actually doing something akin to work. This is unacceptable.

    • (Score: 1) by Fock on Tuesday December 01 2020, @11:56PM (1 child)

      by Fock (5105) on Tuesday December 01 2020, @11:56PM (#1083032)

      How do I become a moderator? Can I get some mod points please? Someone needs to moderate Hartree up here, as the only comment of consequence so far... Thanks Hartree keep it up

  • (Score: 3, Interesting) by JoeMerchant on Tuesday December 01 2020, @12:58PM

    by JoeMerchant (3937) on Tuesday December 01 2020, @12:58PM (#1082797)

    Since the user has no way of knowing which calculations were performed incorrectly, it must be assumed that ALL of them are untrustworthy.

    That all depends on your rules of engagement.

    Does your problem space require mathematical perfection? Surprisingly few real-world applications do.

    Is your algorithm wildly sensitive to commonly encountered boundary conditions? Many are, and have no good reason to be. A little discrete analysis of your decision trees can reveal the knife-edge conditions; moving those thresholds away from commonly encountered values yields a much more stable configuration, insensitive to small errors, whose results are of equal or even better value thanks to their stability and repeatability.
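A toy illustration of that knife-edge point (names and thresholds invented for the example): a hard cutoff placed exactly at a commonly occurring value flips its answer under tiny measurement noise, while a hysteresis band around the same cutoff holds its last answer until the input clearly leaves the boundary region.

```python
def hard_threshold(x, cutoff=0.5):
    """Knife-edge decision: flips for arbitrarily small noise near cutoff."""
    return x >= cutoff

class HysteresisThreshold:
    """Same decision, but with a dead band: the state only changes when the
    input clearly crosses out of the band, so boundary noise is absorbed."""
    def __init__(self, cutoff=0.5, band=0.05):
        self.low, self.high = cutoff - band, cutoff + band
        self.state = False

    def update(self, x):
        if x >= self.high:
            self.state = True
        elif x < self.low:
            self.state = False
        return self.state

# A signal hovering at the common value 0.5 with ~1% noise:
readings = [0.50, 0.49, 0.51, 0.50, 0.49, 0.51]
print([hard_threshold(x) for x in readings])   # alternates on every wiggle
h = HysteresisThreshold()
print([h.update(x) for x in readings])         # stays put
```

The hysteresis version trades a little latency at genuine transitions for immunity to chatter at the boundary, which is usually the right trade when the boundary value is common in practice.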

    Is the only reason your application requires perfection that someone set an arbitrary, and unnecessary, rule requiring it? In my experience, this is the case almost 100% of the time, and the only thing binding people to the unnecessary rules is an inability to challenge them.

    Stock market traders may have gotten "erroneous" results from the Intel FPU bug, but it's doubtful that any of them actually lost significant amounts of money because of it.

    --
    🌻🌻 [google.com]