posted by martyb on Tuesday December 01 2020, @04:04AM
from the Solved-for-suitably-small-value-of-"solved" dept.

DeepMind's AI is Claimed to Make Gigantic Leap in Solving Protein Structures

DeepMind's program, called AlphaFold, outperformed around 100 other teams in a biennial protein-structure prediction challenge called CASP, short for Critical Assessment of Structure Prediction. The results were announced on 30 November, at the start of the conference — held virtually this year — that takes stock of the exercise.

John Moult of the University of Maryland in College Park, who co-founded CASP, says: "In some sense the problem is solved."

https://www.nature.com/articles/d41586-020-03348-4

Many Caveats: This seems unusually breathless for Nature, and this is a very hard problem that has been worked on for decades. Having worked in a group studying the protein folding problem back in the 80s, I've learned to be pretty skeptical of miracles in this field over the years. That said, if it is accurate that this works well enough to provide clues to x-ray diffraction determination of structure in hard cases, that alone makes it very worthwhile. If it works well in truly de novo cases, without other information like x-ray diffraction or nuclear magnetic resonance data, then it would be just as revolutionary as the article says.

Folding @ Alpha

Google's DeepMind claims to have created an artificially intelligent program called "AlphaFold" that is able to solve protein folding problems in a matter of days.

If it works, the solution has come "decades" before it was expected, according to experts, and could have transformative effects on the way diseases are treated.

There are around 200 million known proteins at present, but only a fraction have had their structures determined well enough to fully understand what they do and how they work. Even those successes often rely on expensive and time-intensive techniques, with scientists spending years working out each structure and relying on equipment that can cost many millions of dollars.

DeepMind entered AlphaFold in the 14th Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP14), an assessment run every two years by a group of scientists who have been looking into the matter since 1994.

Go, Chess, COVID...

Also at Science Magazine and TechCrunch.


Original Submission #1 Original Submission #2

  • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @04:24AM (#1082689) (1 child)

    Hooray for AI and this solution. Seriously, congrats to the developers.

    However, given my generally negative outlook in this terrible year, my second thought is that these tools will soon propagate to all kinds of experimenters...and take humanity one step closer to the gray goo end of life as we know it.

    https://en.wikipedia.org/wiki/Gray_goo [wikipedia.org]

    • (Score: 2) by ikanreed (3164) Subscriber Badge on Tuesday December 01 2020, @06:56PM (#1082908) Journal

      I wouldn't worry about that. Being able to anticipate morphology has absolutely no bearing on self-replication.

      It will help genomics a lot, a field that has mostly been built on a house of cards of potentially spurious correlations thus far. Now we can start to anticipate and interpret the actual biological implications of SNVs and SNPs in a properly reductive sense.

  • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @04:46AM (#1082698) (6 children)

    It's a bit stupid talking about "solving" things with AI. The solution is pretty much 42 in all cases. Where/what/how? Shut up, that's not what you asked. It's 42.

    Proteins are the ultimate in evolutionary processes. Trillions of combinations that slightly differ and behave differently in response to slight differences in the environment. It's possible there is just no simplifying principle like you see in physics and chemistry and occasionally in other things. I guess it invites the question: what does "solve" mean?

    We certainly aren't getting an understanding but more like a region in a very large space where some solutions (the few we know) tend to lie. It seems more like having a computer parse a book and assign a Dewey number to it based on books it has in its collection. It doesn't explain any part of why a Romance novel is distinct from a math textbook.

    • (Score: 5, Informative) by MIRV888 (11376) on Tuesday December 01 2020, @07:11AM (#1082744) (1 child)

      You are claiming that some things are simply unfathomable because they are complicated. That has not proven to be the case with a whole lot of stuff we consider commonplace. Proteins are no different. I will take science 7 days a week and twice on Sunday.

      • (Score: 3, Interesting) by JoeMerchant (3937) on Tuesday December 01 2020, @12:50PM (#1082795)

        Go was thought to be "unsolvably complex" until AlphaGo went and surpassed literally millennia of lifelong human study. AlphaGo may not have "solved" Go, but it is better at predicting Go outcomes and optimal next moves than any human mind currently living. What sets AlphaGo apart from exhaustive search algorithms on NP-hard problem spaces is that it finds near-optimal paths orders of magnitude faster than exhaustive search.

        AlphaFold may not have optimally solved the protein folding problem, but if it has made a couple of orders of magnitude of progress in speed over the current parallel-search approaches, that is both dramatic in terms of what can now be accomplished in protein folding work and unsurprising given the Alpha approach's past performance on similar "hard" problems.
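
        To make the "near-optimal paths vs. exhaustive search" point concrete, here is a minimal Python sketch on a deliberately toy landscape (it has nothing to do with AlphaFold's actual internals, and the scoring function is made up): a greedy local search reaches the same optimum as brute force while evaluating only a tiny fraction of the configurations. Real folding landscapes are far more rugged, which is exactly where a learned heuristic earns its keep.

import itertools
import random

random.seed(0)
N = 16                                   # toy "protein" of 16 binary choices
weights = [random.uniform(-1, 1) for _ in range(N)]

def score(bits):
    # Stand-in objective; a real energy function would be far more rugged.
    return sum(w * b for w, b in zip(weights, bits))

# Exhaustive search: 2**16 = 65,536 evaluations.
best_exhaustive = max(itertools.product((0, 1), repeat=N), key=score)

# Greedy local search: a few dozen evaluations.
bits = [random.randint(0, 1) for _ in range(N)]
for _ in range(3):                       # a few improvement passes
    for i in range(N):
        flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        if score(flipped) > score(bits):
            bits = flipped

print(score(best_exhaustive), score(bits))   # same optimum on this easy landscape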

        --
        🌻🌻 [google.com]
    • (Score: 3, Interesting) by Anonymous Coward on Tuesday December 01 2020, @07:26AM (#1082752) (3 children)

      You sound cynical. Don't be. Neural networks are really good at solving problems that have a well-defined end-game. The problem most people have is understanding this: neural networks are shit at the random issue of the day, and you can't just "train it" on randomness, as you'll just get randomness in response. This is also why neural networks seem exceptionally well suited for things like Go or Chess: well-defined problems with a well-defined end-game. Finding the means to those ends is what neural networks are good at.

      If you want bad solutions, present the opposite: what ends will you get with these tools? A useless proposition. Neural networks don't know things like "purpose" or "ethics". The reality is WE don't know what our "purpose" is, and we struggle with it every day. How many churches, mosques and temples proclaim that they preach the purpose while telling you nothing? We claim ethics, and then our actions are the opposite because ethics is inconvenient.

      We are a neural network that tries to cope with the real world. We come up against problems that we are not accustomed to. So now we have this artificial neural network to help with finding the means to an end. "Find a face in a crowd", "find a way to win a game", "find folding patterns for proteins" all seem like good candidates once we define the end-game. If we can't define an end-game (e.g. no proper training input, no well-defined end solutions), then AI is not going to help with that ;)
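
      A minimal sketch of that point, using a small scikit-learn classifier on synthetic data (purely illustrative; the task and numbers are invented): when the mapping from input to label is well defined the network generalizes, and when the labels are random there is simply nothing to learn.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))

y_defined = (X.sum(axis=1) > 0).astype(int)        # well-defined end-game
y_random = rng.integers(0, 2, size=len(X))         # "randomness in, randomness out"

# Prints high accuracy for the defined target, roughly coin-flip for random labels.
for name, y in [("well-defined target", y_defined), ("random labels", y_random)]:
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X[:1000], y[:1000])                    # train on the first half
    acc = clf.score(X[1000:], y[1000:])            # test on unseen data
    print(f"{name}: held-out accuracy {acc:.2f}")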

      • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @09:03AM (#1082767)

        There's no "means". It just goes straight to "end".

      • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @03:31PM (#1082830) (1 child)

        AI is a qualified guessing machine.

        • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @03:42PM (#1082833)

          A human is an unqualified one.

  • (Score: 2) by dltaylor (4693) on Tuesday December 01 2020, @08:15AM (#1082760) (7 children)

    This is NOT "solved" (useful as the hints might be) until it is 100%, or you can readily identify which configurations it got wrong.

    It's like the Intel FPU errors. Since the user has no way of knowing which calculations were performed incorrectly, it must be assumed that ALL of them are untrustworthy.

    I'm not saying that the research is not useful, but until there's a second check, using a different method, there can be many person-years and dump truck loads of money wasted developing therapies against a misfolded model.

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday December 01 2020, @09:19AM (#1082770)

      I'm not saying that the research is not useful, but until there's a second check, using a different method, there can be many person-years and dump truck loads of money wasted developing therapies against a misfolded model.

      As with everything, you have to verify the model results with an experiment. The point here is that you can filter the vast majority of cases away from the wet-lab work. And it's a tool for a job: it works for almost 100% of the cases, which sounds like a major improvement over what was available before.

      To put it another way, AI didn't solve Go either, but it can probably win 100% of the time now. Results are what matter here, not actual rigor.
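
      As a purely hypothetical sketch of that triage (the field names and cutoff below are invented, not DeepMind's API, although AlphaFold does report a per-residue confidence score): accept the high-confidence predictions and send only the low-confidence ones to expensive wet-lab verification.

# Hypothetical triage of predicted structures by model confidence.
predictions = [
    {"target": "T1024", "confidence": 0.92},   # made-up CASP-style target names
    {"target": "T1030", "confidence": 0.41},
    {"target": "T1038", "confidence": 0.88},
]

CUTOFF = 0.7   # arbitrary threshold for this illustration

accepted = [p for p in predictions if p["confidence"] >= CUTOFF]
needs_wet_lab = [p for p in predictions if p["confidence"] < CUTOFF]

print(f"{len(accepted)} accepted, {len(needs_wet_lab)} queued for experimental verification")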

    • (Score: 4, Insightful) by Hartree (195) on Tuesday December 01 2020, @09:31AM (#1082774) (4 children)

      In some sense, but it's not the only method with that problem. And, in fact, it gives an indication of how likely its solution is to be correct.

      It may seem surprising, but that's also the case with our other methods of structure determination. In x-ray diffraction data from crystals of proteins the route to the solution is not unique. There's some of the information needed that we don't have. (That's called the phase problem.) So, what we do is make a good guess as to that information and see if the result conforms to the data that we do have. That's why x-ray crystallography is still in part an art and not simple computation. When we solve a crystal structure, we also can compute an indication of how "good" a structure is likely to be.

      NMR has a similar problem: it can give distances between some of the atoms in a molecule, but you have to back-solve the relative locations from those distances, and in something like a protein that's a bit complicated (to say the least) and not always unique either.
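
      For a feel of that back-solve in the idealized case, here is a minimal Python sketch (illustrative only, not how NMR structure determination software actually works): classical multidimensional scaling recovers 3D coordinates from a complete, exact matrix of pairwise distances, up to rotation and reflection. Real NMR gives only a sparse, noisy subset of distances, which is where the complications and non-uniqueness come in.

import numpy as np

rng = np.random.default_rng(0)
true_coords = rng.normal(size=(10, 3))                 # ten "atoms" in 3D
D = np.linalg.norm(true_coords[:, None] - true_coords[None, :], axis=-1)

# Classical multidimensional scaling on the full distance matrix.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                    # centering matrix
B = -0.5 * J @ (D ** 2) @ J                            # Gram matrix of centered coordinates
vals, vecs = np.linalg.eigh(B)
top3 = np.argsort(vals)[::-1][:3]                      # three largest eigenvalues
recovered = vecs[:, top3] * np.sqrt(vals[top3])        # coordinates, up to rotation/reflection

# The pairwise distances are reproduced even though the absolute orientation is not.
D_rec = np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1)
print(np.allclose(D, D_rec))                           # True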

      The hot new spiffy method for structure determination is cryo electron microscopy and it shares similar problems.

      One thing that's been done is to use data from one method to restrict what solutions are possible in one of the other methods.

      Even with all that, a lot of incorrect structures get published. A prof I know uses his spare time (Ha!) to check the solutions of crystal structures others have published and finds quite a number that have problems ranging from high uncertainty to just flat out being wrong. That's a little disconcerting, but then again, that's how science works.

      • (Score: 0) by Anonymous Coward on Tuesday December 01 2020, @12:15PM (#1082788)

        What is the point though? It's like worrying about getting the knob for the radio back on while the car is stuck on accelerate and running out of oil. All it's ever used for is details that miss the big picture.

      • (Score: 1, Funny) by Anonymous Coward on Tuesday December 01 2020, @04:18PM (#1082846)

        Please name and shame this Professor who is not writing grants and appears to be actually doing something akin to work. This is unacceptable.

      • (Score: 1) by Fock (5105) on Tuesday December 01 2020, @11:56PM (#1083032) (1 child)

        How do I become a moderator? Can I get some mod points, please? Someone needs to moderate Hartree up here; his is the only comment of consequence so far. Thanks, Hartree, keep it up.

    • (Score: 3, Interesting) by JoeMerchant (3937) on Tuesday December 01 2020, @12:58PM (#1082797)

      Since the user has no way of knowing which calculations were performed incorrectly, it must be assumed that ALL of them are untrustworthy.

      That all depends on your rules of engagement.

      Does your problem space require mathematical perfection? Surprisingly few real-world applications do.

      Is your algorithm wildly sensitive to commonly encountered boundary conditions? Many are, and have no good reason to be. A little discrete analysis of your decision trees can reveal the knife-edge conditions, which can then be moved away from the commonly encountered values into a much more stable configuration that is insensitive to small errors and gives results of equal or even better value (thanks to its stability and repeatability).
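
      A trivial Python sketch of such a knife-edge (the numbers are contrived): a hard threshold placed exactly on a value the inputs commonly land near flips its answer on the tiniest floating-point error, while a threshold nudged away from that value is stable.

# Ten payments of 0.1 "should" total exactly 1.0, but floating point disagrees.
total = sum([0.1] * 10)                 # 0.9999999999999999

print(total >= 1.0)                     # False: knife-edge comparison gives a surprising answer
print(total >= 1.0 - 1e-9)              # True: threshold moved off the common value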

      Is the only reason your application requires perfection because someone set an arbitrary, and unnecessary, rule requiring it? In my experience, this is the case almost 100% of the time, and the only thing binding people to the unnecessary rules is an inability to challenge them.

      Stock market traders may have gotten "erroneous" results from the Intel FPU bug, but it's doubtful that any of them actually lost significant amounts of money because of it.

      --
      🌻🌻 [google.com]