
SoylentNews is people

posted by martyb on Sunday August 07 2016, @11:19PM   Printer-friendly
from the close-enough dept.

If you studied computer science, you most likely encountered the Wagner-Fischer algorithm for edit distance in whatever course introduced 2D arrays. The edit distance between two strings of symbols is the minimum number of edits (insertions, deletions, and substitutions) necessary to turn one string into the other. The algorithm builds a huge table with symbols of one string labeling the rows and symbols from the other labeling the columns. Each entry in the table is the number of edits required to turn the prefix ending at the corresponding column into the prefix ending at the corresponding row.
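The table-filling idea described above can be sketched in a few lines of Python (a minimal illustration, not from the paper):

```python
def edit_distance(a: str, b: str) -> int:
    """Wagner-Fischer: O(len(a) * len(b)) time and space."""
    m, n = len(a), len(b)
    # dist[i][j] = edits needed to turn a[:i] into b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # delete all i symbols of a[:i]
    for j in range(n + 1):
        dist[0][j] = j          # insert all j symbols of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

print(edit_distance("kitten", "sitting"))  # → 3
```

The two nested loops over the m-by-n table are exactly the quadratic running time the new result says can't be beaten (barring a surprise about NP-complete problems).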

For years people have been trying to improve on this basic algorithm. And failing. Now, Phys.org is reporting on a paper to appear at the ACM Symposium on Theory of Computing (STOC) next week which shows that Wagner-Fischer is as good as it's likely going to get. This sounds like an NP thing... and it is! If it were possible to solve the edit-distance problem in less than quadratic time (i.e., faster than filling in the huge table), then it would be possible to solve the satisfiability problem in less than exponential time.

From Phys.org:

Theoretical computer science is particularly concerned with a class of problems known as NP-complete. Most researchers believe that NP-complete problems take exponential time to solve, but no one's been able to prove it. In their STOC paper, [Piotr] Indyk and his student Arturs Backurs demonstrate that if it's possible to solve the edit-distance problem in less-than-quadratic time, then it's possible to solve an NP-complete problem in less-than-exponential time. Most researchers in the computational-complexity community will take that as strong evidence that no subquadratic solution to the edit-distance problem exists.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by opinionated_science on Sunday August 07 2016, @11:55PM

    by opinionated_science (4031) on Sunday August 07 2016, @11:55PM (#385097)

In the real world there are tradeoffs of memory for energy (CPU speed).

Theoretically, all problems can be mapped to classes of other problems, in particular to phase spaces that indicate the dimensionality of the problem.

    The practical solutions can *sometimes* be faster, by exploiting empirical locality.

    Anyone actually working on this? I've only done it for DNA/RNA/Protein sequences...

  • (Score: 3, Informative) by RamiK on Monday August 08 2016, @12:28AM

    by RamiK (1813) on Monday August 08 2016, @12:28AM (#385106)

Texas Hold 'em got brute-forced a little while ago through memory rather than compute (http://ai.cs.unibas.ch/_files/teaching/fs15/ki/material/ki02-poker.pdf). If you look up papers citing that solution, you'll find all the current research that tries to formalize the compute/memory trade-offs.

    --
    compiling...
  • (Score: 2) by DeathMonkey on Monday August 08 2016, @05:48PM

    by DeathMonkey (1380) on Monday August 08 2016, @05:48PM (#385385) Journal

    in the real world there are tradeoffs of memory for energy(cpu speed).
     
    With a big enough hash-table all problems can execute in O(1) time.
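The memory-for-time tradeoff the thread jokes about shows up for real in memoization: caching the recursive edit-distance recurrence turns repeated subproblems into O(1) table lookups (a minimal sketch, not anyone's production code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: trading memory for speed
def edit_distance(a: str, b: str) -> int:
    """Top-down edit distance; the cache plays the role of the DP table."""
    if not a:
        return len(b)  # insert all of b
    if not b:
        return len(a)  # delete all of a
    if a[-1] == b[-1]:
        return edit_distance(a[:-1], b[:-1])  # last symbols match, no edit
    return 1 + min(edit_distance(a[:-1], b),       # delete from a
                   edit_distance(a, b[:-1]),       # insert into a
                   edit_distance(a[:-1], b[:-1]))  # substitute
```

Without the cache this recursion is exponential; with it, each (prefix, prefix) pair is computed once, so it does the same quadratic work as the table-filling version. The lower bound is on total work, not lookups, so the hash table doesn't actually beat it.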