
posted by cmn32480 on Wednesday June 01 2016, @12:31AM   Printer-friendly
from the 2+2!=5 dept.

A trio of researchers has solved a single math problem by using a supercomputer to grind through over a trillion color combination possibilities, and in the process has generated the largest math proof ever—the text of it is 200 terabytes in size. In their paper uploaded to the preprint server arXiv, Marijn Heule with the University of Texas, Oliver Kullmann with Swansea University and Victor Marek with the University of Kentucky outline the math problem, the means by which a supercomputer was programmed to solve it, and the answer which the proof was asked to provide.

The math problem has been named the boolean Pythagorean Triples problem and was first proposed back in the 1980s by mathematician Ronald Graham. Looking at the Pythagorean formula a² + b² = c², he asked: is it possible to color each positive integer either blue or red such that no Pythagorean triple a, b, c consists of integers that are all the same color? He offered a reward of $100 to anyone who could solve the problem.
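To make the condition concrete, here is a minimal sketch (not from the paper) of what counts as a valid coloring: a map from each integer 1..n to "red" or "blue" in which no Pythagorean triple is monochromatic.

```python
# Illustrative sketch only, not the researchers' code.
def is_valid_coloring(color):
    """color maps each integer 1..n to 'red' or 'blue'; valid iff no
    Pythagorean triple a^2 + b^2 = c^2 (with c <= n) is monochromatic."""
    n = max(color)
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c2 = a * a + b * b
            c = int(round(c2 ** 0.5))
            if c <= n and c * c == c2:
                if color[a] == color[b] == color[c]:
                    return False
    return True

# The only triple within 1..5 is (3, 4, 5): all red is invalid...
assert not is_valid_coloring({i: "red" for i in range(1, 6)})
# ...but recoloring 5 blue makes the coloring valid.
assert is_valid_coloring({1: "red", 2: "red", 3: "red", 4: "red", 5: "blue"})
```

The researchers' result says such colorings exist for every n up to 7,824, and for no n beyond that.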

To solve this problem the researchers applied the Cube-and-Conquer paradigm, a hybrid SAT-solving method for hard problems that combines look-ahead techniques with CDCL (conflict-driven clause learning) solvers. They also did some of the math themselves before handing the problem over to the computer, using several techniques to pare the number of choices the supercomputer would have to check down to just one trillion (from 10^2,300). Even so, the 800-processor supercomputer ran for two days to crunch its way through to a solution. After all its work, and after spitting out the huge data file, the computer proof showed that yes, it is possible to color the integers in multiple allowable ways, but only up to 7,824; after that point, the answer becomes no.
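At a toy scale, the search the solvers perform can be sketched as plain backtracking: try a color for each integer in turn and undo the choice whenever it completes a monochromatic triple. This is a hedged illustration of the underlying constraint problem, not the authors' Cube-and-Conquer implementation, which encodes it as SAT clauses instead.

```python
# Toy backtracking search, not the paper's SAT encoding.
def pythagorean_triples(n):
    """All triples (a, b, c) with a^2 + b^2 = c^2 and c <= n."""
    squares = {i * i: i for i in range(1, n + 1)}
    triples = []
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = squares.get(a * a + b * b)
            if c is not None:
                triples.append((a, b, c))
    return triples

def colorable(n):
    """Return a red/blue coloring of 1..n with no monochromatic
    Pythagorean triple, or None if no such coloring exists."""
    triples = pythagorean_triples(n)
    color = {}

    def ok(k):
        # Only triples whose largest element is k can be newly violated,
        # since a, b < c for every Pythagorean triple.
        for a, b, c in triples:
            if c == k and color[a] == color[b] == color[c]:
                return False
        return True

    def solve(k):
        if k > n:
            return True
        for col in ("red", "blue"):
            color[k] = col
            if ok(k) and solve(k + 1):
                return True
        del color[k]
        return False

    return dict(color) if solve(1) else None

coloring = colorable(100)
assert coloring is not None  # valid colorings exist well below 7,825
```

Exhaustive search like this blows up long before n = 7,825, which is why the researchers needed both the mathematical pruning and two days on 800 processors.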

Original Study


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: -1, Troll) by Anonymous Coward on Wednesday June 01 2016, @04:27AM

    by Anonymous Coward on Wednesday June 01 2016, @04:27AM (#353337)

    So it proves it is 7,824? OK, then do it: prove it was not a bug in the code. You can always prove the presence of a bug, but never that code is bug-free.

    Now read "The Nine Billion Names of God" and really understand the question.

  • (Score: 2) by q.kontinuum on Wednesday June 01 2016, @04:48AM

    by q.kontinuum (532) on Wednesday June 01 2016, @04:48AM (#353342) Journal

    Software correctness can be formally proven. However, this doesn't account for bit-flips due to radiation or faulty hardware...

    --
    Registered IRC nick on chat.soylentnews.org: qkontinuum
    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @06:24PM

      by Anonymous Coward on Wednesday June 01 2016, @06:24PM (#353575)

      Not really. Software correctness can have strong backing, but the software cannot be proven bug-free. Even a single assembler op (an atom) can still be broken by a bug: a cold solder joint, heating, a bit-flip, an error in microcode (the quarks), even a physical bug.

      So, back to the base question about the 200 TB "proof": it is a listing, not a proof.

      • (Score: 2) by q.kontinuum on Thursday June 02 2016, @06:39AM

        by q.kontinuum (532) on Thursday June 02 2016, @06:39AM (#353882) Journal

        A bit-flip is not a bug in the software; it is faulty hardware (as I mentioned before). On correctness of the software: the first article I found [wikipedia.org] is about correctness of the algorithm. To prove correctness of the source code, the next step would be to include the behaviour of the compiler for the specific architecture (word length, rounding for floats if applicable, etc.). It should still be possible to prove correctness.

        It probably gets impractical if you need to prove the correctness of the compiler as well (unless the software was coded in assembler, in which case the correctness of the implementation itself will be harder to prove). For the execution, I would expect that only statistical statements are possible, as in "the proof is valid within x sigma", due to potential hardware faults. The odds can be improved with redundant hardware and error-checking, and considering that all our lives depend on the reliability of some computer-controlled nuclear missile launchers, I would expect that for all practical purposes the risk of accepting wrong evidence in maths is lower than the risk of being wiped out by a computer error.

        --
        Registered IRC nick on chat.soylentnews.org: qkontinuum