
posted by cmn32480 on Wednesday June 01 2016, @12:31AM   Printer-friendly
from the 2+2!=5 dept.

A trio of researchers has solved a single math problem by using a supercomputer to grind through over a trillion color-combination possibilities, and in the process has generated the largest math proof ever: the text of it is 200 terabytes in size. In their paper uploaded to the preprint server arXiv, Marijn Heule of the University of Texas, Oliver Kullmann of Swansea University and Victor Marek of the University of Kentucky outline the math problem, the means by which a supercomputer was programmed to solve it, and the answer the proof was asked to provide.

The math problem has been named the Boolean Pythagorean Triples problem and was first proposed back in the 1980s by mathematician Ronald Graham. Looking at the Pythagorean formula a² + b² = c², he asked: is it possible to color each non-negative integer either blue or red such that no set of integers a, b and c satisfying the equation is all the same color? He offered a reward of $100 to anyone who could solve the problem.
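
For concreteness, the question can be put in runnable form. The sketch below is purely illustrative (it is not the researchers' code); it checks whether a given red/blue coloring of the integers 1..n leaves some Pythagorean triple entirely one color:

def pythagorean_triples(n):
    # Yield all triples (a, b, c) with a < b < c <= n and a^2 + b^2 = c^2.
    squares = {c * c: c for c in range(1, n + 1)}
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            c = squares.get(a * a + b * b)
            if c is not None:
                yield (a, b, c)

def monochromatic_triple(coloring):
    # coloring[i - 1] holds 'R' or 'B' for the integer i.
    for a, b, c in pythagorean_triples(len(coloring)):
        if coloring[a - 1] == coloring[b - 1] == coloring[c - 1]:
            return (a, b, c)  # a triple all one color: the coloring fails
    return None               # no violation: the coloring is allowable

print(monochromatic_triple("RRRRR"))  # (3, 4, 5): coloring 1..5 all red fails

Graham asked whether a coloring for which this check never fails can be continued through all the integers.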

To solve this problem the researchers applied the Cube-and-Conquer paradigm, a hybrid SAT-solving method for hard problems that combines look-ahead techniques with CDCL (conflict-driven clause-learning) solvers. They also did some of the math on their own before handing it over to the computer, using several techniques to pare the number of choices the supercomputer would have to check down to just one trillion (from roughly 10^2,300). Even so, the 800-processor supercomputer ran for two days to crunch its way through to a solution. After all its work, and after spitting out the huge data file, the computer proof showed that yes, it is possible to color the integers in multiple allowable ways, but only up to 7,824; after that point, the answer becomes no.
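
The underlying SAT encoding is compact even though the search space is astronomical. Below is a minimal sketch of the standard reduction (an outline of the general approach; the paper's actual pipeline adds symmetry breaking and the cube-and-conquer splitting on top of this): one Boolean variable per integer, true for red and false for blue, and for every Pythagorean triple two clauses forbidding the all-blue and all-red assignments.

def encode_dimacs(n):
    # Variable i is true if the integer i is colored red, false if blue.
    squares = {c * c: c for c in range(1, n + 1)}
    clauses = []
    for a in range(1, n + 1):
        for b in range(a + 1, n + 1):
            c = squares.get(a * a + b * b)
            if c is not None:
                clauses.append((a, b, c))     # not all blue
                clauses.append((-a, -b, -c))  # not all red
    header = "p cnf %d %d" % (n, len(clauses))
    return header + "\n" + "\n".join(" ".join(map(str, cl)) + " 0" for cl in clauses)

Any off-the-shelf SAT solver accepts this DIMACS text; per the paper, the instance is satisfiable at n = 7,824 and becomes unsatisfiable at n = 7,825.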

Original Study


Original Submission

  • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @12:42AM

    by Anonymous Coward on Wednesday June 01 2016, @12:42AM (#353280)

    Just like I said it would.

  • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @12:49AM

    by Anonymous Coward on Wednesday June 01 2016, @12:49AM (#353282)

    A proof shows the logic leading to the goal. 200 TB is a sampling of all possible solutions, picking out those that work.

    If that is a proof, then every kid who correctly fills in the blank for 5 + [_] = 6 is a mathematician!

    Or was that an Einstein?

    • (Score: 2, Insightful) by Anonymous Coward on Wednesday June 01 2016, @01:11AM

      by Anonymous Coward on Wednesday June 01 2016, @01:11AM (#353292)

      That just shows you are an uneducated math-illiterate fool.

      Testing all possible combinations to reach a conclusion is a valid proof technique.

      • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @01:29AM

        by Anonymous Coward on Wednesday June 01 2016, @01:29AM (#353298)

        Testing all possible combinations to reach a conclusion is a valid proof technique.

        It is. It just isn't elegant. Also, it opens up that horrible problem of how you know that you, or they, tested all the possible combinations. The main problem here is that I doubt many, or any, would want to check a 200-terabyte data dump. How will we know ...

        • (Score: 3, Insightful) by bzipitidoo on Wednesday June 01 2016, @01:44AM

          by bzipitidoo (4388) on Wednesday June 01 2016, @01:44AM (#353302) Journal

          Not elegant? Get used to it. So-called "computer proofs", in which computers exhaustively check all of a huge but not infinite number of possibilities, have been around for years now.

          We'd all like to see Fermat's proof that wouldn't quite fit in the margins of a single page, love to see mathematical induction reduce infinity to less than a dozen special cases, swoon over a one-page proof that P != NP, and faint with joy and amazement over a hundred-page Theory of Everything. But I'll take massive petabyte-sized proofs any time over no proof at all.

          • (Score: 2) by edIII on Wednesday June 01 2016, @07:49PM

            by edIII (791) on Wednesday June 01 2016, @07:49PM (#353628)

            I might get used to a lack of elegance; that is rather subjective in many cases. However, I can't get around the fact that the proof is about as useful as tits on a bull. Moreover, I don't consider that 200 TB to be proof of anything intrinsically. Where is the logic? Where is the mind-bending math I need to decipher?

            But I'll take massive petabyte-sized proofs any time over no proof at all

            That's not as elegant as providing all of the code and the documented logic that produced this "proof". If all you have is the programmatic output of brute-force checking of permutations, then the output is nearly useless as proof. The lines of code are the proof themselves, not the output. If you wanted to prove the output was "proof", or peer-review it as it were, you would need to write more code again.

            So that 200 TB is meaningless unless accompanied by the code, which then raises the question of why include the entire 200 TB anyway? You could include a GB of it or so to be spot-checked by the program. If the logic of the code is sound and capable of passing multiple peer reviews, then we truly don't need to be hauling around that 200 TB, do we?

            I don't carry around 4 billion numbers with me every day, but just have confidence I can generate them on demand ;)
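
            At least the easy half of that (checking a claimed coloring, rather than the UNSAT listing) is almost trivial to write. A rough sketch, assuming a plain one-color-character-per-integer layout (my assumption, not the paper's format): sample Pythagorean triples at random via Euclid's formula and probe the claimed coloring.

            import random

            def random_triple(n):
                # Euclid's formula: k*(m^2 - q^2, 2mq, m^2 + q^2) reaches every
                # Pythagorean triple; retry until the sample fits below n.
                while True:
                    m = random.randint(2, int(n ** 0.5))
                    q = random.randint(1, m - 1)
                    a, b, c = m * m - q * q, 2 * m * q, m * m + q * q
                    if c <= n:
                        k = random.randint(1, n // c)
                        return (k * a, k * b, k * c)

            def spot_check(coloring, trials=100000):
                # coloring[i - 1] holds the color of the integer i.
                n = len(coloring)
                for _ in range(trials):
                    a, b, c = random_triple(n)
                    if coloring[a - 1] == coloring[b - 1] == coloring[c - 1]:
                        return (a, b, c)  # caught a monochromatic triple
                return None               # the claimed coloring survived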

            --
            Technically, lunchtime is at any moment. It's just a wave function.
        • (Score: 2) by darkfeline on Wednesday June 01 2016, @03:15AM

          by darkfeline (1030) on Wednesday June 01 2016, @03:15AM (#353324) Homepage

          Reality isn't elegant. Best get used to it now.

          --
          Join the SDF Public Access UNIX System today!
          • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @04:37PM

            by Anonymous Coward on Wednesday June 01 2016, @04:37PM (#353530)

            And this proof is far more elegant than 10^2300-ish lines of the form "colouring X_i is invalid because PPT a, b, and c are the same colour".

        • (Score: 4, Insightful) by q.kontinuum on Wednesday June 01 2016, @04:42AM

          by q.kontinuum (532) on Wednesday June 01 2016, @04:42AM (#353341) Journal

          No one has to go through all the data. Go through the source code, prove its correctness and run it yourself. It's a kind of meta-evaluation, and seen that way the proof is much more elegant / appealing.

          --
          Registered IRC nick on chat.soylentnews.org: qkontinuum
          • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @10:11AM

            by Anonymous Coward on Wednesday June 01 2016, @10:11AM (#353401)

            Something something about not being able to run this program in MS-DOS mode...

    • (Score: 4, Insightful) by wisnoskij on Wednesday June 01 2016, @01:58AM

      by wisnoskij (5149) <reversethis-{moc ... ksonsiwnohtanoj}> on Wednesday June 01 2016, @01:58AM (#353307)

      Considering that it is making a prediction about all non-negative integers, aka infinity, this cannot be the output of all possible solutions. There is nothing in Math that says that all provable concepts can be proven elegantly. In fact many things simply cannot be proven, and for every elegant proof I would be willing to bet that there exist many, many more problems where the absolute smallest proof is many millions of lines long or longer. Unfortunately, this is probably where Math is heading, fast. All the worthwhile man-provable things will run out, and Math will be almost exclusively dealing with proofs that cannot ever be checked by hand or understood by beings unable to do many billions of calculations per second. It will simply be a matter of programming computers to solve problems and assuming that if that programming is correct, the output probably is as well.

      • (Score: 2) by mhajicek on Wednesday June 01 2016, @02:55AM

        by mhajicek (51) on Wednesday June 01 2016, @02:55AM (#353319)

        Did you RTFS?

        "it was possible to color the integers in multiple allowable ways—but only up to 7,824—after that point, the answer became no."

        They didn't go to infinity. They had the machine check one trillion cases.

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: -1, Troll) by Anonymous Coward on Wednesday June 01 2016, @04:27AM

          by Anonymous Coward on Wednesday June 01 2016, @04:27AM (#353337)

          So it proves it is 7,824. OK, do it. Prove it was not a bug in the code. I can always prove the presence of a bug, but NOT that the code is bug-free.

          Now read "The Nine Billion Names of God", and real understand the question.

          • (Score: 2) by q.kontinuum on Wednesday June 01 2016, @04:48AM

            by q.kontinuum (532) on Wednesday June 01 2016, @04:48AM (#353342) Journal

            Software correctness can be formally proven. However, this doesn't account for bit-flips due to radiation or faulty hardware...

            --
            Registered IRC nick on chat.soylentnews.org: qkontinuum
            • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @06:24PM

              by Anonymous Coward on Wednesday June 01 2016, @06:24PM (#353575)

              Not really. Software correctness can have strong backing, but it can't be proven bug-free. Even a single assembler op (an atom) can still be broken by a bug... cold solder joint, heating, bit-flip, error in microcode (quarks), even a physical bug.

              So, to the base question of the 200 TB "proof": it is a listing, not a proof.

              • (Score: 2) by q.kontinuum on Thursday June 02 2016, @06:39AM

                by q.kontinuum (532) on Thursday June 02 2016, @06:39AM (#353882) Journal

                A bit-flip is not a bug in the software, it is faulty hardware (as I mentioned before). Correctness of the software: the first article I found [wikipedia.org] is about correctness of the algorithm. To prove correctness of the source code, the next step would be to include the behaviour of the compiler for the specific architecture (word length, rounding for floats if applicable, etc.). It should still be possible to prove correctness.

                It probably gets impractical if you need to prove the correctness of the compiler as well (unless the software was coded in assembler, in which case the correctness of the implementation will be harder to prove). For the execution, I would expect that only statistical statements are possible, as in "the proof is valid within x sigma", due to potential errors. The odds can be improved by redundant hardware and error-checking, and considering that all our lives depend on the reliability of some computer-controlled nuclear missile launchers, I would expect that for all practical purposes the risk of accepting wrong evidence in maths is less than the risk of being wiped out due to a computer error.

                --
                Registered IRC nick on chat.soylentnews.org: qkontinuum
        • (Score: 2) by VLM on Wednesday June 01 2016, @12:36PM

          by VLM (445) on Wednesday June 01 2016, @12:36PM (#353438)

          The problem is there's an infinite set of proofs where the "termination point" or "last time it worked" is a smooth white-noise distribution of numbers between zero and infinity.

          So that's the mental model I have of all of "proof space". This situation is one specific problem with a tiny little dot at 7,824 cases.

          The problem is there's an infinite number of math proofs, which are presumably just as interesting as this one to some weirdo out there, that don't terminate until some ridiculously large Ackermann function value that'll never be computed.

          Essentially you're claiming that the existence of one program that rapidly-ish halts proves the Turing halting problem doesn't exist as a general problem.

    • (Score: 2) by Non Sequor on Wednesday June 01 2016, @03:23AM

      by Non Sequor (1005) on Wednesday June 01 2016, @03:23AM (#353325) Journal

      every kid who correctly fills in the blank for 5 + [_] = 6 is a mathematician!

      Actually, I kind of like this.

      --
      Write your congressman. Tell him he sucks.
  • (Score: 5, Insightful) by julian on Wednesday June 01 2016, @12:51AM

    by julian (6003) Subscriber Badge on Wednesday June 01 2016, @12:51AM (#353284)

    Imagine if I asked you to learn a programming language and a culture of practices where:

    -All the variables were a single letter, pathologically so, and programmers enjoyed using foreign alphabets, glyph variation, and fonts to disambiguate their code from meaningless gibberish.

    -None of the functions were documented independently, and instead the API documentation consisted of circular references to other pieces of similar code, often with the same names overloaded into multiple meanings, often impossible to Google.

    -None of the sample code could be run on a typical computer; in fact, most of it was pseudo-code lacking a clear definition of input and output, or even the environment and domain in which it is typically expected to be run.

    -Expressive code in this language was impossible to write in a typical text editor or word processor, and transcribing code from paper back into something useful was an error-prone process.

    -Papers describing novel ideas in the field were incomprehensible by default, so that the world's most established experts on the subject would need years to decide whether a particularly cutting-edge program is meaningful or in fact insane gibberish.

    -These experts not only see nothing wrong with this picture but, in fact, revel in symbolic obscurity and ego-driven naming, dismissing sincere attempts at fixing these deep-seated problems as meaningless taste arguments or pointless pop-culture entertainment.

    If you were an experienced programmer, you would rightly call me insane. If you were a mathematician, it would be a day ending in y.

    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @01:15AM

      by Anonymous Coward on Wednesday June 01 2016, @01:15AM (#353293)

      Yeah, they transformed a math problem into a software verification problem.

    • (Score: 4, Funny) by Marand on Wednesday June 01 2016, @01:57AM

      by Marand (1081) on Wednesday June 01 2016, @01:57AM (#353306) Journal

      Sounds like you just asked us to learn Haskell, actually. :)

    • (Score: 2) by fleg on Wednesday June 01 2016, @02:47AM

      by fleg (128) Subscriber Badge on Wednesday June 01 2016, @02:47AM (#353316)

      nice work!

    • (Score: 1) by TheSage on Wednesday June 01 2016, @07:16AM

      by TheSage (133) on Wednesday June 01 2016, @07:16AM (#353360) Journal

      You sound quite bitter.

      You give a nice example of judging one field of science/technology by the criteria of another field. The result is not surprising. Rest assured there are good reasons why mathematicians do things the way they do them; for example, a big one you have missed is that a mathematical idea (i.e., the content) is much more important than the way it is formulated (i.e., the way it is written down). Can some aspects of how we do mathematics be improved? Of course they can. But trying to make all of mathematics into a programming language will fail for, oh so many reasons.

      Having said that, it would be an interesting program to pursue; it could provide a new perspective on some fields.

      • (Score: 1, Insightful) by Anonymous Coward on Wednesday June 01 2016, @07:26AM

        by Anonymous Coward on Wednesday June 01 2016, @07:26AM (#353366)

        But trying to make all of mathematics into a programming language will fail for, oh so many reasons.

        The great mathematician and computer scientist Alan Turing proved this statement wrong with a mathematical proof that is also a computer.

        • (Score: 1) by TheSage on Wednesday June 01 2016, @08:58AM

          by TheSage (133) on Wednesday June 01 2016, @08:58AM (#353380) Journal

          I am not talking here about the theoretical possibility of transforming mathematics into programs. Obviously, this can be done (with the possible exception of some highly meta stuff, depending on what you call mathematics). As should be clear from context, I was talking about the practicality of such an approach. It would be like trying to do biology using only the language of physics (or vice versa). Theoretically possible, but useful only in a very restricted set of problems.

      • (Score: 2) by VLM on Wednesday June 01 2016, @12:25PM

        by VLM (445) on Wednesday June 01 2016, @12:25PM (#353432)

        But trying to make all of mathematics into a programming language will fail for, oh so many reasons.

        Start with one? Other than the extremely obvious, known forklift-upgrade issues of "it's gonna be expensive and take a while and be semi-incompatible with the legacy system in funny ways".

        It's not a language syntax problem anyway. It's architectural failings.

        Given a zillion lines of code written in INTERCAL or, even worse, COBOL, at some point you can't move forward and you can't translate into Clojure or something one line at a time, so you have to forklift-upgrade the whole thing and re-implement the whole system in something more powerful.

        People dislike Coq in the same way they dislike R, but they use it anyway. Some unit tests, some abstraction, a couple billion lines of code, and we'll have something going on. Super-Coq-2.0 sounds like a pr0n movie but is probably what's needed.

        I'd stand by my theory that, much like in programming, eventually there's going to be some problem that can only be solved by a forklift upgrade, top to bottom, of "math" as a technology. There'll eventually exist some attractive goal (a physics theory of everything, some cryptographic problem, Star Trek-style warp drive, something), and it's only going to be solvable with a forklift upgrade of "math" as a pre-req.

        Something that is insightful and worth considering: the new system will be more powerful, capable of really cool things, and will fit existing knowledge really well, but much like "R" today, it'll still be a steaming pile of WTF. Better doesn't mean easier to learn or more rationally designed, especially if the explicit definition of "better" was to be faster or more capable. Given that R is about the limit of BS that people will tolerate, there might be human limits preventing a sufficiently strong system. Then you'd need an intermediate language, much like AMD64 machine language is a PITA to program effectively in, but compilers take care of that. I wonder how many layers of DSLs and compilers a "math" technology would require to make it human-compatible.

    • (Score: 2) by VLM on Wednesday June 01 2016, @12:11PM

      by VLM (445) on Wednesday June 01 2016, @12:11PM (#353428)

      You left out that the type system, although hideously documented as per above, isn't all that bad.

      Part of the second-to-last point is that there is no debugger.

  • (Score: 2) by Non Sequor on Wednesday June 01 2016, @01:51AM

    by Non Sequor (1005) on Wednesday June 01 2016, @01:51AM (#353304) Journal

    Almost all of the real numbers can't be described by any finite program. These numbers exist by implication of the procedure used to complete the rational numbers under limits, combined with your basic arithmetic operations, but they're effectively unnameable. In practice, we can only use computable numbers. These are numbers that can be generated (up to a parameterized precision) by a finite program using a tractable algorithm. These are the numbers that can conceivably be given names. But again, if you impose resource constraints on the programs (code size less than x, execution time less than y), you drop down to an even smaller set of numbers we can name using bounded resources. And yet again, if you look at the resources that will be available over all of human existence and ask how much can be invested in computation, then you drop down to an even smaller set of numbers that will ever be named by mankind.
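
    To make "generated up to a parameterized precision" concrete, a throwaway sketch: sqrt(2) is computable because a finite program will emit as many of its digits as you ask for.

    from math import isqrt

    def sqrt2_digits(digits):
        # floor(sqrt(2) * 10^digits): the leading 1 followed by `digits` more digits.
        return isqrt(2 * 10 ** (2 * digits))

    print(sqrt2_digits(30))  # 1414213562373095048801688724209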

    A specific example of this progression is GUIDs. 128 fair coin flips gives you a sequence of bits that has a negligible chance of being generated twice under current usage levels, and it's not that hard to expand the ID space to make it implausible that the same sequence will be generated twice in the history of the universe. We have linguistic constructions to give names to what we regard as very large integers, or very large spans of decimal digits, but most of these names will never be uttered by human lips.
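
    The arithmetic behind the GUID claim fits in a few lines (the trillion-ID figure is an illustrative assumption, not a usage statistic):

    import secrets

    guid = secrets.randbits(128)  # 128 fair coin flips
    print(format(guid, "032x"))   # a fresh 128-bit name

    # Birthday bound: with k random 128-bit IDs, the chance of any repeat
    # is roughly k^2 / 2^129.
    k = 10 ** 12                  # suppose a trillion IDs ever minted
    print(k * k / 2 ** 129)       # ~1.5e-15: negligible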

    Now, against all this, I have to wonder whether a 200 TB proof is large, or merely a quaint human artifact. One might imagine that, if human intuition can lead us to conjectures, then perhaps there is a bound on the required proof size, since generally a conjecture implies the existence of a proof (either for or against) with some steps not fully filled in. Or maybe there isn't a relationship between the things that we can speculate on and the resources needed to resolve the speculation.

    --
    Write your congressman. Tell him he sucks.
    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @03:29AM

      by Anonymous Coward on Wednesday June 01 2016, @03:29AM (#353327)

      Almost all of the real numbers can't be described by any finite program. These numbers exist by implication of the procedure used to take the algebraic closure of the rational numbers combined with your basic arithmetic operations, but they're effectively unnameable. In practice, we can only use computable numbers. These are numbers that can be generated (up to a parameterized precision) by a finite program using a tractable algorithm. These are the numbers that can be conceivably given names.

      Not quite: there are real numbers that can be named and reasoned about which are not computable. Such numbers can be constructed using, for example, known non-computable functions such as the busy beaver function. These numbers are called definable. But it is still the case that almost all real numbers are not definable, as there are uncountably many reals and only a countable subset of them is definable.

      • (Score: 2) by Non Sequor on Wednesday June 01 2016, @03:47AM

        by Non Sequor (1005) on Wednesday June 01 2016, @03:47AM (#353332) Journal

        I know, and I'd argue that the actual concrete values of things which are definable but not computable do not figure significantly in their use. These are objects with no realizable construction.

        --
        Write your congressman. Tell him he sucks.
    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @09:48AM

      by Anonymous Coward on Wednesday June 01 2016, @09:48AM (#353393)

      (a + b)(c - d) = e
      e = ac + bc - ad - bd

      I wrote a BASIC program to do that kind of algebraic manipulation on a VIC-20.

      Tell me again how supercomputers can't deal with real numbers?

  • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @02:29AM

    by Anonymous Coward on Wednesday June 01 2016, @02:29AM (#353312)

    But I suppose Prof. Mazières could think up ways to scale this one [stanford.edu] up to be, let's say, 250 TB.

  • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @03:06AM

    by Anonymous Coward on Wednesday June 01 2016, @03:06AM (#353321)

    After all, it did take 30 years and a supercomputer to disprove it.

    But 7825 (= 5 * 5 * 313) is a pretty low ceiling for number theory. Better luck next time, Prof.

    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @03:33AM

      by Anonymous Coward on Wednesday June 01 2016, @03:33AM (#353328)

      7825 (= 5 * 5 * 313) is a pretty low ceiling for number theory. Better luck next time, Prof.

      It's OK, Ronald Graham already has Graham's Number [wikipedia.org] named after him.

  • (Score: 2) by arslan on Wednesday June 01 2016, @03:26AM

    by arslan (3462) on Wednesday June 01 2016, @03:26AM (#353326)

    If folks were to do this as their homework submission... the lecturer would be stuffed!

  • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @06:31AM

    by Anonymous Coward on Wednesday June 01 2016, @06:31AM (#353351)

    I mean, A HUNDRED BUX is some serious money! Especially if it's 1980s dollars.

  • (Score: 2) by Aiwendil on Wednesday June 01 2016, @08:27AM

    by Aiwendil (531) on Wednesday June 01 2016, @08:27AM (#353371) Journal

    I understand that they mean "divide the naturals into two groups and try to pick from both groups" (why describe it as colours, btw?), but what I don't understand is how they divide them into the two groups.

    (It can't be a unique grouping per triplet, since in that case you'd just put whatever-is-a in another group than whatever-is-b-or-c. And trying every possible formula for grouping would have you end up with only every possible known/expressible formula.)

    • (Score: 0) by Anonymous Coward on Wednesday June 01 2016, @10:50AM

      by Anonymous Coward on Wednesday June 01 2016, @10:50AM (#353409)

      Blue and red are used because true and false have specialized meaning in the context of proof. As programmers we have tended to blur the line and assume our true/false dichotomy is the only boolean. We squish all booleans into our true/false world, and it is not always clear how they map onto the thing we are modeling. In this case it makes no sense to talk about a and b being false and c being true. We're not doing boolean algebra. We are doing normal algebra, and seeing if the property holds true.

      • (Score: 2) by Aiwendil on Wednesday June 01 2016, @12:08PM

        by Aiwendil (531) on Wednesday June 01 2016, @12:08PM (#353427) Journal

        Still, why colours? Why not just call them "a" and "b" instead? Or "Group 1" and "Group 2"? Or something similar?

        And don't get me started on coders only thinking in true/false. I often run into that and end up in long (albeit fun) arguments about much of my logic, since I tend to prefer ternary (true/false/undetermined, or at work "current"/"no current"/"not reliable"); it makes a lot more sense when you can't set the init states yourself, or have another subsystem that checks for errors.

        • (Score: 3, Funny) by tibman on Wednesday June 01 2016, @05:17PM

          by tibman (134) Subscriber Badge on Wednesday June 01 2016, @05:17PM (#353545)

          You'd like JavaScript then. For booleans we have true, false, truthy, falsy, undefined, and null : )

          --
          SN won't survive on lurkers alone. Write comments.