
SoylentNews is people

posted by CoolHand on Wednesday October 04 2017, @03:35PM
from the math-rebels dept.

QuantaMagazine.org has a long and interesting article on how computers are redefining the foundations of mathematics.

On a recent train trip from Lyon to Paris, Vladimir Voevodsky sat next to Steve Awodey and tried to convince him to change the way he does mathematics.

Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.

[...] The idea of doing mathematics in a program like Coq has a long history. The appeal is simple: Rather than relying on fallible human beings to check proofs, you can turn the job over to computers, which can tell whether a proof is correct with complete certainty. Despite this advantage, computer proof assistants haven't been widely adopted in mainstream mathematics. This is partly because translating everyday math into terms a computer can understand is cumbersome and, in the eyes of many mathematicians, not worth the effort.
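The article's tool is Coq; purely as an illustration of what a machine-checked proof looks like, here is a one-line theorem in Lean, a comparable proof assistant (the statement and proof are a minimal example of my own, not from the article):

```lean
-- A minimal machine-checked proof: addition of natural numbers is
-- commutative. The proof assistant accepts this only if the proof
-- term is valid; there is no human referee in the loop.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```

The cumbersomeness the article mentions shows up as soon as a statement is less standard than this one: every step that a human reader would wave through must be spelled out in terms the checker accepts.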

For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

"The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes," Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

This is something Voevodsky has learned through personal experience. In 1999 he discovered an error in a paper he had written seven years earlier. Voevodsky eventually found a way to salvage the result, but in an article last summer in the IAS newsletter, he wrote that the experience scared him. He began to worry that unless he formalized his work on the computer, he wouldn't have complete confidence that it was correct.

But taking that step required him to rethink the very basics of mathematics. The accepted foundation of mathematics is set theory. Like any foundational system, set theory provides a collection of basic concepts and rules, which can be used to construct the rest of mathematics. Set theory has sufficed as a foundation for more than a century, but it can't readily be translated into a form that computers can use to check proofs. So with his decision to start formalizing mathematics on the computer, Voevodsky set in motion a process of discovery that ultimately led to something far more ambitious: a recasting of the underpinnings of mathematics.

Geometry and classical algebra are not really isomorphic, and Bertrand Russell's paradox showed that the naive set theory of his day was inconsistent as a foundation.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Informative) by TheRaven on Thursday October 05 2017, @10:24AM (4 children)

    by TheRaven (270) on Thursday October 05 2017, @10:24AM (#577389) Journal

    Assuming that you're not trolling, allow me to give you a counterpoint:

    Type theory is one way of encoding constraints. Most mainstream programming languages have restrictive type systems that don't allow you to encode various things that are generally useful. In a fully dynamically typed language, you use explicit assertions (for example: must implement this protocol, must provide this method) that are checked at run time, but these are generally the least useful checks: if you're making errors related to the sorts of things that most mainstream programming languages can check, then you have bigger problems.

    Here's a trivial example: In your favourite statically typed language, how do you express in the type system that a sort function is correct (i.e. that, for all inputs, the output will have the same set of elements as the input and that they will respect the specified partial ordering)? That's a trivial correctness property in comparison to most programs, yet one that is beyond the scope of most static type systems.

    At the opposite extreme are languages with Turing-complete type systems. There are quite a few of these, though C++ is the only one that's made it into the mainstream. Using C++ templates (or, in C++11 and later, templates combined with constexpr functions and static assertions), you can check any arbitrary property of your program that can be known at compile time. This is hugely powerful, but it has a serious limitation: type checking the program reduces to the halting problem.

    I am currently working on a research project that takes around 2 minutes for template instantiation on a single (fairly small) compilation unit, because it performs a huge number of static checks that go far beyond anything most other mainstream statically typed languages can do (and, as a result, catches a lot of subtle bugs at compile time). Unfortunately, sometimes when I make an error in this code the compile time jumps to 10+ minutes. Sometimes, when I make a correct change, it fails to compile with a confusing error. The reason for both is that the compiler has limits (configurable by command-line flag) on the recursion depth of template instantiation and constexpr function invocation. When a check involving a constexpr function fails to terminate within the bounded number of invocations, the error message tells me that a value is not a constant, which is confusing: the value is a constant; the compiler just gave up evaluating it too early.

    It is provably impossible to determine whether an arbitrary string is a valid C++ program for precisely this reason: the results of constexpr evaluations and template instantiations can perform any arbitrary computation and so you have to solve the halting problem to tell whether it's actually wrong or whether it's just taking a really long time.

    If you really care about correctness, then you shouldn't use a language that tries to shoehorn arbitrary checks into a type system; you should use a language that's designed explicitly for proof-carrying code. F* is the closest thing to a mainstream example that I'm aware of: it's a variant of F# that uses the Z3 SMT solver to check correctness proofs written in the same language as the rest of the program. Unfortunately, it suffers from the same problem: sometimes correct proofs will time out in the solver, so you can only compile a subset of all correct programs.

    TL;DR: There's no silver bullet. Trivial type systems will catch trivial errors, complex type systems may never terminate checking correct programs.

    --
    sudo mod me up
  • (Score: 0) by Anonymous Coward on Friday October 06 2017, @12:29AM (1 child)

    by Anonymous Coward on Friday October 06 2017, @12:29AM (#577724)

    Why is everything today labeled "Trolling"? I just... I don't even know what that means anymore.

    • (Score: 2) by TheRaven on Friday October 06 2017, @11:02AM

      by TheRaven (270) on Friday October 06 2017, @11:02AM (#577934) Journal
      It's trolling because static vs dynamic languages is a flame war even older than vi vs emacs, and the post asserting that one is superior looks carefully designed to prompt a response: i.e. trolling.
      --
      sudo mod me up
  • (Score: 0) by Anonymous Coward on Friday October 06 2017, @12:36AM (1 child)

    by Anonymous Coward on Friday October 06 2017, @12:36AM (#577728)

    What's wrong with reporting to the user that a program (or fragment thereof) is correct with a probability of 80% or whatever? Then, a build-time policy can be set: failure is when the probability is less than 95%, and we know we have to increase build-time computing resources until the compiler reaches this level of truth.

    • (Score: 2) by TheRaven on Friday October 06 2017, @11:05AM

      by TheRaven (270) on Friday October 06 2017, @11:05AM (#577935) Journal
      If evaluation time of the proof assistant corresponded to probability of correctness, then that would be a good idea. Unfortunately, it doesn't. There is no correlation between the length of a proof and whether the thing is true. I'll leave this as an exercise for the reader, but there's a fairly simple proof that what you are asking for reduces to the halting problem and is therefore not computable.
      --
      sudo mod me up