posted by CoolHand on Wednesday October 04 2017, @03:35PM   Printer-friendly
from the math-rebels dept.

QuantaMagazine.org has a very long and interesting article on how computers are redefining the roots of mathematics.

On a recent train trip from Lyon to Paris, Vladimir Voevodsky sat next to Steve Awodey and tried to convince him to change the way he does mathematics.

Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.

[...] The idea of doing mathematics in a program like Coq has a long history. The appeal is simple: Rather than relying on fallible human beings to check proofs, you can turn the job over to computers, which can tell whether a proof is correct with complete certainty. Despite this advantage, computer proof assistants haven't been widely adopted in mainstream mathematics. This is partly because translating everyday math into terms a computer can understand is cumbersome and, in the eyes of many mathematicians, not worth the effort.

For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

"The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes," Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

This is something Voevodsky has learned through personal experience. In 1999 he discovered an error in a paper he had written seven years earlier. Voevodsky eventually found a way to salvage the result, but in an article last summer in the IAS newsletter, he wrote that the experience scared him. He began to worry that unless he formalized his work on the computer, he wouldn't have complete confidence that it was correct.

But taking that step required him to rethink the very basics of mathematics. The accepted foundation of mathematics is set theory. Like any foundational system, set theory provides a collection of basic concepts and rules, which can be used to construct the rest of mathematics. Set theory has sufficed as a foundation for more than a century, but it can't readily be translated into a form that computers can use to check proofs. So with his decision to start formalizing mathematics on the computer, Voevodsky set in motion a process of discovery that ultimately led to something far more ambitious: a recasting of the underpinnings of mathematics.

Geometry and classical algebra are not really isomorphic, and Bertrand Russell's paradox famously showed that naive set theory, the informal foundation of its day, is inconsistent.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Insightful) by Anonymous Coward on Wednesday October 04 2017, @03:48PM (32 children)

    by Anonymous Coward on Wednesday October 04 2017, @03:48PM (#577063)

    People think they are more productive with a dynamically typed language, but they're just being lazy: they are refusing to codify their abstractions in a way that a computer can check, and then just waiting for catastrophe to occur at runtime.

    Folks, what are computers for if not to do these kinds of calculations? Take pride in codifying your thoughts explicitly!

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday October 04 2017, @03:58PM (17 children)

      by Anonymous Coward on Wednesday October 04 2017, @03:58PM (#577065)

      Some programming jobs are small enough that convenience of quick iteration and simple syntax beat the added reliability of type-checking.

      • (Score: 5, Insightful) by Anonymous Coward on Wednesday October 04 2017, @04:11PM (16 children)

        by Anonymous Coward on Wednesday October 04 2017, @04:11PM (#577068)

        More often than not, the slapdash prototype becomes the working production code. The more code I read, the more I realize that the entire planet runs on the cyber equivalent of chewing gum and duct tape. Your little shell script will become a fundamental component of your system. It WILL.

        The only way to achieve large-scale excellence is to commit to such excellence even in the seemingly "unimportant" parts; it seems like a waste of time at first, but it's an investment that pays dividends in the long-run.

        • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:27PM (9 children)

          by Anonymous Coward on Wednesday October 04 2017, @04:27PM (#577080)

          The more code I read, the more I realize that the entire planet runs on the cyber equivalent of chewing gum and duct tape.

          And this is why you actually see deployment code that reads


          wget -O - example.com/some.sh | bash -

          • (Score: 3, Insightful) by Runaway1956 on Wednesday October 04 2017, @05:08PM (8 children)

            by Runaway1956 (2926) Subscriber Badge on Wednesday October 04 2017, @05:08PM (#577102) Journal

            I'm not even a coder, but that hurts to even think about. Almost the equivalent of listening to an engine run without any oil, for a while . . .

            • (Score: 1, Touché) by Anonymous Coward on Wednesday October 04 2017, @07:09PM

              by Anonymous Coward on Wednesday October 04 2017, @07:09PM (#577133)

              Come on, SoylentNews. Come on.

            • (Score: 2) by Pino P on Wednesday October 04 2017, @08:57PM (6 children)

              by Pino P (4721) on Wednesday October 04 2017, @08:57PM (#577179) Journal

              When you wget or curl into bash, you're no more vulnerable than (say) when you install a package from a repository listed in your OS distribution's package manager. Or is the problem that there is no https: scheme on that wget command?

              • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @09:24PM (3 children)

                by Anonymous Coward on Wednesday October 04 2017, @09:24PM (#577195)

                That's not a rebuttal; that's just more proof that all of the computing world is built on a very poor foundation.

                However, at least such repositories are increasingly using security concepts such as cryptographic webs of trust.

                We can do so much better. We have the technology.

                • (Score: 2) by Pino P on Thursday October 05 2017, @02:37PM (2 children)

                  by Pino P (4721) on Thursday October 05 2017, @02:37PM (#577455) Journal

                  I'm confused as to how a web of trust can become dense enough to provide reasonable faith in someone's identity if there aren't enough people who travel internationally to key signing parties. International tourism is expensive, as are specialist legal services to determine whether key signing constitutes prohibited "work" on a tourist visa. Besides, even if you do get to a key signing party, vouching for someone's identity doesn't necessarily imply vouching for that person's ability to validate others' identities.

                  • (Score: 0) by Anonymous Coward on Friday October 06 2017, @12:25AM (1 child)

                    by Anonymous Coward on Friday October 06 2017, @12:25AM (#577722)

                    You don't have to meet people face-to-face to build such a web-of-trust; it is enough to publish a public key widely (e.g., across a sufficient number of independent sources) so that an attacker couldn't practically fake it.

                    • (Score: 2) by Pino P on Tuesday October 10 2017, @08:18PM

                      by Pino P (4721) on Tuesday October 10 2017, @08:18PM (#580017) Journal

                      What prevents an attacker from publishing a key widely before you do?

              • (Score: 2) by coolgopher on Thursday October 05 2017, @01:19AM (1 child)

                by coolgopher (1157) on Thursday October 05 2017, @01:19AM (#577262)

                No, clearly the problem is that it was piping into a regular bash, not a sudo'd bash for guaranteed root access! ;)

                • (Score: 2) by Pino P on Thursday October 05 2017, @02:26PM

                  by Pino P (4721) on Thursday October 05 2017, @02:26PM (#577451) Journal

                  You don't usually need administrative privilege to install executables to your user account.

        • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:46PM (5 children)

          by Anonymous Coward on Wednesday October 04 2017, @04:46PM (#577090)

          If your code has a lot of calculation, you can use MATLAB/Octave and easily build on well-established built-in functions. Or you can use GSL with C/C++, which has a much steeper learning curve, and will never read as cleanly. As the saying goes, the perfect is the enemy of the good.

          • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @05:39PM (3 children)

            by Anonymous Coward on Wednesday October 04 2017, @05:39PM (#577119)

            Try again.

            • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @09:26PM (2 children)

              by Anonymous Coward on Wednesday October 04 2017, @09:26PM (#577196)

              The whole point of that saying is to highlight that there is a real dichotomy between the perfect and the good. While the set of perfect things may be included in the set of good things, in many situations the perfect set is the empty set, and to insist on perfection is to discount everything in the good set.

              • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @10:02PM (1 child)

                by Anonymous Coward on Wednesday October 04 2017, @10:02PM (#577211)

                In your world, the set of perfect things is often empty because you've bungled the foundations on which your solutions are built. Start over, and take perfection a little more seriously this time.

                • (Score: 0) by Anonymous Coward on Friday October 06 2017, @02:57PM

                  by Anonymous Coward on Friday October 06 2017, @02:57PM (#578036)

                  If you think everything you do is perfect, I don't think the GP is the one that needs to come to reality.

          • (Score: 1) by ewk on Thursday October 05 2017, @08:13AM

            by ewk (5923) on Thursday October 05 2017, @08:13AM (#577357)

            "well-established built-in functions"

            But, without actual proof that these WEBIFs are correct, you're still (theoretically) building on quicksand.

            Curious: does MATLAB or Octave come with such a formal proof of correctness?

            --
            I don't always react, but when I do, I do it on SoylentNews
    • (Score: 3, Insightful) by Anonymous Coward on Wednesday October 04 2017, @07:01PM (3 children)

      by Anonymous Coward on Wednesday October 04 2017, @07:01PM (#577132)

      This is not only true, but it explains the absurd state of security on the web. You can trace the root of nearly every web exploit, hack, and bug back to one word: javascript. It was built decades ago and was never meant to be used in anything like the way it is today. As a random example, do people ever consider how absurd it is that jquery is non-native? Or, speaking of jquery, go trace the most basic call through it... That's the stuff nightmares are made of. This isn't a language evolving. This is people taking some old wood collected to build a tool shed, and trying to build skyscrapers out of it. Each time it, completely unexpectedly, comes crashing down; no problem, it just needs a bit more glue there next time.

      Some individuals take it almost as a point of pride that anything you learn (in this domain) will be obsolete in a few years. That's true. It's also completely idiotic. It's indicative not of evolution, but of a tool that is simply inappropriate for what it's being used for. And while I pick on javascript, this is true of the entire base of the web. A [mostly] single-page, dynamically driven, secure website with a stateful connection is what 99% of users want. But the tools we're using to make these things were simply not designed for that. It's all obsolete, and in desperate need of a reboot.

      • (Score: 2) by crafoo on Wednesday October 04 2017, @08:51PM (2 children)

        by crafoo (6639) on Wednesday October 04 2017, @08:51PM (#577173)

        javascript was a mistake.
        running code in the browser was, and continues to be a mistake.

        • (Score: 2) by Pino P on Wednesday October 04 2017, @09:00PM

          by Pino P (4721) on Wednesday October 04 2017, @09:00PM (#577180) Journal

          What means of deploying an application to multiple operating systems isn't a mistake?

        • (Score: 0) by Anonymous Coward on Thursday October 05 2017, @07:33PM

          by Anonymous Coward on Thursday October 05 2017, @07:33PM (#577596)

          PHB's and consumers will almost always choose eye-candy and fads over security, unless the security risks are made very clear to them and they don't have to trust you to believe it.

    • (Score: 1) by rylyeh on Wednesday October 04 2017, @09:47PM (1 child)

      by rylyeh (6726) <{kadath} {at} {gmail.com}> on Wednesday October 04 2017, @09:47PM (#577203)

      I like to use the right tool (typed or dynamic) for the task.
      That's why there are numerous languages that have different type systems.
      Saying 'All dynamically typed stuff is sh*t' is not quite fair.
      Code that will be consumed in a library that is itself consumed for an API? Strong typing might be indicated.
      A script that runs as a watchdog? Dynamic typing might be the best.

      Shouldn't the form fit the function?

      I have also found that many of the 'type' decisions made for which language to use are about time, reliability, audience and cost.

      --
      "a vast crenulate shell wherein rode the grey and awful form of primal Nodens, Lord of the Great Abyss."
      • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @10:08PM

        by Anonymous Coward on Wednesday October 04 2017, @10:08PM (#577214)

        Here, I'll do it for you: The right tool is the one that allows the computer to analyze the correctness of the code as much as possible before run-time.

    • (Score: 2) by shortscreen on Thursday October 05 2017, @01:35AM (2 children)

      by shortscreen (2252) on Thursday October 05 2017, @01:35AM (#577268) Journal

      The idea of types is itself an abstraction. In reality there is only one type: binary. That's what the computer deals with. The only difference between these bits and those bits is how they are interpreted. Humans have to decide how the bits should be interpreted. But that would require them to think about data structures which they are mostly too lazy to do. Anyway, what I'm trying to say here is that types are baloney. And so are programming languages. I'll take NASM instead any day.

      • (Score: 1) by rylyeh on Thursday October 05 2017, @03:27AM

        by rylyeh (6726) <{kadath} {at} {gmail.com}> on Thursday October 05 2017, @03:27AM (#577300)

        Yes! I had to use LISA on an Apple II. Worked great! Better than trying to type the opcodes and data by hand in the mini-assembler!

        --
        "a vast crenulate shell wherein rode the grey and awful form of primal Nodens, Lord of the Great Abyss."
      • (Score: 2) by maxwell demon on Thursday October 05 2017, @07:46AM

        by maxwell demon (1608) on Thursday October 05 2017, @07:46AM (#577343) Journal

        NASM? How dare you. The most of a tool you should use is xxd! :-)

        --
        The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 3, Informative) by TheRaven on Thursday October 05 2017, @10:24AM (4 children)

      by TheRaven (270) on Thursday October 05 2017, @10:24AM (#577389) Journal

      Assuming that you're not trolling, allow me to give you a counterpoint:

      Type theory is one way of encoding constraints. Most mainstream programming languages have restrictive type systems that don't allow you to encode various things that are generally useful. In a fully dynamically typed language, you use explicit assertions (for example: must implement this protocol, must provide this method) that are checked at run time, but these are generally the least useful checks: if you're making errors related to the sorts of things that most mainstream programming languages can check then you have problems. Here's a trivial example: In your favourite statically typed language, how do you express in the type system that a sort function is correct (i.e. that, for all inputs, the output will have the same set of elements as the input and that they will respect the specified partial ordering)? That's a trivial correctness property in comparison to most programs, yet one that is beyond the scope of most static type systems.

      At the opposite extreme are languages with Turing-complete type systems. There are quite a few of these, though C++ is the only one that's made it into the mainstream. You can, using C++ templates (or, in C++11 or later, templates and constexpr functions, and ideally templates in combination with static assertions), check any arbitrary property of your program that can be known at compile time. This is hugely powerful, but it has a serious limitation: type checking the program reduces to the halting problem. I am currently working on a research project that takes around 2 minutes for template instantiation on a single (fairly small) compilation unit because it is performing a huge number of static checks that go far beyond anything that most other mainstream statically typed languages can do (and, as a result, catches a lot of subtle bugs at compile time). Unfortunately, sometimes when I make an error in this code the compile time jumps to 10+ minutes. Sometimes, when I make a correct change, it fails to compile with a confusing error. The reason for both of these is that the compiler has limits (configurable by command-line flag) for recursion depth of template instantiation and constexpr function invocations. When a check involving a constexpr function fails to terminate in the bounded number of invocations, the error message tells me that a value is not a constant (which is confusing, because the value is a constant, it's just that the compiler gave up trying to evaluate it too early).

      It is provably impossible to determine whether an arbitrary string is a valid C++ program for precisely this reason: the results of constexpr evaluations and template instantiations can perform any arbitrary computation and so you have to solve the halting problem to tell whether it's actually wrong or whether it's just taking a really long time.

      If you really care about correctness, then you shouldn't use a language that tries to shoehorn arbitrary checks into a type system, you should use a language that's designed explicitly for proof-carrying code. F* is the closest example that I'm aware of to being a mainstream language: It's a variant of F# that uses the Z3 SMT solver to check correctness proofs written in the same language as the rest of the program. Unfortunately, it suffers from the same problem: sometimes, correct proofs will time out in the solver, so you can only compile a subset of all correct programs.

      TL;DR: There's no silver bullet. Trivial type systems will catch trivial errors, complex type systems may never terminate checking correct programs.

      --
      sudo mod me up
      • (Score: 0) by Anonymous Coward on Friday October 06 2017, @12:29AM (1 child)

        by Anonymous Coward on Friday October 06 2017, @12:29AM (#577724)

        Why is everything today labeled "Trolling"? I just... I don't even know what that means anymore.

        • (Score: 2) by TheRaven on Friday October 06 2017, @11:02AM

          by TheRaven (270) on Friday October 06 2017, @11:02AM (#577934) Journal
          It's trolling because static vs dynamic languages is a flame war even older than vi vs emacs, and the post asserting that one is superior looks carefully designed to prompt a response: i.e. trolling.
          --
          sudo mod me up
      • (Score: 0) by Anonymous Coward on Friday October 06 2017, @12:36AM (1 child)

        by Anonymous Coward on Friday October 06 2017, @12:36AM (#577728)

        What's wrong with reporting to the user that a program (or fragment thereof) is correct with a probability of 80% or whatever? Then a build-time policy can be set: failure is when the probability is less than 95%, and we know we have to increase build-time computing resources until the compiler reaches this level of confidence.

        • (Score: 2) by TheRaven on Friday October 06 2017, @11:05AM

          by TheRaven (270) on Friday October 06 2017, @11:05AM (#577935) Journal
          If evaluation time of the proof assistant corresponded to probability of correctness, then that would be a good idea. Unfortunately, it doesn't. There is no correlation between the length of a proof and whether the thing is true. I'll leave this as an exercise for the reader, but there's a fairly simple proof that what you are asking for reduces to the halting problem and is therefore not computable.
          --
          sudo mod me up
  • (Score: 3, Informative) by Anonymous Coward on Wednesday October 04 2017, @04:15PM (6 children)

    by Anonymous Coward on Wednesday October 04 2017, @04:15PM (#577071)

    See http://www.math.columbia.edu/~woit/wordpress/?p=9591 [columbia.edu] and especially read the comments and interview links.

    • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:22PM

      by Anonymous Coward on Wednesday October 04 2017, @04:22PM (#577075)

      When it rains it pours.

    • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:26PM (3 children)

      by Anonymous Coward on Wednesday October 04 2017, @04:26PM (#577078)

      I find it interesting that the various comments at the other end of your link point out how he seemed like a "modest" person; given that such a quality is worth pointing out multiple times, I suspect higher-level mathematics is stuffed with some pretty big egos.

      • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:30PM (1 child)

        by Anonymous Coward on Wednesday October 04 2017, @04:30PM (#577081)

        I suspect higher-level mathematics is stuffed with some pretty big egos.

        You suspect wrong. Higher-level math is stuffed with "weird people" who rarely bother with things like ego. I find that ego is more the exception than the rule.

        • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @04:36PM

          by Anonymous Coward on Wednesday October 04 2017, @04:36PM (#577082)

          If it were uncommon for his peers to have imposing egos, then his modesty wouldn't have stuck out so much.

          You don't even need a Coq proof to realize this.

      • (Score: 3, Insightful) by Runaway1956 on Wednesday October 04 2017, @05:15PM

        by Runaway1956 (2926) Subscriber Badge on Wednesday October 04 2017, @05:15PM (#577105) Journal

        What I've seen is, mathematicians tend to be perfectionists. Given any task, they'll worry it down to the finest details, and not stop until it's shined and buffed. They'll drive a normal person nucking phutts. Hell, they'll even drive ME phutts, and I'm not even normal!!

        They don't have big egos, so much, as they look down on all of us bumbling fools who settle for anything less than perfection.

        That's just my experience - YMMV.

    • (Score: 1) by rylyeh on Wednesday October 04 2017, @09:53PM

      by rylyeh (6726) <{kadath} {at} {gmail.com}> on Wednesday October 04 2017, @09:53PM (#577205)

      Tragic. Reading up on him now, thanks for the link.

      --
      "a vast crenulate shell wherein rode the grey and awful form of primal Nodens, Lord of the Great Abyss."
  • (Score: 1) by Tara Li on Wednesday October 04 2017, @04:16PM (5 children)

    by Tara Li (6248) on Wednesday October 04 2017, @04:16PM (#577072)

    If this system can take a proof and verify it - could it then be used to genetically create new proofs and verify them as a measure of "fitness", and would this likely lead to useful new mathematical understandings, or just random factoids thrown out? And could this be applied to the mathematics used in the Standard Model of physics, to let us get a handle on WTF is going on with it?

    • (Score: 0) by Anonymous Coward on Wednesday October 04 2017, @05:00PM (2 children)

      by Anonymous Coward on Wednesday October 04 2017, @05:00PM (#577098)

      That was actually the original impetus for developing automated theorem proving. It just turned out that proving theorems is actually fairly difficult, so generating theorems isn't really possible. Maybe it will be in the future with better theorem-proving technology and more computing power.

      • (Score: 3, Interesting) by HiThere on Wednesday October 04 2017, @05:04PM (1 child)

        by HiThere (866) Subscriber Badge on Wednesday October 04 2017, @05:04PM (#577101) Journal

        Actually, what's really difficult is generating interesting proofs. Proofs are easy to generate if you don't care what is being proven.

        Because of this, you probably aren't going to get an automated theorem prover before you get a full-scale AI.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 3, Funny) by captain normal on Wednesday October 04 2017, @08:14PM

          by captain normal (2205) on Wednesday October 04 2017, @08:14PM (#577154)

          But if you get full scale Artificial Intelligence, wouldn't you wind up with artificial theorem and artificial proofs?

          --
          "Everyone is entitled to his own opinion, but not to his own facts." --Daniel Patrick Moynihan
    • (Score: 0) by Anonymous Coward on Thursday October 05 2017, @09:39AM

      by Anonymous Coward on Thursday October 05 2017, @09:39AM (#577372)

      If you're asking about how to generate proofs then maybe this is useful. Here's what I understand about the "set of support" strategy for automated theorem proving:

      To start off you have two things: a goal statement that you want to prove/disprove, and a "set of support" which starts out as _the set of all of the math statements that you assume are true to begin with_. These would include things like the axioms of the math system that you are using, plus other statements representing theorems that you'd want to use without having to re-prove them from the ground up, and would also include statements that set up the problem you're trying to solve.

      As an example: say my problem was that I had a triangle where I knew that two of the angles were 60 degrees and 90 degrees, and my goal was to automatically prove/disprove that the third angle was 30 degrees. The set of statements that I began with would need to include which math system I was working in, because there are some systems where the angles in a triangle add up to 180 degrees, and others where they don't (e.g. https://en.wikipedia.org/wiki/Spherical_geometry). [wikipedia.org] If I didn't include this kind of basic stuff, then an automated theorem prover would never be able to prove/disprove the goal statement, because it wasn't given enough information to figure out whether the angles in a triangle add up to 180 degrees or not. On top of that, I would probably want to include other statements (some theorems about triangles, maybe) if I wanted to do problems with triangles and not have to prove those things from the ground up. The last thing I'd have to provide would be statements about the actual problem I have ("There is a triangle and one angle is 60 degrees and another is 90 degrees.").

      So that describes how you start out. From there, you follow [some procedure] to derive a new statement from the existing set of support. This new statement must be true, since you derived it from the things you began with, and we're assuming that all those things are true. So you include the new statement into the set of support, and now you can use it going forward. You keep doing this in a loop until you either prove/disprove the goal statement or give up.

      So, the kind of set-up that's described here really doesn't say how to do any problem-solving. The "[some procedure]" part is where all the problem solving would go (how do you actually choose what stuff to generate so that you get to the answer?) and people don't have a way to do that efficiently. There are competitions to build the best tools for this stuff ( http://www.cs.miami.edu/~tptp/CASC/ [miami.edu] http://smtcomp.sourceforge.net/2017/ [sourceforge.net] http://www.satcompetition.org/ [satcompetition.org] )

      Also there are "SMT solvers" and "SAT solvers" which solve smaller classes of problems, and they work differently and actually are used. e.g. https://github.com/Z3Prover/z3 [github.com] could be good to look at.

    • (Score: 2) by TheRaven on Thursday October 05 2017, @10:42AM

      by TheRaven (270) on Thursday October 05 2017, @10:42AM (#577392) Journal
      From every set of axioms, there's an infinite number of things that you can prove. A lot of automated proof systems begin with something like Natural Deduction [wikipedia.org]. In this model, you have a small number of incremental proof steps that you use to get from axioms to results (each individual step is also a proof, just often not a useful one). A machine can explore all of the deductive rules, generating a huge tree and pruning branches that give duplicate results (i.e. you prove the same thing two different ways), until it has exhausted the things that you can prove like that. Unfortunately, proof by contradiction (required for a lot of proofs, and often referred to in these systems as EFQ: ex falso quodlibet) is really hard to do automatically, because for any interesting set of axioms there is an infinite set of false things, but only a fairly small set of things where a proof that a thing is false (and therefore, if you assume it, you reach a contradiction, so its negation must be true) gives you an interesting result. Modern proof assistants can do EFQ proofs, but they are generally very bad at picking something sensible to start an EFQ step. For small logic systems you can just search exhaustively, but for more complex systems this doesn't work, and you need a human who actually understands the thing you're trying to prove to give some hints.
      --
      sudo mod me up
  • (Score: 4, Interesting) by ledow on Wednesday October 04 2017, @04:25PM (3 children)

    by ledow (5567) on Wednesday October 04 2017, @04:25PM (#577077) Homepage

    Surely, with computers, we can categorise any paper based on the techniques and proofs it relies on, and if it relies on a proof that is later shown to be incorrect, or clarified to have exceptions, we can just trawl the database.

    Almost like a chain-of-trust. Your paper cites / uses Proof X? We'll tag it as that. When Proof X changes in 20 years, we can see all the papers citing/tagging it and go back and check them. Literally your paper jumps into the "may be unproven" box until someone goes back and checks.

    Teaching a computer to have complete provability is both important and impossible. The 4-colour theorem was proven with the aid of computers. But some modern theorems are provable only ON a computer. And some can't even be expressed in conventional ways (e.g. as soon as you start getting quantum, things get tricky).

    Better would be to have the body of evidence and a chain-of-trust. It would also limit people citing thousands of papers unnecessarily, as each citation becomes a chance to knock out the validity of their paper. Yes, the opposite abuse is possible (deliberately not tagging/citing in case your paper gets invalidated), but that's to be expected and should be noticed when the paper is peer-reviewed initially anyway (almost nobody would go back and re-review an historical paper that may or may not use a particular technique later proven flawed, however).
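    The chain-of-trust idea above amounts to a reverse-dependency walk over a citation graph: when a proof changes, everything transitively downstream drops into the "may be unproven" box. A hypothetical sketch in Python (the paper and proof names are invented for illustration):

```python
from collections import defaultdict

# paper -> the proofs/papers it tags itself as relying on
uses = {
    "paper-A": ["proof-X"],
    "paper-B": ["paper-A"],   # relies on proof-X only transitively
    "paper-C": ["proof-Y"],
}

def invalidated_by(changed, uses):
    """Return every paper that (transitively) relies on `changed`."""
    rdeps = defaultdict(set)          # reverse dependency graph
    for paper, deps in uses.items():
        for d in deps:
            rdeps[d].add(paper)
    suspect, stack = set(), [changed]
    while stack:                      # walk the reverse edges
        for p in rdeps[stack.pop()]:
            if p not in suspect:
                suspect.add(p)
                stack.append(p)
    return suspect

# Proof X is revised 20 years later: A and B go back for re-checking.
print(sorted(invalidated_by("proof-X", uses)))  # → ['paper-A', 'paper-B']
```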

    • (Score: 4, Funny) by fritsd on Wednesday October 04 2017, @06:02PM (1 child)

      by fritsd (4586) on Wednesday October 04 2017, @06:02PM (#577123) Journal

      math-get upgrade

      The following packages will be upgraded:
      fermats-third-theorem
      whether-the-light-stays-on-in-the-fridge-after-you-close-it
      2 upgraded, 0 newly installed, 0 to remove and 987654321 not upgraded.

      • (Score: 1, Funny) by Anonymous Coward on Wednesday October 04 2017, @08:37PM

        by Anonymous Coward on Wednesday October 04 2017, @08:37PM (#577165)

        Darn those changed dependencies. Now you have to dist-upgrade to the new math and hope the universe reboots correctly.

    • (Score: 1) by rylyeh on Wednesday October 04 2017, @10:44PM

      by rylyeh (6726) <{kadath} {at} {gmail.com}> on Wednesday October 04 2017, @10:44PM (#577219)
      Agree. Seems like you could use the method you described with ontology servers and https://en.wikipedia.org/wiki/Semantic_similarity [wikipedia.org] as well. I think the fact-checking services would be very interested.
      --
      "a vast crenulate shell wherein rode the grey and awful form of primal Nodens, Lord of the Great Abyss."
  • (Score: 2) by takyon on Wednesday October 04 2017, @04:45PM

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday October 04 2017, @04:45PM (#577087) Journal
    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Interesting) by cosurgi on Wednesday October 04 2017, @05:20PM

    by cosurgi (272) on Wednesday October 04 2017, @05:20PM (#577107) Journal

    A Fields Medal laureate sets out to realize one of my biggest dreams about mathematics. That's so great that I am going to check out the source code and see if I can use it and improve it!

    --
    #
    #\ @ ? [adom.de] Colonize Mars [kozicki.pl]
    #
  • (Score: 2) by fritsd on Wednesday October 04 2017, @06:10PM (3 children)

    by fritsd (4586) on Wednesday October 04 2017, @06:10PM (#577125) Journal

    For all you math aficionados:

    I'm not entirely sure how relevant this is, but it talks about automated proof exploring:

    Norman Megill's "metamath" website [metamath.org]

    They talk a lot about Zermelo-Fraenkel set theory, but I don't know what that means.

    • (Score: 2) by fritsd on Wednesday October 04 2017, @06:12PM

      by fritsd (4586) on Wednesday October 04 2017, @06:12PM (#577126) Journal

      (reply to self)
      find the file set.mm in the archive metamath.tar.bz2; it contains the 19,000 proofs in a special notation.

      I check it every few years but I haven't gotten any wiser yet.
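      For anyone curious what that notation looks like from the outside: in the Metamath language, "$a" introduces an axiom and "$p" a provable assertion whose proof follows "$=" and ends at "$.". A rough sketch in Python (the fragment below mimics the shape of real set.mm entries, but the labels and formula are made up; this is a token count, not a full parser):

```python
# A tiny hypothetical Metamath-style fragment: one axiom, one theorem.
sample = """
ax-1 $a |- ( ph -> ( ps -> ph ) ) $.
th-1 $p |- ( ph -> ph ) $= wph ax-1 wi $.
"""

def count_statements(db: str, kind: str) -> int:
    """Count '$a' (axiom) or '$p' (theorem) statement keywords,
    relying on Metamath's whitespace-separated token syntax."""
    return sum(1 for token in db.split() if token == kind)

print(count_statements(sample, "$p"))  # → 1 theorem in the fragment
```

      Run against the real set.mm, the "$p" count is where the ~19,000 figure comes from.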

    • (Score: 2) by maxwell demon on Wednesday October 04 2017, @07:10PM (1 child)

      by maxwell demon (1608) on Wednesday October 04 2017, @07:10PM (#577134) Journal

      They talk a lot about Zermelo-Fraenkel set theory, but I don't know what that means.

      In a nutshell, that's the commonly accepted set theory.

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 1, Interesting) by Anonymous Coward on Wednesday October 04 2017, @08:15PM

        by Anonymous Coward on Wednesday October 04 2017, @08:15PM (#577155)

        Outside the nutshell: I once went to a seminar because the abstract talked about set theory and formal logic. I have a PhD in physics and consider myself a smart person, plus I really like mathematical logic, but that presentation... I think there were 5 people present in total, and I personally did not understand anything. Not a thing.

  • (Score: 1, Funny) by Anonymous Coward on Wednesday October 04 2017, @11:04PM (1 child)

    by Anonymous Coward on Wednesday October 04 2017, @11:04PM (#577224)

    For everyone else, there's this thread.

    ... heh, heh... he said "Coq."

    • (Score: 2) by TheRaven on Thursday October 05 2017, @10:58AM

      by TheRaven (270) on Thursday October 05 2017, @10:58AM (#577397) Journal
      One of the projects I'm involved in is a mix of systems and theory people. It's always fun, whenever one of the theory people says something involving Coq (for example 'he has a big Coq proof', or 'my Coq is broken at the moment', or 'this needs a lot of Coq'), to watch the systems people trying not to smirk.
      --
      sudo mod me up
  • (Score: 2) by hendrikboom on Friday October 06 2017, @08:53PM

    by hendrikboom (1125) Subscriber Badge on Friday October 06 2017, @08:53PM (#578329) Homepage Journal

    Type theory sufficient to do mathematics is not all that new. It was pioneered by Per Martin-Löf, and I was already using it in the eighties to derive correct, but simple, computer programs. The problem was that computers were too small and slow back then. Expanding RAM from about 5 megabytes to many gigabytes, and clock speeds from megahertz to gigahertz, has made a big difference in what is feasible. It's computer capacity that has driven the development, or should I say redevelopment, of automated formal logic from interesting toy to practical applicability.

    What Voevodsky accomplished, and this was an impressive insight, was to realize that the identity types that were part of Martin-Löf's type theory could be interpreted as homotopy equivalences in the category of topological spaces (or, more precisely, the category of homotopy types of these spaces). This has revolutionised the study of topology and generalized it beyond recognition, no mean accomplishment.
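    For the curious, here is roughly what an identity type looks like in practice, in Lean 4 syntax rather than Coq (a minimal sketch of Martin-Löf-style equality, not Voevodsky's construction):

```lean
-- In Martin-Löf type theory, "a = b" is itself a type; a proof of the
-- equation is a term of that type. `rfl` is the canonical inhabitant.
example : 2 + 2 = 4 := rfl

-- Symmetry of equality, proved by case analysis on the identity proof
-- (path induction). Under the homotopy interpretation this is the
-- operation that reverses a path between two points of a space.
example {α : Type} {a b : α} (h : a = b) : b = a := by
  cases h
  rfl
```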

(1)