Proof, Truth, and Computation

intuitionistic logic and the foundations of mathematics

I gave a talk at CUMC over the summer, on intuitionistic logic and the foundations of mathematics. I spent quite a bit of time on it, and thought it went relatively well, so decided to upload the slides and transcript here!

Around the turn of the 20th century, mathematicians ran into a problem. A pretty big problem. A crisis, if you will. They started caring about their foundations: their justification for assuming the principles of mathematics they did. And when they looked closer, they discovered that things were not exactly as they seemed. Little paradoxes and deeply strange statements began to pop up - most famously Russell’s paradox. This led to a very real worry that if their foundations weren’t solid, mathematics was built atop a house of cards - with total collapse inevitable. I’m going to talk about this a little more later. But this is the backdrop of our story - intuitionism, and subsequently intuitionistic logic, came out of this setting.

So - intuitionistic logic, which is what this talk is mostly about, is a formal system, and I’m going to need to introduce some formal notation to sufficiently describe it. This up here is an inference - you can think of it as vaguely analogous to natural deduction. On the left hand side here we have Γ - Γ is a context, and can be thought of as a set of assumed propositions. On the right hand side is A - some proposition. In the middle here, we have a single tack - annotated with a c. The c stands for “classically” - in the wild, the logical system is frequently implicit, but I want to be very explicit in this talk about when we are talking about classical logic and when about intuitionistic logic. As a whole, we pronounce Γ ⊢_c A as “Gamma classically entails A”.

So the reason logicians do not simply re-use the standard implication arrow is - aside from notational confusion - that there are actually two notions of validity, or entailment, that are interesting to talk about! We have here syntactic validity. This corresponds to proof, or in philosophical terms, sense. We say that A is syntactically derivable from Γ when applying dumb rewrite rules to Γ can yield us A. Semantic validity, on the other hand, concerns itself with preserving truth. This corresponds to the philosophical notion of meaning, or denotation. We say that A is semantically entailed by Γ if A is true whenever Γ is.
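In symbols (my own summary of the slide, with the single tack for syntax and the double tack for semantics):

$$\Gamma \vdash A \quad \text{iff there is a derivation of } A \text{ from } \Gamma$$

$$\Gamma \vDash A \quad \text{iff } A \text{ is true in every model in which all of } \Gamma \text{ is true}$$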

So, what kind of falls out of having separate notions of validity are equivalences between them. This is soundness and completeness. If you’ve heard about a system being sound, there is actually a formal definition of what sound means. We say that a system is sound if every syntactically valid inference - every proof - is semantically correct. This is very important! Unsound logics are… not great. We say that a system is complete if every semantically valid inference - everything that is true - is provable. It’s less common for people to talk about completeness, because a lot of useful logics are incomplete - including, famously, second-order classical logic. A guy named Gödel had interesting things to say about that - he showed that there are true propositions that cannot be proved, in any sufficiently expressive system with a finite (or even computably listable) set of axioms. (If you allowed arbitrary infinite axiom sets, you could simply have an axiom for each true proposition… so that’s cheating.)
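Written with the two tacks from before, the two directions look like this:

$$\textbf{soundness:}\quad \Gamma \vdash A \;\Longrightarrow\; \Gamma \vDash A \qquad\quad \textbf{completeness:}\quad \Gamma \vDash A \;\Longrightarrow\; \Gamma \vdash A$$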

Alright, so let’s go back to our foundational crisis. A major result of the crisis was the creation of Hilbert’s program - David Hilbert is this guy by the way, mostly known for his work in foundations and his cool hat - which aimed to provide a consistent (or sound) and complete framework for mathematics, as a set of axioms. Kind of subsequently, and we’ll talk about this a little bit later, he also wanted mathematics to be decidable - for there to exist some algorithm that can spit out true or false for any well-formed mathematical statement. This actually took some time after Gödel to be resolved, but once again, it ended in decisive failure for Hilbert’s program - both Church and Turing showed this with their work on the Entscheidungsproblem (“decision problem”) and the halting problem.

On the other hand - diametrically and vehemently opposed - stood Brouwer. L.E.J. Brouwer was an extremely accomplished mathematician, most famously known for his work in topology. He was also Dutch. The Dutch care about their logic. Separate from his work in standard mathematics - where he freely used classical logic - he also had some interesting ideas about the philosophy of mathematics: he believed that mathematics was a pure form of reasoning, making mental constructions - “constructions” here is important - and could only be properly captured in the mind of the mathematician. Writing it down as a formal system, then - Hilbert’s program - was a fool’s game, and the two of them exchanged rather heated words over this point. Brouwer had pretty good philosophical arguments for his side of things - one being that the principle of excluded middle - that every proposition is either true or false - doesn’t really make sense? He held the excluded middle to be an overused axiom: valid in some contexts, but not in general - though consistent nonetheless. This is an important point! Classical logic is not wrong! Classical logic is just perhaps not the best foundation of mathematics.

Alright. So Hilbert wants a formal system for all of mathematics, Brouwer believes it can be done only in the mind. Enter Heyting. Heyting was Brouwer’s student: firmly an intuitionist. What he did is quite funny: he took Brouwer’s philosophy of mind, his anti-syntactic intuitionism, and wrote down syntax for it! Brouwer was not too happy about this. But uh, I am. You might notice that these are called Heyting semantics, but they are using this single-tack symbol that we said earlier was syntactic: this is Heyting’s somewhat brilliant idea. No longer are we modelling truth. We are modelling proof. We say that A means that there is a proof of A; A ∧ B means that there is both a proof of A and a proof of B; A ∨ B means that there is either a proof of A or a proof of B. A → B means that one can construct - again, “construct” is important - a proof of B from a proof of A. The negation ¬A says that there is a proof that there is no proof of A. Not that A is false!
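To make this concrete, here is a minimal sketch of the proof-centred reading in Lean (variable names mine): a proof of a conjunction literally contains a proof of each side, and a proof of an implication is a construction turning proofs of the antecedent into proofs of the consequent.

```lean
-- A proof of A ∧ B carries a proof of each conjunct; project one out.
example (A B : Prop) (h : A ∧ B) : A := h.left

-- A proof of A → B is a construction (here, a function) from proofs to proofs.
example (A B : Prop) : A → (A → B) → B := fun a f => f a
```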

Here in the corner I’ve snuck a very important result of these rules - the excluded middle does not hold in intuitionistic logic! It is not a tautology. It can still be true, in certain conditions, but reading it: we say “a proof of A, or a proof there is no proof of A”. This is not always true! What if we simply had no proof of A? What about, say, the twin primes conjecture? On the other hand, what if we had a proof there is no proof of A? What about the continuum hypothesis?
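A fun aside, sketched in Lean (theorem name mine): while A ∨ ¬A itself is not intuitionistically provable, its double negation is - and the proof is a pure construction, using no classical axioms.

```lean
-- There is no proof that there is no proof of A ∨ ¬A: given a refutation h,
-- we can build a proof of ¬A from it, and then h refutes itself.
theorem not_not_em (A : Prop) : ¬¬(A ∨ ¬A) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))
```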

I do not have time to go into the last two, but these inspired the field of dependent type theory. The talk following this is going to go over a profound connection between intuitionistic logic and dependent type theory - well, between propositions and types as a whole, but dependent type theory is where it gets interesting. So stick around for that. Propositions as types is, far and away, the most beautiful and interesting singular thing that I’ve ever learned about, I think. And I’m very sad that I don’t have time to talk about it here.

Okay, back on topic. This is intuitionistic logic: it is separate from intuitionism. So why do we do this? What does this give us? Well, for one, this perhaps more accurately models what mathematics really is! There are propositions neither true nor false: because we have not proven them yet, or because there is no proof. There are a lot more technical details baked into this, but I’m going to have to move on.

I’ve explicitly mentioned “construction” a number of times: we typically construct a proof, for example. There are some proofs that intuitionists would consider nonconstructive: for example, pulling a concrete witness out of an existential proposition - this subtly uses proof by contradiction, which does not hold in intuitionistic logic.
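This shows up directly in Lean (definition name mine): pulling a witness out of a bare existential goes through the choice axiom, and the result must be marked noncomputable - no program can be extracted from it.

```lean
-- Extracting a witness from ∃ n, P n uses Classical.choose, which is
-- axiom-backed: Lean requires the definition to be marked noncomputable.
noncomputable def witness {P : Nat → Prop} (h : ∃ n, P n) : Nat :=
  Classical.choose h
```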

Actually, a lot of things do not hold in intuitionistic logic. The principle of the excluded middle is not valid. The axiom of choice actually implies the principle of the excluded middle, in an intuitionistic system, and so it also does not hold. Again, this is “hold” as a tautology - as propositions, they can be true, in some cases. Something that is straight-up false, though, is the intermediate value theorem! The intermediate value theorem says that if f is continuous over an interval [a, b], it takes on every value between f(a) and f(b) at some point. That “some” is not constructive!
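You can watch the choice-implies-excluded-middle fact play out in Lean, which derives the excluded middle from its choice axiom (Diaconescu’s argument) rather than postulating it separately:

```lean
#check @Classical.em        -- Classical.em : ∀ (p : Prop), p ∨ ¬p
#print axioms Classical.em  -- depends on: propext, Classical.choice, Quot.sound
```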

So why would you do this? Why would you ever want to work with an intuitionistic system, when you lose so much of mathematics? First, intuitionistic logic provides for a more intuitive notion of proof, as we talked about earlier. Second, intuitionistic logic is amenable to computational interpretations, which we’ll talk more about in a minute. But most importantly - you can still do nonconstructive mathematics! You can simply assume the principle of the excluded middle as an axiom. And having a way to do constructive mathematics is useful! For example, while classical analysis does not hold in an intuitionistic system - without additional axioms, that is - constructive analysis (which is a little bit more difficult - weaker) will give you concrete mathematical objects! Which, for the constructive intermediate value theorem, takes the form of an algorithm for arbitrarily precise approximations!
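As a rough illustration of that computational content - a sketch only, using floating point rather than constructive reals, all names mine - here is the bisection procedure that the constructive intermediate value theorem packages up:

```lean
-- Given f with f a ≤ 0 ≤ f b, halve the interval n times and return the
-- midpoint: an arbitrarily precise approximation to some root of f.
def bisect (f : Float → Float) : Float → Float → Nat → Float
  | a, b, 0 => (a + b) / 2
  | a, b, n + 1 =>
    let m := (a + b) / 2
    if f m ≤ 0 then bisect f m b n else bisect f a m n

#eval bisect (fun x => x * x - 2) 0 2 50  -- ≈ 1.4142135..., approximating √2
```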

This principle of the excluded middle really can be considered a principle, or axiom, of decidability: it lets us assume that we either have a proof of A or a proof there is no proof of A. Which is exactly decidability! And I really want to emphasize again that this principle is not false - it is just sometimes very helpful to treat it like an independent axiom of your logical system, for constructive, computational reasons.
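Lean even has this reading as a first-class object: its Decidable class is exactly “a proof of A or a proof of ¬A”, packaged as data a program can branch on (the comment below paraphrases the core library’s definition).

```lean
#print Decidable
-- class inductive Decidable (p : Prop) where
--   | isFalse (h : ¬p) : Decidable p
--   | isTrue  (h : p)  : Decidable p
```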

I am very low on time, so I am going to rush through the rest of this, which I kind of designed to be rushed through. First off - there are these things called proof assistants. There are different types of proof assistants - the most successful ones building atop dependent type theory, which is related to the beautiful intuitionistic propositions-as-types correspondence I mentioned earlier. They are an embodiment of the constructive, computable way of mathematics that intuitionism and intuitionistic logic set the stage for. Even though these proof assistants are intuitionistic, they can admit classical reasoning too - you just lose some of your computational interpretation. This here is the standard representation of the natural numbers in a proof assistant - I put it here because I wanted to make a point that I no longer have time to make.
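For the record, here is that standard representation as it is written in Lean: a natural number is either zero or the successor of another natural number.

```lean
namespace Hidden  -- namespaced only to avoid clashing with the built-in Nat

-- The standard inductive (Peano-style) definition of the naturals.
inductive Nat where
  | zero : Nat
  | succ (n : Nat) : Nat

end Hidden
```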

Anyway. Here is a split-second overview of the correspondence between propositions and types - you can see that each of our propositional symbols, our logical connectives, has a corresponding type constructor. You’ll see more of this in the next talk though!
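A few rows of that correspondence, sketched in Lean - the proofs are ordinary terms, and the comments name the corresponding type constructor:

```lean
example (A B : Prop) (a : A) (b : B) : A ∧ B := ⟨a, b⟩  -- conjunction: pair / product type
example (A B : Prop) (a : A) : A ∨ B := Or.inl a        -- disjunction: tagged union / sum type
example (A B : Prop) (f : A → B) (a : A) : B := f a     -- implication: function type
example (A : Prop) (h : False) : A := h.elim            -- falsity: empty type
```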

But we’ve talked about why intuitionistic logic is useful - why are proof assistants useful? They’re useful for a number of reasons - there are many, many stories of people outside of the mathematical community, or without access to a standard background in mathematics, who pick up a proof assistant and - able to reason in a system that does not admit errors - learn mathematics. But they are also useful because math is hard! In an ideal world, we have peer review to catch mistakes in proofs, and mathematics builds atop a solid, concrete, justifiable foundation. But in the real world, we have proofs like Wiles’ proof of Fermat’s Last Theorem spanning over a hundred pages and taking years to verify, and other proofs disputed for their complexity, like the possible proof of the ABC conjecture!

A more recent example is a proof mechanized by Sky Wilshaw that New Foundations - an alternative foundation of mathematics - is consistent relative to ZFC! You can’t really prove that it is consistent relative to itself because - Gödel - but this is just here to emphasize that proof assistants work for complicated proofs. And provide a stronger notion of validity! Another example comes from the field of condensed mathematics, actually: one Peter Scholze was having some trouble believing his own proof. He turned to the Lean community, started up a mechanization project, and as it turned out: mechanization both gave justification to the proof, and simplified it by way of encouraging constructivism!

And I want to talk about exactly what they give you, real fast, because it’s useful to talk about this notion of validity and where it can go wrong. There are not a lot of places where you can make mistakes when you are using a proof assistant! You can accidentally use an escape hatch - for example, sorry provides a proof of whatever in Lean - but this is something trivially catchable. Just search your codebase for the word “sorry”. And I think Lean will yell at you, too. You can have mistakes in the theoretical model - the specific form of logic underpinning the proof assistant. But this is very well studied. You can have mistakes in the implementation of the proof assistant - but what most proof assistants do is have a kernel, a small core of the language that everything else is built atop. And these kernels, in turn, are very well studied. And you can have a mistake in the type - the proposition - of a theorem, where you write down - and accidentally prove - the wrong thing. But this is unlikely, and the very act of mechanizing your proof invites closer scrutiny. And I want to emphasize that - bugs in the proof will be caught! With these exceptions, the actual reasoning of the proof must be sound!
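On that first escape-hatch point, a toy example (theorem name mine): Lean accepts the following, but emits the warning “declaration uses 'sorry'”, so it cannot slip through silently.

```lean
-- An unproved claim, admitted via the escape hatch; Lean flags it.
theorem my_unfinished_claim (A : Prop) : A ∨ ¬A := sorry
```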

My main takeaway from all of this is that these systems are cool, and also sometimes useful, and make us think deeply about what we are really doing. I like foundations for the sake of foundations - I just think they’re neat!

So I hope I’ve at least interested some of you in exploring this further. If you’re a mathematician newly interested in this, I would recommend checking out proof assistants! I think Lean is quite cool as a system: design-wise it has learned from the last two decades of theorem provers, and it seems to do a good job of attracting mathematicians otherwise not interested in foundations. The Coq theorem prover - soon to be Rocq, it is being renamed for… obvious reasons - is also a very good language despite its age, and the next talk is going to go into some examples of working through proofs in it. Another popular one is Agda, which category theory people seem to like. And there are various other proof assistants, most commonly used for software verification.

Question: So can you work with classical logic in these intuitionistic-based systems?
Answer: Yes! In these proof assistant systems, you can freely - but explicitly - assume classical axioms. This breaks some of the computational interpretation - in particular, “extracting” programs from proofs (which I didn’t get into) no longer works - but checking proofs is unaffected.