
> First, theorem proving is NOT the same as an advanced form of static typing.

This Hacker News post is about a theorem prover based on dependent types. That's the context for our discussion.

> You will still have buggy programs in which you use those components

No one is disagreeing with this claim. But eliminating some bugs is better than nothing, even if you don't eliminate all bugs. You and the other commenters repeating this strawman are doing a lot of harm to people trying to socialize their research.

> Chances are you will be doing some new (=cutting-edge) math if you try to verify new things.

Citation needed. Most software is not doing anything interesting at all from a mathematical perspective, just shuffling data around. But either way the point is moot—Martin-Löf type theory (which is what this "magmide" thing seems to be based on) can do arbitrarily fancy math if needed (which is rarely).

I've been verifying bits of software for about 10 years, and I've never needed to invent new math to do it (though I would be happy if I ever did!).
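
To give a flavour of what I mean (a toy example of my own in Lean 4, not taken from any real project): a routine structural induction over the kind of data-shuffling code I have in mind, with no new math in sight.

    -- Hypothetical toy definition: reverse a list by appending.
    def myReverse {α : Type} : List α → List α
      | []      => []
      | x :: xs => myReverse xs ++ [x]

    -- Reversal preserves length; the whole proof is one induction plus simp.
    theorem length_myReverse {α : Type} (xs : List α) :
        (myReverse xs).length = xs.length := by
      induction xs with
      | nil => simp [myReverse]
      | cons x xs ih => simp [myReverse, ih, List.length_append]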



Well, if you created a new data structure not known before, and proved theorems about it, that's new math. If you copied a well-known data structure, and proved theorems about it, that's not new math.

What do you think mathematicians do? They just examine certain things rigorously and with utmost scrutiny. These things are simpler than things appearing in real life. Software interfaces with real life, so cutting-edge math is really a subset of what's needed for software. That is obvious to me, but I don't have a citation. You can cite me, if you want to.

Finally, dependent type theory as it is done today in Coq, Lean, etc. is not nice enough a logic to attract mathematicians. The reason for that is that it is not nice enough a logic, full stop. So why would it be nice enough for computer scientists? Oh, because your problems are simpler, so you don't need a nice logic?

Saying that Martin-Löf type theory can do arbitrarily fancy math is both false and right. Just as saying that anything can be programmed in assembler is both false and right. Yeah, with enough focus and discipline you probably can, but who would want to?


> The reason for that is that it is not nice enough a logic, full stop. So why would it be nice enough for computer scientists?

Type theory has many attractive properties over traditional foundations like set theory. See, for example: https://golem.ph.utexas.edu/category/2013/01/from_set_theory...


Citing your link:

> At this point, however, you may be feeling that type theory sounds very complicated. Lots of different ways to form types, each with their own rules for forming elements? Where is the simplicity and intuitiveness that we expect of a foundational theory?

It's just not there, Mike. Type theory is neither simple nor intuitive, and it doesn't make a good foundation.

Type theory certainly has advantages, but they get lost in esoteric texts like Mike's. There are two simple advantages over set theory within first-order logic: general variable binding, and that things are kept separate, for example that a boolean is not a number. I show how to do general variable binding without static types in [0], and how to still keep things separate in [1], all without having to divide the mathematical universe into static types.
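
To make the "kept separate" point concrete, here is a two-liner of my own in Lean 4 syntax (nothing from Mike's post): a boolean simply does not type-check where a number is expected, whereas in set theory both are just sets.

    def b : Bool := true
    def n : Nat  := 3

    #check n + 3     -- fine: Nat + Nat
    -- #check n + b  -- rejected by the elaborator: Bool is not a Nat,
    --               -- and no coercion between them is applied by default.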


Professional mathematicians are today using dependent types in Lean to prove leading-edge mathematics. Google Lean and mathlib.
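
For a concrete taste (my own rendering; I'm assuming the lemma name Nat.exists_infinite_primes, so check mathlib for the exact form): Euclid's theorem is available as an ordinary dependent-types statement.

    import Mathlib

    -- Assumed mathlib name: Nat.exists_infinite_primes.
    -- The proof already lives in the library; this just restates it.
    example (n : ℕ) : ∃ p, n ≤ p ∧ p.Prime :=
      Nat.exists_infinite_primes n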


I am aware of Lean and mathlib. It's the group around Kevin Buzzard which is doing mathlib, and it is for sure a great effort. Being practical, they latched onto the theorem-proving facilities that are currently available, Lean certainly being one of the best at this moment. But just because they can make type theory work for them, with much effort, doesn't mean that it is the right way to do formal math. They are still a drop in the ocean compared to all the mathematicians who have never touched a proof assistant. I find it telling that in the Liquid Tensor Experiment Peter Scholze didn't touch Lean himself, but worked through this group as intermediaries, who mediated the type theory details for him.

The state of proof assistants is such that currently almost all of them are based on some form of type theory. There are exceptions like Mizar and Metamath (Zero), which are based on first-order logic. Type theory seemed to be the only way to have proper general variable binding, via lambdas, and that is the main reason for its popularity today. Type theory is a particular mathematical theory, though, and not particularly well suited to most things mathematicians like to do, like forming completions or subsets of types/sets. Type theory also cannot deal with undefinedness properly. Of course, there are methods of working around these issues like coercions and option types, but they are cumbersome.
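
To spell out the undefinedness point, here are the two standard workarounds in a toy example of my own, in Lean 4 terms:

    -- Workaround 1: make the partial operation total with a junk value.
    -- Lean's built-in natural division simply defines n / 0 = 0.
    #eval (5 : Nat) / 0    -- 0, not an error

    -- Workaround 2: make undefinedness explicit with an option type.
    -- Now every caller has to unwrap the result (or insert coercions).
    def safeDiv (n m : Nat) : Option Nat :=
      if m = 0 then none else some (n / m)

    #eval safeDiv 5 0    -- none
    #eval safeDiv 6 2    -- some 3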

Until recently I also thought that a minimum of type theory was necessary to get general variable binding. For example, I tried to embed set theory within simply-typed higher-order logic ([2]).

But since last year I know that you don't need type theory for variable binding (see [0])! Of course, (dependent) types are still useful, but now you can work with them in a much more flexible way, without having to divide the mathematical universe into separate STATIC and A PRIORI chunks labelled by types (see [1], but that needs to be updated with the understanding gained from [0]).

If type theory works for you, fine. But I know there is a better way.

[2] https://link.springer.com/chapter/10.1007/978-3-319-20615-8_...


You might also want to check out nuPRL. It uses a completely different approach (computational types). It might be a better match for what you are talking about.


I've heard of nuPRL; it seems to also be based on type theory, with special emphasis on constructivity. It's basically the same as Coq, foundationally. At a summer school Bob Constable once said that he would refuse to fly in a plane with software which had been verified formally using classical logic, instead of constructive logic... Well, I wouldn't mind an intuitionistic verification, but I'd definitely take even "just" a classical one.
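
For flavour, this is what a "classical" step looks like in Lean 4 (a toy example of my own): double negation elimination via excluded middle, which a constructivist would refuse to use, and which I'd still happily accept in a verification.

    -- Classical.em p : p ∨ ¬p is axiom-backed, not computed.
    example (p : Prop) : ¬¬p → p := fun hnn =>
      match Classical.em p with
      | Or.inl hp  => hp                -- p holds: done
      | Or.inr hnp => absurd hnp hnn    -- ¬p contradicts ¬¬p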


No, it is actually quite different. nuPRL starts with an untyped programming language, and you then prove that an untyped expression has a certain behaviour. The behaviour is called a Type, but it is fundamentally a very different idea from Martin-Löf type theory (IMHO). They do say that it is MLTT, and in principle they are right, but MLTT is as powerful as set theory, so that is true of anything mathematically. Lean, for example, supports non-constructive mathematics, but it is still based on a type theory. Anyways … YouTube has a great talk about the ideas: https://www.youtube.com/watch?v=LE0SSLizYUI


Thanks for a great reply. I would like to check out your references. However, you mention more than one, but I can only see [2]?


No worries, happy to talk about this stuff all day long! The links are in a post higher up; they are:

[0] https://obua.com/publications/philosophy-of-abstraction-logi...

[1] https://obua.com/publications/practical-types/1/


Same with me. It is really interesting stuff!



