> Somewhat similar to intuitionism and constructivism - interesting to people interested in computability but considered almost silly or at least niche by “mainstream” math theoreticians.
This was a source of minor culture clash for me: I spent some time over the past couple of years working through some Coq-based educational materials (mainly Software Foundations, led by Pierce) and talking to one of the grad student coauthors about his PL theory/type theory research. From this material I rather strongly internalized that "we" don't automatically assume the Law of the Excluded Middle, but at least warn people if we have to use it (as it makes proofs nonconstructive: you no longer have a method for calculating the objects you are talking about). Later, I commented on something on Scott Aaronson's blog and happened to say something like "wait, but doesn't that require excluded middle, and you didn't explicitly say so?" and Aaronson was rather confused, replying roughly "we normal mathematicians assume excluded middle all the time, and we don't say so".
So yeah, there is a strong trend in favor of those things in some parts of type theory, but other mathematicians are then surprised that anyone would care. (And Aaronson in particular is someone very interested in, and expert on, computability, though one might say from a more traditional core computer science perspective.)
I'm going to reply to my own comment to make a digression.
It's an intuitionistic-logic theorem that the excluded middle is "irrefutable": you can never exhibit a counterexample to it, and you can prove that constructively. This leads to something called the double negation translation, under which every theorem that could be proved using excluded middle can instead be proven constructively to be "irrefutable" (rather than "true"), meaning it is impossible for an exception or counterexample to the theorem to exist. So you get ~~P where classical logic could establish P (and deducing P from ~~P is equivalent to excluded middle).
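For concreteness, here is roughly how that irrefutability fact looks in Coq (the statement mirrors a standard Software Foundations exercise; the particular proof script below is just one way to do it, and it assumes no axioms):

```coq
(* Excluded middle is irrefutable: its double negation is provable
   constructively, with no axioms assumed. *)
Theorem excluded_middle_irrefutable : forall (P : Prop),
  ~ ~ (P \/ ~ P).
Proof.
  intros P H.   (* H : ~ (P \/ ~ P); the goal is False *)
  apply H.      (* so it suffices to prove P \/ ~ P *)
  right.        (* we show ~ P ... *)
  intros p.     (* ... by assuming P and deriving False *)
  apply H.      (* again it suffices to prove P \/ ~ P *)
  left.         (* this time via the left disjunct *)
  exact p.
Qed.
```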
One argument for not assuming excluded middle in proofs is then that the deductive system automatically shows you exactly which conclusions are constructive and which are not. You can interpret your deductive system as giving the same answers, just with additional information or metadata about whether each answer was obtained constructively or nonconstructively. If you prove P, you did it constructively: you can exhibit the objects that P talks about, or extract an algorithm for locating or identifying them. If you prove only ~~P, then "it" was done nonconstructively, if "it" means proving P itself: your argument shows it would be impossible for the objects in question definitively not to exist, but you don't necessarily know how to find or calculate them.
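To make the asymmetry concrete: going from P to ~~P is constructive, but going from ~~P back to P is exactly the step that needs excluded middle (or something equivalent). A minimal Coq sketch of both directions, assuming nothing beyond the core logic:

```coq
(* Double negation introduction is constructive... *)
Theorem dni : forall (P : Prop), P -> ~ ~ P.
Proof.
  intros P p np. apply np. exact p.
Qed.

(* ...but double negation elimination only follows once excluded
   middle is assumed; this is the nonconstructive step. *)
Theorem dne_from_em :
  (forall P : Prop, P \/ ~ P) ->
  (forall P : Prop, ~ ~ P -> P).
Proof.
  intros EM P nnP.
  destruct (EM P) as [p | np].
  - exact p.                        (* the case where P holds *)
  - exfalso. apply nnP. exact np.   (* ~ P contradicts ~ ~ P *)
Qed.
```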
So one can say, again, that in working this way you're learning the same stuff about your underlying subject matter, but with additional information attached at the end!