I agree it sounds like pretty naive enthusiasm. Not just about how hard the formal verification problem is, but about how hard it is to get ANY kind of programming system up to the point where it's actually usable.
However, every time somebody starts going negative on formal verification by talking about decidability, I get itchy. Sure, we know, all interesting properties of programs in all interesting languages are undecidable in general. Rice's theorem, blah, blah, blah. For any given property and any given language, I can show you a program in that language such that you cannot determine whether that program has that property.
... but it turns out that most actually INTERESTING PROGRAMS aren't like that. When you really write code to get something done, you generally have a reason to believe that code is going to work. That means that the code is at least approximately correct by construction, and correct in a way that's comprehensible and locatable. Often the machine will be able to find a proof that it's correct. If not, you'll often be able to provide hints. And if neither you nor your computer can find a proof, that probably means you don't understand the code enough to want to use it to begin with.
Oh, and on computational cost of verification, you can get some relief by passing around precomputed proofs along with the source code, so they only have to be checked on installation, or incrementally recomputed when the code is modified, rather than being rebuilt from scratch every time.
A lot of people take undecidability to mean “no program can be proven to terminate” when in reality it means “there exist programs whose termination cannot be proven,” and, like you said, most of the useful programs we write can be shown to terminate just fine.
I agree in general and also take issue with the "we can never prove interesting things about our programs" statements that are sometimes uttered, but it is surprisingly easy to write certain programs that nobody to this date knows whether they terminate, such as: "find the least even number > 2 that cannot be written as the sum of two primes". :)
You forget about Rice’s theorem. Termination is not that exciting, but the existence of race conditions and a million other properties are - and those are not possible to prove true in general.
No, docandrew is correct. You are incorrectly applying Rice's theorem.
Rice's theorem states that those properties can't be automatically decided in general. But that's irrelevant to this discussion, because this "magmide" tool doesn't claim to do that. It merely checks proofs that have already been found (e.g., by a human), which is trivially decidable.
I disagree that most interesting programs don't need things that are Hard to Verify. For instance, approximately every C program would benefit from correct pointer analysis, yet this remains an open research question.
For a personal example, I had a recent lunch interaction with our security team about some work I wanted to do formally verifying bits of a bootloader we have. Halfway through the conversation, one of them pulls up a report they made about how susceptible our specific CPU is to power-glitching. This is something the system design ought to be resistant to, so now my "straightforward idea" has a bunch of thesis-sized holes in it unless I either cut that out of the proof (what will probably happen) or I can make it someone else's problem.
Formal methods are just hard, and open research questions lurk in every slightly dark corner.
> approximately every C program would benefit from correct pointer analysis, yet this remains an open research question.
To me this falls under "you don't understand the code enough to want to use it," which in the case of unsafe pointers is a well-established conclusion. In C-land, even well-vetted security-related code keeps delivering pointer bugs.
Pointer analysis is not just about lifetimes. It asks questions like “can these two names alias?, must these two names alias?, what objects can this name point to?”
This is considerably harder than ownership analysis.
Ok, which of those programs can _only_ be implemented via aliasable pointers?
I don't think anyone is arguing that C's loose memory model and rampant undefined behavior are amenable to verification. The point is that neither of those are necessary in many of the places they've been used.
Passing an object by reference to a function creates an alias. Storing a pointer as a field creates an alias. It would be very odd to have a program with zero aliasing relationships. To do this you would need to never assign from a pointer type.
This has nothing to do with C (mostly - you havoc more frequently when analyzing C if you want to be sound). Pointer Analysis papers usually target Java and in that space it is still stunningly difficult.
Sure, but how many of those C programs are inherently hard to verify vs how many of them are hard because they happen to be built with tools that make them hard to verify?
I don't think those are fully separable concepts in legacy, spaghetti code.
Regardless, I agree that if you're willing to architect your system around the need to formally verify it, the problem is somewhat tractable and getting more so every year as tools improve. Non-experts still might need the occasional "justifiable shortcut" instead of proving everything, but it's doable.
Sure, I don't think anyone is realistically expecting to verify all existing legacy spaghetti code -- _that's_ an interesting research direction, at best.
The point upthread (at least the point I took) is not that existing code doesn't do things to make it difficult to verify, but rather there's no reason inherent to the desired behavior of those programs that it _needs_ to be difficult to verify them. Nor is there good reason going forward to not lean towards tools that make verification more tractable for anything new we write.
> However, every time somebody starts going negative on formal verification by talking about decidability, I get itchy. Sure, we know, all interesting properties of programs in all interesting languages are undecidable in general. Rice's theorem, blah, blah, blah. For any given property and any given language, I can show you a program in that language such that you cannot determine whether that program has that property.
Decidability has nothing to do with this kind of formal verification. The tool doesn't have to decide the correctness of a program. The tool merely needs to check the validity of a proof of the program's correctness, which is rather trivial. Coming up with the proof is the hard part, and that responsibility still falls mostly on humans.
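This is exactly how interactive provers split the work today. A minimal Lean sketch of the division of labor:

```lean
-- Finding the proof term (here, knowing to point at Nat.add_comm) is the
-- human's job; the kernel merely checks it, which is fast and decidable.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```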