But in a way, imperative is more natural, because it captures the notion of computation more precisely. Haskell -- like other pure FP languages -- is built around an approximation of denotational semantics, which does have a bit of a mismatch with computation (not to say it isn't extremely useful much of the time). Anyway, mathematical thinking about programs didn't start with PFP, nor is PFP the most common way of formalizing programs. See:
http://research.microsoft.com/en-us/um/people/lamport/pubs/s...
> I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering—namely, mathematics.
It makes me sad that some PFP enthusiasts enjoy the mathematical aspects of it -- as they should -- yet are unfamiliar with the more "classical" mathematical thinking. I think it's important to grasp the more precise mathematics first, and only then choose which approximations you'd make in the language of your choice. Otherwise you get what Lamport calls "Whorfian syndrome — the confusion of language with reality".
Dijkstra, in his "On the Cruelty of Really Teaching Computing Science", makes a very similar point: that people should learn how to reason about programs before they learn to program.
This is why dependent types are a great framework to program in. If a program is worth writing, it should at least contain everything the programmer thinks about the program, including the reasoning for why it is the program he wants to write!
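As a concrete (if modest) illustration of the idea, here is a minimal sketch in Haskell, using GADTs and DataKinds as a stand-in for full dependent types; the names (Nat, Vec, Add, append) are mine and not from the thread. The point is that append's type records part of the reasoning about the program, and the compiler checks it.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

    -- Type-level natural numbers.
    data Nat = Zero | Succ Nat

    -- A vector whose length is part of its type.
    data Vec (n :: Nat) a where
      VNil  :: Vec 'Zero a
      VCons :: a -> Vec n a -> Vec ('Succ n) a

    -- Addition of type-level naturals.
    type family Add (n :: Nat) (m :: Nat) :: Nat where
      Add 'Zero     m = m
      Add ('Succ n) m = 'Succ (Add n m)

    -- The signature records part of the reasoning: appending a length-n
    -- vector to a length-m vector yields a length-(n + m) vector, and the
    -- compiler checks that the implementation respects this.
    append :: Vec n a -> Vec m a -> Vec (Add n m) a
    append VNil         ys = ys
    append (VCons x xs) ys = VCons x (append xs ys)

In a fully dependently typed language (Agda, Idris, Coq) one can go further and embed arbitrary proofs about the program in its type.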
Right. In computational semantics there are two schools: the "school of Dijkstra" (now championed most vocally by Lamport), which has largely taken hold in the field of formal verification, and the "school of Milner" (Backus?), which has largely taken hold in the field of programming language theory. The former reasons in concepts and abstract structures (computations, Kripke structures); the latter reasons in languages ("calculi").
The interesting philosophical question is this: can programs be said to exist as concepts independent of the language in which they are coded (in which case the language is an artificial, useful construct), or not (in which case the concept is the artificial, useful construct)?
Whatever your viewpoint, the "conceptual" state machine math is a lot simpler than the linguistic math offered by PFP.
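To make the contrast concrete, here is a toy illustration of the kind of state-machine math meant here (my own example in TLA-style notation, not taken from the thread): an hour clock described by one state variable $h$, an initial condition, and a next-state relation.

    $Init \triangleq (h = 0)$
    $Next \triangleq (h' = (h + 1) \bmod 12)$

A behavior is any sequence of states whose first state satisfies $Init$ and whose successive pairs of states satisfy $Next$; reasoning about the clock is ordinary reasoning about sets, predicates, and sequences, with no particular programming language in between.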
> Haskell -- like other pure FP languages -- is built around
> the approximation of denotational semantics,
Interesting, do you have any references for this? I thought that the primary reason for purity was to enable equational reasoning, but I have no sources for this. Also, AFAIK, there are no formal semantics for Haskell?
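For what it's worth, the equational reasoning usually meant here is the ability to substitute equals for equals anywhere in a program. A tiny Haskell illustration (my own, not from the thread):

    -- Because 'double' is pure, 'double 3' denotes the same value wherever it
    -- appears, so the two expressions below are interchangeable by construction.
    double :: Int -> Int
    double x = x + x

    lhs, rhs :: Int
    lhs = double 3 + double 3
    rhs = let y = double 3 in y + y   -- substituting equals for equals

    main :: IO ()
    main = print (lhs, rhs)           -- (12,12); if 'double' had side effects,
                                      -- the rewrite would not be valid in general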