Just for the record, I am as sick of this part of the book showing up on HN as everyone else. I do not post it here, other people do, and I apologize in advance for it showing up here all the time half-finished.
But, if you have a problem with this, I ask that you write code proving that the copy() function terminates for all possible inputs. Or, that you write a function that can check whether a C style string is valid. If someone can do that it'd help me with the book.
Please don't feel bad or sorry here. Your critique of the code is VALID. Besides, you are not the only one who has understood the problem in K&R. I had a boss who had been aware of this since the '90s and talked about it; I was quite ignorant of his points back then, until I got bitten myself. But I am glad you rediscovered the problem and made the effort to write it up on the web.
K&R has its value in historical context. I hope you are not discouraged by the general reaction, and I really think we should put more effort into creating new material to teach people in a modern context. Please keep on working on it!
For people who say "C strings are null-terminated and you don't pass their length, get used to it", let me point out https://www.securecoding.cert.org/confluence/display/seccode..., which is part of the "CERT C Secure Coding Standard", and the new bounds-checking features in C11 (which I haven't actually seen yet).
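Since the parent hasn't seen the C11 bounds-checking interfaces in the wild: they're Annex K, which is optional, and most libcs (glibc included) don't ship it. Here's a rough sketch of the call pattern, falling back to a plain bounded copy when `strcpy_s` is absent; the `bounded_copy` wrapper name is mine:

```c
/* Sketch of the C11 Annex K ("bounds-checking") interface. Annex K is
 * optional and rarely implemented, so we guard on the feature-test macro
 * and otherwise do an equivalent bounded copy by hand. */
#define __STDC_WANT_LIB_EXT1__ 1
#include <stddef.h>
#include <string.h>

/* Copy src into dst[dstsize], always NUL-terminating on success.
 * Returns 0 on success, nonzero if src plus its terminator won't fit. */
int bounded_copy(char *dst, size_t dstsize, const char *src)
{
#ifdef __STDC_LIB_EXT1__
    /* Annex K variant: diagnoses overflow instead of silently invoking UB. */
    return strcpy_s(dst, dstsize, src);
#else
    size_t n = strlen(src);
    if (dstsize == 0 || n >= dstsize)
        return 1; /* would not fit; dst left untouched */
    memcpy(dst, src, n + 1); /* +1 copies the terminator too */
    return 0;
#endif
}
```

Either branch refuses to write past `dstsize`, which is the property the CERT rule is after.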
"because it does not terminate cleanly for most possible inputs"
"Most possible inputs", meaning "random garbage"? Insisting that an algorithm "terminate cleanly" on every possible input is... a somewhat high standard.
Nope, you don't, in this specific case. My issue is with him asserting that an algorithm is automatically "broken" because you can't determine whether or not it will terminate. That standard would exclude every single programming language in common use.
Quite a few algorithms can be proven to terminate on all inputs (even malformed ones). Zed is asserting that when you have available an algorithm that will definitely terminate, you should tend to prefer it over an algorithm that may not. I tend to agree, especially in an educational text because that's where a lot of people will start picking up habits, both good and bad. Non-terminating algorithms aren't broken in general, but this one is because there is a terminating alternative.
Any Turing-complete language will allow you to implement functions that don't terminate, and indeed in general implement functions for which it is impossible to determine whether they terminate or not.
Better?
"Non-terminating algorithms aren't broken in general, but this one is because there is a terminating alternative."
An algorithm is only "broken" if it doesn't work. That's not the case here. Among working algorithms, one may be better than another because it is faster, takes up less memory, is more robust when faced with garbage inputs... many engineering tradeoffs. His implementation is probably safer, but it's going to use more space and be slower. Now, that's likely a good tradeoff for most situations, but not all.
However, his claim that the algorithm is "defective" because it doesn't work on random garbage is absurd. It's as if someone claimed that a car was "defective" because it grinds to a halt when you fill the tank with soft-serve ice cream, or that the most common algorithm for integer addition is "defective" because it doesn't work for vectors.
To be specific, my main issue is with this statement: "This provides a formal proof that the function is defective because there are possible inputs that causes the while-loop to run forever or overflow the target."
By that standard, one can easily construct a proof that any Turing-complete language is "defective".