
There's an endless debate in programming languages about how to handle errors in software. Error codes, exceptions, optional types, NaNs.

One solution I try is to redesign the cause of the error out of the system. For example, compilers often have fixed maximums on the language constructs they support, like the maximum length of a string literal. Then, when the limit is exceeded, an error message has to be concocted and emitted, error recovery has to be done, the compiler has to avoid generating an object file, etc.

I don't know what other compilers do, but one day I realized that it was less work in the compiler to not have a limit at all, and to just keep enlarging the string literal buffer as needed. That left only one limit on all these things: globally running out of memory. Globally running out of memory is a fatal error for a compiler, so error recovery isn't necessary. Just print a message and exit.
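
A minimal sketch of that pattern in C, assuming a hypothetical lexer buffer; the names here (xrealloc, StrBuf, strbuf_putc) are illustrative, not from any particular compiler:

    #include <stdio.h>
    #include <stdlib.h>

    /* The one remaining failure mode: out of memory is fatal,
       so just print a message and exit -- no error recovery. */
    static void *xrealloc(void *p, size_t n) {
        void *q = realloc(p, n);
        if (!q) {
            fprintf(stderr, "fatal: out of memory\n");
            exit(EXIT_FAILURE);
        }
        return q;
    }

    typedef struct {
        char *data;
        size_t len;
        size_t cap;
    } StrBuf;

    /* Append a character, doubling the buffer as needed. There is
       no "string literal too long" error to report or recover from. */
    static void strbuf_putc(StrBuf *b, char c) {
        if (b->len == b->cap) {
            b->cap = b->cap ? b->cap * 2 : 64;
            b->data = xrealloc(b->data, b->cap);
        }
        b->data[b->len++] = c;
    }

    int main(void) {
        StrBuf lit = {0};
        for (int i = 0; i < 1000000; i++)
            strbuf_putc(&lit, 'x');   /* no length limit to hit */
        strbuf_putc(&lit, '\0');
        printf("built a literal of %zu bytes\n", lit.len - 1);
        free(lit.data);
        return 0;
    }

The point of the doubling strategy is that appends stay amortized O(1), so removing the limit costs essentially nothing.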

This works great. Large numbers of errors just go away, like "line too long", "string literal too long", "too many cases in switch statement", "too many symbols", etc.

There are, of course, still some limits: object file formats often have hard limits, and you don't want to overflow the program stack.



Yup, these days for any serious use you can afford to throw memory at the problem. So many things become so much easier when the limits are pushed back into out-of-memory or integer-overflow territory.

30 years ago I had to live with problematic memory limits and made some design decisions that over time I came to hate, because I had to shoehorn data into EMS memory banks. Data objects ended up sliced and diced into separate arrays, and they never pointed directly to the relevant things, because such pointers would always have been into a different bank and the only possible allocation was a whole bank.



