
It's on my free-time backlog to spend more time with Fil-C, so I'm not disagreeing from lack of interest.

Most of Annex J is unrelated to memory safety. No, C has explicit UB because there wasn't a defined behavior that made sense to codify in the standards process: signed overflow, invalid printf specifiers, and order of evaluation, for example. I assume Fil-C doesn't fix things like uninitialized memory or division by zero either.

Wasn't really getting into the GC because that's "just" an engineering issue, rather than a structural issue with the approach.

It'd be great to not only terminate on detecting these issues as Fil-C does, but prevent them from happening entirely.



Fil-C absolutely does fix uninitialized memory.

It’s on my list to solve division. It’s easy to do, and also not super important for the security angle I’m addressing, but I’m doing it precisely to provide clarity to these kinds of discussions.

I’ve mostly tackled signed overflow. I’ve fixed all the cases where signed overflow would let you bypass Fil-C’s own bounds checks. It’s not hard to fix the remaining cases.

In short: any remaining UB in Fil-C is just a bug to be fixed rather than a policy decision.

The reason why C has UB today is policy and memory safety.

It’s a goal of Fil-C to address memory safety violations by panicking because:

- That’s the most secure outcome.

- That’s the most fantastically compatible with existing C/C++ code, allowing me to do things like Pizlix (a memory-safe Linux userland)


How does Fil-C "fix" uninitialized memory?


I assume under the same "memory safety" rationale it just zeroes the RAM. That's "safe" and compatible with C.

In a good language this mistake is caught at compile time: in Rust, the compiler says "Hey, I don't see how this variable is initialized before use," and you slap your forehead and fix it. But zeroing everything is technically safe.

For the Casey "Handmade" Muratori type, zeroing might even seem like a better idea. It's cheap, and it means your code now compiles and executes; who cares about correctness?


I guess I don't see how making inherently incorrect C code "safe" by sanitizing something it shouldn't be doing anyway is actually improving the C code.


The key thing that memory safety provides is local reasoning. You can look at some important piece of code and conclude that it does the right thing, even if there are a million other lines of code somewhere else. UB makes this impossible; no matter how carefully you review dont_launch_missiles() it might launch the missiles due to an integer overflow in totally unrelated code in the same process.


I agree with you, but this feels more like "fixing" inherently unsafe code in the same way we "fixed" the elephant's foot in Chernobyl by covering it with a giant roof. It doesn't actually fix the problem, just lessens and prolongs it.


Yes, the resemblance to the sarcophagus is, I suppose, warranted. The best of bad options.

The assumption in Fil-C is that you can't or won't rewrite. So a modified C compiler which rejects the uninitialized variable (as Rust would) is not acceptable, because then what? We were supposed to make the existing C program memory safe, not reject it; anybody could write a "compiler" which rejects C programs as unsafe.

This is the same assumption C++ had. C++26 lands "Erroneous Behaviour" for this purpose. Previously, if you wrote `int x; foo(x);` that was Undefined Behaviour in C++ just like in C: game over, anything might happen if this executes.

In C++26 `int x; foo(x);` is Erroneous but not Undefined. It might tell you (perhaps at runtime) that you forgot to initialize x, because doing so is an error - but if it presses on, it will behave like `int x = SOMETHING; foo(x);` where SOMETHING was chosen by your compiler, implemented exactly the way you'd expect: by quietly initializing your uninitialized variable.


By initializing it to zero.




