Because something you do relatively early in the compiler is throw away all of the structure of the source, including flattening nested expressions into a linear IR. Mapping back to line numbers in the debugger is a bit hacky to begin with, and mapping back in an even more fine-grained way would be more complex still.
It's hacky in the sense that the way line/column information makes it through, from the source text, through the intermediate representation and optimizations, into the generated code and debug information, is somewhat ad hoc.
In Clang/LLVM, for example, nodes are annotated with debug information using LLVM's metadata system. However, it is strictly optional for any given transformation to preserve the metadata on IR nodes. So what makes it through after optimizations is kind of a "best effort" affair.
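To make that concrete, here is a hand-written sketch (not actual compiler output) of what those attachments look like: each IR instruction can carry a `!dbg` reference to a `DILocation`, which records both a line and a column. A pass that rewrites an instruction without copying its `!dbg` over to the replacement silently drops the location, which is exactly the "best effort" behavior described above.

```llvm
define i32 @f(i32 %a, i32 %b) !dbg !5 {
  %cmp = icmp sgt i32 %a, %b, !dbg !8   ; location metadata attached per instruction
  %r = zext i1 %cmp to i32, !dbg !9
  ret i32 %r, !dbg !9
}

; DILocation carries both line *and* column, plus the enclosing scope.
!8 = !DILocation(line: 3, column: 12, scope: !5)
!9 = !DILocation(line: 3, column: 10, scope: !5)
```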
But this is not ad-hoc at all. This is how it is deliberately designed so it degrades gracefully depending on what optimizations you choose.
Not all compilers even really degrade in this way.
GCC does post-hoc variable tracking (and thus is mostly unaffected by optimization, except when values are optimized out completely). For declarations and statements, the info is always on the declaration, so it won't be lost.
Both compilers guarantee that at -O0, all debug info will be kept.
True (currently, at least; I see no reason this is a necessary step), but that suggests you could perform a trivial expansion of lines like `if (a.what() && b == c && (d == f || d < 5))` into multiple lines like you see in this article, and then use the exact same hack to carry those pseudo-lines through the final stages and into your debugger. You could even explode each piece into extra variables, so you can see the result of `a.what()` without re-evaluating it.
Honestly, even if you had to hit an 'expand this statement' button in your debugger to see `if x() && y()` spread into:
x_val = x()
if x_val
    y_val = y()
    if y_val
        ...
it would completely remove the necessity to write strange things to get around this limitation. Why do we have compilers and a huge variety of languages if not to stop writing strange things unnecessarily?
Even more beneficial, it would give you a much better idea of what, in fact, the computer thought you meant. Seeing a complex nested structure flattened out would give you a more visual indication of what's going on, allowing you to spot misunderstandings earlier.
I don't think it's any harder to carry line-and-column annotations through the compilation process than it is to carry just line annotations. In fact many compilers do. Of course your debug information gets larger, but that's not usually a problem during development.
On the other hand, an optimizing compiler already makes it pretty hard to single-line-step through a program (what with reordering, CSE, and more sophisticated transforms). Single-expression-stepping would be an even more difficult "debugging illusion" to provide.
On the consumer side, knowing where you are is actually the least hard problem in optimized debugging, compared to things like tracking variables that got split up into multiple disjoint registers, or that live part in a register and part in memory, etc.
As for knowing where you are, the line table will already tell you that the column number changed as the pc address advanced while the line number did not. Thus, you know that you moved to a new expression, but not a new line.
GDB just doesn't happen to support this, and simply steps until the line number changes.
But it's not fundamentally hard from the debugger perspective.
On the producer side:
When it comes to knowing where you are, you know you can't produce a 1-1 mapping, so you don't try. You can, of course, properly present inlined functions as if they were function calls, and gdb will even do this. But there are times when lines or expressions were merged, and there simply is no right answer.