I think you're overcomplicating this. In a language with lazy evaluation, you can't know what gets evaluated without looking at the whole program (in the worst case). It's in that sense that performance becomes non-local.
Here's a simple, specific example. Take the Haskell expression [1..n], for some big n. There is no general answer to the question "what would be the performance implications of replacing [] with [1..n]" – it depends on the context of [].
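To make the context-dependence concrete, here's a minimal Haskell sketch (the function names firstOnly and wholeList are my own, purely illustrative): the same expression [1..n] costs O(1) under one consumer and O(n) under another, so the substitution's cost can't be read off locally.

```haskell
-- Under laziness, [1..n] is a thunk; its cost depends on how much of
-- the list the consumer actually forces.

-- Forces only the first cons cell: O(1) work, regardless of n.
firstOnly :: Int -> Int
firstOnly n = head [1 .. n]

-- Forces the entire spine: O(n) work and allocation.
wholeList :: Int -> Int
wholeList n = length [1 .. n]

main :: IO ()
main = do
  let n = 10000000 :: Int
  print (firstOnly n) -- cheap even for huge n
  print (wholeList n) -- cost proportional to n
```

In a strict language both consumers would pay the full O(n) construction cost up front; under laziness only the second one does.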
In a strict language, you can say (roughly at least) that replacing [] with [1..n] will add at minimum a certain number of extra CPU cycles and a certain amount of additional allocation. This kind of reasoning is pretty rough and ready, but it is accurate enough to be useful for reasoning about performance in practice.
I note that Simon Peyton Jones has said these exact words in a public talk:
"Laziness makes it much, much harder to reason about performance."
I think it's extremely unlikely that he's mistaken or confused on this point.
> In a strict language, you can say (roughly at least) that replacing [] with [1..n] will add at minimum a certain number of extra CPU cycles and a certain amount of additional allocation.
It's not "at minimum": in a strict language you always pay the worst-case cost of constructing all n elements, whereas the lazy setting gives you an amortized cost.
I think you might be misunderstanding what I meant by "at minimum". I'm talking about the case of replacing a [] constant in some arbitrary piece of code with something like [1..10000]. Given strict semantics you'll of course incur at least the time and space costs associated with constructing the list (that's the "at minimum" bit), but you might also incur additional costs depending on what the rest of the code does with the list. For example, it might execute some arbitrarily expensive computation if the list has more than 20 elements, or whatever.
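A quick sketch of the "additional costs" point (written in Haskell for consistency, though the argument is about strict semantics; consume and expensive are hypothetical names of my own): construction cost is only a floor, because the consumer may do arbitrarily expensive extra work once the list is long enough.

```haskell
-- Hypothetical stand-in for arbitrarily expensive work.
expensive :: Int -> Int
expensive k = sum [1 .. k * 1000]

-- Hypothetical consumer: behaves differently once the list has
-- more than 20 elements, so replacing [] with [1..10000] costs the
-- list construction PLUS whatever the expensive branch does.
consume :: [Int] -> Int
consume xs
  | length xs > 20 = expensive (length xs)
  | otherwise      = sum xs

main :: IO ()
main = do
  print (consume [])          -- cheap: empty-list branch
  print (consume [1 .. 10000]) -- construction cost plus the expensive branch
```

Under strict semantics the construction cost is guaranteed (the "at minimum" part); the consumer's behaviour only ever adds to it.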
I think you might have thought I was saying that given a strict semantics it was somehow still possible that you wouldn't necessarily incur the full time and space penalty for constructing n elements (which of course is not the case).