I suppose the difference is that someone debugging at that level will be offered some sort of "dump" command or similar, whereas someone debugging in a browser is offered a "View Source" command. It's just a matter of convention and expectation.
If we wanted browsers to be fed code that for performance reasons isn't human-readable, web servers ought to serve something processed far beyond mere minification plus gzip. It could be more like bytecode.
Let's be honest, a lot of non-minified JS code is barely legible either :)
For me, I guess what I was getting at is that I consider the source to be the stuff I'm working on. The minified output I won't touch; it's output. But it is input for someone else, and available via View Source, so that does muddy the waters, just like decompilers produce "source" that no sane human would want to work on.
I think semantically I would consider the original source code the "real" source, if that makes sense. The source is where it all comes from; the rest is various types of output from further down the toolchain. I don't know if the official definition agrees with that, though.
>If we wanted browsers to be fed code that for performance reasons isn't human-readable,
Worth keeping in mind that "performance" here mostly means saving bandwidth costs for the host. Every unnecessary whitespace or character is a byte that has to be served, so minify and save on that bandwidth and thus $$$$.
The performance difference on the browser end between original and minified source code is negligible.
Last time I ran the numbers (admittedly quite a few years ago now), the difference between minified and unminified code was negligible once you factored in compression, because unminified code compresses better.
What really adds to the page footprint is all those trackers, adverts and, in a lot of cases, framework overhead.
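If anyone wants to re-run that comparison themselves, here's a minimal sketch using Node's built-in zlib and fs modules. The dist/ paths are placeholders; point them at your own build output:

```ts
// Compare gzipped sizes of an unminified vs. minified bundle.
import { gzipSync } from "node:zlib";
import { readFileSync } from "node:fs";

// Hypothetical paths to the same bundle before and after minification.
for (const path of ["dist/app.js", "dist/app.min.js"]) {
  const raw = readFileSync(path);
  const gz = gzipSync(raw, { level: 9 }); // max compression, like a tuned server
  console.log(`${path}: ${raw.length} B raw, ${gz.length} B gzipped`);
}
```

The interesting number is the gap between the two gzipped sizes, not the raw ones; that gap is what minification actually buys you over the wire.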
The way I see it, if someone needs to minify their JavaShit (and HTML?! CSS?!) to improve user download times, that download time was horseshit to start with and they need to rebuild everything properly from the ground up.
Isn't this essentially what WebAssembly is doing? I'll admit I haven't looked into it much, as I'm crap with C/C++, though I'd like to try Rust. Having "near native" performance in a browser sounds nice; curious to see how far it's come.
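For what it's worth, the browser-side API is already pretty approachable even without touching C/C++ or Rust. A minimal sketch of loading a module (the add.wasm file and its exported add function are made up for illustration):

```ts
// Fetch, compile, and instantiate a WebAssembly module in one step,
// then call one of its exports. Runs in any modern browser (as a module).
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("add.wasm"), // hypothetical module exporting add(i32, i32) -> i32
  {}                 // import object; empty because this module needs none
);

const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // -> 5
```

The heavy lifting (compiling Rust or C to .wasm) happens ahead of time; the browser just receives a compact binary, which is pretty much the "serve bytecode instead of minified text" scenario from upthread.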