TCP has a built-in checksum that catches most data corruption. I believe that's why HTTP doesn't have its own: TCP should already be doing this for you.
I'm guessing that for your very large file download you hit an unusually high number of corrupted TCP segments, and a few were unlucky enough to still pass the checksum.
Or something else went wrong upstream, so the TCP segments faithfully carried whatever some backend handed over; it just wasn't what should have been served for a packet or two.
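For context on why a corrupted segment can slip through: the TCP checksum is only a 16-bit ones'-complement sum (RFC 1071 style), so any corruption that happens to preserve the sum goes undetected. A minimal sketch, with made-up payload bytes and a made-up corruption, just to show the weakness:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of the data, then complemented (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back into 16 bits
    return ~total & 0xFFFF

original = bytes([0x12, 0x34, 0x56, 0x78])
# Swapping two 16-bit words leaves the ones'-complement sum unchanged,
# so the "corrupted" data still produces a valid checksum.
corrupted = bytes([0x56, 0x78, 0x12, 0x34])

assert internet_checksum(original) == internet_checksum(corrupted)
print(hex(internet_checksum(original)), hex(internet_checksum(corrupted)))
```

Even for corruptions that don't cancel out like this, a random change has roughly a 1 in 65,536 chance of colliding with the right checksum, which starts to matter over many gigabytes on a flaky path.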
The most likely thing by far is that the download failed part way through, but the error was never reported, or the reported error was never checked.
Also, it's quite possible that the HTTP client didn't even know that the download failed: a common pattern is for the server to omit the Content-Length header entirely and simply close the connection when it's done sending all of the traffic (i.e. set the TCP FIN flag on the last data packet). If the server decides to abandon the connection early for any reason, then it will... close the connection - which the client will just interpret as the end of the body, and have no idea that the file failed to download fully.
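To make that concrete, here's one way a client can catch truncation when the server does send a Content-Length: compare it against the bytes actually written. This is a minimal sketch using the third-party requests library, with a placeholder URL and filename; recent requests/urllib3 versions may already raise on a short read, but the explicit check costs nothing. When there's no Content-Length at all, the client genuinely can't tell, which is the failure mode described above.

```python
import requests

url = "https://example.com/very-large-file.iso"  # placeholder URL
dest = "very-large-file.iso"                     # placeholder filename

with requests.get(url, stream=True, timeout=30) as resp:
    resp.raise_for_status()  # catch HTTP-level errors (4xx/5xx)
    expected = resp.headers.get("Content-Length")
    received = 0
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=64 * 1024):
            f.write(chunk)
            received += len(chunk)

if expected is None:
    # Body was delimited by connection close, so an early close is
    # indistinguishable from a complete download at this layer.
    print("no Content-Length header; cannot verify completeness")
elif received != int(expected):
    raise IOError(f"truncated download: got {received} of {expected} bytes")
```

Checking a published hash (e.g. a SHA-256 sum) after the download is the only check that covers both truncation and the rarer checksum-collision corruption.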