TAR was written for very simple/tiny machines by today's standards, and was designed to read/write full valid blocks on physical tapes with constraints on spool-up and spool-down times/distances.
I don't think any of the alternatives I listed would be impossible on an embedded device or a tape drive, though. I understand the format is old, so perhaps many of its arbitrary constraints didn't seem that bad at the time.
This format was developed a long time ago before the luxury of (a) experience and (b) newer more capable storage hardware.
It's really strange to complain that a legacy format is full of features that look bad to modern tastes and on modern hardware. How do you think we worked out which features of formats and hardware were good and which were bad in the first place?
The history in the Wikipedia page that I linked is instructive.
We used to have files spanning multiple tapes. We changed the OS to do dead reckoning toward the end of each tape so we could stop well clear of the actual end-of-tape mark. That way individual tapes could be copied and substituted if needed. It's hard to see the reason if you don't already know it.
I don't think I understand but maybe that's the point. Maybe it seems mysterious because there were other requirements at the time which were themselves already mysterious?
Because some tape drives could only read and write whole (512-byte) blocks, and the way to be reasonably sure there wasn't another file coming was to see two consecutive blocks of zeros.
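A minimal sketch of that convention: tar marks end-of-archive with two consecutive all-zero 512-byte blocks, so a reader that only ever sees whole blocks still knows when to stop. The function name here is illustrative, not a real tar(1) internal.

```python
BLOCK = 512
ZERO_BLOCK = b"\0" * BLOCK

def find_end_of_archive(data: bytes) -> int:
    """Return the block index where the two-zero-block terminator
    starts, or -1 if the archive is unterminated."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    for i in range(len(blocks) - 1):
        if blocks[i] == ZERO_BLOCK and blocks[i + 1] == ZERO_BLOCK:
            return i
    return -1

# A tiny fake "archive": one non-zero block, then the terminator.
archive = b"A" * BLOCK + ZERO_BLOCK + ZERO_BLOCK
print(find_end_of_archive(archive))  # -> 1
```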
I think I'm confused. You make it sound like the "tape reader" hardware/driver isn't talking to the "file reader" part in software. Didn't the file reader tell the tape reader the size of the file, so it would already know where the end was (how many blocks to read)?
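The header does in fact carry the size: each tar header is itself a 512-byte block, with the file size stored as octal ASCII at offset 124, which tells the reader how many data blocks follow. A quick sketch using Python's stdlib tarfile to build a one-file archive and then reading that field directly (the offsets are from the ustar layout; the variable names are just illustrative):

```python
import io
import tarfile

# Build a real one-file archive in memory with the stdlib.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tf:
    data = b"hello tape"
    info = tarfile.TarInfo("hello.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))

raw = buf.getvalue()
# Size field: 12 bytes of NUL-terminated octal ASCII at offset 124
# of the 512-byte header block.
size = int(raw[124:136].split(b"\0")[0], 8)
blocks = -(-size // 512)  # round up to whole 512-byte blocks
print(size, blocks)  # -> 10 1
```

So yes, a reader that trusts the header already knows where the file data ends; the two zero blocks are the belt-and-braces signal that the whole archive has ended.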
TAR was written for very simple/tiny machines by today's standards, and was designed to read/write full valid blocks on physical tapes with constraints on spool-up and spool-down times/distances.
The description here seems reasonable: https://en.wikipedia.org/wiki/Tar_(computing)