So at some point in the past you learned that the recommended value for the TTL was 64, and you assumed no operating system would pick a value much larger than that.
That page gives what seems to be a good rundown on TTL, and it suggests this could be DNS activity, or CDN caching tuned to limit propagation back to the origin.
e.g. [about halfway down the page]
In Internet Protocol (IP) multicast, TTL may have control over the packet forwarding scope or range.
0 is restricted to the same host
1 is restricted to the same subnet
32 is restricted to the same site
64 is restricted to the same region
128 is restricted to the same continent
255 is unrestricted
TTL is also employed in caching for Content Delivery Networks (CDNs). TTLs are used herein for specifying the duration of serving cached information until a new copy is downloaded from an origin server. A CDN can offer updated content without requests propagating back to the origin server if the time between origin server pulls is properly adjusted. This accumulative effect enables a CDN to efficiently offer information closer to a user while minimizing the amount of bandwidth required at the origin.
TTL is also employed in caching for Domain Name Systems (DNS). TTL is a numerical value that refers to the duration used herein by the DNS Cache server for serving a DNS record before contacting the authoritative server to get a new copy.
I'm checking the TTL of IP packets, which is only 8 bits in size and, in practice, is decremented per hop (the early IPv4 RFCs state it's measured in seconds; I doubt it was ever used that way). DNS TTLs are 32 bits in size and represent the number of seconds a DNS record can be cached; they are separate from the TTL of IP packets. The TTL for CDNs is specified in HTTP headers and again has its own specification.
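To make the separation concrete: the IP-packet TTL is just a small per-socket integer, unrelated to the DNS or HTTP values. A minimal Python sketch (the value 64 here is illustrative, not a recommendation):

```python
import socket

# The IP-packet TTL is a per-socket option; nothing to do with DNS/HTTP TTLs.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 64)   # set outgoing TTL
ttl = s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL) # read it back
print(ttl)  # 64
s.close()
```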
Getting back to TTLs for IP packets---I recalled the recommended TTL of 64 from admittedly years ago. I just checked my copy of _TCP/IP Illustrated, Volume 1_ by W. R. Stevens, published in 1994, so yeah, a few decades ago. All the Unix systems mentioned in that volume defaulted to a TTL of 60, except Solaris 2.2, which used 255 (that surprised me!). I no longer have access to Solaris to check (I did at my previous job), but I don't think many people are using Solaris to view my site.
I've checked the page you linked, and they don't cite a source for the table where the various TTL values denote forwarding scope or range, nor have I ever seen such a table before. I know my Linux and Mac OS X systems use TTLs under 70, and I can still get content from other continents. My comment on that: [citation needed].
Wikipedia (https://en.wikipedia.org/wiki/Time_to_live) at least links to references, so I found a list of default TTLs per OS (https://web.archive.org/web/20130212114759/http://www.map.me...), though given the OSes listed, it's probably also from a few decades ago. The majority are around 60, with Windows NT at 128, Solaris at 255, and VMS anywhere from 60 to 128 (depending on version). So TTLs over 100 make sense for what I was seeing---possibly a bunch of zombie Windows boxes participating in a half-assed SYN attack using Brazil IPs for some reason. I can't say I'm horribly upset at that. But actual readers on Windows would be concerning. I have no easy way to test for that, and I'd hate to go back to having ~100 half-open connections on my server.
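The usual trick with those per-OS defaults is to round an observed TTL up to the nearest common initial value, which guesses both the sender's OS family and its hop distance. A hypothetical Python sketch (the list of defaults is my assumption, and 60-based Unixes would be misread as 64 minus a few hops):

```python
# Assumed common initial TTLs: 64 (Linux/macOS), 128 (Windows), 255 (Solaris).
COMMON_INITIAL_TTLS = (64, 128, 255)

def guess_origin(observed_ttl):
    """Return (likely initial TTL, estimated hop count) for an observed TTL."""
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial, initial - observed_ttl
    raise ValueError("TTL cannot exceed 255")

print(guess_origin(52))   # (64, 12)  -> likely Linux/macOS, ~12 hops away
print(guess_origin(113))  # (128, 15) -> likely Windows, ~15 hops away
```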
In my 19 years of greylisting, I have yet to have legitimate email fail due to it. And it was one of the easiest ways to significantly decrease the amount of spam. It's been worth it in my opinion.
You may not have realised that legitimate email failed (and it might even be true that none did), but my experience suggests it's unlikely it never happened. I only have a handful of users, but when I was greylisting I'd get reports of missing mail at least annually.
Which isn't to say it's not worth it, although nowadays I'd argue that postscreen's pre-greet checks (https://www.postfix.org/POSTSCREEN_README.html) are just as good at stopping spam and better at not blocking legitimate mail.
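For reference, turning on the pre-greet test is a small main.cf change; this is a sketch only (the parameter is from the linked README; start with "ignore" and watch the logs before moving to "enforce"):

```
# main.cf sketch -- see POSTSCREEN_README before enabling
postscreen_greet_action = enforce   # reject clients that speak before our greeting
```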
Greylisting is very effective in my experience, but there are definitely some "confirm your email" loops that won't work without whitelisting. The failure mode is a combination of retries from multiple IP addresses and retry intervals longer than the lifetime of the confirmation code.
You forgot difficult. How do you test a system call failure? How do you test a system call failure when the first N calls need to pass? Be careful how you answer: some answers technically fall into the "undefined behavior" category (if you are using C or C++).
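For what it's worth, in a language with a mocking layer the "first N calls pass, then fail" pattern is easy to express. A Python sketch (copy_fd is a made-up function for illustration; in C/C++ the equivalent typically means a link-seam or LD_PRELOAD shim, which is exactly where the UB traps lurk):

```python
import os
from unittest import mock

def copy_fd(src, dst):
    """Toy function under test (hypothetical): pump bytes between two fds."""
    while True:
        chunk = os.read(src, 4096)
        if not chunk:
            return
        os.write(dst, chunk)

# First two os.read() calls succeed; the third raises EIO (errno 5).
side_effects = [b"hello", b"world", OSError(5, "Input/output error")]
caught = None
with mock.patch("os.read", side_effect=side_effects), \
     mock.patch("os.write") as fake_write:
    try:
        copy_fd(3, 4)  # fd numbers are dummies; os.read/os.write are mocked
    except OSError as e:
        caught = e
print(caught.errno, fake_write.call_count)  # 5 2
```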
There was (and to a degree still is) a group of people critical of TLS. One half of the group (to which I think you belong) bitches about it being mandatory. The other half bitched about the use of TLS instead of <bespoke encryption system they just read about that is better/easier/smaller than TLS>. TLS was the main point of Gemini.
And about the lack of file size: I proposed a way to sneak it in, and it was rejected outright. Oh well.
You can use the Scorpion protocol that I made up if you want optional TLS and an included file size (and if you don't like Han unification). You can use the Spartan protocol if you want the Gemini file format (with one difference) but a different protocol that does not use TLS. It isn't just Gemini without TLS; it works significantly differently, and if you have any dynamic files you might need to handle them differently for Spartan than for Gemini.
That's really interesting, because the Disney comparison could only be considered positive, and the death penalty thing is, strictly speaking, a fact and public knowledge.