Maintainer on the Ky library team here; Ky is a popular HTTP client for JavaScript.
We support these headers, but unfortunately there’s a mess of different implementations out there. The names aren’t consistent. The number/date formats aren’t consistent. We occasionally discover new edge cases. The standard is very late to the party. Of course, better late than never. I just hope it can actually gain traction given the inertia of some incompatible implementations.
If you are designing an API, I strongly recommend using `Retry-After` for as long as you can get away with it and only implementing the rate limit headers when it really becomes necessary. Good clients will add jitter and exponential backoff to prevent the thundering herd problem.
We also roll our own Retry-Limit header (max number of retries) to keep clients from hammering our services too hard during ongoing incidents.
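For the curious, the retry behavior described above can be sketched roughly like this. The function names and shapes here are illustrative, not Ky's actual API:

```typescript
// A minimal retry sketch: exponential backoff with "full jitter",
// honoring a Retry-After header when the server supplies one, and
// capping the total number of retries. Names are hypothetical.

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Parse Retry-After: the header may be delay-seconds or an HTTP-date.
function retryAfterMs(header: string | null): number | undefined {
  if (header === null) return undefined;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return seconds * 1000;
  const date = Date.parse(header);
  return Number.isNaN(date) ? undefined : Math.max(0, date - Date.now());
}

// Delay before a retry: prefer the server's hint, otherwise
// exponential backoff (1s, 2s, 4s, ...) with full jitter applied.
function backoffDelay(attempt: number, retryAfter: string | null): number {
  const base = retryAfterMs(retryAfter) ?? 2 ** attempt * 1000;
  return Math.random() * base; // full jitter spreads out the herd
}

async function requestWithRetry(
  doRequest: () => Promise<{ ok: boolean; retryAfter: string | null }>,
  maxRetries = 3, // a hard cap, so clients stop hurting a struggling server
) {
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    if (res.ok || attempt >= maxRetries) return res;
    await sleep(backoffDelay(attempt, res.retryAfter));
  }
}
```

The "full jitter" variant (a random delay between zero and the backoff ceiling) is the simplest way to keep a fleet of clients from retrying in lockstep.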
Personally, I like that UTC is the default time zone. Processing of dates should happen in a standardized time zone. It’s only when you want to display it that the date should become local.
UTC is a fine default time zone, but that doesn't matter here.
A datetime with a timezone and a datetime without one are two different things, both of them useful. My birthday does not have a time zone. My deadline does.
The company deadline for getting some document returned? It might or might not have one; that's a matter of policy.
Poetically: we are born free of time zones. We die bound to one.
It doesn't to me. It should be obvious that there are plenty of valid uses of dates and times which implicitly refer to either an exact instant in time, or the expression of a time in a certain reckoning.
A birthday doesn't have a time zone because the concept of a birthday is more about the date on the calendar on the wall, not any universally understood singular instant in time; and so what matters most when it comes to your birthday is where you are. Your birthday doesn't move to the day before or after just because you travel to the other side of the globe.
A deadline has a time zone because when your boss says he wants the project done by 4PM, he means 4PM wherever you both currently are -- and the specific instant in time he referred to doesn't change if you get on a train and move a time zone over before that 4PM occurs.
And it may in fact be a time zone, not just UTC with an offset: if your boss tells you he wants a certain report on his desk by 4PM every day, then when your local time zone goes into daylight saving time, it doesn't suddenly mean the report needs to be filed by 5PM instead.
In the first of these cases, the date itself has no time zone and is valid in whatever context it's read in. In the second, the instant in time might be expressed in UTC with or without a specific offset. In the third, each successive instant in time may shift around with respect to UTC even while it continues to be referred to by one constant expression.
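For concreteness, the three cases can be sketched in TypeScript using plain `Date` and `Intl` (Temporal would model these distinctions natively; all the values below are made up):

```typescript
// Three distinct date/time concepts (hypothetical values).

// 1. A calendar date with no time zone at all: a birthday.
const birthday = "1990-06-15"; // just a date on the wall calendar

// 2. An exact instant: a one-off deadline, here 4PM US Eastern in winter (UTC-5).
const deadline = new Date("2024-03-08T16:00:00-05:00");

// 3. A recurring wall-clock time plus a zone: the daily 4PM report.
const dailyReport = { hour: 16, minute: 0, timeZone: "America/New_York" };

// The same instant reads differently depending on the zone it's viewed in:
const hourIn = (tz: string) =>
  new Intl.DateTimeFormat("en-US", {
    timeZone: tz,
    hour: "2-digit",
    hour12: false,
  }).format(deadline);

console.log(hourIn("America/New_York")); // "16"
console.log(hourIn("UTC"));              // "21"
```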
None of these are subjective interpretations. They're a consequence of the fact that as humans we've overloaded our representation of date/time with multiple meanings.
idk about you, but I can get a happy birthday at the hour of my actual birth from people in the know, yet I never literally prepare things for the exact hour of a deadline. It's more of a whole-day sort of thing.
It does not. I'm Australian and our timezones are ahead of the US (NSW time is about 15-17 hours ahead of US Eastern time). If I took a flight from Sydney to New York (~22 hours) on my birthday, the US customs officer would wish me happy birthday when I landed the next day.
Therefore, birthdays are not bound by timezone at all.
This will result in incorrect behavior when, between converting to UTC and back to the original timezone, the timezone database has changed, which happens more often than you think.
Depends what you're actually storing. There are plenty of cases where the timezone is not metadata; it defines how the datetime should be interpreted.
For example: your local mom and pop corner store's daily opening and closing times. Storing those in UTC is not correct because mom and pop don't open and close their store based on UTC time. They open and close it based on the local time zone.
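A quick sketch of why that matters, assuming a hypothetical New York store with a 9 AM opening. "9 AM = 14:00 UTC" is only true in winter; once DST starts, the stored UTC instant drifts:

```typescript
// Why a wall-clock schedule can't be stored as a single UTC instant
// (hypothetical store in America/New_York, opening at 9 AM local).

const nyHour = (d: Date) =>
  new Intl.DateTimeFormat("en-US", {
    timeZone: "America/New_York",
    hour: "2-digit",
    hour12: false,
  }).format(d);

// In January (EST, UTC-5), 14:00 UTC really is 9 AM local:
console.log(nyHour(new Date("2024-01-15T14:00:00Z"))); // "09"
// In July (EDT, UTC-4), that same UTC time is 10 AM local:
console.log(nyHour(new Date("2024-07-15T14:00:00Z"))); // "10"
```

So the schedule has to be stored as "09:00 in America/New_York", and resolved to an instant only when needed.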
You're conflating different concepts here. The actual moment of opening and closing can be stored in UTC, because it's a proper point in time. A schedule, though, is an algorithm, not a time. You can use a time-like DSL to encode that algorithm, but being a DSL, it can only go so far.
You don't need to store the timezone anywhere; you just need to know the current local timezone when the stored UTC time is used. And that's why storing in UTC is better: it only takes one conversion to represent it in some arbitrary local time.
If you stored it as a local time (i.e., with a TZ), then if it's ever later translated to a different local time (a different TZ), you're now dealing with all the quirks of two different timezones. It's a great way to be off by some multiple of 15 minutes, or even a day or two!
Heck, even if it's the same exact location, storing in local time can still require conversion if that location uses daylight saving time! You're never safe from needing to adapt to timezones, so storing datetimes in the most universal format is pretty much always the best thing to do.
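For what it's worth, the "store UTC, convert once at display time" approach looks roughly like this (the stored instant is made up):

```typescript
// One stored UTC instant, rendered in any viewer's local zone via Intl.
const stored = new Date("2024-03-01T18:30:00Z"); // the instant, kept in UTC

const display = (tz: string) =>
  new Intl.DateTimeFormat("en-GB", {
    timeZone: tz,
    hour: "2-digit",
    minute: "2-digit",
    hour12: false,
  }).format(stored);

console.log(display("Europe/London")); // "18:30"
console.log(display("Asia/Tokyo"));    // "03:30" (the next day, locally)
```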
I started looking at self-hosting many applications at home once I realized that IPv6 could enable me to do that securely without any complicated router/firewall configuration that would need to be maintained.
The only wrinkle I ran into is that apparently ISPs are still reluctant to give out static IPv6 prefixes to residential customers. So you still need some kind of DDNS setup, which is lame.
When people say “creator’s intent”, it sounds like a flavor. Like how food comes out of the kitchen before you put toppings on it to make it your own.
But vivid mode (et al) literally loses information. When the TV tries to make everything look vibrant, it’s effectively squishing all of the colors into a smaller color space. You may not be able to even tell two distinct objects apart because everything is similarly bright and vibrant.
Same with audio. The famous “smile” EQ can cause some instruments to disappear, such as woodwinds.
At the end of the day, media is for enjoyment and much of it is subjective, so fine do what you need to do to be happy. But few people would deliberately choose lower resolution (except maybe for nostalgia), which is what a lot of the fancy settings end up doing.
Get a calibration if you can, or use Filmmaker Mode. The latter will make the TV relatively dark, but there's usually a way to adjust it, or to copy its settings into a Custom mode and boost the brightness there, which is still a big improvement over the out-of-the-box defaults.
I have a Samsung SlimFit HD tube TV from 2005 or so. It’s such an interesting piece of retro tech because it is widescreen, supports 1080i, and has HDMI, but it is a CRT! It’s also quite a bit thinner than most tubes. Super unusual.
I got it because LCDs always looked terrible to me and plasmas were still very expensive.
I’m only aware of two times that Apple has revoked certificates for apps distributed outside of the App Store. One was for Facebook’s Research App. The other was for Google’s Screenwise Meter. Both apps were basically spyware for young teens.
In each case, Apple revoked the enterprise certificate for the company, which caused a lot of internal fallout beyond just the offending app, because internal tools were distributed the same way.
Something may have changed, though, because I see Screenwise Meter listed on the App Store for iOS.
The article is about macOS apps, but you're talking about iOS apps.
Apple revokes macOS Developer ID code signing certificates all the time, mostly for malware, but occasionally for goodware, e.g., Charlie Monroe and HP printer drivers.
Also, infamously, Apple revoked the macOS Developer ID cert of Epic Games, as punishment for their iOS App Store dispute.
This sounds like a really good excuse to not bear any responsibility for anything other than what you want to work on. PR reviews will mean nothing; they will all be rubber stamps or style nits because no one else will understand the code other than the owner. Your bus factor will be crazy low.
Collaboration sucks because of the way it is done, not because it has to. Pointless meetings for decision making that should be async. Brainstorming over Slack when that's what a meeting is actually good for. Looping people in to collaborate at the end instead of at the beginning. This is all possible to fix.
What I do is have everyone work in pairs. Pairs are small enough that communication is easy and there's no design-by-committee. But there's always someone to have your back and help when you get stuck or bogged down (e.g. decision fatigue), which happens plenty even to senior engineers. The pair starts and finishes work together, which mostly eliminates the need to loop someone else in randomly and explain the thinking and background context, because they can bounce ideas off of each other and leverage each other's different areas of expertise. Whatever comes out of that collaboration is treated as a finished unit of work: it's already been looked at closely by two people, so it doesn't need a complicated approval process. The automated tests run, the release manager looks for any obvious mistakes, and then it ships.
The hardest nut to crack is the "who is the driver and who is the navigator" problem. I find that it is best to leave that up to the pair to work out for themselves, since it depends on the personalities involved. But with some guidance to not step on each other's toes. Working on the same line of code at the same time constantly is clearly the "too much collaboration" extreme that the article's author dreads. It's better if one person designs while the other codes, or one works on the logic while the other does the TypeScript types, etc. Usually the pair struggles with this for a week or two and then they develop a groove and it's rarely a problem after that. Spontaneous or infrequent collaborators never reach that groove, hence it can be inefficient and frustrating. Long-term pairs get to know each other and then work fast and smooth.
If someone else steals your work, you should be proud. They found it to be valuable. If they managed to sell it or build something with it, they’ve demonstrated that you can do the same. Use it as a learning experience.
Keep in mind that you are in control of what people are allowed to do with your software. By default, your code is unlicensed even if it is public, which means no one else can distribute it or change it or do much of anything with it. Thus, if someone uses it and claims it to be theirs, you can sue them if you want to.
However, instead of leaving your code unlicensed, I would recommend choosing an open source license and applying it to your code when you make it public. There are many to choose from!
By applying a license to your code, you are establishing a clear framework for what other people are and aren’t allowed to do with it. And it’s legally enforceable. In fact, there are organizations that may step in to help you if someone violates your license or challenges it in court. For example, my preferred license is the Mozilla Public License. If someone tried to challenge me on any part of that license, Mozilla would have a vested interest in defending it, since it’s their license and they use it, too. Their lawyer is even available to chat with over email. I once reached out to ask if I could make a small tweak to the license without causing headaches. They got back to me within a few days and said it would be fine. That gave me a lot of confidence to continue using it.
Some licenses are very permissive, such as MIT. Others are much more restrictive, such as GPL. The MPL, which I use, is somewhere in between.
What’s right for you really just depends on what you consider to be fair. And every project can be different. Maybe you build some small tools that you release under MIT, essentially donating them to all of humanity. Meanwhile, you create a startup and build a product where you keep some of it private and release parts of it publicly, licensed under the GPL, because you don’t want huge corporations stealing the work for your day job without reciprocating. That’s a relatively common approach.
Whatever you decide, just make some of your code public. The feedback and experience will be well worth it. Good luck!
I think the main thing holding people back from leaving the cloud is simple inertia. There was a time when the cloud was obviously the right choice. Static IPv4 addresses were becoming scarce, IPv6 had not been deployed widely enough, and cloud providers made it easy to stand up a server and some storage with high speed links on the cheap. But over time, things have changed. Rate limits, data caps, and egress fees are now normal (and costly). IPv6 is now deployed widely enough that you might be willing to just run an IPv6-only stack, which greatly simplifies running a server on-premise. And of course, we've all seen time and again how providers will carelessly lock out your cloud account for arbitrary reasons with little to no recourse. The time has come to own your infrastructure again. But that won't happen until people realize it's easy to do.
New Trek is so awful. There’s no depth to anything. Interweaving stories to make them seem important. It’s just bad TV.
As for the previous era, it gets better the closer to TNG it is. Granted, DS9 was a different beast than TNG or Voyager. Those shows had episodes, individual stories, as well as seasonal arcs, back when shows were written for TV. New Trek feels like a bad movie script broken into episodes, padded with side-character filler.