What the top commenter probably failed to mention, and jensneuse tried to explain, is that sync.Pool assumes the size cost of pooled items is similar. If you pool buffers (e.g. []byte) or any other type whose backing memory can grow beyond its initial capacity during use, you can end up in a scenario where backing arrays that have grown to MB capacities are handed out by the pool for jobs that only need a few KB, while the KB-sized buffers are handed to high-memory jobs, which grow their backing arrays to MBs and return them to the pool.
If that's the case, it's usually better to have non-global pools, pool ranges, drop things after a certain capacity, etc.:
https://github.com/golang/go/issues/23199 https://github.com/golang/go/blob/7e394a2/src/net/http/h2_bu...
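A minimal sketch of the "drop things after a certain capacity" approach, along the lines of what net/http does internally. The threshold, names, and buffer type are illustrative, not from any particular codebase:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// maxCap is the largest backing array we're willing to keep pooled;
// anything bigger is dropped so the GC can reclaim it. The threshold
// is illustrative and should be tuned to the workload.
const maxCap = 64 << 10 // 64 KiB

var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func getBuf() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

// putBuf returns a buffer to the pool unless its backing array has grown
// past maxCap. This prevents a few high-memory jobs from turning every
// pooled buffer into a multi-MB allocation.
func putBuf(b *bytes.Buffer) {
	if b.Cap() > maxCap {
		return // drop: let the GC take the oversized backing array
	}
	b.Reset()
	bufPool.Put(b)
}

func main() {
	b := getBuf()
	b.Write(make([]byte, 1<<20)) // backing array grows to ~1 MiB during use
	putBuf(b)                    // dropped, not pooled: cap exceeds maxCap
	fmt.Println("buffers are pooled only when cap <=", maxCap)
}
```

The same idea generalizes to size-bucketed pools (one pool per capacity range) when dropping outright wastes too many grown buffers.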
Alpine is a solid distro for servers from my experience.
The only bad experiences I've had with it come from the lack of parity between the x86_64 and aarch64 virt images. Our x86_64 setup doesn't work on aarch64 without building our own image with extra kernel params and addons; I don't think even ZFS is built into the virt-aarch64 image.
All in all, I'd recommend that more devs/sysadmins try Alpine outside the container world and run it in test VMs, host servers, etc.
Yes, when using Alpine in a container context you're getting the "MINI ROOT FILESYSTEM" [1] experience. The parity differences are in the kernel; they're easily "fixable", and the team is open to enabling things that people actually use. I've opened such issues on their GitLab and they're very active and friendly.
Yes, it's been in phase 3 for, what, 2-3 years at this point (judging by when it landed in browsers)? No ETA and no next steps. The tp says they are waiting until "it's stable", so I'm assuming phase 4. It meets the browser-support and "at least one toolchain" criteria [0] (Zig), and seemingly all the other conditions too, except maybe "CG consensus", whatever that means; so for all I know it could take anywhere from tomorrow to a few years from now...
I cannot possibly trust Nextcloud again. I was very skeptical to begin with, because it's a rather large and complex project built on PHP over many years (call this a personal/subjective view). But what really made up my mind was when they broke the ability to delete (and/or modify) encrypted files. [0]
I'm currently migrating away from it (not writing any new files to it), but keeping it around for read/shared files. The plan is to eventually move everything to a DS923 (Synology), with most files mirrored on client machines and weekly encrypted offsite backups. I'd like something off-continent eventually but haven't given it much thought.
That bug seems to have been mostly (or at least partly) a configuration issue: E2EE (end-to-end encryption) and SSE (server-side encryption) were enabled at the same time, and the two were not compatible.
For caching the query results you get from your database. It's also easier to spin up Redis and replicate it closer to your users than to do the same with your main database. In my experience, anyway.
I think the idea is that if your DB can hold the working set in RAM and you're using a good DB with prepared queries, you can just let it absorb the full workload, because fetching the data from the DB is nearly as cheap as fetching it from Redis.
Of course? I'm not really sure what the original question actually is, if you already know that users benefit from caching the results of computationally intensive queries.