Hacker News | cplli's comments

What the top commenter probably failed to mention, and jensneuse tried to explain, is that sync.Pool assumes pooled items have similar size costs. If you pool buffers (e.g. []byte) or any other type whose backing memory can grow beyond its initial capacity during use, you can end up in a scenario where backing arrays that have grown to MB capacities are handed out by the pool for jobs that only need a few KB, while the KB-sized buffers go to high-memory jobs that grow their backing arrays to MB and return them to the pool in turn.

If that's the case, it's usually better to have non-global pools, pool ranges, drop things after a certain capacity, etc.:

https://github.com/golang/go/issues/23199 https://github.com/golang/go/blob/7e394a2/src/net/http/h2_bu...
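A rough sketch of the "drop things after a certain capacity" approach (all names here — maxCap, bufPool, getBuf, putBuf — are illustrative, not from net/http):

```go
package main

import (
	"fmt"
	"sync"
)

// maxCap is a hypothetical cutoff: buffers whose backing array grew
// beyond it are dropped instead of being returned to the pool, so one
// large job can't permanently inflate the pool's memory footprint.
const maxCap = 64 << 10 // 64 KiB

var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 1024) },
}

func getBuf() []byte { return bufPool.Get().([]byte)[:0] }

func putBuf(b []byte) {
	if cap(b) > maxCap {
		return // let the GC reclaim the oversized backing array
	}
	bufPool.Put(b)
}

func main() {
	b := getBuf()
	b = append(b, make([]byte, 128<<10)...) // grows well past maxCap
	putBuf(b)                               // dropped, not pooled
	fmt.Println(cap(getBuf()) <= maxCap)    // true: oversized buffer was not recycled
}
```

The linked h2 bufio code goes further and keeps several pools for distinct size classes, which avoids throwing away large buffers entirely when large jobs are common.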


Alpine is a solid distro for servers from my experience.

The only bad experiences I've had with it come from the lack of parity between the x86_64 and aarch64 virt images. Our x86_64 setup doesn't work on aarch64 without building our own image with extra kernel params and add-ons. Even ZFS, I don't think, is built into the virt-aarch64 image.

All in all, I would encourage more devs/sysadmins to try Alpine outside the container world: run it in test VMs, on host servers, etc.


I haven't had that experience with the aarch64 / x86_64 container images.

It's good to know it gets a little wobbly in heterogeneous bare metal environments.


Yes, when using Alpine in a container context you're getting the "MINI ROOT FILESYSTEM" [1] experience. The parity differences are in the kernel; they're easily "fixable", and the team is open to enabling things that people actually use. I've opened such issues on their GitLab and they're very active and friendly.

[1]: https://alpinelinux.org/downloads/


Don’t forget to update your regex appropriately (/s hopefully)


First thing that comes to mind is an overhaul mod for OpenTTD which has a "Steeltown" economy that resembles the AIST Steel Wheel.

https://grf.farm/firs/4.15.1/html/economies.html#steeltown


Neat, thanks for sharing!


Threads are Phase 3

https://github.com/WebAssembly/proposals

You can also check out:

https://webassembly.org/roadmap/

And for Go, the proposal project on Github has many interesting conversations from the devs.

And as a reminder to anyone interested in using Go WASM, it’s experimental and does not come with the same compatibility promise as Go itself:

https://github.com/golang/go/wiki/WebAssembly


Yes, it's been in phase 3 for, what, 2-3 years at this point (judging by when it landed in browsers)? No ETA and no next steps. The top poster says they're waiting until "it's stable", so I'm assuming that means phase 4. It meets the browser-support and "at least one toolchain" criteria[0] (Zig), and seemingly all the other conditions too, except maybe "CG consensus", whatever that means. So for all I know it could take anywhere from tomorrow to a few years from now...

[0] https://github.com/WebAssembly/meetings/blob/main/process/ph...


I've personally tried it, and it handles logs nicely. And judging by their use-cases page, many more things:

https://clickhouse.com/use-cases


Uber wrote a blog on using Clickhouse to store logs: https://www.uber.com/blog/logging/


Now we need a tool called NeighbourhoodWatch to monitor the cluster monitors.


NeighborhoodWatch for US based resources


I cannot possibly trust Nextcloud again. I was very skeptical to begin with, because it's a rather large and complex project built on PHP over many years (call this a personal/subjective view). But what really made my decision was when they broke the ability to delete (and/or modify) encrypted files. [0]

[0]: https://github.com/nextcloud/server/issues/34744


Do you have any pointers/tips on replacement projects? What are you using now?


Currently migrating from it (not writing any new files to it), but keeping it for read/shared files. The migration plan is to eventually move everything to a DS923 (Synology), with most files being mirrored on client machines and weekly offsite backups (encrypted). Would like something off-continent eventually but haven't given it any thought.


I'm planning to check out ONLYOFFICE Workspace; it seems more sturdy and coherent overall.


That bug seems to have been mostly (or at least partly) a configuration issue; E2EE and SSE were enabled at the same time, and they were not compatible.


Someone will definitely get 42.zip or 42kb.zip just to host a “zip bomb as a service”.

On a more serious note, new gTLDs are so spammy, and “.zip” as others have mentioned will surely lead to some surprises somewhere.


For caching the query results you get from your database. Also it's easier to spin up Redis and replicate it closer to your user than doing that with your main database. From my experience anyway.
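This is the classic cache-aside pattern. A minimal sketch, with an in-memory map standing in for Redis and a deliberate sleep standing in for an expensive query (all names here are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cache is a toy stand-in for Redis: a mutex-guarded map.
type cache struct {
	mu   sync.Mutex
	data map[string]string
}

func (c *cache) get(k string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.data[k]
	return v, ok
}

func (c *cache) set(k, v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[k] = v
}

// slowQuery simulates a computationally intensive database query.
func slowQuery(id string) string {
	time.Sleep(10 * time.Millisecond)
	return "result-for-" + id
}

// fetch implements cache-aside: try the cache first, fall back to the
// database on a miss, and populate the cache for the next caller.
func fetch(c *cache, id string) string {
	if v, ok := c.get(id); ok {
		return v // cache hit: skip the expensive query
	}
	v := slowQuery(id)
	c.set(id, v)
	return v
}

func main() {
	c := &cache{data: map[string]string{}}

	start := time.Now()
	fetch(c, "42") // miss: pays the full query cost
	miss := time.Since(start)

	start = time.Now()
	fetch(c, "42") // hit: served from the cache
	hit := time.Since(start)

	fmt.Println(hit < miss)
}
```

With a real Redis deployment the map is replaced by GET/SET calls (plus a TTL), but the shape of the logic is the same, and the replica-near-the-user argument applies to where that cache lives.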


I think the idea is that if your db can hold the working set in RAM and you're using a good db + prepared queries, you can just let it absorb the full workload because the act of fetching the data from the db is nearly as cheap as fetching it from redis.


> For caching the query results you get from your database.

This only makes sense if queries are computationally intensive. If you're fetching a single row by index you aren't winning much (or anything).


Of course? I'm not really sure what the original question actually is if you know that users benefit from caching the results of computationally intensive queries.


OpenAI uses redis to store pieces of text. Fetching pieces of text is not computationally intensive.


Most likely they have them in an rdbms, so it's more like joining a forum thread together. Not expensive, but why not prebuild and store it instead?


> This only makes sense if queries are computationally intensive.

Or if the link to your DB is higher latency than you're comfortable with.

