Hacker News | gajus's comments

Funny you ask. This project started as a very different project almost five years ago. It was called roarr.io, and the primary purpose was exactly that: ad-hoc collection of logs from remote machines. However, I've not ported this functionality (yet).


That would require restarting your services to redirect their output. Fine for one-off scripts, but impractical when you have long-running processes and don't want to restart them every time an agent needs to read logs.

With teemux, a persistent MCP server gives multiple AI agents access to logs as needed—without interrupting your development flow.


OK, but it isn't like agents react to flowing logs. They just connect to whatever server and, depending on the debugging task at hand, query the past 5 minutes or 2 hours on demand, without mixing contexts together.


lowkey thought it was a genius name

tee (Unix command that splits output) + mux (multiplexer) = teemux


Pronounced tmux. That's a thing. A very related thing. A very well-known thing. It's a bad name. I do like the concept though (haven't tried using it yet).


Fair point


Will be interesting to see how this model performs in real-world creative tasks. https://creativearena.ai/


Man, we are living in the golden era of PostgreSQL.



Somewhat related to your ask https://github.com/gajus/pg-dump-parser

We use it internally to create a folder-like representation of our database schema.

https://github.com/gajus/pg-dump-parser?tab=readme-ov-file#r...

Very handy when reviewing changes in version control, etc.
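The core trick can be sketched without the library: a schema-only pg_dump prefixes each object with a `-- Name: ...; Type: ...` header comment, which is enough to split the dump into per-object files. (Hypothetical sketch, not pg-dump-parser's actual API.)

```typescript
// Group a schema-only pg_dump into per-object chunks, keyed on the
// "-- Name: ...; Type: ..." header comments that pg_dump emits before
// each statement. Returns a map of pseudo file paths to SQL text.
function splitSchemaDump(dump: string): Map<string, string> {
  const files = new Map<string, string>();
  let current: string | null = null;
  for (const line of dump.split("\n")) {
    const header = line.match(/^-- Name: (.+?); Type: (.+?);/);
    if (header) {
      current = `${header[2]}/${header[1]}.sql`;
    }
    if (current) {
      files.set(current, (files.get(current) ?? "") + line + "\n");
    }
  }
  return files;
}
```

Writing each map entry to disk yields the folder-per-object-type layout that diffs cleanly in version control.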


Zod is the default validator for https://github.com/gajus/slonik.

Zod alone accounts for a significant portion of the CPU time.


> In the context of the network overhead, validation accounts for a tiny amount of the total execution time.

> Just to give an idea, in our sample of data, it takes sub 0.1ms to validate 1 row, ~3ms to validate 1,000 and ~25ms to validate 100,000 rows.
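Numbers like these are easy to reproduce with a micro-benchmark. The sketch below is not the original measurement: a hand-rolled structural check stands in for a Zod schema along the lines of `z.object({ id: z.number(), name: z.string() })`.

```typescript
type Row = { id: number; name: string };

// Stand-in for schema.parse(): structurally validate one row.
function validateRow(value: unknown): Row {
  const row = value as Record<string, unknown>;
  if (typeof row?.id !== "number" || typeof row?.name !== "string") {
    throw new TypeError("row does not match schema");
  }
  return { id: row.id, name: row.name };
}

// Time validating rowCount rows; returns elapsed milliseconds.
function benchmark(rowCount: number): number {
  const rows = Array.from({ length: rowCount }, (_, i) => ({ id: i, name: `row-${i}` }));
  const start = performance.now();
  for (const row of rows) {
    validateRow(row);
  }
  return performance.now() - start;
}
```

Per-row cost looks negligible, but multiplied across every row of every query in a hot path it shows up prominently in CPU profiles.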


Surprised not to see more people ask about the performance profile of v4.

Zod is great in terms of API, but a no-go in terms of performance.

We ended up writing babel plugins (https://github.com/gajus/babel-plugin-zod/) and using them along with zod-accelerator, which improves performance but breaks in various edge cases.


I guess people are not asking because the article contains benchmarks


What's a relatively small dataset?

For someone that could be 1M records; for others, 1B records.


Below 10M documents for single node. Below 100M documents for clustered setup. Total data size (including indices) that can comfortably fit in available RAM.
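As a back-of-envelope check on the "fits in RAM" guideline (illustrative sketch; the 50% index-overhead factor below is an assumption, not a measurement):

```typescript
// Estimate whether a document corpus plus its indices fits in RAM.
// indexOverhead is the index size as a fraction of raw data size.
function fitsInRam(
  docCount: number,
  avgDocBytes: number,
  indexOverhead: number,
  ramBytes: number,
): boolean {
  const totalBytes = docCount * avgDocBytes * (1 + indexOverhead);
  return totalBytes <= ramBytes;
}

// 10M docs at ~1 KiB each with 50% index overhead is roughly 15 GB,
// comfortable on a 32 GiB node but not on an 8 GiB one.
```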

