Hacker News | ckbkr10's comments

I'm still surprised how much drive this project has, a platform that doesn't want to support it and could introduce breaking changes any day.

Just why?


There are millions of MacBooks out there that will be out of macOS support one day. If this project diverts even a fraction of them from becoming e-waste for a little longer, it will be a win.

And then beyond that, there is simply no laptop manufacturer that matches the quality of Apple's hardware design. I like Macs for their hardware; the software is a compromise. A Linux MacBook would be my ideal laptop.


Don't most people just throw these laptops in the trash, or does someone give you money to turn in an out-of-support Mac somewhere?

> does someone give you money to turn an out of support mac in somewhere

Yes, marketplace


Because they want to. Their time, their choice.

This horse has been beaten to death on HN. Because Apple laptops are the highest-quality laptops you can purchase.

Maybe so, but nothing beats the 2008-2013 ThinkPad keyboards. The key travel and tactility are unmatched even by later ThinkPads. Also, no TrackPoint.

> nothing beats the 2008-2013 Thinkpad keyboards

Maybe so, but 15-20 year old laptops are definitely starting to show their age.

An M2 MacBook Pro, on the other hand, is only 4 years old, has a fairly OK keyboard, and is still in striking distance of current high-end ultrabooks when it comes to performance.


The only thing my X230 struggles to do is run LLMs locally. My needs are simple, and I think normal people (i.e. probably not most people on this site) don't have needs that are any more demanding than mine.

Granted, this is running GNU/Linux rather than Windows. If you're running Windows then yeah, they show their age.


Asahi Linux is certainly not targeted at "normal people". Normal people would just run macOS.

There's this saying: all progress is made by unreasonable people, because reasonable people just accept things the way they are.


I think an X230 would be performant enough for 95% of the things I do, but a 14 year old CPU is going to have pretty terrible battery life for anything more than very light usage. And things that would be light usage on a recent PC, like watching video encoded with a modern codec, would be fairly taxing on an old CPU with no hardware decode.

True. By the time I upgraded from my X200 (fantastic machine, noticeably outdated), the lack of software support for hardware decoding H264 was noticeable. Also being stuck with OpenGL 2.1 isn't the best either.

I don't know what I'll do if and when my X230 stops being sufficient. If I could buy an Apple motherboard in an X200 chassis I'd do it in a heartbeat.


> My needs are simple

Congrats, but I think you may be in a small minority when it comes to developers shopping for laptops.

Personally, I had to upgrade from a late-model i9 MacBook Pro to this M2 MacBook Pro, because the npm + docker setup at work was taking upwards of 20 minutes for a production build...


>The only thing my X230 struggles to do is run LLMs locally. My needs are simple, and I think normal people (i.e. probably not most people on this site) don't have needs that are any more demanding than mine.

People who edit video or make music and other such tasks are totally normal too, and there are hundreds of millions of them


I think maybe you don't understand what the needs are of normal people. It's only partially about what software they run.

I recommend Macs to the people in my life because when they have a problem they can take the machine to the Apple Store in the mall. Or if they want to understand iPhoto or Pages better, they can go to the Apple Store and take a class. They like Apple laptops because they look nice, they feel great, sound amazing (for a laptop) and have excellent battery life.

Like you, I have a ThinkPad (a P-something) and, frankly, it kind of sucks. It's all plasticky, it flexes, battery life is a joke, the trackpad is meh, and the fans are almost always running. I do like the keyboard though (I'm a fan of backspace).


> The only silver lining is that newer devices will have to scale down memory, so developers will have to ditch memory-sucking frameworks and start to optimize things again.

No. Prices will just go up, less innovation in general.


A few places will have no choice - low price elasticity, combined with things that need to actually work.

They were prominent in game hacking on Windows around 2005.

They made hooking into game code much easier than before.


Aren't all DLLs on the Windows platform compiled with an unusual instruction at the start of each function? This makes it possible to somehow hot-patch the DLL after it is already in memory.


I believe you're thinking of the x86 hotpatching hook[1] (the `MOV EDI, EDI` prologue), which doesn't exist in the same form on x86-64[2]; x86-64 uses a different, hotpatch-safe sequence.

[1] https://devblogs.microsoft.com/oldnewthing/20110921-00/?p=95...

[2] https://devblogs.microsoft.com/oldnewthing/20221109-00/?p=10...


yes, that's it. Thanks for clarifying


Sorry to say, but the way you are framing things is simply not true anymore.

You are not required to buy their "Glasfaser Modem 2"; you can buy any ONT modem.

You are not required to use any of their equipment; they give you the data to connect via PPPoE directly.

I bought a house with FTTH in 2023 and never used any Telekom hardware. Nobody forces you to use the peer DNS. The Telekom DNS doesn't comply with https://cuii.info/anordnungen/ because they want to, but to avoid being sued every time some company wants to block an illegal streaming site.


> Nobody forces you to use the peer DNS.

For practical purposes there's the problem (at least as of a few years ago?) that Akamai in particular uses DNS to steer you to the correct portion of its CDN, and the default IPs returned by independent DNS resolvers tended to have relatively abysmal peering with the Telekom network, which got completely overloaded at peak times.

Unfortunately "use <insert favourite DNS provider here> everywhere except for Akamai CDN, for which use the Telekom DNS" isn't something that consumer routers support, so you'd have to start running your own custom DNS resolver to work around that problem…
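One way to approximate that split with a custom resolver is dnsmasq's per-domain upstream feature. This is only a sketch: the Akamai domain list is illustrative and not exhaustive, 217.0.43.145 stands in for a Telekom resolver, and 9.9.9.9 for whatever public resolver you prefer.

```
# /etc/dnsmasq.conf (sketch): per-domain upstream resolvers.
# Akamai lookups go to the ISP resolver for CDN steering that
# matches the ISP's peering...
server=/akamai.net/akamaiedge.net/akamaihd.net/217.0.43.145
# ...everything else goes to your preferred public resolver.
server=9.9.9.9
```

Pointing the router's DHCP at the dnsmasq host then applies the split to every device on the network.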


Don't you have the small black fiber box that takes the glass fiber cable as input and provides an RJ45 port?


Comparing Redis to SQL is kinda off topic. Sure you can replace the one with the other but then we are talking about completely different concepts aren't we?

When all we are talking about is "good enough" the bar is set at a whole different level.


I wrote this article about migrating from Redis to SQLite for a particular scenario and the tradeoffs involved.

To be clear, I think the most important thing is understanding the performance characteristics of each technology enough that you can make good choices for your particular scenario.

https://wafris.org/blog/rearchitecting-for-sqlite


We're talking about business challenges/features which can be solved by using either of the solutions and analyzing pros/cons. It's not like Redis is bad, but sometimes it's an over-engineered solution and too costly


Well, they moved from one thing not designed for queues to another thing not designed for queues. Maybe use an actual queue!


Maybe Redis is just overkill


I wish you'd have expanded on that. I almost always learn about some interesting lower-level tech through people trying to avoid a full-featured heavy-for-their-use-case tool or system.


You're in luck, the article speaks about that at length!


Sorry, I went full typical HN commenter stereotype :-)


I do it all the time too.


Congrats on receiving the fix.


The idea is good. I do think this is going to be the future for high-volume data leaks like the Snowden or Epstein files.

I do think, though, that this approach will become annoying quickly:

https://github.com/mantisfury/ArkhamMirror/blob/main/scripts...


The cheesy noir persona is for the AI assisted install and that's it. Inside the app, the prompts are strictly business. (They still have roles, but not "characters" or "personas").


As another person who spends the whole day in the terminal, it's sad to see there is no Windows version. I do not understand why I would need GPU acceleration for a terminal, but I would still try it.

I use a company-managed/provided machine that runs Windows, so I do not have to bother maintaining it. All I use is basically Firefox and MinGW to have a bash shell.


I am using it for Ansible, PHP, Java, C, Linux configuration issues, and general questions. Preparing Excel sheets, etc.

It's cut the time I need to produce projects from a usual span of 4-20 days down to 1-2 days, with another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.

While my country will be slow to adopt it (we haven't even fully adopted smartphones yet, hooray Germany), it will have to adopt it eventually, in 10 years or so.


> Of course I still bill the time it would have taken me but for a professional it can be a great improvement.

This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage and impact!

Multiple researchers have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.


> Of course I still bill the time it would have taken me but for a professional it can be a great improvement.

Isn’t this basically the definition of fraud?


Sounds overly complicated, I doubt this will have a widespread adoption


About 30% of traffic to Cloudflare uses HTTP/3 [0], so it seems pretty popular already. For comparison, this is 3× as much traffic as HTTP/1.1.

[0]: https://radar.cloudflare.com/adoption-and-usage#http1x-vs-ht...


And then Cloudflare converts that to HTTP/2 or even 1.1 for the backend.


So? Those protocols work fine within the reliable low latency network of a datacenter.


I'd even go as far as claiming that on reliable wired connections (like between cloudflare and your backend) HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade


At the very least, the benefits of QUIC are very very dubious for low RTT connections like inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace where threads need to be scheduled etc. On the other hand Cloudflare to backend is not necessarily low RTT and likely has nonzero congestion.

With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.


Is CF typically serving from the edge, or from the location nearest the server? I imagine it would be from the edge so that it can CDN what it can. So... most of the time it won't be a low-latency connection from CF to the backend. Unless your backend is globally distributed too.


Also, within a single server, you should not use HTTP between your frontend nginx and your application server - use FastCGI or SCGI instead, as they preserve metadata (like client IP) much better. You can also use them over the network within a datacenter, in theory.
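As a concrete sketch of that setup in nginx (the socket path is illustrative; adjust to your application server), the point is that FastCGI carries client metadata such as REMOTE_ADDR as protocol parameters rather than relying on X-Forwarded-For headers:

```nginx
# Sketch: nginx speaking FastCGI to a local application server.
location / {
    include fastcgi_params;            # sets REMOTE_ADDR, SERVER_NAME, etc.
    fastcgi_pass unix:/run/app.sock;   # or: fastcgi_pass 127.0.0.1:9000;
}
```

The application server then reads the client IP directly from the FastCGI parameters, with no header-parsing or trusted-proxy configuration involved.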


Is the protocol inherently inferior in situations like that, or is this because we've spent decades optimizing for TCP and building into kernels and hardware? If we imagine a future where QUIC gets that kind of support, will it still be a downgrade?


There is no performance disadvantage at the normal speed of most implementations. With a good QUIC implementation and a good network stack you can drive ~100 Gb/s per core on a regular processor from userspace with 1500-byte MTU with no segmentation offload if you use a unencrypted QUIC configuration. If you use encryption, then you will bottleneck on the encryption/decryption bandwidth of ~20-50 Gb/s depending on your processor.

On the Linux kernel [1], for some benchmark they average ~24 Gb/s for unencrypted TCP from kernel space with 1500-byte MTU using segmentation offload. For encrypted transport, they average ~11 Gb/s. Even using 9000-byte MTU for unencrypted TCP they only average ~39 Gb/s. So there is no inherent disadvantage when considering implementations of this performance level.

And yes, that is a link to a Linux kernel QUIC vs Linux kernel TCP comparison. And yes, the Linux kernel QUIC implementation is only driving ~5 Gb/s which is 20x slower than what I stated is possible for a QUIC implementation above. Every QUIC implementation in the wild is dreadfully slow compared to what you could actually achieve with a proper implementation.

Theoretically, there is a small fundamental advantage to TCP due to not having multiple streams which could allow it maybe a ~2x performance advantage when comparing perfectly optimal implementations. But, you are comparing a per-core control plane throughput using 1500-byte MTU of, by my estimation, ~300 Gb/s on QUIC vs ~600 Gb/s on TCP at which point both are probably bottlenecking on your per-core memory bandwidth anyways.

[1] https://lwn.net/ml/all/cover.1751743914.git.lucien.xin@gmail...


"As of September 2024, HTTP/3 is supported by more than 95% of major web browsers in use and 34% of the top 10 million websites."

https://en.wikipedia.org/wiki/HTTP/3


A lot of servers still don't support that.

The Go HTTP web server doesn't support HTTP/3 without external libraries. Nginx doesn't support HTTP/3. Apache doesn't support HTTP/3. Node.js doesn't support HTTP/3. Kubernetes ingress doesn't support HTTP/3.

should I go on?

edit: even curl itself - which created the original document linked above - has HTTP/3 only in an experimental build.


> edit: even curl itself - which created the original document linked above - has HTTP/3 only in an experimental build.

It's not experimental when built with ngtcp2, which is what you will get on distros like Debian 13-backports (plain Debian 13 uses OpenSSL-QUIC), Debian 14 and onward, Arch Linux and Gentoo.

Reference: https://curl.se/docs/http3.html


>Nginx doesn't support http 3

nginx does support it.

https://nginx.org/en/docs/quic.html
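Per the linked docs, enabling it (once the module is compiled in) looks roughly like this; this is a sketch, with certificate directives omitted:

```nginx
server {
    listen 443 quic reuseport;   # HTTP/3 over QUIC (UDP)
    listen 443 ssl;              # keep TCP for HTTP/1.1 and HTTP/2
    ssl_protocols TLSv1.3;       # QUIC requires TLS 1.3
    # advertise HTTP/3 availability to HTTP/1.1 and HTTP/2 clients
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
```

Clients connect over TCP first and upgrade to HTTP/3 after seeing the Alt-Svc header, so both listen directives are needed.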


And I see I was not that wrong; the module is still marked as "experimental" and not built by default.

https://nginx.org/en/docs/http/ngx_http_v3_module.html


Ah okay, I was wrong there, mea culpa.


The guy's point still stands - lots of popular software does not yet support HTTP/3.


Well, this statement has to be qualified.

caddyserver v2 supports HTTP/3, and it's a web server written in Go: https://caddyserver.com/features

FYI: there is also a Rust web server that supports HTTP/3: https://v2.ferronweb.org/


I meant the Go built-in web server.


Yes and, at the same time practical support within programming language standard libraries & common tooling lags way behind: https://httptoolkit.com/blog/http3-quic-open-source-support-...


You will get most of the benefits of HTTP/3 even if your app libraries speak HTTP/1.1, as long as the app is behind a reverse proxy that speaks HTTP/3.



Yep, for example, Caddy (zero special configuration to enable HTTP/3).
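As a sketch, a minimal Caddyfile proxying to a plain HTTP/1.1 app (hostname and port are illustrative) already serves HTTP/3 to clients by default:

```
example.com {
    # Caddy terminates HTTP/1.1, HTTP/2, and HTTP/3 here and
    # speaks plain HTTP/1.1 to the app behind it.
    reverse_proxy 127.0.0.1:8080
}
```

Caddy also obtains the TLS certificate automatically, which HTTP/3 requires, so the app itself needs no protocol awareness at all.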


Also apparently slower over fast connections https://arxiv.org/pdf/2310.09423


A decrease in throughput is a small price to pay for progress

