There are millions of MacBooks out there that will fall out of macOS support one day. If this project diverts even a fraction of them from becoming e-waste for a while, it will be a win.
And beyond that, there is simply no laptop manufacturer that matches the quality of Apple's hardware design. I like Macs for their hardware; the software is a compromise. A Linux MacBook would be my ideal laptop.
Maybe so, but 15-20 year old laptops are definitely starting to show their age.
An M2 MacBook Pro, on the other hand, is only 4 years old, has a fairly OK keyboard, and is still in striking distance of current high-end ultrabooks when it comes to performance.
The only thing my X230 struggles to do is run LLMs locally. My needs are simple, and I think normal people (i.e. probably not most people on this site) don't have needs that are any more demanding than mine.
Granted, this is running GNU/Linux rather than Windows. If you're running Windows then yeah, they show their age.
I think an X230 would be performant enough for 95% of the things I do, but a 14 year old CPU is going to have pretty terrible battery life for anything more than very light usage. And things that would be light usage on a recent PC, like watching video encoded with a modern codec, would be fairly taxing on an old CPU with no hardware decode.
True. By the time I upgraded from my X200 (fantastic machine, noticeably outdated), the lack of software support for hardware-decoding H.264 had become a real limitation. Being stuck on OpenGL 2.1 isn't the best either.
I don't know what I'll do if and when my X230 stops being sufficient. If I could buy an Apple motherboard in an X200 chassis I'd do it in a heartbeat.
Congrats, but I think you may be in a small minority when it comes to developers shopping for laptops.
Personally, I had to upgrade from a late-model i9 MacBook Pro to this M2 MacBook Pro, because the npm + docker setup at work was taking upwards of 20 minutes for a production build...
>The only thing my X230 struggles to do is run LLMs locally. My needs are simple, and I think normal people (i.e. probably not most people on this site) don't have needs that are any more demanding than mine.
People who edit video or make music (and do other such tasks) are totally normal too, and there are hundreds of millions of them.
I think maybe you don't understand what the needs are of normal people. It's only partially about what software they run.
I recommend Macs to the people in my life because when they have a problem they can take the machine to the Apple Store in the mall. Or, if they want to understand iPhoto or Pages better, they can go to the Apple Store and take a class. They like Apple laptops because they look nice, feel great, sound amazing (for a laptop), and have excellent battery life.
Like you, I have a ThinkPad (a P-something) and, frankly, it kind of sucks: it's all plasticky, it flexes, the battery life is a joke, the trackpad is meh, and the fans are almost always running. I do like the keyboard though (I'm a fan of backspace).
> The only silver lining is that newer devices will have to scale down memory, so developers will have to ditch memory-sucking frameworks and start to optimize things again.
No. Prices will just go up, and there will be less innovation in general.
Aren't all DLLs on the Windows platform compiled with an unusual instruction at the start of each function? Doesn't that make it possible to hot-patch the DLL after it is already loaded into memory?
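For context, the instruction being described sounds like the hot-patch prologue MSVC emits with the /hotpatch compiler option (plus padding from the /FUNCTIONPADMIN linker option); it applies to binaries built with those flags, not literally all DLLs. A rough, hypothetical x86 sketch of the mechanism, with all names and layout invented for illustration:

```asm
; --- before patching ---
    nop                    ; \
    nop                    ;  | 5 padding bytes before the function
    nop                    ;  | (from /FUNCTIONPADMIN)
    nop                    ;  |
    nop                    ; /
SomeFunction:
    mov edi, edi           ; 2-byte no-op emitted by /hotpatch:
    push ebp               ; just enough room for a short jump
    ...

; --- after patching, done atomically in memory ---
    jmp NewSomeFunction    ; padding rewritten to a 5-byte long jump
SomeFunction:
    jmp short $-5          ; "mov edi, edi" rewritten to a 2-byte
    push ebp               ; short jump into the padding; the rest
    ...                    ; of the original body is untouched
```

The 2-byte no-op matters because it can be overwritten with a single aligned write while other threads may be executing the function.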
Sorry to say, but the way you are framing things is simply not true anymore.
You are not required to buy their "Glasfaser Modem 2"; you can buy any ONT/modem.
You are not required to use any of their equipment; they give you the data to connect via PPPoE directly.
I bought a house with FTTH in 2023 and never used any Telekom hardware. Nobody forces you to use the peer DNS. The Telekom DNS complies with https://cuii.info/anordnungen/ not because they want to, but to avoid being sued every time some company wants an illegal streaming site blocked.
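To sketch what "connect via PPPoE directly" can look like in practice: a hypothetical pppd peers file for a Telekom fiber line, which carries PPPoE on VLAN 7. The interface name and username are placeholders; the exact credentials come from your contract letter.

```conf
# /etc/ppp/peers/telekom -- illustrative only, details vary per line
plugin rp-pppoe.so
eth0.7                    # PPPoE runs on VLAN 7 on Telekom access lines
user "AAAAAAAAAAAATTTTTTTTTTTT0001@t-online.de"  # placeholder access ID
noipdefault
defaultroute
persist
```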
For practical purposes there was a problem (at least a few years ago?), though: Akamai in particular uses DNS to steer you to the correct portion of its CDN, and the default IPs returned by independent DNS resolvers tended to have relatively abysmal peering with the Telekom network, which got completely overloaded at peak times.
Unfortunately, "use <insert favourite DNS provider here> everywhere except for the Akamai CDN, for which use the Telekom DNS" isn't something consumer routers support, so you'd have to run your own custom DNS resolver to work around that problem…
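For what it's worth, a self-hosted resolver front-end like dnsmasq can express exactly this split. A hypothetical snippet, where the domain list and the router IP (assumed to forward to the Telekom resolver) are illustrative guesses, not a vetted Akamai domain set:

```conf
# Route Akamai-related lookups to the ISP-side resolver reached via
# the router; send everything else to a public resolver.
server=/akamaiedge.net/akamaized.net/192.168.2.1
server=9.9.9.9
```

The `server=/domain1/domain2/ip` form matches the listed domains and all their subdomains; order of the lines doesn't matter.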
Comparing Redis to SQL is kind of off-topic. Sure, you can replace one with the other, but then we're talking about completely different concepts, aren't we?
When all we are talking about is "good enough" the bar is set at a whole different level.
I wrote this article about migrating from Redis to SQLite for a particular scenario and the tradeoffs involved.
To be clear, I think the most important thing is understanding the performance characteristics of each technology enough that you can make good choices for your particular scenario.
We're talking about business challenges/features that can be solved with either solution, analyzing the pros and cons. It's not that Redis is bad, but sometimes it's an over-engineered solution and too costly.
I wish you had expanded on that. I almost always learn about some interesting lower-level tech through people trying to avoid a full-featured, heavy-for-their-use-case tool or system.
The cheesy noir persona is for the AI assisted install and that's it. Inside the app, the prompts are strictly business. (They still have roles, but not "characters" or "personas").
As another person who spends the whole day in the terminal, it's sad to see there is no Windows version. I don't understand why I would need GPU acceleration for a terminal, but I would still try it.
I use a company-managed/provided machine that runs Windows, so I don't have to bother maintaining it. All I use is basically Firefox and MinGW to get a bash shell.
I am using it for Ansible, PHP, Java, C, Linux configuration issues, and general questions. Preparing Excel sheets, etc.
It's cut the time I need to deliver a project from the usual 4-20 days down to 1-2 days, plus another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.
While my country will be slow to adopt it (we haven't even adapted to smartphones yet; hooray, Germany), it will have to adopt it eventually, in 10 years or so.
> Of course I still bill the time it would have taken me but for a professional it can be a great improvement.
This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage and impact!
Multiple researchers have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.
About 30% of traffic to Cloudflare uses HTTP/3 [0], so it seems pretty popular already. For comparison, that is 3× as much traffic as HTTP/1.1.
I'd even go as far as claiming that on reliable wired connections (like between Cloudflare and your backend), HTTP/2 is superior to HTTP/3. Choosing HTTP/3 for that part of the journey would be a downgrade.
At the very least, the benefits of QUIC are very very dubious for low RTT connections like inside a datacenter, especially when you're losing a bunch of hardware support and moving a fair bit of actual work to userspace where threads need to be scheduled etc. On the other hand Cloudflare to backend is not necessarily low RTT and likely has nonzero congestion.
With that said, I am 100% in agreement that the primary benefits of QUIC in most cases would be between client and CDN, whereas the costs are comparable at every hop.
Is CF typically serving from the edge, or from the location nearest the server? I imagine it would be from the edge so that it can CDN what it can. So most of the time it won't be a low-latency connection from CF to the backend, unless your backend is globally distributed too.
Also, within a single server, you should not use HTTP between your frontend nginx and your application server - use FastCGI or SCGI instead, as they preserve metadata (like client IP) much better. You can also use them over the network within a datacenter, in theory.
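To illustrate the FastCGI route mentioned above, a minimal nginx location block passing requests to an app server over a UNIX socket (the socket path is an assumption for the example):

```nginx
location / {
    include fastcgi_params;                  # standard FastCGI param set shipped with nginx
    fastcgi_param REMOTE_ADDR $remote_addr;  # client IP travels as a protocol param,
                                             # no X-Forwarded-For parsing needed
    fastcgi_pass unix:/run/app/fcgi.sock;    # app server listening on a local socket
}
```

(`fastcgi_params` already sets `REMOTE_ADDR`; it is spelled out here only to show where the client metadata comes from.)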
Is the protocol inherently inferior in situations like that, or is this because we've spent decades optimizing for TCP and building into kernels and hardware? If we imagine a future where QUIC gets that kind of support, will it still be a downgrade?
There is no performance disadvantage at the normal speed of most implementations. With a good QUIC implementation and a good network stack you can drive ~100 Gb/s per core on a regular processor from userspace with 1500-byte MTU with no segmentation offload if you use a unencrypted QUIC configuration. If you use encryption, then you will bottleneck on the encryption/decryption bandwidth of ~20-50 Gb/s depending on your processor.
On the Linux kernel side [1], one benchmark averages ~24 Gb/s for unencrypted TCP from kernel space with a 1500-byte MTU using segmentation offload. For encrypted transport, it averages ~11 Gb/s. Even with a 9000-byte MTU, unencrypted TCP only averages ~39 Gb/s. So there is no inherent disadvantage when considering implementations at this performance level.
And yes, that is a link to a Linux kernel QUIC vs Linux kernel TCP comparison. And yes, the Linux kernel QUIC implementation is only driving ~5 Gb/s which is 20x slower than what I stated is possible for a QUIC implementation above. Every QUIC implementation in the wild is dreadfully slow compared to what you could actually achieve with a proper implementation.
Theoretically, there is a small fundamental advantage to TCP due to its not having multiple streams, which might give it roughly a 2x performance advantage when comparing perfectly optimal implementations. But then you are comparing per-core control-plane throughput at a 1500-byte MTU of, by my estimation, ~300 Gb/s for QUIC vs ~600 Gb/s for TCP, at which point both are probably bottlenecking on your per-core memory bandwidth anyway.
The Go HTTP web server doesn't support HTTP/3 without external libraries. Nginx doesn't support HTTP/3. Apache doesn't support HTTP/3. Node.js doesn't support HTTP/3. Kubernetes ingress doesn't support HTTP/3.
Should I go on?
Edit: even curl itself, which created the original document linked above, only has HTTP/3 in an experimental build.
> Edit: even curl itself, which created the original document linked above, only has HTTP/3 in an experimental build.
It's not experimental when built with ngtcp2, which is what you will get on distros like Debian 13-backports (plain Debian 13 uses OpenSSL-QUIC), Debian 14 and onward, Arch Linux and Gentoo.
Just why?