Rigid ABIs aren't necessary for statically linked programs. Ideally, the compiler would look at the usage of the function in context and figure out an ABI specifically for that function that minimizes unnecessary copies and register churn.
IMHO this is the next logical step in LTO; today we leave a lot of code size and performance on the floor in order to meet some arbitrary ABI.
I would argue that's largely because we got the ABIs, and the hardware supporting them, to be highly optimized. Things slow down very quickly once you get off that hard-won autobahn of ABI efficiency.
Partly it's due to lack of better ideas for effective inter-procedural analysis and specialization, but it could also be a symptom of working around the cost of ABIs.
The point of interfaces is to decouple caller implementation details from callee implementation details, which almost by definition prevents optimization opportunities that rely on the respective details. There is no free lunch, so to speak. Whole-program optimization affords more optimizations, but also reduces tractability of the generated code and its relation to the source code, including the modularization present in the source code.
In the current software landscape, I don’t see these additional optimizations as a priority.
When looking at the rv32imc emitted by the Rust compiler, it's clear that there would be a lot less code if the compiler could choose different registers than those defined in the ABI for the arguments of leaf functions.
Not to mention issues like the one the OP mentions, which makes it impossible to properly take advantage of RVO with things like Result<T> under the default ABI.
> This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
I don't think there is any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
That's not been my experience with Rust. On average it produces binaries at least 4x bigger than the Zig I've compiled (and yes, I've set all the build optimization flags for binary size). I know it's probably theoretically possible to achieve similar results with Rust; it's just that you have to be much more careful about things like monomorphization of generics, inlining, macro expansion, implicit memory allocation, etc. that happen under the hood. Even Rust's standard library is quite hefty.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
The Rust standard library in its default config should not be used if you care about code size (std is compiled with panic/fmt and backtrace machinery on by default). no_std has no visible deps besides memcpy/memset, and is comparable to bare metal C.
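As a rough illustration, a no_std binary really does start from nothing. A minimal sketch (the `entry` symbol is a placeholder; use whatever your startup code or linker script actually calls):

    // A minimal #![no_std] binary: no main machinery, no fmt,
    // no backtrace support. Beyond this, the compiler may still
    // emit calls to memcpy/memset.
    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // A panic handler is mandatory; an infinite loop keeps it tiny.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // Placeholder entry point for the bare-metal target.
    #[no_mangle]
    pub extern "C" fn entry(x: u32) -> u32 {
        x.wrapping_mul(3)
    }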
I understand this, but that is a pain you don't get with Zig. The no_std constraint is painful to deal with as a dev even with no dependencies, and it also means that if you're working on a target that needs small binaries, the crates.io ecosystem is largely unavailable to you (necessitating filtering by https://crates.io/categories/no-std and typically further testing for binary size beyond that).
Zig on the other hand does lazy evaluation and tree shaking so you can include a few features of the std library without a big concern.
Rustc does a good job of removing unused code, especially with LTO. The trick is to make sure the std library main/panic/backtrace logic doesn't call code you don't want to pay for.
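For reference, the usual size-focused knobs look something like this (all standard Cargo profile keys; actual savings vary by crate):

    # Cargo.toml
    [profile.release]
    opt-level = "z"     # optimize for size rather than speed
    lto = true          # cross-crate inlining and dead-code removal
    codegen-units = 1   # single codegen unit for better optimization
    panic = "abort"     # drop the unwinding machinery
    strip = true        # strip symbols and debug info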
IIRC there's also a mutex somewhere in there used to work around some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
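If all you need is, say, printing an integer, a hand-rolled formatter sidesteps core::fmt entirely. A minimal sketch (the function is mine, not a std API):

    // Format a u32 as decimal into a caller-provided buffer and
    // return the used slice. No core::fmt, no dyn vtables.
    fn write_u32(buf: &mut [u8; 10], mut n: u32) -> &[u8] {
        let mut i = buf.len();
        loop {
            i -= 1;
            buf[i] = b'0' + (n % 10) as u8;
            n /= 10;
            if n == 0 {
                break;
            }
        }
        &buf[i..]
    }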
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
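On nightly that looks roughly like this (the target triple is just an example; pick yours):

    # Rebuild core/alloc from source, with panics compiled
    # down to an immediate abort:
    cargo +nightly build --release \
        -Z build-std=core,alloc \
        -Z build-std-features=panic_immediate_abort \
        --target thumbv7em-none-eabihf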
I'm curious about a system where the capital gains tax rate is 100% for the first... I don't know, let's say a month, then ramps down over the course of the next year until it matches the regular income tax rate. I'm less concerned about the specific time periods than I am about the idea that it would be beneficial to society to have our financial systems encourage long-term thinking.
Exactly, the headline sort of paradoxically reflects the desire for news, and not news itself.
But still, the actual stock market behavior right now is PROBABLY (!!) more reflective of random motion than it is of a fundamental shift in investor behavior.
"My attention is a limited resource. In order to prove that you're a serious human, please donate exactly $157.42 to the Rust maintainers fund and paste the receipt link here".
> With the HDMI 2.2 spec announced at CES 2025 and its official release scheduled for later this year, 8K displays will likely become more common thanks to the doubled (96 Gbps) bandwidth.
Uncompressed, absolutely: we'd need another generational bump to over 128 Gbps for 8K@120Hz with HDR. With DSC, though, it's already possible over HDMI 2.1 and the more recent DisplayPort 2.0 standards; support just isn't quite there yet.
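Back of the envelope, ignoring blanking intervals:

    7680 x 4320 px x 120 Hz x 30 bpp (10-bit RGB) ~= 119 Gbit/s

and blanking plus encoding overhead push the real link rate past that, hence needing more than HDMI 2.2's 96 Gbps for uncompressed 8K@120Hz HDR.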
Nvidia quotes 8K@165Hz over DP for their latest generation. AMD has demoed 8K@120Hz over HDMI, but not on a consumer display yet.
Is it actually good for productivity? The curve isn't too aggressive? Could you, e.g., stack 3 independent windows and use all 3? Or do you kind of give up on the leftmost/rightmost edges?
I think window managers these days do a better job on 3 monitors than on a single one that could have the same area.
With an ultrawide you lose the screen as a concept for managing area, and it gets awful: you lose grouping windows on different screens, per-monitor workspaces, and moving windows across screens.
Either monitors need to present themselves as multiple screens, or window managers need to come up with virtual screens to regain the much needed screen abstraction.
I prefer 3 monitors because it eases window management while being cheaper. For gaming I only need one 240Hz+ monitor, and for LAN parties I only take that one.
Although for sim racing I've been thinking about getting a single ultra wide and high refresh rate monitor, but I'd probably go for a dedicated setup with a seat, monitor and speakers. It gets pricey, but cheaper than crashing IRL.
Just don't try putting something convenient in between; at least that's what my adventures in TB4 taught me. DisplayPort from a TB port works fine, even when the DP goes to a multiscreen daisy chain and the TB does PD to the laptop on the side, but try multiscreen through a hub and all bets are off. I think it's the hubs overheating, and I've seen that even on just 2x FHD (OK, that one was on a cheap non-TB hub, but I also got two certified TB4 hubs to fail serving 2x "2.5k" (2560x1600)). And those hubs are expensive; I believe they all run the same Intel chipset.
That would require monitors supporting daisy chaining in the first place, and I never had any problems with them anyway. Likely related to not using a full-on hub but a minimalistic dongle with a DP outlet, a PD inlet, and a USB outlet (which then goes to a USB switch managing access to simple hubs serving all those low-bandwidth peripherals like the mouse).
The failing hubs were driving either cheap office displays connected through HDMI or high-resolution mobile displays connected through USB-C. Few of those support anything like daisy chaining, or even simple PD passthrough that would let you use the same port for driving the display and powering the laptop, and I absolutely do want dual mobile displays. Even if only so that I can carry them screen to screen for mutual protection of the glass.
I wouldn't hold my breath. Competing models seem to top out around 120 Hz but at lower resolutions. I don't imagine there's a universal push for higher refresh rates in this segment anyway. My calibrated displays run at 60 Hz, and I'm happy with that. Photos don't really move much, y'know.
In one of James Gosling's talks he tells a funny story about the origin of this design decision. He went around the office at Sun and gave a bunch of seasoned C programmers a written assessment on signed/unsigned integer behaviors. They all got horrible scores, so he decided the feature would be too complicated for a non-systems programming language.
Non-systems languages still need to interact with systems languages, over the network or directly. The lack of unsigned types makes this way more painful and error-prone than necessary.
It’s rare I have to do bit math but it’s so INCREDIBLY frustrating because you have to do everything while the values are signed.
It is amazing they haven't made a special type for that. I get that they don't want to make unsigned primitives, though I disagree, but at least make something that makes this stuff possible without causing headaches.
Sometimes I'd like to have unsigned types too, but supporting it would actually make things more complicated overall. The main problem is the interaction between signed and unsigned types. If you call a method which returns an unsigned int, how do you safely pass it to a method which accepts a signed int? Or vice versa?
Having more type conversion headaches is a worse problem than having to use `& 0xff` masks when doing less-common, low-level operations.
This adds an extra level of friction that doesn't happen when the set of primitive types is small and simple. When everyone agrees what an int is, it can be freely passed around without having to perform special conversions and deal with errors.
When trying to adapt a long to an int, the usual pattern is to overload the necessary methods to take longs. Following the same pattern for uint/int conversions, the safe option is to work with longs, since a long can represent every 32-bit value, signed or unsigned, which eliminates the possibility of conversion errors.
Now if we're talking about signed and unsigned 64-bit values, there's no 128-bit type to upgrade to. Personally, I've never had this issue, considering that 63 bits of integer precision is massive. Unsigned longs don't seem that critical.
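For what it's worth, Java 8 added exactly this widen-to-long pattern (plus unsigned variants of the sign-sensitive operations) as static helpers; a quick sketch:

    // UnsignedDemo.java -- the widen-to-long pattern and the
    // Java 8 unsigned helpers that stand in for missing uint types.
    public class UnsignedDemo {
        public static void main(String[] args) {
            int raw = 0xFFFFFFFE; // unsigned 4294967294, reads as -2 when signed

            // Widening: equivalent to (raw & 0xFFFFFFFFL).
            long widened = Integer.toUnsignedLong(raw);
            System.out.println(widened); // 4294967294

            // Where sign actually changes the result, use the
            // unsigned variants instead of widening:
            System.out.println(Integer.divideUnsigned(raw, 3));
            System.out.println(Integer.compareUnsigned(raw, 7) > 0); // true
            System.out.println(Integer.toUnsignedString(raw));

            // Unsigned 64-bit has no wider primitive to escape to;
            // there you're stuck with the helper methods:
            long big = Long.parseUnsignedLong("18446744073709551615");
            System.out.println(Long.toUnsignedString(big));
        }
    }

It works, but the friction described above is exactly this: the unsignedness lives in the call sites, not in the type.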
I think the only answer would be you can’t interact directly with signed stuff. “new uint(42)” or “ulong.valueOf(795364)” or “myUValue.tryToInt()” or something.
Of course if you’re gonna have that much friction it becomes questionable how useful the whole thing is.
It’s just my personal pain point. Like I said I haven’t had to do it much but when I have it’s about the most frustrating thing I’ve ever done in Java.
I don't know about gifted programs, but anything that separates kids who don't want to learn from those who do is a good thing; far too much time is wasted in America's schools catering to bad behavior.
This. Frankly, it is aggravating, if not depressing, that this is somehow an issue that has to be debated at a national level. The fact that it's even an issue is itself an issue.
This isn't really that different from GWT, which Google has been scaling for a long time. My knowledge is a little outdated, but more complex applications had a "UI" server component which talked to multiple "API" backend components, doing internal load balancing between them.
Architecturally I don't think it makes sense to support this in a load balancer, you instead want to pass back a "cost" or outright decisions to your load balancing layer.
Also note the "batch-pipelining" example is just a Node.js client; this already supports clients other than browsers, so you could always add another layer of abstraction (the "fundamental theorem of software engineering").
https://youtu.be/eMefy5VK9TI - Toto, Montreux, 1991