I don't make any effort to support mobile devices, since their GPUs and drivers are usually not good enough for this, but PRs and patches are welcome. At one point Mobile Safari was working.
That Wayland keylogger is not the same thing. X11 has several mechanisms (XTest, XRecord, XI raw inputs) to receive a global raw key input stream, accessible to anyone who connects to the X server, without even making a visible window surface. It even bypasses grabs, meaning that your lock screen password entry can be snooped on.
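For concreteness, here is a rough, unhardened C sketch of the XRecord path (one of the mechanisms mentioned above); it receives every client's KeyPress/KeyRelease events without ever creating a window, which is how on-screen key-display tools work and why the same API doubles as a snooping vector. The build command and the lack of error handling are simplifications, not a recommendation.

```c
/* Rough sketch only: receive the global key event stream via XRecord,
 * with no window created. Build (typically): cc demo.c -lX11 -lXtst */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/record.h>

static void on_event(XPointer closure, XRecordInterceptData *d) {
    (void)closure;
    if (d->category == XRecordFromServer && d->data != NULL) {
        unsigned char type = d->data[0];   /* core protocol event code */
        if (type == KeyPress || type == KeyRelease)
            printf("global key event observed (type %u)\n", type);
    }
    XRecordFreeData(d);
}

int main(void) {
    /* XRecord conventionally uses two connections: control and data. */
    Display *ctrl = XOpenDisplay(NULL);
    Display *data = XOpenDisplay(NULL);
    if (!ctrl || !data) return 1;

    XRecordRange *range = XRecordAllocRange();
    range->device_events.first = KeyPress;
    range->device_events.last  = KeyRelease;
    XRecordClientSpec clients  = XRecordAllClients;   /* every client's input */

    XRecordContext ctx = XRecordCreateContext(ctrl, 0, &clients, 1, &range, 1);
    XSync(ctrl, True);
    XRecordEnableContext(data, ctx, on_event, NULL);  /* blocks, streaming events */
    return 0;
}
```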
The Wayland keylogger acts like an application: all Wayland compositors only send key events to the focused surface, so the keylogger only receives key events while the user has one of its surfaces focused. Even in the scenario where you've LD_PRELOAD-hooked all applications, you will still never get the lock screen password, as the compositor never sends it out across the wire.
LD_PRELOAD is problematic from a security perspective, but it's not Wayland-specific: the same issue applies to CLI applications and X11 applications, and any attacker with the ability to write files could also just replace your binaries with malicious ones (stash them somewhere in ~/.hidden, then prepend a $PATH entry pointing there).
I think you did not understand my point. X11 has several such mechanisms, yes, but it also has the concept of untrusted clients, which are denied the use of these mechanisms and could be used to provide security similar to Wayland's. The point is that this mechanism of untrusted X clients was neglected, and I gave an explanation of why.
Yes, and reading the whole thread top to bottom, your response was the first one that really taught an old dog a new trick. I've been using GNU on X11 since 1991, was annoyed by fellow students' audio streams on my workstation back then, and I had never heard about trusted vs. untrusted X11 apps.
I wonder how mainstream this debate actually was. Did some gamers try to squeeze out 3 extra percent by taking the protocol out of local stacks? There must have been better ways to do that without throwing out all of X11's benefits.
To this day I'm annoyed that I can't have decent window manager integration on WSLg because the compositor doesn't implement the full window-management protocol.
See the ssh manpage for an explanation of untrusted vs. trusted clients (the -X and -Y options, i.e. ForwardX11 and ForwardX11Trusted).
This debate was mainstream. Basically, some people presumably paid to work on Linux graphics decided to implement something new instead of doing their job, and gave talks about why X is fundamentally broken. I believe the driving force might have been the hope of supporting Linux on mobile or embedded devices, where X seemed simply unnecessary (although I think network transparency would be super useful on mobile devices). Some gamers certainly believed nonsense such as "all X programs are forced to use ancient drawing primitives and so programs will be much faster with Wayland". Wayland developers certainly did not do anything to stop such misconceptions. Later there was disappointment because obviously it was not faster (the drawing model for modern clients is essentially the same), but other myths such as the "fundamental security issue" prevailed.
It's as if Wayland is not just a graphical system, but a full business plan.
Control upstream, and companies wanting solutions will go to you first. Why go to someone else in the FOSS market when there is no certainty that the code or standard (extension, protocol, etc.) will get accepted, forcing you to maintain a fork? Given what IBM/Red Hat and Ubuntu have done, for example, it's hard to say FOSS is immune to vendor lock-in.
> It's as if Wayland is not just a graphical system, but a full business plan. Control upstream, and companies wanting solutions will go to you first.
Wayland is open source with a fixed core protocol that's extremely stable, which anyone can build on. New protocols are constantly proposed. The core is minimal and defines how applications interact with the compositor to render and produce the final output. Control by a single entity is virtually impossible. Wayland ensures everyone has a voice because it's an open protocol, which means discussion and development happen in public.
in _reality_ it gives stack owners full proprietary control.
Specifically, the WSLg stack does not let Linux GUI apps integrate smoothly with the Windows window manager, because some bits are missing in the Windows-side Wayland stack. Clipboard integration, window decorations, thumbnails, maximizing into a part of the monitor? Nope. And no patches are taken, supposing you could even figure out where to offer them and how.
It's unfair to claim Wayland is inherently different from X11 in this regard. Both are just specifications, and there are also proprietary implementations of the X11 protocol, primarily for Windows and enterprise settings.
> Some gamers certainly believed nonsense such as "all X programs are forced to use ancient drawing primitives and so programs will be much faster with Wayland".
This is incorrect. Kristian Høgsberg has explicitly stated that a primary motivation for Wayland is the reduced need for a central X server, as many programs already bypass it.
A Wayland compositor is even more centralized, as it combines the compositor, window manager, and display server, while in X these could be separate components. I also do not know of any program that bypasses the X server. Are you talking about programs that you can start from the text console and which then do graphics directly? Those are very rare.
There are reasons for those architecture differences.
Wayland is an evolution of the previous design. X11's architecture had clients sending drawing commands to the X server, a method that became limited and required extensions over time. Wayland's approach is: applications perform their own rendering into their own separate buffers, then tell the compositor when they are ready. The compositor takes those buffers to produce the final image.
Because those buffers are separate, enhanced security is a direct side effect. Wayland is the result of decades of experience and represents the current way of doing things.
I'm aware that extensions exist now, like Present, which make it possible to send buffers, similar to how Wayland operates, so you don't have to do things the primitive way.
However, to claim to speak the X protocol, you still need to support the older functionality, that's what I mean by a tremendous amount of functionality to support. The moment you get rid of that old functionality, you've essentially created a new protocol, which is what Wayland is.
How is that point nonsense? I don't want to see X go, but I don't think it's reasonable to prevent progress.
If you know these extensions exist (and have for a long time), why spread the misinformation about "drawing commands" in the first place? A client does not need to support old functionality. A server does, for backwards compatibility, and this is a good thing! In fact, breaking decades of compatibility is the worst blunder of Wayland. The idea that this is a "tremendous amount of functionality" or a huge burden to maintain is also misleading, first because some drawing commands from the 80s are not a lot of functionality to support from a modern point of view, and also because all of this is still being maintained anyway, just much worse because the resources were redirected to Wayland. And even if one had eventually deprecated some of this stuff, that would not have broken compatibility and many other features at the same time, as Wayland did.
It's not misinformation; that's how X still works. Clients do all kinds of things. New programs aren't like 80s ones, but your X server still must support every operation clients expect.
Wayland doesn't break anything, it's a completely new protocol. Claiming Wayland breaks your use case is like saying systemd broke old init scripts. It did because it's a different system.
Wayland isn't trying to be Xorg 2. It's a protocol. At its core it's only a compositor protocol. Everything built on top is up to the implementation developers.
> Everything built on top is up to the implementation developers.
and that's exactly what creates the problem: window management, for example, is left as an exercise to the reader. Thus (my point above) the WSLg interop for graphical applications _sucks_ compared to where X servers already were. And if MS doesn't implement what's needed, it won't come. No way to fix it on the Linux or the Windows side. The MS Wayland thingie in between tightly controls what is possible.
The logical problem with your argument is that as long as we want to support old clients, we now must support the X server in parallel to Wayland. So there is nothing gained. And the moment we can stop supporting them, we could do the same in X. And yes, Wayland being new and incomplete creates a huge number of problems which nobody needs.
So far Wayland has given me only headaches and I do not see what it offers that X does not already provide. The fact that Wayland advocates make their case by lying (the drawing-commands BS, "network transparency does not work" - a feature we use every day - etc.), the fact that important use cases such as accessibility are treated as an afterthought, that there are diverging implementations with inconsistent support for important functionality, ... all this does not build confidence that the developers even remotely know what they are doing outside of their narrow view of the graphics pipeline itself.
And this after decades of effort. Maybe it is too late now to save X, but Wayland was a terrible idea: not the idea of developing Wayland itself as an experiment with an open outcome, but declaring X dead and Wayland its successor long before it was ready and before it was clear that it is actually a better replacement (so far, it isn't).
Those are not lies. I don't think you know what you are talking about. If you knew, you would know that waypipe + xwayland-satellite works even for forwarding X11 clients over waypipe. I use it myself every day, but it's pointless to discuss it with someone who isn't interested in listening, only in spreading the same lies as everyone else.
Sorry, how was your comment "Wayland is an evolution of the previous design. X11's architecture had clients sending drawing commands to the X server, a method that became limited and required extensions over time. Wayland's approach is: applications perform their own rendering into their own separate buffers, then tell the compositor when they are ready. The compositor takes those buffers to produce the final image." not highly misleading, if X had the Composite extension in 2004 and the Wayland project was started in 2008? The last time I tried waypipe it did not work, and its design seems flawed, as it has to have hard-coded knowledge about each protocol used on the wire.
I apologize for my previous misleading comments. You're right, Wayland causes many problems. As a long time Linux user, I miss how capable X was and don't want to see it go. Wayland compositors feel like toys in comparison, and its advocates sometimes seem to be coping. However, with major DEs and toolkits dropping X11 support, what options do we truly have?
I dropped GNOME a long time ago and I have never used KDE, so I don't think this is an immediate problem for me. As long as there are enough people using it, X will live on. I think the main thing one can do is to not accept the argument that whatever the industry wants is inevitable. Free software would not exist if this were the case.
Most GPU drivers these days are DRM drivers, which implement fbdev support for backwards compatibility only [0]. The fbdev API is primarily "fake" these days.
DRM/KMS with dumb buffers is the preferred API if you want to do software rendering and modesetting. You can find several examples of this online if you search for drm_mode_create_dumb.
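A minimal sketch of that path, assuming a libdrm build (headers via pkg-config libdrm) and that /dev/dri/card0 is a KMS-capable node on the machine; the actual modesetting (connector/CRTC/framebuffer setup) and cleanup are omitted, so treat it as an outline rather than a complete example.

```c
/* Sketch: create a "dumb" buffer for CPU/software rendering and mmap it.
 * Build (typically): cc demo.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void) {
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);   /* device path is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    struct drm_mode_create_dumb create = { .width = 640, .height = 480, .bpp = 32 };
    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) < 0) {
        perror("DRM_IOCTL_MODE_CREATE_DUMB"); return 1;
    }

    struct drm_mode_map_dumb map = { .handle = create.handle };
    if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) < 0) {
        perror("DRM_IOCTL_MODE_MAP_DUMB"); return 1;
    }

    uint32_t *pixels = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, map.offset);
    if (pixels == MAP_FAILED) { perror("mmap"); return 1; }

    /* Software rendering happens here; note create.pitch is the stride in
     * bytes and may be larger than width * 4. Fill with solid grey: */
    for (uint32_t y = 0; y < create.height; y++)
        for (uint32_t x = 0; x < create.width; x++)
            pixels[y * (create.pitch / 4) + x] = 0xFF808080;

    printf("dumb buffer: handle=%u pitch=%u size=%llu\n",
           create.handle, create.pitch, (unsigned long long)create.size);
    return 0;
}
```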
The words "virtual machine" and "interpreter" are mostly interchangeable; they both refer to a mechanism to run a computer program not by compiling it to machine code, but to some intermediate "virtual" machine code which will then get run. The terminology is new, but the idea is older, "P-code" was the term we used to use before it fell out of favor.
Sun popularized the term "virtual machine" when marketing Java instead of using "interpreter" or "P-code", partly for marketing reasons (VMware had just come on the scene and was making tech headlines), but also to get away from the perception that classic interpreters are slower than native code, since Java had a JIT compiler. Just-in-time compilers that compiled to the host's machine code at runtime were well known in research circles at the time, but were much less popular than the dominant execution models of "AST interpreter" and "bytecode interpreter".
There might be some gatekeepers who insist that "interpreter" always means an AST interpreter (not true for the Python interpreter, for instance), or that "VM" always means JIT-compiled (not true for Ruby, which calls its bytecode-based MRI "RubyVM" in a few places), but you can ignore them.
You could contribute to a goal that they care about, or give them something they need, or help find somebody who will. Or you can promise to do so in the future.
Using "value" as a medium is problematic because we increasingly don't value the same things. It worked ok back when food took so much effort to grow that securing it represented a significant portion of our mindshare. Then money was reliably a proxy for our shared agreement that food was good.
But now that it's so much easier to make the necessities we agree on, we spend more time pursuing contradictory outcomes. Which is more valuable: my endeavor succeeding and yours failing, or vice versa? Whose agenda am I furthering when I decide to value a dollar? It's hard to participate in because I can never figure out whether I'm hurting or helping.
Better would be to let people be explicit about what they want so that we can find the things that have consensus and work towards those. As it is, we're letting the ownership of scarce abstractions determine what gets done, which is just bonkers. It was once the best we could do with only the laws of physics at hand to enforce the rules (re: the scarcity of gold), but now we can do better.
I'm much more inclined to believe these guys have something to actually deliver but that something will be a lot less exciting to the average HN reader than the vague marketing implies.
Basis also drops quality on the floor by targeting subsets (ETC1S and UASTC) in hopes of having a "common transcodable subset". In practice, a number of runtime transcoders seem to just decode to RGBA and then fully re-encode anyway, but on the CPU.
LeetCode seems to agree with your definition [0]. The meme isn't saying that inverting a binary tree is particularly difficult - anyone familiar with coding challenges and trees could trivially produce a solution. The meme is pointing out how ludicrous it is that senior/staff/principal interviews can hinge on these types of problems, despite the engineer having proven their proficiency by doing something like running DOOM in TypeScript types or writing Homebrew [1].
I think those challenges (especially leetcode) are heavily misused.
When my team conducts technical interviews, we are asking for a couple simple programming solutions - but we're asking because we want to hear the candidate talk through it and see what their problem solving process is like.
If you aren't evaluating based on conditions like those, I don't really see the value of coding questions.
I agree with this. I got to experience both sides when I interviewed at FB/Meta. I practiced the LeetCode and Cracking the Coding Interview stuff of course, and one of my interviewers asked something like that. I guess it was insulting and pointless, but whatever - I just did it.
Another interviewer asked a much more interesting question: you are writing an IM client. How do you build the client-server communication?
That was a great conversation I enjoyed on its own without regard for the interview. Asking questions: do we have online/offline status? (yes) What are the target devices? (mobile).
IIRC I said I'd want to optimize for bandwidth and latency. Cellular networks can be spotty at times and stall out in really annoying ways. I'd design the protocol to use an efficient encoding with a pre-shared dictionary (the list of friends doesn't change that much after all and lots of the same words/emoji are used frequently). I also said I'd make a flexible format that would let things like online/offline status or you have a new message from X ride along with an actual message in the current conversation and explore options like QUIC or other UDP-based eventually-consistent options given how a cellular dead band can put you in TCP retransmit jail for minutes at times.
For closure I was offered a position but went to a different company.
> If you aren't evaluating based on conditions like those, I don't really see the value of coding questions.
The way I think about it, you're really trying to evaluate a candidate on about 10 different metrics all at once. Metrics like programming skill (writing & debugging), communication skills (listening and explaining), capacity to learn, domain knowledge (eg if you're hiring a react dev, do they know HTML & react?), likeability, and so on.
A good interview gives the candidate the chance to show their worth in all of those different areas. But time is limited - so you want some different challenges which will show many capabilities at once.
Asking a candidate to talk through how they'd solve a "leetcode problem" sort of does that - you can see their CS knowledge and their communication skills. But if that's all you ask, you end up overemphasising the candidate's CS knowledge. Most people aren't very good at thinking and talking at the same time. And you don't learn about other stuff. How good are they at debugging? At reading code? Do they have domain knowledge? Can they talk to clients? Are they good at design? It's also quite easy for the interviewer to be distracted by the question of whether or not the candidate solved the problem you gave them - which isn't really what anyone is there for.
As part of a larger interview, and especially for systems engineering roles, I think they're still fine questions to ask. But if that's the entire job interview, it's a bad interview - because it won't let you evaluate a candidate properly. Especially in product roles where CS knowledge isn't very relevant anyway.
Our technical questions typically stay within the realm of the position we are hiring for, so technical usually revolves around “would you use X or Y in this scenario? Why?”
Understanding how someone thinks is more core to evaluating candidates, so questions like “let’s say you own a window washing company and you’ve been hired to wash every window on every skyscraper in New York City - how do you do it?” provide a much better insight into how someone goes about approaching a challenge.
A coworker has a simple diagram they use outlining a tech stack: backend, cache, frontend, and they give a brief overview of the "application." Then they explain that there's a big report that "customer says X isn't working - how would you approach fixing this?" It's less about technical details and more, again, about how they would approach finding the issue.
This is absolutely the way. My interviews are conversations with someone that I want to work closely with, and while leetcode might be an interesting lunch conversation, it's not going to be part of any of our day-to-day work (c/c++/swift/obj-c).
Maybe the catch was in saying that the left/right labels are arbitrary (they could just as well be called node1 and node2), so inverting is not necessary per se: just visit the tree in node2, node1 order if it needs to be flipped, i.e. no physical rearrangement is necessary.
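A small C sketch of both readings (the names are just illustrative): physically swapping the child pointers, versus simply traversing right-before-left to get the mirrored order without mutating anything.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node { int value; struct Node *left, *right; } Node;

/* Physical inversion: recursively swap the children. */
static void invert(Node *n) {
    if (!n) return;
    Node *tmp = n->left; n->left = n->right; n->right = tmp;
    invert(n->left);
    invert(n->right);
}

/* Same effect without touching the tree: visit node2 before node1. */
static void print_mirrored(const Node *n) {
    if (!n) return;
    print_mirrored(n->right);
    printf("%d ", n->value);
    print_mirrored(n->left);
}

static Node *leaf(int v) {
    Node *n = calloc(1, sizeof *n);
    n->value = v;
    return n;
}

int main(void) {
    Node *root  = leaf(2);
    root->left  = leaf(1);
    root->right = leaf(3);

    print_mirrored(root);   /* 3 2 1 -- mirrored order, tree untouched */
    putchar('\n');

    invert(root);           /* now a plain in-order walk also gives 3 2 1 */
    return 0;
}
```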
Also the best answer to "how do you reverse an array". You don't. You just read it in the opposite order. Especially in any iterator-based language it should be trivial.
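In plain C terms (purely illustrative), "reading it in the opposite order" is just walking the indices from the end; no copy or swap is needed:

```c
#include <stdio.h>

int main(void) {
    int a[] = { 1, 2, 3, 4, 5 };
    size_t n = sizeof a / sizeof a[0];

    for (size_t i = n; i-- > 0; )   /* reverse read, no rearranging */
        printf("%d ", a[i]);
    putchar('\n');
    return 0;
}
```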
In a pure ASCII world, this doubles as "how do you reverse a string". In a Unicode world, the answer to "how do you reverse a string" is "you should never want to do that": naive reversal by bytes or code points tears apart multi-byte sequences, combining marks, and grapheme clusters.
I can't remember the last time I reversed an array. Honestly it's at least a code smell. Why can't your code just read it in the other direction? Or have it in the correct order in the first place (e.g., sort it correctly instead of sorting it backwards and then reversing it). It's not that hard, even in a C-style for loop language.
Because usually the code to handle it is not mine and it expects to go through the list in forward order. Nobody makes APIs that take a list of, idk, files to delete and lets you pass in a parameter that is “please actually run your for loop over this in reverse”. They expect you to flip your array before you hand it in.
That's what it means, and people use it as an example not because it's like, some sort of super difficult unreasonable challenge, but because it's completely unrelated to the work you'd be doing on the job like 99.99% of the time. It's like interviewing for a line cook and asking them to make a spatula.
Depends on how you look at it, I guess. A binary tree has a couple of properties. Usually there is some ordering on the element type, something like: the element on the left is smaller than the element on the right (e_L < e_R). Inverting would then be turning the ordering into e_L > e_R. I guess this is the "exchange the left and right branches" answer.
If you see a binary tree T over elements e as some kind of function, it can test whether an element exists, a typical set operation. So f : e -> {0,1}, where (e,1) means the element is in the binary tree and (e,0) means it is not. All those (e,0) pairs form some sort of complement tree, which might also be seen as inverting it.
What would be really weird is seeing it as a directed acyclic graph and inverting the direction of every edge.