It’s not ready for use yet. I guess the idea is R&D. Come up with new approaches and solutions. Experiment with different things. The current major browser engines are probably quite constrained because it’s important that they remain stable, so they can’t just make drastic changes. Experimentation is good. I hope Servo comes up with innovative new solutions.
Servo is now under the Linux Foundation umbrella. It's refreshing to finally see some more players in the browser engine space than the tri-opoly of Google (Blink), Apple (WebKit) and Mozilla (Gecko). LibWeb (the Ladybird browser) is another browser engine, but it is also in the very early days of development.
Gecko is depressingly tied to Firefox and not easily embedded. Blink has somehow become the de facto browser engine thanks to Electron. Even Qt (QtWebEngine) is now based on Blink instead of WebKit. A lot of web apps like Teams only support Blink/Chromium, just like the dark old days of Internet Explorer. Chrome and Edge are basically spyware, and people gravitate to them mostly because they come as the default and/or are the best-known option on their phone or PC.
I do hope GeckoView gets wider support, but at the same time, the competition is already insanely entrenched. People aren't using Firefox because they prefer Gecko's rendering, after all.
There is also the Flow browser by Ekioh https://www.ekioh.com/flow-browser/ which is closed source and marketed for use in set-top boxes and smart TVs.
Honestly why would anyone do anything other than Blink at this point?
For one, there's so many examples to crib from about how to do it.
But more generally, I just feel like every other player in the space has adopted an adversarial stance. For sure, sometimes Safari or, less typically, Firefox do lead, but it's rarely by much, & usually their overall set of capabilities is far, far smaller.
I don't want a monoculture either, but until we get another pro-web player who can web-forward their shit, who is aggressive about making the web better, there's just zero hope for this conversation. Blink plus two boat anchors isn't good.
The IE comparison is so woefully out of touch & distasteful. Hard pass. Chrome tries. There wasn't the ecosystem of standards bodies back when IE was inventing stuff whenever they felt like it, but today there are tons of expectations & reviews happening at multiple levels to try to refine & figure out what makes sense. In some ways it works great & a lot of review happens, but Moz + Apple hate any real power for the web & kick & scream & don't actually review what should be done if we did want the capability, & reject on principle making a bigger web platform. There's no real debate because 2/3 players actively believe in & push for a small web. It's a miserable rock & hard place situation, trying to figure out what to do when there's only one player who believes in a web platform at all.
I love the new entrants, but I really worry they'll also be into their own jam & not excited or interested in making a broader better web platform, and just turn the 2 Vs 1 anti/pro web into a 3 Vs 1 battle.
Google/Blink consistently pushes for bad things like EME, FLoC and Topics API.
They have a very different vision of the web, one where users live in a corporate playground, with browser engines so complex that only a few large corps can manage them.
At this point what is making the web a better platform? The web has feature overload, with features like Server Push that are seldom used. FWIW I think there are some exciting possibilities and new features for the web, especially around the P2P space, but I don't think it's in Google's interests to push for that at all.
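For what it's worth, the raw P2P primitive already shipped years ago as WebRTC data channels; what's missing is everything around it (signaling, discovery). A hedged sketch, with the STUN server as a placeholder:

    // Browser-to-browser channel via WebRTC (signaling deliberately
    // omitted; stun.example.org is a placeholder, not a real server).
    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });
    const channel = pc.createDataChannel('p2p-demo');
    channel.onopen = () => channel.send('hello, peer');
    channel.onmessage = (e) => console.log('from peer:', e.data);
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    // The offer still has to reach the other peer out of band (a server,
    // copy-paste...), which is the unsolved part that keeps the web host-centric.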
It's a huge company of hundreds of competing ideas.
I think almost no one has any respect or appreciation for the circumstances FLoC & the Topics API were raised in. The dogfucker skanks at the Internet Advertising Bureau were actively pushing government regulators to replace cookies with some gobshit anti-user trash, far worse. I genuinely feel for Google. No one sees or knows any of the other context going down at the time, but all eyes are on the team of like 40 trying to find some way to preserve some privacy, in an org & on a task that is the hugest fucking lightning rod for attention & negativity, being the most visible & one of the most hated companies on the planet.
I agree that Google seems to have kind of lost the will to fight for a lot of good shit. There's still tons of great Google initiatives, but if someone can't tick it off their OKR within 8 months & call it a roaring success, the effort & the team seemingly have no backing, no one with real principles, intelligence, or spine to keep the really really smart good shit going. That's just not a reasonable time frame for adoption. The web's early adopters take 3 years, minimum, for most interesting capabilities, and there's seemingly no one anywhere with that kind of patience for rolling out. Fuck this industry. This is why we can't have nice things.
Google is a huge company, yes, but the number of people who drive Blink development is relatively small (of course it is; if most of Google was focused on or even had significant input into Blink, they'd both fail).
I think you have it backwards. It's not that Google has lost the will to fight for good, it's just that Google is a corporation, and their "don't be evil" mantra was only a thing when they were small and it was convenient and profitable for them.
Google has not given a damn about doing "good shit" or about privacy for well over a decade now. Their entire business model is predicated on surveillance capitalism. If they ever truly were the "good guys", they have not been that for most of the evolution of the modern web.
Google is not our friend. They suck, for entirely predictable capitalistic reasons, and their stranglehold over web standards needs to stop.
While I think your viewpoint is interesting, the language prevented me from appreciating it fully. Would you mind expanding on your opinion on privacy and Mozilla/Apple’s role in the standards process a little more?
It feels shallow to me to let oneself be rebuffed by such petty small barbs. I try to follow the "Highest Form of Disagreement": find the best interpretations of things first, rather than let small-fry shit cause roadblocks in my mind.
Your ask about Moz/Safari I've said quite a few words on elsewhere in this post & on others. I think the far more interesting topic is what a bunch of jackal villains the IAB is. I cannot stress enough how hard a time Google has had trying to preserve any privacy on the net when there is a huge lobbyist group close to regulators pushing so hard to end user privacy. These people have the worst, most anti-user outlook imaginable & are up to absolutely no good. My strong language is just a start on describing how awful the IAB is & what sinister monsters Google has to go to the mat & wrestle with to try to preserve user privacy in a post-3rd-party-cookie world.
Personally I don't think it's right that I get flagged for my previous reply, but I'm glad to have made a better go at my reply this time. https://news.ycombinator.com/item?id=35565707
>pushes for bad things like EME, FLoC and Topics API
EME is better than browsers having to implement their own proprietary APIs for DRM. If EME didn't exist DRM would still be used by sites like Netflix.
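Worth remembering that EME is only the small standardized shim around the (still proprietary) CDM; before it, this was Silverlight/Flash plugins. The page-facing side looks roughly like this (hedged, untested sketch; the key system string and license plumbing are placeholders):

    // Sketch of the page-side EME flow; 'com.example.drm' is a placeholder
    // key system, and error handling is omitted.
    async function setUpDrm(video) {
      const access = await navigator.requestMediaKeySystemAccess('com.example.drm', [{
        initDataTypes: ['cenc'],
        videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
      }]);
      const mediaKeys = await access.createMediaKeys();
      await video.setMediaKeys(mediaKeys);
      video.addEventListener('encrypted', async (event) => {
        const session = mediaKeys.createSession();
        session.addEventListener('message', (msg) => {
          // POST msg.message to the vendor's license server, then feed
          // the response back via session.update(licenseResponse).
        });
        await session.generateRequest(event.initDataType, event.initData);
      });
    }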
>FLoC and Topics API
These are better for privacy than learning interests by tracking via third-party cookies. They're moves to retain the positive uses of the web while increasing people's privacy.
>At this point what is making the web a better platform?
WebGPU was released recently and provides big speedups for GPU-intensive use cases.
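(For the curious, a GPU compute dispatch now looks roughly like this; untested sketch, buffer setup/readback elided:)

    // Rough WebGPU compute sketch: double every float in a buffer on the GPU.
    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();
    const module = device.createShaderModule({ code: `
      @group(0) @binding(0) var<storage, read_write> data: array<f32>;
      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id: vec3<u32>) {
        data[id.x] = data[id.x] * 2.0;
      }
    ` });
    const pipeline = device.createComputePipeline({
      layout: 'auto',
      compute: { module, entryPoint: 'main' },
    });
    // ...create a GPUBuffer, bind it at @binding(0), then in a compute pass:
    // pass.dispatchWorkgroups(Math.ceil(count / 64));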
>but I don't think it's in Google's interests to push for that at all.
What are the benefits to users or server hosts? Will it improve the user experience? Reduce latency? Save costs? If peer-to-peer features provide value I don't see why they wouldn't be interested. Peer-to-peer has its own set of drawbacks, so there are many uses where it isn't a good option.
> If EME didn't exist DRM would still be used by sites like Netflix.
(Playing devil's advocate just a little here...)
...and maybe it'd be cumbersome/difficult/annoying for users, opening the door for big changes in the landscape.
Spotify-like streaming services for music (basically the same stuff everywhere, just choose where you get it from) only exist because they had to compete with the ease of getting DRM-free music for free from P2P services.
The acceptance of easy standardised DRM for video has led to movie streaming services being the modern equivalent of the old cable networks. You want to watch X? You must subscribe to Netflix. But Y? Y is only available on Disney. Z? Amazon.
Personally I don't care. I've implemented EME and proprietary DRM playback numerous times, and I don't subscribe to any streaming services because I find 99% of TV/movies to be not worth my time. For people who do care though, EME is probably net negative.
> WebGPU was released recently and provides big speedups for GPU-intensive use cases.
On the contrary.
WebGL was going to get OpenGL ES compute shaders, contributed by Intel three years ago. Google blocked the effort with the reasoning that WebGPU compute was around the corner. Again, three years ago!
Due to politics between browser vendors, we now have yet another shading language to learn, and because Rust is fashionable, it naturally moves away from classical shading languages toward a more Rust-like syntax.
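(For readers who haven't seen it, the same trivial kernel in both styles; illustrative lines only, not complete shaders:)

    // Classic C-style compute shader (GLSL ES 3.1, roughly):
    //   layout(local_size_x = 64) in;
    //   layout(std430, binding = 0) buffer Data { float data[]; };
    //   void main() { data[gl_GlobalInvocationID.x] *= 2.0; }
    //
    // The WGSL equivalent, with its Rust-flavored syntax:
    //   @compute @workgroup_size(64)
    //   fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    //     data[id.x] = data[id.x] * 2.0;
    //   }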
It arrived six years too late, so it represents Vulkan, Metal, and DirectX as they were at their 1.0 releases.
Still, it doesn't fix the issue that after 10 years of WebGL there is no reasonable debugging story and no web games that can match PlayStation 3 graphics.
What an echo chamber. People are so desperate to hate, want to hate. Any outside view is just blasted away in these complex situations.
I do personally think p2p is absolutely key to unlocking a future where users are not so beholden. It creates a much more connected web, versus the ultra-federalized model. Yes there are drawbacks but simply turning our backs on connecting people, deciding the web is just going to stay hosted forever and ever, is a huge denial of galaxies of potential. We won't know how far we can go until we try, until we begin.
> There's no real debate because 2/3 players actively believe in & push for a small web. It's a miserable rock & hard place situation, trying to figure out what to do when there's only one player who believes in a web platform at all.
Not to say I don't believe you, I'm just curious. In what way(s) does Google uniquely believe in the web as a platform where Apple/Mozilla don't? Can you provide some examples of this?
Like 18 months ago Safari launched a "look at us, we are so great, we don't support this long list of web APIs; isn't Chrome evil" post, and Moz joined in like two days later repeating the exact same claims in an obviously coordinated negativity campaign. Web USB, Web Bluetooth, Web MIDI, ambient light sensor, a bunch of other sensors.
I'm sorry, I really want to find the links & show this off more. It was the most boldfaced & honest admission that basic, useful, interesting things were not welcome: profiteering off suspicion & hostility while telling users that the anti-feature was decidedly the only acceptable way.
One can also review Moz's standards positions. It's a great effort & I applaud Moz for their transparency & don't want to hurt the effort. There's a lot of good too. But there's such a long sordid history of Moz saying no, absolutely not, this is awful, then eventually having to circle back around & at least make some effort to not be a huge stick in the mud, to at least help figure out to some degree what would fit if this were a goal. And often deciding yeah, we will do it.
https://mozilla.github.io/standards-positions/
They just don't seem to have any ability to differentiate between what a privileged/permissioned site should be granted versus what the baseline security model should be. Any potential information leak anywhere seems like cause to terminate the effort.
What really happens is that Mozilla brings multiple well-argued objections (Safari, too) that span both technical and non-technical reasons, but Chrome just releases its half-baked non-standards and calls it a day.
Fear, Uncertainty & Doubt are being used again and again to obstruct basic, sensible user asks like being able to use the Arduino Web Editor or work with a MIDI keyboard. Fear is the worst demagoguery of all.
Put it behind a permission! Only turn it on if the user installs a PWA! The idea that Moz/Safari know better than to give users what they want, that they should deny the web basic possibilities: that is demagoguery. It was never based in sound perspective.
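(And Web MIDI is already built that way; the whole thing is permission-gated at the entry point. Hedged sketch:)

    // Web MIDI is gated behind a user prompt (always for sysex; exact
    // prompting policy varies by browser and has tightened over time).
    const midi = await navigator.requestMIDIAccess({ sysex: true });
    for (const input of midi.inputs.values()) {
      input.onmidimessage = (e) => console.log('MIDI bytes:', e.data);
    }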
I completely agree. The amount of FUD Mozilla spread about Web MIDI was truly distasteful. People say that Google is the enemy, and perhaps they are. But at least Google does not write off entire groups of users (like musicians) because of a swivel-eyed security paranoia.
If I wanted a paternalistic entity telling me what I can and can’t do with my device, I’d use an iPad.
> The amount of FUD Mozilla spread about Web MIDI was truly distasteful.
As in: everything they said was true, and the moment it launched it was found being used for fingerprinting (and Google doesn't even hide it behind a permission prompt).
Anything can be used for fingerprinting. Your GPU can be used for fingerprinting. Your fonts can be used for fingerprinting. MIDI is so far down the list of fingerprinting threats.
If Mozilla are really serious about fingerprinting then they need to remove <canvas> right now and make every website render in Times New Roman.
Fingerprinting cannot be solved by disabling browser features in a standard browser. It can be mitigated by using content blockers such that the fingerprinting code never runs, or by using a specialist browser like the Tor browser.
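(The canvas point isn't hypothetical, either; the classic probe is only a few lines. Hedged sketch:)

    // Classic canvas-fingerprint probe: tiny rendering differences across
    // GPU/driver/font stacks make this hash fairly distinctive per machine.
    const c = document.createElement('canvas');
    const ctx = c.getContext('2d');
    ctx.textBaseline = 'top';
    ctx.font = '14px Arial';
    ctx.fillText('fingerprint probe', 2, 2);
    const bytes = new TextEncoder().encode(c.toDataURL());
    const hash = await crypto.subtle.digest('SHA-256', bytes);
    // `hash` is fairly stable per device and build, and no MIDI was required.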
> If Mozilla are really serious about fingerprinting then they need to remove <canvas> right now
Ad absurdum is not as great an argument as you think it is.
> Fingerprinting cannot be solved by disabling browser features in a standard browser.
It also shouldn't be facilitated by just blindly turning features on without proper mitigation. And proper mitigation is complex.
> It can be mitigated by using content blockers
So now you're shifting the responsibility onto the user. Even though it's been shown time and again that users can't really understand all the complexities of modern systems, their capabilities, and the far-reaching results of what these systems can and actually do.
Strangling the web platform to "keep users safe", forcing them onto much less secure, much more invasive apps, is neither justified nor reasonable. Watching Mozilla adopt the same condescending, paternalistic, platform-murdering "protect the children" absolutist authoritarianism with no possible consideration or affordances was a sad sad sad week. It's extremely reckless & hostile behavior, at deep deep deep injury to making so many great futures possible.
Prior to the App Store's creation, Apple and Google were the same in regarding the web as a legitimate platform. After it was released, only Google was really pushing for it. Hence the creation of Progressive Web Apps, thanks to Alex Russell et al. at Google.
PWAs failed in the marketplace and are essentially dead. Google pushed them because they were already on track for web dominance, and the shift toward native apps was a threat to that. These days Google still wants to control the web (and more or less does), but they also have a healthy app ecosystem, albeit only for Android.
I'm sure it still bugs them to no end that they have zero control over the app experience on iOS, which they possibly could have had more input on had webapps ended up being the dominant way of doing things on mobile.
I think the expectation that tech wins & is everywhere in 5 years is murdering useful progress. It's not the climate we are in any more. Success is slow boiling, especially for web standards.
The lack of empathy for how long change takes keeps letting doom & gloomers send good things to the graveyard.
You don't even understand the term but you're being so dismissive. Maybe read the full thread and then comment.
By definition their stance is 100% about "a big web". It's an ideology that the browser shouldn't become another OS, competing vs one that says it should.
And while one side of the conversation tends to see their approach as 100% correct, the history was people downloading random exes with 0 sandboxing to do 99% of what SPAs offer today, so there's merit to both trains of thought.
> There's no real debate because 2/3 players actively believe in & push for a small web. It's a miserable rock & hard place situation, trying to figure out what to do when there's only one player who believes in a web platform at all.
And dozens of replies have all managed to understand what they meant. They didn't need to define small vs big web because for those familiar with the subject matter it's a pretty intuitive way to describe the two opposing ideologies
Not every comment can be written for every reader. Which is why I recommended you read more of the thread to gain some understanding before replying.
> Of course this is not Firefox's stance.
The "of course" is unnecessarily condescending and simply doesn't follow. Firefox OS was code named "Boot to Gecko"... as in boot into Mozilla's browser engine. Mozilla openly stated they'd push for new Web APIs to enable it.
For a long time all the major players were for a "big web", it's no coincidence that the only one left pushing is the only one that didn't abandon their personal web based platform.
I wouldn't say this is an uncommon philosophy. I for one am very aware that Firefox is, for the most part, an objectively worse browser than Chrome; SpiderMonkey is worse than V8, Gecko is worse than Blink. I still use it in a futile attempt to avoid Google's monopoly on the web (and for tree-style tab), but I'm perpetually disappointed by Mozilla's mismanagement of Firefox.
It's as if Mozilla is trying to morph Firefox into a discount Chrome clone by stripping it of its only appeal, killing its viability in both markets simultaneously.
I agree with you that Mozilla's mismanagement of... everything... has been a disappointment, but Firefox is... fine. I've used it for years and have few complaints. I expect it's strictly worse than Chrome on several axes, but I doubt in ways that most people would notice or care about.
They began a replacement engine 7 years later to address shortcomings, and 3 years after that still aren’t sure if it’s actually better and worth switching over to?
Now that I write this, I kind of get it. But I’ve only experienced this for weeks worth of work, not for years.
I heard that Firefox incorporated some parts of Servo years ago as part of the Quantum project. However my knowledge is limited to those buzzwords and I don’t have any real sense of how big a portion of the work that went into Servo got used.
Mozilla abandoned work on Servo in 2020 (source: https://tildes.net/~tech/ra8/i_am_a_mozilla_employee_amaa) shortly after Layout 2020 came about. Servo was in limbo for some time before recently being handed over to the Linux Foundation, hence the time delay.
I feel like that would be helpful context at the top of this article, because before knowing this I was pretty confused why a project was publicly questioning a decision it made 3 years ago, whereas with that context it seems this was more a discovery exercise by the new maintainers.
The article is published on a blog for people working with/on Servo and for people who are interested in Servo already. It is not a "This is what Servo is" or "Announcing Servo" post or anything like that; it's a niche article on a niche blog for a niche project. It's perfectly fine not to lead with context that is obvious from the rest of the website.
I knew Servo was part of Mozilla, but was confused by the article. It said Layout 2013 was in use, but listed multiple major issues with it that I would think would prevent FF from being acceptable to any normal person.
Knowing it was a project and not a part of the current FF engine (as I had assumed) is the context I was missing.
Thank you. I don’t follow FF development closely enough to know this stuff, I just know enough to recognize a few project names.
Also note that Servo is no longer a part of Mozilla. Servo's development now has nothing to do with Firefox's needs or roadmap. They're completely independent and are chasing their own goals.
Layout 2020 is an experimental attempt at parallel layout - an intersection between the complexities of "how the hell do I even do that in Rust" and "how the hell do I even do that at all".
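(A toy illustration of the fork-join shape of the idea, not Servo's actual code; the real engine does this across threads in Rust, and features like floats break the "siblings are independent" assumption that makes the parallel map legal:)

    // Toy fork-join layout: size each child "independently", then combine.
    // The .map() is the step a parallel engine farms out to worker threads;
    // floats and inline flow make siblings NOT independent, which is
    // exactly why parallel layout is hard.
    interface Box { intrinsicWidth: number; children: Box[]; }
    function layoutWidth(box: Box): number {
      const childWidths = box.children.map(layoutWidth);
      return Math.max(box.intrinsicWidth, ...childWidths, 0);
    }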
Servo, if memory serves, was also experimenting with 3D layouts in VR as the wave of the future at one point. Draw your own conclusions.
The really, actually useful thing that came out of that effort was the CSS engine (Stylo), which has already been incorporated into Firefox.
Love that Servo is still being worked on. Don't love that the new layout engine doesn't support floats or flexbox. We need to do more to stave off the homogenisation of the web into the Chromium spyware machine. What a shame that Mozilla's funding was so drastically cut.
I've watched videos about implementing CSS fixes in Ladybird (https://www.youtube.com/@awesomekling) and whenever the web specs get pulled up, I find them pretty clear most of the time. There's a lot of context required if you want to read the details about one specific thing, but all the necessary context is usually linked and readily available if you need it.
That complexity does lead to bugs (and bugs in Chrome can become part of the de-facto standard if the Chrome devs don't fix the bug fast enough) but the spec itself seems quite complete in most areas. In the few areas that are underspecified, simply seeing what other browsers do usually fixes the problem as there's usually an overlap in behaviour between at least two out of three remaining browser render engines.
The basic algorithms and rendering steps are all laid out pretty well in the spec. Even Quirks Mode has a standard (https://quirks.spec.whatwg.org/), though that's far from complete as every browser has its own compatibility quirks because of browser detection and branded CSS properties.
I know CSS documentation used to be awful, but the current version of the WHATWG spec is quite readable in my opinion and doesn't leave as much room for confusion anymore.
Dumb question: if CSS is under-specified and everyone designs their sites for Chrome, could we analyze Chromium's CSS rendering code and use that to create a more well-defined CSS spec which other browsers can implement?
Perl was in exactly that situation many years ago. The specification for Perl 5.x was `perl` itself; if `perl` was found to conflict with the spec, sometimes the spec would change.
It was a nightmare, and part of the reason (I believe) that Perl 6 got so over-specified. It meant you couldn’t count on the spec, that you never knew if something was a bug or intended behavior.
HTML4 was in a similar situation. Microsoft and Netscape developed competing variations of under-specified features, leading to a decade of the worst Dark Ages for web development.
So if we use Chromium’s CSS engine, then either (a) Google now controls a fundamental part of the web (and there’s no evidence that they’ll behave in any sort of responsible manner), or (b) we now have the same specification problem.
Getting complex code and specs to agree 100% is a Hard Problem, unsolved almost everywhere. Giving more power to Google won’t help.
For a while, while Google had a minority share and everyone was still targeting IE instead of the standard, they would detect that and take a different rendering path, even if it meant preserving broken behavior. At the same time they tried to fix/improve the spec and waged a publicity campaign against Microsoft for not following standards.
While that’s not necessarily a capability a smaller, not-well-funded browser might have, they certainly have lots of forums to advertise “hey, Google is doing this incorrect thing, this is how we detect existing sites relying on that buggy behavior, and this is how you fix it”. You could even show a banner while browsing: “standards-conforming vs. not”. You leave that banner out for places where the standard is underspecified, to be fair, and reinstate it once you’ve got it clarified, assuming websites are still relying on those nonstandard paths / Chrome isn’t addressing the behavior.
That being said, it’s no surprise this is the situation when all commercial investment into browser tech comes from companies giving it away for free. Think about the hundreds of millions of dollars being poured in. That’s not out of the goodness of their hearts, and it’s going to be difficult to impossible for anyone else to compete.
For what it’s worth, I think a huge regulatory improvement to monopoly laws would be an anti-dumping provision for software. You’re not allowed to sell something below what it costs you to make (including accounting for R&D), and the only person’s effort you’re allowed to zero-rate is your own and that of any unpaid volunteers you convince to join you. It would mean that browsers would now cost actual money that could fund third-party efforts (i.e., if my budget for browsers is X, then maybe I want to invest in a browser that treats me better). Of course the challenge with this model is that it’s hard to actually enforce. Are you giving a feature away for free, or are you improving an existing product so it’s part of that overall cost? How do you budget recurring revenues for products that want to amortize the cost over periodic payments instead of upfront ones? Etc., etc. I don’t necessarily know what the answers are. There may be none. But certainly Google’s control of the web comes from the fact that they’re pouring in huge amounts of money to maintain a controlling interest over the thing that enables a good chunk of their revenue stream. It’s not as important to Apple, and we see them not investing as heavily (it’s important for the product, but not intrinsically as strategically key, hence the historic neglect). The same goes for Microsoft, which ceded its spot in the internet ecosystem to Google back in the day.
Anyway, what I’m trying to say is that Google’s control won’t weaken because you decide to duplicate their particular implementation. Their control weakens when you can take away their market share, and for a smaller entrant you want to replicate as much behavior verbatim as possible to lower friction for users. It doesn’t matter what the spec says. Aggressively prioritize what is important to customers and serve the market where Google is incapable of doing it. For example, memory usage is a trivial one to get them on. They’ve lost control of that beast and can’t figure out how to get better. Clobber them over the head with that failure. They can’t handle many simultaneous tabs. Make your browser work smoothly, without crashing or using terabytes of memory, even if it’s handling 100k tabs.

If I were to take on that endeavor, that’s how I would take on Google (and no, I wouldn’t use Blink as a starting point, because you’re just picking up all the tech debt and you’re not going to do a better job than Google at trying to shovel shit away; you need to start greenfield like Chrome did, and like Firefox and Opera did for a time when they showed what a performance hog IE was and how it didn’t even have valuable features that people cared about). Similarly, ship the browser with adblock and actively fight websites that waste resources. Aggressively prioritize the user’s health and digital well-being (meaningful privacy improvements by closing as many side-channel data-gathering techniques as possible) and respect the well-being of the machine you’re running on.
The issue with this approach is that it would encode all the bugs and quirks into the specification which isn't a good idea.
(I'm an engineer on Blink's layout engine).
We've recently finished re-architecting Blink's layout engine. Part of the reason we made this investment was that we were concerned we couldn't fix WebKit-era bugs (e.g. too many sites would depend on them due to our shared heritage). This makes other engines' jobs (e.g. Gecko's) super difficult, as they'd need to encode even more quirks than they have time for.
I think we've broadly mitigated a large part of that risk. There are still large parts of CSS which are underspecified, e.g. table/block/float layout. But it's slowly getting better.
And this is kind of a huge deal. 3 different implementations rather than 2 is kind of like the old saying "never go to sea with two chronometers, take one or three".
I'm not saying Blink exactly matched WebKit anymore, before your re-architecture. But it shared a lot.
If you have a difference in behavior between 2 implementations, then it's easy to think of it as saying one or the other is right. And which one is right will be partly influenced by the amount of exposure the feature has had, multiplied by market share. Which is not a great criterion to use if you want a spec that hangs together, especially if a reimagining like Servo is going to come along and rub against the grain of every arbitrary decision ever made.
Is CSS under-specified? When I've been curious about something, there has been a pretty detailed spec. There are differences in behavior on newer things as they get worked out, but I feel like so much is due to actual bugs relative to the spec rather than to the spec itself.
Is there an example of under-specification that sticks around in practice?
In many cases this has already happened (although most of it was written when IE, Firefox and Opera were the browsers to align with, and Chrome just implemented the spec). There are bits that are still underspecified, but work is already ongoing to specify those.
What's preventing that is the large installed base of sites on the 'net. Quite possibly, the majority of web content has already been created (1), and it's getting worse as old sites are shut down.
Also, Safari is a thing (and personally I prefer FF for webdev, though Moz is certainly working hard to turn me away).
(1) those worth reading anyway, rather than generated content
The browser landscape today is composed of practically just one browser: Chrome. It is easier than ever to get browsers (read: Chrome) to implement different rendering schemes based on identifiers or descriptors provided by web devs.
Supporting the latest and greatest is a great marketing ploy, so browsers (read: Chrome) are incentivized to support them. Browsers that don't will be forced to follow lest they become even further irrelevant.
I think it is a very bad idea for a single browser vendor to deliberately subvert standards process and attempt to steamroll the web with their own proprietary tech!
The W3C hasn't come out with anything for HTML in a long time now. What we're calling HTML5 is spec'd by a Google-financed group of individuals (mostly Chrome devs) collaborating at github.com/whatwg.
The last WHATWG HTML revision the W3C has put into Recommendation status is the WHATWG HTML Review Draft published in January 2020. Last year's review draft was rejected due to privacy concerns, and also due to long-standing issues with HTML5's so-called outlining algorithm, i.e. the spec's written interpretation of heading levels for landmarks in navigation, a left-over from back when Ian Hickson was editor, which didn't match the reality of what assistive technologies actually do. Meanwhile, the WHATWG spec has been edited (by a long-term W3C editor), but no new consensus for a new W3C HTML recommendation has been achieved [1].
Really? The spec might maybe be hard? Perhaps? I don't know for sure.
But there is a long long long legacy of CSS Acid Tests that have been very well established & expected, that should guide most implementations to success.
I forget what it's called, but a bunch of the major browsers get together each year & pick a couple of things to agree to focus on & make happen that year. Trying to just play catch-up & work through those yearly targets seems like a semi-intuitive path to getting to modern. I agree that maybe not all these specs have gotten great test suites set up, but I feel like CSS in general realized this was a problem well over a decade ago & upped their game. Maybe the situation has decayed since, I don't know, but I would love a better idea of where we stand atm.
Do you have any references or links or whatnot to support this? I thought the CSS group was still big into testing. They're no longer called Acid tests, but I thought it was all still alive & well.
They are, they just don't use the Acid 1, 2, or 3 tests that you're familiar with. They currently use a manual test suite, which requires humans to look at reference output and compare it to what the rasterizer and compositor are doing.
How far is Servo from being daily-driveable? What's stopping someone from slapping it together with SpiderMonkey or some other JS runtime and calling it a browser?
Ladybird isn't really what I would call efficient; it's very slow compared to any other browser, because they're still focusing primarily on correctness as they mature their engine.
Servo has been in development for years, and last time I checked it was an extremely buggy web view that couldn't do much. On the other hand, Ladybird is already usable.
The main reason many people abandoned Firefox was because everything about Chrome was faster and more responsive. Render engine performance matters more than implementing APIs that one or two websites may ever use for anything other than browser fingerprinting. Servo was abandoned by Mozilla but the improvements to the browser engine that did make it into Firefox have sped up the browser significantly. They were the reason for a whole bunch of "check out Firefox it's fast again" posts all across the internet.
I care more about my browser being fast than I do about it supporting WebMIDI or WebSerial and I think most users agree with that. Performance and efficiency are also the reason (as far as I can tell) that macOS users stick with Safari.
I just wrote a long post about web platform & balance of power. And this is like 3000% more the real feels of the situation.
I really really hope it pans out & delivers. But it properly & rightly should be an ever-shrinking piece of the puzzle. It underpins it all, & flexibility here would be key; better tech (parallelizable!) is very empowering & would put many criticisms to the sword. But yeah: it's an ever-shrinking factor versus what we can do with the web.
And there are so so so few powers helping us make what we can do with the web better. Even the historic pro-web folk are trying to end the web as we know it. The "Towards a Modern Web Stack" Hixie mentality (Flutter CanvasKit) is to basically ignore, destroy & end the contemporary web & HTML & make the browser a native-app delivery platform with zero user agency, to make it a giant moving-picture show. The rest of the browsers have adopted a highly adversarial stance where every possible feature is portrayed as a threat to users, as ruin. There are just so few visionary, hopeful, excited people left building browsers that do stuff for people. You are so right on.