Coming from the 1980s, where many apps were written by a single person with no source control and the only test performed was "does it compile?", the fact that we've reached the point where you need ML just to figure out what the hell you need to test completely blows my mind.
The first plane was largely built out of bicycle parts by two mechanics, who tested it by jumping in, starting the motor, and hoping for the best. Modern planes are built using a supply chain that spans the globe and are tested against millions of different criteria years before they even roll onto the runway.
As technology progresses, we gain capability at the cost of complexity.
A lot of the reason for the aerospace "spans the globe" supply chain is politics - e.g. it's easier to convince other countries to buy Boeing planes if they also supply parts for them. This is especially the case in Europe, where the main competing option is Airbus.
> Current browsers are on the same level as entire operating systems.
Am I the only one who views the above statement as something alarming, rather than something to be admired?
Sure, when we're talking about the safety of planes, the track record is improving. However, their present complexity comes with plenty of risks - for example, what happened with MCAS in the Boeing 737 MAX.
There are systems that are simple enough for there to obviously be no bugs... and there are those that have no obvious bugs. That's probably especially true of most big software packages: just look at OpenVPN and its many CVEs over the years, and then compare that to WireGuard. The same could probably be said about their respective resource usage and how easy each is to configure.
Firefox and browsers in general getting larger will simply lead us to a point where nothing that competes with them can even be created anew in a reasonable time span - who has the resources to support numerous ECMAScript dialects, many CSS layout systems, a bunch of different HTML modes and tags, custom components, different system APIs, WASM and so on and on and on? It never stops.
I predict that there will be no new major browsers that aren't based on Chromium (or even on whatever Firefox now has at its core), and that only smaller projects that make tradeoffs (e.g. no legacy support) will pop up every now and then.
Depending on your point of view, the browser is more important than the operating system.
That was part of the early debate about IE6, where you would be forced into an OS just to use its browser. This is part of the debate about iOS not allowing alternative browser engines. That is central to Chrome becoming more and more a browser you have to use, or else you're a second-class citizen of a whole slice of the internet. It is the whole reason Apple started Safari in the first place. I kind of see it as natural that the browser became a behemoth that takes incredible effort to maintain and grow from there.
I think it will take a hell of a long time before we see another engine, yes. And it will probably come after a clear Google decline, or after completely new applications emerge that push new approaches to rendering. We need to pray it isn't bought outright by Google/Meta/Apple and that it gets to live long enough to reach the mainstream independently.
> Am i the only one who views the above statement as something alarming, not something to be admired?
(I worked on Firefox for just under 9 years)
I agree, it is alarming. Alas, it is the current state of things.
(The following is not directed at you KronisLV, but rather at the community at large.)
As a (now former) open source browser developer, one of the most frustrating aspects of the job was discussing complaints from people who don't work directly with the web front-end.
Many engage in wishful thinking: because they believe a browser should just be a document viewer, they conclude that it therefore is just a document viewer, and that any problems that emerge with browsers are due to developer incompetence.
It is more or less impossible to engage constructively with those people when the reality is that modern browsers are ridiculously complex.
But modern planes have a level of redundancy that is hard to explain - often with entire operating systems inside a single subsystem, and designed in such a way that entire subsystems can be absent and the plane still flies.
Versus early planes that, as stated, were bikes with wings - where a flight of a hundred yards with a crash at the end counted as a success.
Coming from a self-taught dev who started in the late 90s: I don't feel any need for ML in my work, testing or otherwise. But source control, automated testing, and other computerized validation have been godsends, even if they made me weep on introduction.
And if I were building anything with the scope and scale of Firefox, I would be reaching for every possible testing advancement I could find, at least to evaluate.
I’m already investigating property and mutation testing tools for hobby projects, because I want to use them for real-world things and want to be able to recommend them to friends, and hopefully eventually to others - and because their scope is large once applied, even if the code I’ve written is small.
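For the curious, a property test can be as small as the sketch below, using Python's hypothesis library; the run-length encoder is just a placeholder for "code under test", not code from any project mentioned here.

    from hypothesis import given, strategies as st

    def rle_encode(s):
        # Toy run-length encoder: "aab" -> [("a", 2), ("b", 1)]
        out = []
        for ch in s:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    def rle_decode(pairs):
        return "".join(ch * n for ch, n in pairs)

    # The property: decoding an encoding returns the original string, for *any*
    # string hypothesis generates - not just the handful of cases I would have
    # thought to write by hand.
    @given(st.text())
    def test_rle_round_trip(s):
        assert rle_decode(rle_encode(s)) == s

The appeal is that the tool, not me, picks the inputs - which is exactly the "large scope once applied" part.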
The reality is that “does it compile” is great for personal projects but falls apart the moment it’s on a network, even a sneakernet.
I think ML is just an extension of test coverage and static analysis tools.
We've already entered the realm of "this looks bad, you should check this block" warnings with CI tools that search for security vulnerabilities and bad coding practices, and personally I've found them to help tremendously, making traditional hand-written test cases less useful in some ways. Going further and optimizing the process seems like a no-brainer.
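As a toy sketch of the idea behind ML-assisted test selection (everything below - the history data, the file names, the scoring - is invented for illustration, not how Mozilla's system actually works): rank tests by how often they failed in past pushes that touched the same files as the current patch.

    from collections import Counter

    # Hypothetical history of past pushes: (files touched, tests that failed).
    history = [
        ({"layout/flexbox.cpp"}, {"test_flex_wrap", "test_flex_order"}),
        ({"dom/canvas.cpp"}, {"test_canvas_2d"}),
        ({"layout/flexbox.cpp", "dom/canvas.cpp"}, {"test_flex_wrap"}),
    ]

    def rank_tests(touched_files):
        # Score each test by how often it failed in pushes that touched
        # any of the same files as the current patch.
        scores = Counter()
        for files, failed_tests in history:
            if files & touched_files:
                scores.update(failed_tests)
        return [test for test, _ in scores.most_common()]

    print(rank_tests({"layout/flexbox.cpp"}))
    # -> ['test_flex_wrap', 'test_flex_order']: run these first (or only these).

A real system would use a trained model over far richer features, but the shape of the problem is the same: predict which tests are worth running for a given change.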
I have to imagine that the vast majority of the CI time is spent rendering HTML and executing JavaScript on a wide variety of hardware and software combinations, and has nothing to do with compiling. Even the simplest possible support matrix would look something like this:
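(The dimensions below are made up, purely to show how quickly such a matrix multiplies out.)

    from itertools import product

    oses = ["Windows", "macOS", "Linux", "Android"]
    arches = ["x86_64", "arm64"]
    builds = ["debug", "opt"]
    suites = ["reftest", "web-platform-tests", "mochitest", "xpcshell"]

    combos = list(product(oses, arches, builds, suites))
    print(len(combos))  # 4 * 2 * 2 * 4 = 128 configurations, before multiplying
                        # by the thousands of individual tests in each suite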
Also, it makes me cry.