I'm a maintainer of Servo, another web engine project.
Although I dissented on the decision, we banned the use of AI. Outside the project I've been enjoying agentic coding, and I do think it can already be used today to build production-grade software of browser-like complexity.
But this project shows that autonomous agents without human oversight are not the way forward.
Why? Because the generated code makes little sense from a conceptual perspective and does not provide a foundation on which to eventually build an entire web engine.
For example, I've just looked into the IndexedDB implementation, which happens to be what I am working on at the moment in Servo.
Now, my work in Servo is incomplete, but conceptually the code that is in place makes sense and there is a clear path towards eventually implementing the thing as a whole.
In Fastrender, you see an Arc<Mutex<Database>>, which is never going to work, because by definition a production browser engine will have to involve multiple processes. That doesn't mean you need IPC in a prototype, but you certainly should not have shared state; some simple messaging between threads or tasks would do.
The above is an easy coding fix for the AI, but it requires input from a human with a pretty good idea of what the architecture should look like.
For comparison, when I look at the code in Ladybird, yet another browser project, I can immediately find my way around what is for me a stranger codebase: not just a single file but large swaths of the project, and I can understand things like how their rendering loop works. With Fastrender I find it hard to find my way around, despite all the architectural diagrams in the README.
So what do I propose instead of long-running autonomous agents? The focus should shift towards demonstrating how AI can effectively assist humans in building well-architected software. The AI is great at coding, but you eventually run into what I call conceptual bottlenecks, which can be overcome with human oversight. I've written about this elsewhere: https://medium.com/@polyglot_factotum/on-writing-with-ai-87c...
There is one very good idea in the project: adding the web standards directly in the repo so it can be used as context by the AI and humans alike. Any project can apply this by adding specs and other artifacts right next to the code. I've been doing this myself with TLA+, see https://medium.com/@polyglot_factotum/tla-in-support-of-ai-c...
To further ground the AI code output, I suggest telling it to document the code with the corresponding lines from the spec.
Back in early 2025, when we had those discussions in Servo about whether to allow some use of AI, I wrote this guide, https://gist.github.com/gterzian/26d07e24d7fc59f5c713ecff35d..., which I think is also the kind of context you want to give the AI. Note that this was back in the days of accepting edits with tabs...
Though the fact that the code is so incoherent and inconsistent plausibly makes it more impressive that they still managed to make something that works at all, and weakens the argument that "all they did was copy/translate some existing other things to Rust."
That said, it's possible that none of that code even gets executed at run time, and the only code that is actually run is some translated glue code, with the other million lines essentially dead, so who knows.
I don't think it's all copy/pasted; it is quite an original, byzantine architecture.
You're right that lots of code appears to be used only in unit tests, of which there is an enormous amount (making it hard to tell whether what is being tested makes sense). In Servo we don't have a single line of unit tests in the script component, because all of it is covered by the WPT integration test suite shared with all other engines...