
>I had the thought that you ought be able to provide a cargo doc or rust-analyzer equivalent over MCP? This... must exist?

I've found two issues that I'm not sure how to solve. You can point it at docs and it can generally figure out the syntax, but the second issue is that without examples, it kind of just brute-forces problems like a 14-year-old.

For example, the input system originally just let you move left and right, handled in a single observer function. As I added more and more controls, that function accumulated more and more code, until it was a ~600-line function responsible for a large chunk of the game logic.

While trying to parse it, I had it refactor the code - but I don't know whether the current code is idiomatic. What would be the cargo doc or rust-analyzer equivalent for good architecture?
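For concreteness, the kind of refactor I mean looks something like this - a toy sketch in plain Rust with every name invented for illustration (no engine APIs), showing a monolithic handler split into per-concern functions so the dispatcher stays small as controls are added:

```rust
// Hypothetical example: instead of one observer function accumulating
// every control, route each input through small per-concern handlers.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Action {
    MoveLeft,
    MoveRight,
    Jump,
}

#[derive(Debug, Default)]
struct Player {
    x: i32,
    airborne: bool,
}

// Horizontal movement lives in its own function...
fn apply_movement(player: &mut Player, action: Action) {
    match action {
        Action::MoveLeft => player.x -= 1,
        Action::MoveRight => player.x += 1,
        _ => {}
    }
}

// ...and so does jumping, rather than both growing one ~600-line body.
fn apply_jump(player: &mut Player, action: Action) {
    if action == Action::Jump {
        player.airborne = true;
    }
}

fn handle_input(player: &mut Player, action: Action) {
    // The dispatcher stays tiny; adding a control means adding a
    // handler function, not growing this one.
    apply_movement(player, action);
    apply_jump(player, action);
}

fn main() {
    let mut player = Player::default();
    handle_input(&mut player, Action::MoveRight);
    handle_input(&mut player, Action::Jump);
    println!("x={} airborne={}", player.x, player.airborne);
}
```

In a real engine each handler would more naturally become its own system or observer, but the shape of the split is the same.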

I'm running into this same problem when trying to use Claude Code for internal projects. Some parts of the codebase have really intuitive internal frameworks, and Claude Code can rip through them and produce great idiomatic code. Others are bogged down by years of tech debt and performance hacks, and there Claude Code can't be trusted with anything short of multi-paragraph prompts.

>I'm also curious how you test if the game is, um... fun?

Luckily for me this is a learning exercise, so I'm not optimizing for fun. I guess you could ask Claude Code to inject more fun.





> What would be the cargo doc or rust-analyzer equivalent for good architecture?

Well, this is where you still need to know your tools. You should understand what ECS is and why it is used in games, so that you can push the LLM to use it in the right places. You should understand idiomatic patterns in the languages the LLM is using. Understand YAGNI, SOLID, DDD, etc etc.

Those are the places where LLMs fall down, so that's where you come in. Writing the individual lines of code, once it's been told what architecture to use and what's idiomatic, is where the LLM shines.
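To make "understand what ECS is" concrete: the core idea is that entities are just IDs, components are plain data in flat storages, and systems are functions that iterate over the components they care about. A toy sketch in plain Rust (all names invented; real ECS crates like Bevy's are far more sophisticated):

```rust
// Minimal illustrative ECS: an entity is an index, components live in
// parallel storages, and a "system" is a plain function over them.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Position { x: f32 }

#[derive(Debug, Clone, Copy)]
struct Velocity { dx: f32 }

#[derive(Default)]
struct World {
    positions: Vec<Option<Position>>,
    velocities: Vec<Option<Velocity>>,
}

impl World {
    // Spawning returns the entity's index; either component is optional.
    fn spawn(&mut self, pos: Option<Position>, vel: Option<Velocity>) -> usize {
        self.positions.push(pos);
        self.velocities.push(vel);
        self.positions.len() - 1
    }
}

// The movement system touches only entities that have both components;
// there are no entity "objects" and no inheritance hierarchy.
fn movement_system(world: &mut World) {
    for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
        if let (Some(p), Some(v)) = (pos.as_mut(), vel) {
            p.x += v.dx;
        }
    }
}

fn main() {
    let mut world = World::default();
    let e = world.spawn(Some(Position { x: 0.0 }), Some(Velocity { dx: 2.0 }));
    movement_system(&mut world);
    println!("{:?}", world.positions[e]);
}
```

Knowing this shape is what lets you tell the LLM "that logic belongs in its own system over these components" instead of letting one function absorb everything.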


What you describe is how I use LLM tools today, but the reason I'm approaching my project this way is that I feel I need to brace myself for a future where developers are expected to "know your tools."

When I look around today, it's clear more and more people are diving head first into fully agentic workflows, and I simply don't believe they can churn out 10k+ lines of code today and stay intimately familiar with the codebase. Therefore you are left with two futures:

1. Agentic-heavy SWEs will eventually blow up under the weight of all their tech debt.

2. Coding models will continue to get better, to the point where tech debt won't matter.

If the answer is (1), then I don't need to change anything today. If the answer is (2), then you need to prepare for a world where almost all code is written by an agent, but almost all responsibility is still shouldered by you.

In kind of an ignorant way, I'm actually avoiding properly learning what an ECS is and how the engine is structured, as a sort of handicap. If in the future I'm managing a team of engineers (however that looks) who are building a metaphorical tower of Babel, I'd like to develop the heuristics for navigating it.




