If you can generate a song with a two-sentence prompt, so can anyone else. Music and art are only interesting when there's originality or a point of view being expressed.
I really think art (as in art made for its own sake, as opposed to jazzing up a PowerPoint slide or whatever) is by definition something AI will not make inroads into
The only refinement I'd add is that a command runner like `just` works really well for making scripts easily available. That also has the same benefit of helping agents by helping humans.
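For example, a minimal justfile sketch (the recipe names and commands are made up; adapt to your own project):

```just
# `just` with no arguments lists the available recipes,
# which is the discoverability win for both humans and agents
default:
    just --list

# Hypothetical recipes; swap in your project's actual commands
test:
    ./scripts/run_tests.sh

deploy env:
    ./scripts/deploy.sh {{env}}
```

`just --list` then gives both a new teammate and an agent the same self-documenting set of entry points.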
I recently read Origins of Efficiency by Brian Potter, and one of the interesting things it talks about is the path of the Model T.
Ford invested heavily in an in-house, highly optimized production pathway for the Model T. Other manufacturers sourced a lot of their parts from vendors.
This gave the Model T a great advantage at first, but Ford had a lot more trouble than its competitors in coming up with new models. Ford ended up converging with the rest of the industry, sourcing more of its parts externally.
The lack of new Tesla models makes me feel like a similar pivot is what Tesla needs. My suspicion is that pulling it off would take a less terminally distracted Musk.
One of the things Jim Farley, Ford's CEO, brought up was that they have a lot of third-party suppliers, and changes take a long time to implement. A firmware update may require change notifications and responses from dozens of suppliers for something like door locks. This was in response to why Ford couldn't ship firmware as fast or as often as Tesla. Vertically integrated means you have one big ship to turn around. Modern JIT manufacturing means your ship is really a fleet of hundreds of smaller boats, and each one needs to be turned.
The lack of new models, I believe, comes from the fact that the CEO is busy elsewhere and the board is reluctant to address it. They have driven the P/E so high that they can only continue to function in one direction: do just enough to bring in more outside investment.
It's been especially helpful in explaining and understanding arcane bits of legacy code behavior my users ask about. I trigger Claude to examine the code and figure out how the feature works, then tell it to update the documentation accordingly.
I read through it, scanning sections that seem uncontroversial and reading more closely the sections about things I'm less sure of. The output cites key lines of code, which are faster to track down than trying to remember where to look in a large codebase.
Inconsistencies also pop up in backtesting: for example, if there's a question the LLM answers different ways across iterations, that's a good candidate for improving the docs.
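A rough sketch of that backtesting idea (the `ask` callable is a stand-in for whatever LLM call you actually use; the question and answers here are invented for illustration):

```python
from collections import Counter

def flag_inconsistent(question, ask, iterations=5):
    """Ask the same question several times; if the answers disagree,
    flag it as a candidate for doc improvement."""
    answers = [ask(question) for _ in range(iterations)]
    counts = Counter(answers)
    # Inconsistent if more than one distinct answer came back
    return len(counts) > 1, counts

# Stubbed "LLM" that flip-flops, just to illustrate the check
responses = iter(["uses cache", "no cache", "uses cache",
                  "uses cache", "no cache"])
inconsistent, counts = flag_inconsistent(
    "Does endpoint X cache results?", lambda q: next(responses))
```

In practice you'd normalize or semantically compare the answers rather than string-match them, but the shape of the loop is the same.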
As with a coworker's work, there's a certain amount of trust in competence involved.
Contract? These docs are information answering user queries. So if you use a chatbot to generate them, I'd like to be reasonably sure they aren't laden with the fabricated misinformation for which these chatbots are famous.
It's a very reasonable concern. My solution is to have the bot classify what the message is about as a first pass, and apply relatively strict filtering on what it responds to.
For example, I have it ignore messages about code freezes, because that's a policy question that probably changes over time, and I have it ignore urgent oncall messages, because the asker there probably wants a quick response from a human.
But there are a lot of questions in the vein of "How do I write a query for {results my service emits}?" or "How does this feature work?", where automation can handle a lot (and provide more complete answers than a human can off the top of their head).
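A minimal sketch of that classify-then-filter first pass (the category labels and the toy keyword classifier are invented for illustration; in practice the classifier would itself be an LLM call):

```python
# Strict allowlist: only respond to categories we trust automation on
ANSWERABLE = {"query_howto", "feature_behavior"}
# Explicitly ignored: policy questions and urgent pages go to humans
IGNORED = {"code_freeze_policy", "urgent_oncall"}

def should_respond(message, classify):
    """First pass: classify the message, then gate on the allowlist."""
    return classify(message) in ANSWERABLE

def keyword_classify(message):
    """Toy stand-in classifier; a real one would be an LLM call."""
    text = message.lower()
    if "freeze" in text:
        return "code_freeze_policy"
    if "page" in text or "oncall" in text:
        return "urgent_oncall"
    if "query" in text:
        return "query_howto"
    return "feature_behavior"
```

The key design choice is the allowlist: anything the classifier can't confidently place in a trusted category simply gets no automated answer.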
Isn’t lane keeping pretty standard for most new cars?
It’s like an upside-down freemium model: try out our basic self-driving product, which is (now) the worst on the market, so you’ll convert to the premium FSD offering.
This would be my critique of MCP-specific security implementations. I think robust tools for this already exist, and in general AI API calls can and should be treated like any other RPC call.
Looking at the post again, I think I agree that calling Claude Skills overengineered is too harsh. I think Skills is definitely an improvement over MCP.
However, I still think it's generally a mistake to put useful commands and documentation in AI-specific files. In my opinion a better approach is to optimize the organization of docs and commands for human usability, and teach the AI how to leverage those.