jjfoooo4's comments | Hacker News

If you can generate a song with a two-sentence prompt, so can anyone else. Music and art are only interesting when there's originality or a point of view being expressed.

I really think art (as in art that's made for its own sake, as opposed to jazzing up a PowerPoint slide or whatever) is by definition something AI will not make inroads into.


I feel like you've never run into pop culture before. It was algorithmic before AI.

The only refinement I'd add: a command runner like `just` works really well for making scripts easily available. That has the same benefit of helping agents by helping humans.
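
For example, a minimal justfile might look like this (recipe names invented for illustration):

```just
# List available commands with `just --list`

# Run the test suite
test:
    pytest tests/

# Regenerate the docs
docs:
    python scripts/build_docs.py
```

An agent (or a new teammate) only has to discover one entry point instead of a pile of ad hoc scripts.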

I recently read Origins of Efficiency by Brian Potter, and one of the interesting things it talks about is the path of the Model T.

Ford invested heavily in an in-house, highly optimized production pathway for the Model T. Other manufacturers sourced a lot of their parts from vendors.

This gave the Model T a great advantage at first, but they had a lot more trouble than competitors in coming up with new models. Ford ended up converging with the rest of the industry in sourcing more of their parts externally.

The lack of new Tesla models makes me feel like a similar pivot is what Tesla needs. My suspicion is that they probably need a less terminally distracted Musk to pull it off.


One of the things Jim Farley, Ford's CEO, brought up was that they have a lot of third-party suppliers, and changes take a long time to implement. A firmware update for something like door locks may require change notifications and responses from dozens of suppliers. This was in response to why Ford couldn't ship firmware as fast or as often as Tesla. Being vertically integrated means you have one big ship to turn around; modern JIT manufacturing means your ship is built of hundreds of smaller boats, and each one needs to be turned.

The lack of new models or updates, I believe, comes from the fact that the CEO is busy elsewhere and the board is reluctant to address that. They have driven the P/E so high that they can only keep functioning in one direction: doing just enough to bring in more outside investment.


I think I read somewhere that the Model T went something like 12 years without substantial changes to its design.

Ford wouldn't have known about The Innovator's Dilemma, and possibly not about the sunk cost fallacy either.

Deming had to go to Japan to get his ideas taken seriously, and refusing to listen to him nearly bankrupted American manufacturing.


> they had a lot more trouble than competitors in coming up with new models.

I'd read somewhere that it was mainly because Henry Ford was dogmatic that the Model T was perfect, all the car anyone would ever need forever.


That was true for me, but is no longer.

It's been especially helpful in explaining and understanding arcane bits of legacy code behavior my users ask about. I trigger Claude to examine the code and figure out how the feature works, then tell it to update the documentation accordingly.


> I trigger Claude to examine the code and figure out how the feature works, then tell it to update the documentation accordingly.

And how do you verify its output isn't total fabrication?


I read through it, skimming sections that seem uncontroversial and reading more closely the sections covering things I'm less sure about. The output cites key lines of code, which are much faster to track down than trying to remember where to look in a large codebase.

Inconsistencies also pop up in backtesting: for example, if there's a point the LLM answers in different ways across multiple iterations, that's a good candidate for improving the docs.
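
Concretely, the backtest is something like this (ask_llm is a stand-in for whatever API wrapper you use; all names here are made up):

```python
def find_unstable_answers(questions, ask_llm, runs=5):
    """Flag questions the LLM answers inconsistently across runs.

    Inconsistent answers usually point at gaps or ambiguity in the docs.
    """
    unstable = []
    for q in questions:
        # Ask the same question several times; exact-match comparison is
        # naive, but it's enough to surface obvious disagreements.
        answers = {ask_llm(q) for _ in range(runs)}
        if len(answers) > 1:
            unstable.append((q, sorted(answers)))
    return unstable
```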

As with a coworker's work, there's a certain amount of trust in the competence involved.


Your docs are a contract. You can verify that contract using integration tests.

Contract? These docs are information answering user queries. So if you use a chatbot to generate them, I'd like to be reasonably sure they aren't laden with the fabricated misinformation for which these chatbots are famous.

It's a very reasonable concern. My solution is to have the bot classify what the message is talking about as a first pass, and have a relatively strict filtering about what it responds to.

For example, I have it ignore messages about code freezes, because that's a policy question that probably changes over time, and I have it ignore urgent oncall messages, because the asker there probably wants a quick response from a human.

But there are a lot of questions in the vein of "How do I write a query for {results my service emits}?" or "How does this feature work?", where automation can handle a lot (and provide more complete answers than a human can off the top of their head).
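
The first pass looks roughly like this (labels and function names are invented; classify() stands in for a cheap LLM call that returns one label):

```python
# Categories the bot should never answer: policy questions go stale,
# and oncall pages need a fast human response.
IGNORED = {"code_freeze_policy", "urgent_oncall"}

# Categories we trust automation to handle well.
ANSWERABLE = {"query_howto", "feature_behavior"}

def maybe_respond(message, classify, answer):
    label = classify(message)
    if label in IGNORED or label not in ANSWERABLE:
        return None  # stay quiet and let a human pick it up
    return answer(message)
```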


OK, but little of that applies to this use case, to "then tell it to update the documentation accordingly."

Isn’t lane keeping pretty standard for most new cars?

It's like an upside-down freemium model: try out our basic self-driving product, which is (now) the worst on the market, so you'll convert to the premium FSD offering.


I think the investors who put $300m in at a $12b valuation would disagree


I don’t think you understand how liquidation preferences work.

They will get $300m back.

Opportunity cost, sure. But zero nominal loss.


Should be fixed now (albeit by hastily removing dark mode)


(I'm the author of the post)

This would be my critique of MCP-specific security implementations. I think robust tools for this already exist, and in general AI API calls can and should be treated like any other RPC call.


(I'm the author)

Looking at the post again, I think I agree that calling Claude Skills overengineered is too harsh. I think Skills is definitely an improvement over MCP.

However, I still think it's generally a mistake to put useful commands and documentation in AI-specific files. In my opinion a better approach is to optimize the organization of docs and commands for human usability, and teach the AI how to leverage those.

I do use Claude Skills, but only to wire up `just` commands. I wrote a little package to do this automatically: https://github.com/tombedor/just-claude


There's no reason they need to be treated as AI specific. They're just a description in markdown with a tiny frontmatter after all.
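
For reference, a skill file is just something like this (contents invented for illustration; the `name`/`description` frontmatter fields are what Claude reads):

```markdown
---
name: run-tests
description: How to run and filter the project's test suite
---

Run `just test` from the repo root. Nothing here is Claude-specific;
it reads fine as plain documentation for humans too.
```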


(I'm the author)

Thanks for flagging! I just pushed an update that should fix this

