Hacker News | yojat661's comments

This

You missed option C: keep all 8 engineers so the team can pump out features faster, all still working 8-hour days. The CEO will probably be forced to do it to keep up with the competition.

From the linked project:

> The reality: 3 weeks in, ~50 hours of coding, and I'm mass-producing features faster than I can stabilize them. Things break. A lot. But when it works, it works.


> The worst part is they got simonw to (perhaps unwittingly or social engineering) vouch and stealth market for them.

Lol. Any time I see something AI-related endorsed by simonw, I tend to view it as mostly hype, and so far I've been right.


Can you give an example? His writing seems pretty grounded to me. He's not out there going on podcasts claiming that LLMs are going to cure cancer, afaik.

Are these projects popular? Are they maintainable long term? Are you getting feedback from users?

RatatuiRuby is pretty new still: its beta launch was Jan 20. Octobox's TUI is built on it [0], and Sidekiq is using it to build theirs [1].

I believe they'll be maintainable long-term, as they've got extensive tests and documentation, and I built a theory of the program [2] on the Ruby side of it as I reviewed and guided the agent's work.

I am getting feedback from users, the largest of which drove the creation of (and iteration upon) Rooibos. As a rendering library, RatatuiRuby doesn't do much to guide the design or architecture of an application. Rooibos is an MVU/TEA framework [3] to do exactly that.
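
For anyone unfamiliar with the pattern, here is a minimal, dependency-free Ruby sketch of the MVU/TEA loop described in [3]. The names (Model, update, view, and the counter messages) are purely illustrative and are not the actual Rooibos or RatatuiRuby API: state lives in an immutable model, every input becomes a message, update returns a new model, and view renders it.

    # Illustrative MVU/TEA loop in plain Ruby (hypothetical names,
    # not the Rooibos or RatatuiRuby API).
    Model = Struct.new(:count, keyword_init: true)

    # update: (Model, message) -> Model. Returns a new model; never mutates.
    def update(model, message)
      case message
      when :increment then Model.new(count: model.count + 1)
      when :decrement then Model.new(count: model.count - 1)
      else model
      end
    end

    # view: Model -> String. Pure rendering of the current state.
    def view(model)
      "Count: #{model.count}  (+ / - to change, q to quit)"
    end

    model = Model.new(count: 0)
    loop do
      puts view(model)
      case gets&.strip
      when "+" then model = update(model, :increment)
      when "-" then model = update(model, :decrement)
      when "q", nil then break
      end
    end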

Tokra is basically a tech demo at this stage [4], so (hopefully) no users yet.

[0]: https://ruby.social/@andrewnez@mastodon.social/1159351822843...

[1]: https://ruby.social/@getajobmike/115940044592981164

[2]: https://www.sciencedirect.com/science/article/abs/pii/016560...

[3]: https://guide.elm-lang.org/architecture/

[4]: https://ruby.social/@kerrick/115983502510721565


Appreciate the response. My primary concern with LLM coding is long-term maintainability. The paper you linked seems interesting, will give it a read!

Lol same. Didn't realize it was the AI hype master on my first read.

The parent comment is obviously cherry-picking news and trying to push an agenda.

UK investment: https://www.aboutamazon.co.uk/news/job-creation-and-investme...

US investment: https://finance.yahoo.com/news/amazon-invest-50-billion-ai


The US investment link is broken, and most of the UK jobs are in "fulfillment", some of the least fulfilling jobs - piss bottles all round.

And the original link about investment in India is also about fulfillment jobs and, even worse, about “investing in AI”, aka building data centers, which contribute essentially no jobs at all.

The AI investment is largely earmarked for data centers: low on staff but costly, because the hardware is currently very expensive.

It's not equivalent in the least. They aren't expanding headcount by 20K; they're building more expensive AI-tailored servers.


Guessing X loses ad revenue when traffic goes to xcancel.

I don't know if it's fair to call him an AI addict or deduce that his ego is bruised. But I do wonder whether Karpathy's agentic LLM experiences are based on actual production code or on pet projects. Based on a few videos of his I have seen, I am guessing it's the latter. Also, he is a research scientist (probably a great one), not a software developer. I agree with the OP that Karpathy should not be given much attention on this topic, i.e., LLMs for software development.

With an LLM, it might work or it might not. If it doesn't work, you have to keep iterating and hand-holding it until it does. Sometimes that process is slower than writing the code manually. With a calculator, you can be sure the first attempt will work. An idiot with a calculator can still produce correct results; an idiot with an LLM often cannot, outside of trivial solutions.
