
But you admit that fewer humans would be needed, since "LLMs would have been good at subsets of that project". So there's some impact already, and these AI tools only get better.


If that is the only thing you took out of that conversation, then I don't really believe that job would have been suitable for you in the first place.

Now, I don't know which language they used for the project (it could be Python, C/C++, or Rust), but your argument is like saying "Python would have been good at subsets of that project", so some impact already, and these Python tools only get better.

Did Python remove the jobs? No. Each project has its own use case, and in some, LLMs might be useful; in others, not.

In their project, LLMs might be useful for some parts, but the majority of the work was doing completely new things with a human in the feedback loop.

You are also forgetting the trust factor. Yes, let's have your traffic light system written by an LLM, surely. Oops, the traffic lights glitched, all the Waymos (another AI) went berserk, and the resulting accidents and crashes might cost millions.

Personally, I wouldn't trust even a subset of LLM code. I'd much rather have my country/state/city pay real developers who can be held accountable, with good quality-control checks at such critical points, to the point that no LLM should be used in this context.

For context: suppose LLM use impacts even 1 life every year. The value of a statistical life is around $7.5-13 million.

Over a period of 10 years, this really, really small LLM glitch ends up losing you $75 million.

Yup, go ahead and save a few thousand dollars right now by using an LLM instead of paying people properly, only to lose $75 million (and that's at the low end of the estimate).
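The arithmetic behind that $75 million figure, as a minimal sketch. All figures are the commenter's own assumptions (one life per year, the low end of the $7.5-13M statistical-life range), not real data:

```python
# Back-of-the-envelope expected-loss estimate from the argument above.
# Every input here is an assumption made in the comment, not a measured value.

VALUE_OF_LIFE_LOW = 7_500_000  # low end of the assumed $7.5-13M range, in dollars
LIVES_PER_YEAR = 1             # assumed impact: one life affected per year
YEARS = 10                     # time horizon considered

expected_loss = VALUE_OF_LIFE_LOW * LIVES_PER_YEAR * YEARS
print(f"Expected loss over {YEARS} years: ${expected_loss:,}")
# Expected loss over 10 years: $75,000,000
```

At the high end of the same range ($13M per life), the 10-year figure would be $130 million.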


I doubt you have a clue regarding my suitability for any project, so I'll ignore the passive-aggressive ad hominem.

Anyway, it seems you are walking back your statement about LLMs being useful for parts of your project, or ignoring the impact on personnel count. I'm not sure what you were trying to say, then.


I went back over it because, while I could have pointed out just one part, I still wanted to give the whole picture.

My conclusion is rather that this is a very high-stakes project (emotionally, mentally, and economically), and AI systems are still black boxes that are much more error-prone (at least in this context). The chance of one missing something, causing that $75 million loss and the deaths of many, is more likely. In such a high-stakes project, LLMs shouldn't be used, and having more engineers on the team might be worth it.

> I doubt you have a clue regarding my suitability for any project, so I'll ignore the passive-aggressive ad hominem.

Aside from the snark directed at me: I agree. And this is why you don't see me on such a high-stakes project, and neither should you see an LLM, at any cost, in this context. These projects should be reserved for people who have both experience in the industry and are made of flesh.


Human beings are basically black boxes as far as the human brain is concerned. We don't blindly trust the code coming out of those black boxes, so it seems illogical to treat LLM code any differently.

Yes, but at the end of the day I can't understand this take. What are we worried about, (at least in this context) a few hundred thousand dollars more for a human than for an LLM?

I don't see how it's logical to deploy an LLM in any case here. The problem is that the chances of bad LLM code slipping through are much higher than with code from people who can talk to each other, decide in meetings exactly how they want to write it, and have decades of experience to back it up.

If I were a state, there are so many ways of getting money rather easily (hundreds of thousands of dollars might seem like a lot, but not to a state). Plus, you are forgetting that they went in manually and talked to real people.



