I once wrote software that had to manage the traffic coming into a major shipping terminal: OCR, gate arms, signage, cameras for inspecting chassis and containers, SIP audio comms, RFID readers, all of which needed to be reasoned about in a state machine, and none of which were reliable. It required a lot of on-the-ground testing, observation, and tweaking, along with human intervention when things went wrong. I'd guess LLMs would have been good at subsets of that project, but the entire thing would still require a team of humans to build again today.
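Not that project's actual code, but a minimal sketch of the kind of state machine I mean, with a hypothetical gate lane where any device read can time out or fault (all event names are made up for illustration):

    from enum import Enum, auto

    class GateState(Enum):
        CLOSED = auto()
        OPENING = auto()
        OPEN = auto()
        FAULT = auto()  # unreliable hardware means every device needs this

    # Hypothetical transition table for one gate lane.
    TRANSITIONS = {
        (GateState.CLOSED, "ocr_read_ok"): GateState.OPENING,
        (GateState.OPENING, "arm_raised"): GateState.OPEN,
        (GateState.OPEN, "vehicle_cleared"): GateState.CLOSED,
    }

    def next_state(state: GateState, event: str) -> GateState:
        if event in ("timeout", "sensor_error"):
            return GateState.FAULT  # escalate to a human operator
        return TRANSITIONS.get((state, event), state)  # ignore stray events

Most of the real work was discovering which stray events and failure modes needed handling at all, which is exactly the on-the-ground part.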
Sir, your experience is unique, and thanks for answering this.
That being said, someone took your point that LLMs might be good at subsets of projects and ran with it, arguing that we should actually use LLMs for those subsets.
But I digress (I provided more in-depth reasoning in another comment as well). Suppose even a minute bug slips past the LLM and code review in that subset. With millions of cars travelling through those points, assume that one single bug somewhere increases the traffic fatality rate by one person per year. Firstly, it shouldn't be used because of the inherent value of human life itself, but it doesn't hold up even in a monetary sense, so there's really not much reason I can see to use it.
That alone, over a span of 10 years, would cost $75-130 million (the value of a statistical life in the US for a normal person ranges from $7.5 million to $13 million).
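Back-of-the-envelope, the arithmetic is just this (the one-death-per-year rate is my assumption, not a measurement; the VSL figures are the ones above):

    # Assumed: one extra fatality per year from a single slipped-through bug.
    VSL_LOW, VSL_HIGH = 7.5e6, 13e6   # US value of a statistical life, in USD
    DEATHS_PER_YEAR = 1                # assumption, not a measured rate
    YEARS = 10

    low = DEATHS_PER_YEAR * YEARS * VSL_LOW    # 75,000,000
    high = DEATHS_PER_YEAR * YEARS * VSL_HIGH  # 130,000,000
    print(f"${low:,.0f} to ${high:,.0f}")      # $75,000,000 to $130,000,000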
Sir, I just feel that if the point of LLMs is to have fewer humans, or to pay them less income, this is so short-sighted, because I (if I were the state, and I think everyone will agree after the cost analysis) would much rather pay a few hundred thousand dollars, or even a few million, right now to save $75-130 million (on the smallest scale, mind you; it can get vastly more expensive).
I am not exactly sure how we could measure the rate of deaths due to LLM use itself (the figure of 1 per year), but I took the most conservative number.
There is also the fact that we won't know whether LLMs might save a life, but I am 99.9% sure that won't be the case, and once again it wouldn't be verifiable either, so we are shooting in the dark.
And a human can do a much more sensitive job with better context (you know what you are working on, you know how valuable it is and that it can save lives), whereas no amount of words can convey that danger to an LLM.
To put it simply, the LLM at times might not know the difference between the code for this life-or-death machine and a sloppy website it created.
I just don't think it's worth it, especially in this context; even a single percent of LLM code might not be worth it here.
I had a friend who was in crisis while the rest of us were asleep. Talking with ChatGPT kept her alive, so we know the number is at least one. If you go to the Dr ChatGPT thread, you'll find multiple reports of people who figured out debilitating medical conditions via ChatGPT in conjunction with a licensed human doctor, so we can be sure the number is greater than zero. It doesn't make headlines the same way Adam's suicide does, and not just because OpenAI can't be the ones to say it.
Great for her. I hope she's doing okay now. (I do think we humans can take each other for granted.)
If talking to ChatGPT helps anyone mentally, then sure, great. I can see why, but I am a bit concerned that if we remove a human from the loop, we can probably get disillusioned way too easily as well, which is what is happening.
These are still black boxes, and in the context of traffic-light code (even partially), it feels to me that the probability of it not saving a life significantly overwhelms the probability of the opposite.
I've had good luck when giving the AI its own feedback loop. On software projects, that means letting the AI take screenshots and read log files, so it can iterate on errors without human input. On hardware projects, it's a combination of solenoids, relays, a Pi and a Pi Zero W, and a webcam. I'm not claiming that an AI could do the above-mentioned project, just that (some) hardware projects can also get humans out of the loop.
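A minimal sketch of the software side of that loop, assuming a test command and two hypothetical helpers (ask_model and apply_patch stand in for whatever LLM API and file-editing mechanism you actually use):

    import subprocess

    def ask_model(error_output: str) -> str:
        # Hypothetical: call your LLM of choice with the error text,
        # get back a proposed patch.
        raise NotImplementedError

    def apply_patch(patch: str) -> None:
        # Hypothetical: apply the model's proposed patch to the codebase.
        raise NotImplementedError

    def feedback_loop(cmd: list[str], max_iters: int = 5) -> bool:
        for _ in range(max_iters):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode == 0:
                return True  # tests pass; nothing left to iterate on
            # Feed the raw error output back to the model and retry.
            apply_patch(ask_model(result.stdout + result.stderr))
        return False  # give up and hand back to a human

    # e.g. feedback_loop(["pytest", "-x"])

The point is just that the error signal goes to the model instead of to me, so I only see the cases where the loop gives up.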
Don’t you understand? That’s why all these AI companies are praying for humanoid robots to /just work/ - so we can replace humans mentally and physically ASAP!
I'm sure those will help. But that doesn't solve the problem the parent stated. Those robots can't solve those real-world problems until they can reason, until they can hypothesize, until they can experiment, until they can abstract, all on their own. The problem is you can't replace the humans (unilaterally) until you can create AGI. But that has problems of its own, as you now have to contend with having created a slave class of artificial life forms.
No worries - you’ve added useful context for those who may be misguided by these greedy corporations looking to replace us all. Maybe it helps them reconsider their point of view!
But you admit that fewer humans would be needed, as "LLMs would have been good at subsets of that project", so there's some impact already, and these AI tools only get better.
If that is the only thing you took out of that conversation, then I don't really believe that job would have been suitable for you in the first place.
Now, I don't know which language they used for the project (it could be Python, or C/C++, or Rust), but this is like saying "Python would have been good at subsets of that project", so there's some impact already, and these Python tools only get better.
Did Python remove the jobs? No. Each project has its own use case, and in some LLMs might be useful, in others not.
In their project, LLMs might be useful for some parts, but the majority of the work was doing completely new things with a human in the feedback loop.
You are also forgetting the trust factor. Yes, let's have your traffic-light system written by an LLM, surely. Oops, the traffic lights glitched, all the Waymos (another AI) went berserk, and oops, accidents and crashes happened that might cost millions.
Personally, I wouldn't trust even a subset of LLM code, and I would much rather have my country/state/city pay real developers who can be held accountable, with good quality-control checks at such critical points, to the point that keeping LLMs out of this context should be a must.
For context: suppose LLM use impacts even 1 life every year. The value of one person is $7.5-13 million.

Over a period of 10 years, this really, really small LLM glitch ends up losing you $75 million.

Yup, go ahead, save a few thousand dollars right now by not paying people enough and using an LLM instead, only to then lose $75 million (in the most conservative scenario).
I doubt you have a clue regarding my suitability for any project, so I'll ignore the passive-aggressive ad hominem.
Anyway, it seems you are either walking back your statement about LLMs being useful for parts of that project, or ignoring the impact on personnel count. Not sure what you were trying to say, then.
I went back over it because, of course, I could have just pointed at one part of the picture, but I still wanted to give the whole picture.
My conclusion is rather that this is a very high-stakes project (emotionally, mentally, and economically), that AIs are still black boxes with a chance of being much more error-prone (at least in this context), and that the chance of one missing something and causing the $75 million loss and many deaths is more likely. In such a high-stakes project, LLMs shouldn't be used, and having more engineers on the team might be worth it.
> I doubt you have a clue regarding my suitability for any project, so I'll ignore the passive-aggressive ad hominem.
Aside from the snark directed at me, I agree. And this is why you don't see me on such a high-stakes project, and why you shouldn't see an LLM in this context at any cost either. These should be reserved for people of the right caliber, who both have experience in the industry and are made of flesh.
Human beings are basically black boxes as far as the human brain is concerned. We don't blindly trust the code coming out of those black boxes, so it seems illogical to blindly trust what comes out of LLMs either.