And beyond the ethical points it makes (which I agree may or may not be relevant for LLMs - nobody can know for sure at this point), I find some of the details about how brain images are used in the story to have been very prescient of LLMs' uses and limitations.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
Or the point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion), which mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
Same in Germany, and not just for elementary schools but also secondary schools. At least that's how it was decades ago when I was a student, maybe it's different now.
My relatives live in Germany, and all the schools their kids attended gave out lunches. They did not pack their own lunch and did not consider a sandwich a proper lunch.
In German, we use "aufrufen", which means "to call up" if you translate it fragment-by-fragment, and in pre-computer times would (as far as I know) only be understood as "to call somebody up by their name or number" (like a teacher asking a student to speak or get up) when used with a direct object (as it is for functions).
It's also separate from the verb for making a phone call, which would be "anrufen".
Interesting! Across the lake in Sweden we do use "anropa" for calling subprograms. I've never heard anyone in that context use "uppropa" which would be the direct translation of aufrufen.
That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would be bought up by the AI owners eventually).
I wouldn't believe that any sort of economy or governmental system would actually survive any of this. Ford was right in that sense: without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be the massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.
>Though I have wondered about the idea of programming something like Dependabot, but telling it, hey, tell me about known CVEs and security releases, but otherwise, let things cook for 6 months before automatically building a PR for me to update.
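As a rough illustration of what that could look like (this is my own sketch, not a setting Dependabot actually exposes): a small script that only proposes an update when the pinned version shows up in the OSV vulnerability database, or when the newest release has had roughly six months to cook. The function names and the 180-day threshold are my own choices; the OSV query endpoint and the PyPI JSON API are real.

    # Sketch of the "let it cook" idea: update immediately for known CVEs,
    # otherwise wait until the latest release is at least ~6 months old.
    import datetime
    import requests

    OSV_URL = "https://api.osv.dev/v1/query"
    COOLDOWN = datetime.timedelta(days=180)  # my choice of cooldown period

    def has_known_vulns(package: str, version: str) -> bool:
        # OSV returns a non-empty "vulns" list when something is known
        # for this exact version.
        resp = requests.post(OSV_URL, json={
            "version": version,
            "package": {"name": package, "ecosystem": "PyPI"},
        })
        resp.raise_for_status()
        return bool(resp.json().get("vulns"))

    def latest_release(package: str):
        # PyPI's JSON API exposes the newest version and upload timestamps.
        resp = requests.get(f"https://pypi.org/pypi/{package}/json")
        resp.raise_for_status()
        data = resp.json()
        version = data["info"]["version"]
        uploaded = min(
            datetime.datetime.fromisoformat(
                f["upload_time_iso_8601"].replace("Z", "+00:00")
            )
            for f in data["releases"][version]
        )
        return version, uploaded

    def should_update(package: str, pinned: str) -> bool:
        if has_known_vulns(package, pinned):
            return True  # security release: don't wait
        version, uploaded = latest_release(package)
        if version == pinned:
            return False
        age = datetime.datetime.now(datetime.timezone.utc) - uploaded
        return age >= COOLDOWN  # otherwise let it cook

    if __name__ == "__main__":
        print(should_update("jinja2", "2.11.2"))

In a real setup you'd run something like this on a schedule over the whole lockfile and have it open the PR for you; the point is just that the "CVEs now, everything else later" policy is easy to express.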