Great read. I've always noticed that the type of argument invoked is often less telling than when and in which context you invoke that argument.
You can make a lot of claims that match reality reasonably well - people normally evaluate arguments in a strict "does this fit or does this not" way, but it's often the meta-style (why do you keep bringing up that argument in that context?) that's important, even if the observation isn't "logically bulletproof".
I don't think psychology is useless, not one bit. But the way modern papers publish findings makes me distrust basically all statistical studies in the social sciences, quite apart from even the most basic philosophical issues that arise from these kinds of studies (people are very different, etc.).
Even if you accept the bunch of premises needed to make the studies work at all, the raw stats are often so bad, and there's so little rigor in ruling out alternative explanations, that I've stopped reading them entirely.
Again, I'm not one to hate on the social sciences. History, anthropology, politics, law, psychology, sociology - all of that is very interesting and important. But the horrible statistics, oblivious to garbage in, garbage out, have turned me off of it. I'd much rather read qualitative studies that actually try to gather detailed, real data, even if it's not as automated as a random survey.
You're imagining something more advanced than it needs to be. What kind of automated system is good at semantically scanning trillions of chat logs and finding nontrivial correlations, for example? 10000 codex 5.1s can easily crawl through that in a few days, probably.
It's just systems plumbing (surveillance) and AI. It's a combination of weaker technologies and consolidation of power.
This does not require a physical robot super AGI (though I would not be surprised if fully autonomous robots are already on the table).
Ah, well that makes sense. In that case, it's another tool in the toolbelt, not a plug-and-play drone brain, as some reporters amusingly make it out to be.
There's one tweet from the blog a few days ago (astral something?) that sums up my view of the problem pretty well.
General population: How will AI get to the point where it destroys humanity?
Yudkowsky: [insert some complicated argument about instrumental convergence and deception]
The government: because we told you to.
Again, not saying that AI is useless or anything. Just that we're more likely to cause our own downfall with weaker AI than with some abstract super AGI. The bar for mass destruction and oppression is lower than the bar for what we typically think of as intelligence for the benefit of humanity (with the right systems in place, current AI systems are more than enough to get the job done - hence why the Pentagon wants it so bad...)
There's one sense in which people are almost moralistic about it: "yup, AI is just superior to humans, nothing we can do about it."
And then there's the one where the elite class implements mass surveillance and warfare and obsoletes billions of humans of its own volition. Today's AI is already capable enough to execute on said plan (with proper evil engineering, of course).
There are two ways to "win". One is in an absolute or Platonic sense - one that cares about things like values, even in the presence of extreme pushback. The other is in a Darwinian sense. No, not in the meme way that, again, feeds back into the narrative of "the things that survive are smarter". The things that survive, survive. It doesn't matter how they get there.
I can agree with the second way. But it gets smuggled in as the first way, almost as an attempt to crush any and all resistance preemptively.
AI doesn't need to, say, be capable of pushing the frontier of quantum mechanics to be lethal.
/endrant
Sorry, not really related to your comment, just had to get it out there.
In the context of AI research, there is no question that "existential" means "powerful AI literally kills every human being". It's a mainstream although not universal view among experts in the space that this is a serious possibility.
That's not my point. My point is the moralizing and worshipping around it.
For example - by powerful, do you mean a mass government surveillance system? That can be implemented by AI of today right now, even if AI stagnated.
It's the "AI is just a superset of all humans, humans are dumb and don't even know themselves, we should just submit"-esque attitude that I'm talking about.
The easiest way to solve a problem is to dissolve it, and say it doesn't actually matter. If you start from the position that humans are useless and don't matter, then sure, you can get absurdities like Roko's basilisk.
If humanity fails, the reason will almost certainly be that first and foremost, people stopped caring about human problems and deemed them too stupid to understand themselves, not because AI is, in some objective sense, a superset of all human capability and thus morally deserves to come out on top.
By "powerful", I mean a system whose operations humans cannot control or prevent or even reason about, in the same way that the members of an anthill can't do anything about a construction crew dumping concrete on them to lay a sidewalk. It's got nothing to do with "should submit" or "morally deserves". If the AI system in question is capable enough, it simply won't matter any longer what any human being thinks should happen. (In principle, it also has to be autonomous; in practice, I think OpenClaw has clearly illustrated that any AI system is going to be granted autonomy by someone.)
At least in the case of the researchers I mentioned, they have a deeply held, genuine belief that AI will, in the very near term, exceed humans in all intellectual capabilities, and that poses a bigger risk to human existence than humans simply fucking things up (beyond the fuck up of competently building a superior being). I would bet that most of them believe that us being paperclipped is a more likely bad outcome than a dystopia arising from human control. Simply because a human dystopia takes time to implement, even when aided by AI, which is time we don't have.
I have heard of each of those tools but I've never really used them for real.
Like, I attempt to write good commit messages and stage my changes in such a way that the commits are small, obvious, and understandable. That's about it. But the advanced tooling around git is scary ngl.
Side note, but I've definitely gotten annoyed with "context".
There's context in the strict technical sense - the AI is stateless, you need to get the right tokens to it in the right way, allow it to use tool calls, etc. I get that. That is cool. I use agentic coding a lot.
Then there's the sense of what you're saying - you have to feed the AI "enough context". In your case it's critical, but I've seen way too many pro-AI people just dismiss everything and say "context context you didn't give it proper context, have you tried this prompt etc." as a justification for the "lack" of intelligence.
At some point you have to wonder when it becomes unfalsifiable.
At some point, at least if businesses want to have AI “Agents” act as employees, then it needs to cease being stateless.
There’s a lot of hidden context in day-to-day work that a human oftentimes wouldn’t even know to explain to the AI, or wouldn’t even think they’d have to include - things that are just “known” by default after working somewhere for a long time.
With coding, there’s at least the entire codebase as context. With more creative tasks, it becomes murky. Even something as “simple” as sending a price increase notification to customers. There’s a lot of nuance in that, and customer relationship history you’d have to feed to the AI as context to get it right, yet a good CSR would just factor that context into their writing without a second thought.
There is a point, and it is reached very early, where it’s more costly and less productive to feed the AI as much context as you can imagine you’d need to give it vs. just doing it yourself. If I’m at the point of writing an entire document of history and context, effectively a full-page prompt, then why bother with AI at all?
Sure, obviously, we will not understand every single little thing down to the tiniest atoms of our universe. There are philosophical assumptions underlying everything, and you can question them (quite validly!) if you so please.
However, there are plenty of intermediate mental models (or explicit contracts, like assembly, ELF, etc.) to open up, both in "engineering" land and "theory" land, if you so choose.
Part of good engineering is also deciding exactly where the boundary between "don't cares" and "cares" lies, and how you allow people to easily navigate the abstraction hierarchy.
That is my impression of what people mean when they don't like "magic".
> Then, when it fails [...], you can either poke it in the right ways or change your program in the right ways so that it works for you again. This is a horrible way to program; it’s all alchemy and guesswork and you need to become deeply specialized about the nuances of a single [...] implementation
In that post, the blanks reference a compiler’s autovectorizer. But you know what they could also reference? An aggressively opaque and undocumented, very complex CPU or GPU microarchitecture. (Cf. https://purplesyringa.moe/blog/why-performance-optimization-....)
> And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.
Autovectorization is not a programming model. This still rings true day after day.