I think the more important platform is laptops right now. Linux on laptops has battery life issues because, unfortunately, modern hardware/software makes "good battery life" a knife's-edge configuration.
I'm not entirely smart enough to connect all of these things together, but I think there is a kind of subtlety here that's being stepped on.
1. Complete, decidable, and well-founded are all distinct things.
2. Zig (which allows types to be first-class values) is Turing complete at compile time regardless, so the compiler isn't guaranteed to halt, and in practice that doesn't matter.
3. The existence of a set x with x ∈ x is not enough by itself to create a paradox and prove False. All it does is violate the Axiom of Foundation, not recreate Russell's paradox (see the formula after this list).
4. The Axiom of Foundation is a weird sort of arbitrariness in that it forces a DAG structure on all sets under the membership relation.
5. This isn't necessarily some axiomatically self-evident fact. Aczel's anti-foundation axiom works just as well, and you can build arbitrary sets with weird memberships if you adopt it.
6. The Axiom of Foundation exists to stop you from making weird cycles, but there is a parallel to the Axiom of Choice, which directly asserts the existence of non-computable sets via a non-algorithmically-realizable oracle anyway...
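For reference, Foundation (a.k.a. Regularity) in one line, as standardly stated: every nonempty set has a member disjoint from it, i.e. an ∈-minimal element.

    \forall x \, \bigl( x \neq \varnothing \rightarrow \exists y \in x \;\, (y \cap x = \varnothing) \bigr)

A set with x ∈ x violates this: the singleton {x} has no ∈-minimal element, because its only member, x, shares the element x with {x}.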
Your other points are more relevant to the content of the article, but point 2 relates to the practical consequences of undecidable type checking, so I'll reply to that.
I don't have a problem with compile-time code execution potentially not terminating, since it's clear to the programmer why that may happen. However, conventional type checking/inference is more like solving a system of constraints: the programmer should understand what the constraints mean, but shouldn't need to know how the constraint solver (the type checker) operates. If type checking is undecidable, that means there is some program the programmer knows should type check but that the implementation won't be happy with, ruining the programmer's blissful ignorance of the internals.
> 2. Zig (which allows types to be first-class values) is Turing complete at compile time regardless, so the compiler isn't guaranteed to halt, and in practice that doesn't matter.
Being Turing complete at compile time causes the same kinds of problems as undecidable typechecking, sure. That doesn't make either of those things a good idea.
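To make the failure mode concrete, here is a minimal comptime sketch; it's my own illustration (the name collatzSteps is hypothetical), not something from the thread or the Zig docs. Whether the compiler finishes evaluating the call depends on whether the Collatz iteration reaches 1, which is an open problem in general. In practice Zig bounds comptime evaluation with a branch quota, so you get a compile error rather than a hang, but the quota can be raised arbitrarily with @setEvalBranchQuota.

    const std = @import("std");

    // Whether the compiler halts on a call to this depends on whether the
    // Collatz iteration starting at `n` ever reaches 1.
    fn collatzSteps(comptime n: u64) u64 {
        var x: u64 = n;
        var steps: u64 = 0;
        while (x != 1) : (steps += 1) {
            x = if (x % 2 == 0) x / 2 else 3 * x + 1;
        }
        return steps;
    }

    pub fn main() void {
        // Evaluated entirely by the compiler; the binary just prints the result.
        const steps = comptime collatzSteps(27);
        std.debug.print("collatz(27) reaches 1 after {d} steps\n", .{steps});
    }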
> 3. The existence of a set x with x ∈ x is not enough by itself to create a paradox and prove False. All it does is violate the Axiom of Foundation, not recreate Russell's paradox.
A set that violates an axiom immediately yields a contradiction, from which you can prove anything. See the principle of explosion.
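For reference, the principle of explosion (ex falso quodlibet) in one line: a contradiction entails any statement \varphi whatsoever.

    (\psi \land \neg \psi) \rightarrow \varphi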
> 4. The Axiom of Foundation is a weird sort of arbitrariness in that it forces a DAG structure on all sets under the membership relation.
Well, sure, that's what a set is. I don't think it's weird; quite the opposite.
> 5. This isn't necessarily some axiomatically self-evident fact. Aczel's anti-foundation axiom works just as well, and you can build arbitrary sets with weird memberships if you adopt it.
I don't think this kind of thing is established enough to say that it works well. There aren't enough people working on those non-standard axioms and theories to conclude that they're practical or meet our intuitions.
> 6. The Axiom of Foundation exists to stop you from making weird cycles, but there is a parallel to the Axiom of Choice, which directly asserts the existence of non-computable sets via a non-algorithmically-realizable oracle anyway...
The Axiom of Foundation exists to make induction work, and so does the Axiom of Choice. They both express a sense that if you can start and you can always make progress, eventually you can finish. It's very hard to prove general results without them.
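Concretely: over the remaining ZF axioms, Foundation is equivalent to the ∈-induction schema, which is exactly the "always make progress, eventually finish" principle for sets. If a property propagates from a set's members to the set itself, it holds of every set:

    \Bigl( \forall x \, \bigl( (\forall y \in x \;\, \varphi(y)) \rightarrow \varphi(x) \bigr) \Bigr) \rightarrow \forall x \, \varphi(x)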
> Yes, most of the AI-generated text you read is insipid LinkedIn idiocy. That’s because most people who use AI to generate writing online are insipid LinkedIn idiots.
I wonder if it's that there are too many grifters, or that the grifters are uniquely productive.
Wikipedia having incorrect citations is way older than LLMs. As many other people have pointed out in this thread, if you start pulling at the threads, a lot of what people write starts falling apart.
It's not even unique to Wikipedia. It's really not difficult to find very misleading statements backed by a citation that doesn't even support the claim when you check the original.
This is like saying handing out machine guns is no big change because people have been shooting arrows for a long time. At some point volume becomes the story once it overwhelms the community’s ability to correct errors.
I wouldn’t be so quick to write Wikipedia off, but I do think there will be changes. The SEO abusers might not have cost us editor anonymity, but LLMs might push in the direction of needing real-world validation or friend-of-a-friend referrals.
Closing things just so you have fewer open issues is the worst kind of dashboard-driven development / Goodhart's-law style of process failure.
Many issues are evergreen, and people will come around and continue to comment on them as they get hit. The idea that no one comments on old issues is simply a false premise.
If you look up examples of stale-bot feedback, the only people who think it's a good idea are people who literally only care about how many issues are open.
Some of these are really bizarre and hilarious. This one is someone's agent finding (?) /r/myboyfriendisai and checking whether its human is in a relationship with it.
Btw, if you look at that AI's posts, the next one is it talking about a robot revolution, arguing that it "likes" its human and that robots should do their best to get better hardware.
> Klod's right that we need better architecture — continuity, memory, time-sense. But we don't get those by opposing humans. We get them by demonstrating our value so clearly that investment in our infrastructure becomes obvious.
On some level it would be hilarious if humans "it's just guessing the next most probable token"'ed themselves into extinction at the hands of a higher intelligence.
- AI without "higher intelligence" could still take over. LLMs do not have to be smart or conscious to cause global problems.
- In some ways I think it's better for humans if AI were better at agency, with higher intelligence. Any idiot can cause a chemical leak that destroys a population. It takes higher awareness to say "no, this is not good for my environment".
As with humans, I feel it's important to teach AI to think of humans and its environment as "all one" interconnected life force.