
That requires having more work than workers, so that a bad deal is never the only option.

Is there any indication that this is increasing, as opposed to AI stuff just being where the fanatics are drawn at the moment?

> we don't have any kids shows with an AI theme today

There are plenty where the toys are alive, or one of the characters is a robot or a computer.


> 2. People need to let go of this strange and erroneous idea that humans somehow have this privileged access to 'the real world'. You don't.

You are denouncing a claim that the comment you're replying to did not make.


They made it implicitly, otherwise this:

>(2) language only somewhat models the world

is completely irrelevant.

Everyone is only 'somewhat modeling' the world: humans, animals, and LLMs.


Completely relevant, because LLMs only "somewhat model" humans' "somewhat modeling" of the world...

LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly. When an LLM learns physics from textbooks, scientific papers, and code, it's learning the same compressed representations of reality that humans use, not a "model of a model."

Your argument would suggest that because you learned about quantum mechanics through language (textbooks, lectures), you only have access to "humans' modeling of humans' modeling of quantum mechanics" - an infinite regress that's clearly absurd.


> LLMs aren't modeling "humans modeling the world" - they're modeling patterns in data that reflect the world directly.

This is a deranged claim, factually and tautologically (definitionally) false. LLMs can only work with tokenizations of texts written by people who produced those texts to represent their actual models. All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans. This is all definitional; what you are saying is just nonsense.

> When an LLM learns physics from textbooks, scientific papers, and code, it's learning the same compressed representations of reality that humans use, not a "model of a model."

A model is a compressed representation of reality. Physics is a model of the mechanics of various parts of the universe, i.e. "learning physics" is "learning a physical model". So, clarifying, the above sentence is

> When an LLM learns physical models from textbooks, scientific papers, and code, it's learning the model of reality that humans use, not a "model of a model."

This is clearly factually wrong, as the model that humans actually use is not the summaries written in textbooks but the actual embodied and symbolic model they use in reality, which they translate into text only in a corrupted, simplified, limited form (and that diminished form is all the LLM can see). It is also not clear that the LLM learns to actually do physics: it only learns to write about physics the way humans do, which doesn't mean it can run labs, interpret experiments, or apply models to novel contexts like humans can, or operate at the same level as humans. It is clearly learning something different from humans, because it doesn't have the same sources of information.

> Your argument would suggest that because you learned about quantum mechanics through language (textbooks, lectures), you only have access to "humans' modeling of humans' modeling of quantum mechanics" - an infinite regress that's clearly absurd.

There is no infinite regress: humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.

Humans can't learn solely from digesting texts either. Anyone who has done math knows that reading a textbook teaches you almost nothing; you have to actually solve the problems (and the attempted solving is rarely, if ever, written down in texts) and discuss your solutions and reasoning with others. Other domains involving embodied skills, like cooking, require other kinds of feedback from the environment and from others. But LLMs are imprisoned in tokens.

EDIT: No serious researcher thinks LLMs are the way to AGI; this hasn't been a controversial opinion even among enthusiasts since about mid-2025 or so. This stuff about language is all trivial and basic stuff accepted by people in the field, and why things like V-JEPA-2 are being researched. So the comments here attempting to argue otherwise are really quite embarrassing.


>This is a deranged and factually and tautologically (definitionally) false claim.

Strong words for a weak argument. LLMs are trained on data generated by physical processes (keystrokes, sensors, cameras), not telepathically extracted "mental models." The text itself is an artifact of reality, not just a description of someone's internal state. If a sensor records the temperature and writes it to a log, is the log a "model of a model"? No, it's a data trace of a physical reality.

>All this removal and all these intermediate representational steps make LLMs a priori obviously even more distant from reality than humans.

You're conflating mediation with distance. A photograph is "mediated" but can capture details invisible to human perception. Your eye mediates photons through biochemical cascades - equally "removed" from raw reality. Proximity isn't measured by steps in a causal chain.

>The model humans use is embodied, not the textbook summaries - LLMs only see the diminished form

You need to stop thinking that a textbook is a "corruption" of some pristine embodied understanding. Most human physics knowledge also comes from text, equations, and symbolic manipulation - not direct embodied experience with quantum fields. A physicist's understanding of QED is symbolic, not embodied. You've never felt a quark.

The "embodied" vs "symbolic" distinction doesn't privilege human learning the way you think. Most abstract human knowledge is also mediated through symbols.

>It's not clear LLMs learn to actually do physics - they just learn to write about it

This is testable and falsifiable - and increasingly falsified. LLMs:

- solve novel physics problems they've never seen

- debug code implementing physical simulations

- derive equations using valid mathematical reasoning

- make predictions that match experimental results

If they "only learn to write about physics," they shouldn't succeed at these tasks. The fact that they do suggests they've internalized the functional relationships, not just surface-level imitation.

>They can't run labs or interpret experiments like humans

Somewhat true: it's possible, but they're not very good at it - and it's irrelevant to whether they learn physics models. A paralyzed theoretical physicist who's never run a lab still understands physics. The ability to physically manipulate equipment is orthogonal to understanding the mathematical structure of physical law. You're conflating "understanding physics" with "having a body that can do experimental physics" - those aren't the same thing.

>humans actually verify that the things they learn and say are correct and provide effects, and update models accordingly. They do this by trying behaviours consistent with the learned model, and seeing how reality (other people, the physical world) responds (in degree and kind). LLMs have no conception of correctness or truth (not in any of the loss functions), and are trained and then done.

Gradient descent is literally "trying behaviors consistent with the learned model and seeing how reality responds."

- The model makes predictions

- The data provides feedback (the actual next token)

- The model updates based on prediction error

- This repeats billions of times

That's exactly the verify-update loop you describe for humans. The loss function explicitly encodes "correctness" as prediction accuracy against real data.
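To make that concrete, here's a toy sketch of that loop (a hypothetical minimal "bigram" model in Python - nothing like a production LLM in scale, but the same predict/compare/update shape):

    import numpy as np

    # Toy predict -> compare -> update loop (illustrative only).
    # logits[i, j] scores token j as the successor of token i.
    rng = np.random.default_rng(0)
    vocab, lr = 4, 0.5
    logits = rng.normal(size=(vocab, vocab))    # the model's parameters
    data = [0, 1, 2, 3] * 3                     # training tokens

    for epoch in range(200):
        for prev, actual in zip(data, data[1:]):
            p = np.exp(logits[prev] - logits[prev].max())
            p /= p.sum()                        # the model makes a prediction
            grad = p.copy()
            grad[actual] -= 1.0                 # feedback: the actual next token
            logits[prev] -= lr * grad           # update on prediction error

    print(np.argmax(logits[0]))                 # prints 1: it learned that 0 -> 1

The loss never sees anything but prediction error against real data, which is exactly the correctness signal described above.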

>No serious researcher thinks LLMs are the way to AGI... accepted by people in the field

Appeal to authority, and also overstated. Plenty of researchers do think so, and claiming consensus for your position is just false. LeCun has been on that train for years, so he's not an example of a change of heart. So far, nothing has actually come out of it. Even Meta isn't using V-JEPA to actually do anything, never mind anyone else. Call me when these constructions actually best transformers.


How do you manage to get that from the article?

Not from the article. Comments don't have to work this way.

> How many vehicles have a 650 mile range? Almost none.

'22 Ford Escape hybrid

The remaining miles thing shows less than that on a full tank, but I've been pretty consistently getting upper-600s between fill-ups.

I suppose it would probably be less if I went on the interstate more.


There's one. Go to a Car and Driver article about cars with extreme ranges, namely those over 650 miles, and it has to list particular model years across a 10-year period just to reach even ~10 models, most of them EcoBoost variants or poor-selling hybrid versions of other cars.

Assuming a 1000km range is a very strange thing to do, as it's a fringe feature that almost no one needs or wants! Recall that "almost no one" means there are still some; the existence of a handful of people on HN is quite consistent with "almost none."


Of course I didn't pick it for range; I looked at the price and mileage of what the local CarMax had, and then separately looked up how tall the top of the windshield was.

Which I would expect to typically turn up something that's, um, fairly typical on the characteristics I wasn't selecting on.


My 2010 F-150 with the notoriously terrible 5.4L gas engine seems to manage a 1000km range. There's absolutely nothing efficient about it; it's just got a big gas tank.

Yep, Ford had to put really big tanks on even the F150 to make up for the horrid mileage. Even with a 36 gallon tank, when towing with an F150 you might only get 300 miles. It's one reason the Lightning had problems selling as many as they wanted (aside from the ridiculous pricing the first year or so). Most people who are serious about towing don't use an F150 anyway, but that doesn't mean that F150 buyers don't fantasize about their potential towing needs in the future.

Comparing range of gasoline cars is idiotic. There are plenty of cars with long range (1000km), and they all have 60L+ fuel tanks, and most run on diesel (which gives you ~15% more range per liter); e.g. a 60L tank at 6L/100km works out to exactly 1000km. I'd even argue the same for BEVs. More battery is more range.

> Mechanically as in letting an AI find a solution, or as in reasoning like a mechanic, or?

As in it's fully characterized, so you can use only math and logic rather than relying on experience and guesswork.


This seems to be arguing that they should, more than showing that they increasingly are.

Also, the bit about companies with more older workers performing better, and the bit about older people often losing jobs in layoffs, could fit together just as well as high firm performance permitting long tenure, rather than showing that experienced employees cause higher firm performance (although the examples demonstrate the latter via other means, so it can't be that it doesn't happen at all).


Definitely smells like survivorship bias.

The title is clickbait. This reads more like marketing copy for the author’s consulting firm than any serious research.

They “help forward-thinking leaders and organisations see aging not as decline, but as a driver of innovation, resilience, and growth.”


ugh, that they have to phrase it in such a way already makes me nauseous.

why start with the 'not as a decline, but as'? it's such a stupid way to put it.

i can't believe marketing ppl still don't realize that you don't sell something by saying 'oh it's not actually a turd, it's delicious'. JUST SAY IT'S DELICIOUS. NO ONE WANTED TO KNOW WHAT IT WAS NOT.

you don't want to put a bad flavor in someone's mouth and then try to wash it out. even LLMs get this basic shit wrong lmao.

sorry, a bit unrelated, but considering the specific topic this tagline is supposed to address, it's really triggering.


Email is pretty decentralized without those things.

And it is infamously insecure, full of spam, and struggles with attachments beyond 10 MB.

So thank you for bringing it up, it showcases well that a distributed system is not automatically a good distributed system, and why you want encryption, cryptographic fingerprints and cryptographic provenance tracking.


And yet, it is a constantly used decentralized system which does not require content addressing, as you mentioned. You should elaborate on why we need content addressing for a decentralized system instead of saying "10MiB limit + spam lol email fell off". Contemporary usage of the technologies you've mentioned doesn't seem to do much to reduce spam (see IPFS, which has hard content addressing). Please, share more.

If you think email is still in widespread use because it’s doing a good job, rather than because of massive network effects and sheer system inertia, then we’re probably talking past each other - but let me spell it out anyway.

Email “works” in the same sense that fax machines worked for decades: it’s everywhere, it’s hard to dislodge, and everyone has already built workflows around it.

There is no intrinsic content identity, no native provenance, no cryptographic binding between “this message” and “this author”. All of that has to be bolted on - inconsistently, optionally, and usually not at all.

And even ignoring the cryptography angle: email predates “content as a first-class addressable object”. Attachments are in-band, so the sender pushes bytes and the receiver (plus intermediaries) must accept/store/scan/forward them up front. That’s why providers enforce tight size limits and aggressive filtering: the receiver is defending itself against other people’s pushes.

For any kind of information dissemination like email or scientific publishing you want the opposite shape: push lightweight metadata (who/what/when/signature + content hashes), and let clients pull heavy blobs (datasets, binaries, notebooks) from storage the publishing author is willing to pay for and serve. Content addressing gives integrity + dedup for free. Paying ~$1 per DOI for what is essentially a UUID is ridiculous by comparison.
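As a sketch of that shape (hypothetical names, Python standard library only, signatures left out): the record you push is a few hundred bytes of metadata carrying a content hash, and a reader can verify a blob pulled from any mirror against it without trusting the mirror.

    import hashlib, json

    def content_id(blob: bytes) -> str:
        return hashlib.sha256(blob).hexdigest()

    dataset = b"...gigabytes of results..."   # the heavy artifact, served anywhere
    record = {                                # the lightweight metadata you push
        "author": "author-key-or-orcid",
        "title": "Example paper",
        "dataset_sha256": content_id(dataset),
    }
    announcement = json.dumps(record, sort_keys=True)

    # Later: a reader pulls the blob from any mirror and checks it themselves.
    fetched = dataset                         # pretend this came off the network
    assert content_id(fetched) == record["dataset_sha256"]

Dedup falls out of the same property: two mirrors holding the same hash are, by definition, holding the same bytes.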

That decoupling (metadata vs blobs) is the missing primitive in email-era designs.

All of that makes email a bad template for a substrate of verifiable, long-lived, referenceable knowledge. Let's not forget that the context of this thread isn’t “is decentralized routing possible?”, it’s “decentralized scientific publishing” - which is not about decentralized routing, but decentralized truth.

Email absolutely is decentralized, but decentralization by itself isn’t enough. Scientific publishing needs decentralized verification.

What makes systems like content-addressed storage (e.g., IPFS/IPLD) powerful isn’t just that they don’t rely on a central server - it’s that you can uniquely and unambiguously reference the exact content you care about with cryptographic guarantees. That means:

- You can validate that what you fetched is exactly what was published or referenced, with no ambiguity or need to trust a third party.

- You can build layered protocols on top (e.g., versioning, merkle trees, audit logs) where history and provenance are verifiable.

- You don’t have to rely on opaque identifiers that can be reissued, duplicated, or reinterpreted by intermediaries.

For systems that don’t rely on cryptographic primitives, like email or the current infrastructure using DOIs and ORCIDs as identifiers:

- There is no strong content identity - messages can be altered in transit.

- There is no native provenance - you can’t universally prove who authored something without added layers.

- There’s no simple way to compose these into a tamper-evident graph of scientific artifacts with rigorous references.

A truly decentralized scholarly publishing stack needs content identity and provenance. DOIs and ORCIDs help with discovery and indexing, but they are institutional namespaces, not cryptographically bound representations of content. Without content addressing and signatures, you’re mostly just trading one central authority for another.
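For the provenance half, a minimal sketch using an Ed25519 signature (this assumes the 'cryptography' package and hand-waves key distribution; the author's public key just has to be known out of band):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    author_key = ed25519.Ed25519PrivateKey.generate()
    public_key = author_key.public_key()      # published once, out of band

    # The statement binds the author to exact bytes via a content hash
    # (hash value abbreviated here for illustration).
    statement = b"I published sha256:9f86d081884c7d65..."
    signature = author_key.sign(statement)

    # Anyone can verify authorship with no registry in the loop;
    # verify() raises InvalidSignature if either byte string was altered.
    public_key.verify(signature, statement)

Put the two together and "who published exactly which bytes" becomes checkable by anyone, which is precisely the property DOIs and ORCIDs don't give you.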

It’s also worth being explicit about what “institutional namespace” means in practice here.

A DOI does not identify content. It identifies a record in a registry (ultimately operated under the DOI Foundation via registration agencies). The mapping from a DOI to a URL and ultimately to the actual bytes is mutable, policy-driven, and revocable. If the publisher disappears, changes access rules, or updates what they consider the “version of record”, the DOI doesn’t tell you what an author originally published or referenced - it tells you what the institution currently points to.

ORCID works similarly: a centrally governed identifier system with a single root of authority. Accounts can be merged, corrected, suspended, or modified according to organisational policy. There is no cryptographic binding between an ORCID, a specific work, and the exact bytes of that work that an independent third party can verify without trusting the ORCID registry.

None of this is malicious - these systems were designed for coordination and attribution, not for cryptographic verifiability. But it does mean they are gatekeepers in the precise sense that matters for decentralization:

Even if lookup/resolution is distributed, the authority to decide what an identifier refers to, whether it remains valid, and how conflicts are resolved is concentrated in a small number of organizations. If those organizations change policy, disappear, or disagree with you, the identifier loses its meaning - regardless of how many mirrors or resolvers exist.

If the system you build can’t answer “Is this byte-for-byte the thing the author actually referenced or published?” without trusting a gatekeeper, then it’s centralized in every meaningful sense that matters to reproducibility and verifiability.

Decentralised lookup without decentralised authority is just centralisation with better caching.


I assume someone somewhere has a dataset for technology diffusion broken out by country or at least region? Like so[1], but as a table and not limited to just here.

Perhaps that sort of thing could be useful enough to justify the extra bytes?

[1] https://techliberation.com/2009/05/28/on-measuring-technolog...

