windexh8er's comments | Hacker News

> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

This is a nice strawman, but it means nothing in the long run. People's values change, and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission were to build sustainable and ethical AI, I'd likely have a different perspective. However, Anthropic, just like all of its frontier friends, is accelerating the burn of our planet faster than ever, and there's no value proposition AI currently solves for beyond some general time savings. Again, it's useful, but it's not revolutionary. And it's being propped up at a scale incongruent with its value to society and its shareholders. Not that I really care about the latter...


Tripling productivity? Where? You can say this, but where is that measurement sourced from? Every time I ask how LLMs can simply replace a real front desk assistant, I get responses like: well, that use case isn't viable because <insert excuse here>.

> "People enjoy products and services." ???

WTF does that even mean? Folks are so deluded by all of these "right around the corner" solutions that AI supposedly has in store that they fail to realize how out of whack the numbers game has become. In any other reality, people would be scrutinizing Sam Altman from every angle. But because of some magical AI sauce, the incomprehensible numbers now magically make sense.

But for a lot of us, it doesn't. If you're going to claim hundreds of billions in revenue just a few short years from now, you'd better have a really fucking great product today. Not in 6 months, not in a year, but right now.

SaaS has not been displaced. Workers have not been displaced (other than shifting their salaries to AI spend, which does not equate to worker replacement). Where does the madness end? The only thing that makes sense is an implosion that will ripple through many other markets and take years to fix.


It's ironic that Microsoft execs don't worry about AI replacing their jobs. If that doesn't paint a very clear picture of why most execs keep repeating this trope, I'm not sure what does.

There are lots of other eInk devices you can use LVGL with.

The reTerminal and other derivatives from Seeed Studio are a couple of options. Seeed even has a newish color unit for under $250 [0].

Not trying to diminish all of the thought and work that's gone into OP's project, but a lot of this has been doable in Home Assistant for quite some time. Glad more people are finally seeing the value in eInk like this. I've been using these displays for a while in my office and bedroom for simple status readouts, as the OP describes: only showing certain information depending on state.

The other unit I've tinkered with quite a bit is the Heltec Vision Master E290 [1], a 2.9" eInk device for under $35. It's based on the ESP32 and has LoRa.

[0] https://www.seeedstudio.com/reTerminal-E1004-p-6692.html

[1] https://heltec.org/project/vision-master-e290/
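
For a sense of what driving one of these panels looks like, here's a minimal Arduino-style C++ sketch using the standard LVGL v8 setup. It's a sketch under assumptions: the epd_flush callback body and the EPD_HOR_RES/EPD_VER_RES constants are placeholders you'd wire to your specific panel driver (Heltec and Seeed each ship their own); the LVGL calls themselves are stock.

    #include <lvgl.h>

    // Assumed geometry for a 2.9" 296x128 eInk panel; adjust per device.
    #define EPD_HOR_RES 296
    #define EPD_VER_RES 128

    static lv_disp_draw_buf_t draw_buf;
    static lv_color_t buf[EPD_HOR_RES * 10];  // 10-row partial draw buffer

    // Placeholder flush callback: push the rendered area to your panel's
    // driver here (vendor-specific), then tell LVGL the flush is done.
    static void epd_flush(lv_disp_drv_t *drv, const lv_area_t *area,
                          lv_color_t *color_p) {
      // your_epd_blit(area, color_p);  // hypothetical vendor call
      lv_disp_flush_ready(drv);
    }

    void setup() {
      lv_init();
      lv_disp_draw_buf_init(&draw_buf, buf, NULL, EPD_HOR_RES * 10);

      static lv_disp_drv_t disp_drv;
      lv_disp_drv_init(&disp_drv);
      disp_drv.hor_res = EPD_HOR_RES;
      disp_drv.ver_res = EPD_VER_RES;
      disp_drv.flush_cb = epd_flush;
      disp_drv.draw_buf = &draw_buf;
      lv_disp_drv_register(&disp_drv);

      // A simple status label, the kind of thing you'd update from
      // Home Assistant state over MQTT or the HA API.
      lv_obj_t *label = lv_label_create(lv_scr_act());
      lv_label_set_text(label, "Garage: closed");
      lv_obj_align(label, LV_ALIGN_CENTER, 0, 0);
    }

    void loop() {
      lv_timer_handler();  // let LVGL render and service timers
      delay(5);
      lv_tick_inc(5);      // keep LVGL's internal tick advancing
    }

One eInk-specific wrinkle: refreshes are slow and prone to ghosting, so batching updates and only redrawing on state changes, as described above, suits the medium well.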


Current LLMs do not think. Just because vendors anthropomorphize the repetitive token loops a model runs through, labeling them "thinking," does not mean the model is truly thinking or reasoning.

On the flip side, the idea that it is true has been a very successful indirect marketing campaign.


What does “truly thinking or reasoning” even mean for you?

I don’t think we even have a coherent definition of human intelligence, let alone of non-human ones.


Everyone knows to really think you need to use your fleshy meat brain, everything else is cheating.

Oh, yes. The trope of "but what does it even mean to think?"

If you can't speak, can you think? Yes. That's the problem with a Large Language Model: thinking is not predicated on language.

A few good starts for you. Please refute all of these arguments in your spare time to convince me otherwise:

* https://machinelearning.apple.com/research/illusion-of-think...

* https://archive.is/FM4y8

* https://www.raspberrypi.org/blog/secondary-school-maths-show...


My point was not that I’m 100% convinced that LLMs can think or are intelligent.

My point was that we don’t have a great definition for (human) intelligence either. The articles you posted also don’t seem to be too confident in what human intelligence actually entails.

See https://en.wikipedia.org/wiki/Intelligence

> There is controversy over how to define intelligence. Scholars describe its constituent abilities in various ways, and differ in the degree to which they conceive of intelligence as quantifiable.

Given that an LLM isn’t even human but essentially an alien entity, who can confidently say whether they are intelligent or not?

I’m very skeptical of those who are firmly convinced one way or the other.

Are LLMs intelligent in the way that humans are? I’m quite sure they aren’t.

Are LLMs just stochastic parrots? I don’t find that framing convincing anymore either.

Either way, it’s not clear; just look at how this topic has been discussed daily in frontpage threads for the last couple of years.


Everyone has heard the word "enshittification" at this point, and this falls right in line. If you haven't read the book [0], it's a great deep dive into the topic.

But the real issue is that these companies, once they have any market leverage, do things in their best interest to protect the little bit of moat they've acquired.

[0] https://www.mcdbooks.com/books/enshittification


OK. Let's take what you've stated as a truth.

So where is the labor force replacement option on Anthropic's website? Dario isn't shy about these enormous claims of replacing humans. He's made the claim yet shown zero proof. But if Anthropic could reliably replace anyone today, why would they let you or me take that revenue? I mean, they are the experts, right? The reality is that these "improvement" metrics are built on sand. They mean nothing and are marketing. Show me any model replacing a receptionist today. Trivial, they say, yet they can't do it reliably. AND... it costs more, even at these subsidized prices.


Why is the bar replacing a receptionist? At the low end it will take over tasks and companies will need fewer people; at the top end it will take over roles. What's the point you are making: if it can't do bla now, it never will?

Then define the bar. You're OK with all of these billionaires just saying "we're replacing people in 6-60 months" with no basis, no proof, no validation? So the onus is now on the people who challenge the statement?

Why is the bar not even lower you ask? Well I guess we could start with replacing lying, narcissistic CEOs.


Security is always a cost center. We've seen multiple iterations of change impact security in the same ways over the last 20+ years. Nothing is different here, and the outcomes will be the same: just good enough, but always a step behind. The one new lever here is time: people need far less of it to make disastrous mistakes. But ultimately the game hasn't changed. Security budgets will continue to be funneled into off-the-shelf products that barely work, and the remainder will continue to go to the overworked and underpaid. Nothing really changes.


The opposite is also true. People often follow people off a (figurative) cliff because that's what everyone else is doing. We have copious toxic online communities to show for that. Most of the conversation around AI is falling into that kind of cultish pattern. Look no further than YouTube to see how many born-again AI zealots emerged with ClawdBot/MoltBot/OpenClaw. It's just not as obvious in the blogosphere. What is obvious is the constant "findings" that are nothing more than opinions. There's no historical evidence you won't change your mind in 10 minutes. And that's why I feel, as I read them, that these types of blog posts are built on sand.


Simon is a new form of troll, and you hit the nail on the head: soapboxing the obvious, all in the name of AI. Just like the OpenClaw article that hit the FP yesterday, these folks are either doing this for marketing or they're genuinely elated by the mediocre. Has Simon actually produced anything novel or compelling? His blog posts surely aren't; if that's any indication of his work output, I wouldn't be surprised if the answer is a hard no.

And who wants to be working on 3 projects simultaneously? This is the "multitasking" agenda from generations ago with a new twist: now I just manage prompts and agents, bro! But the reality is that you think you're doing more than you actually are. Maybe Simon is just pandering to his inevitable AGI overlords, proving he'll still be useful in the coming Altmania revolution? No idea. Either way, half the time I read his posts (only because they're posted here and I'm curious about his new discoveries), I can barely stomach the drivel.


> Has Simon actually produced anything novel or compelling?

Here are some of my recent posts which I self-evaluate as "novel and compelling".

- Running Pydantic’s Monty Rust sandboxed Python subset in WebAssembly https://simonwillison.net/2026/Feb/6/pydantic-monty/ - demonstrating how easy and useful it is to be able to turn Rust code into WASM that can run independently or be used inside a Python wheel for Pyodide in order to provide interactive browser demos of Rust libraries.

- Distributing Go binaries like sqlite-scanner through PyPI using go-to-wheel https://simonwillison.net/2026/Feb/4/distributing-go-binarie... - I think my go-to-wheel utility is really cool, and distributing Go CLIs through PyPI is a neat trick.

- ChatGPT Containers can now run bash, pip/npm install packages, and download files https://simonwillison.net/2026/Jan/26/chatgpt-containers/ - in which I reverse engineered and documented a massive new feature of ChatGPT that OpenAI hadn't announced or documented anywhere.

I remain very proud of my current open source projects too - https://datasette.io and https://llm.datasette.io and https://sqlite-utils.datasette.io and a whole lot more: https://github.com/simonw/simonw/blob/main/releases.md

Are you ready to say none of that is "novel or compelling", in good faith?


If we revisit these posts in a week, a month, and then a year, my question is: were they useful? Are others still building off of them?

My answer right now is: you can't answer that question yet, and the fact that you're looking for immediate validation suggests you're just building random things. Which is great, if that's what you want to do. But is it truly novel or compelling? Given that you just move on to the next thing, there seems to be a lack of direction, and in that regard I would say: no.

Doing more doesn't mean anything unless it's truly useful to you or others, and I just don't think that's the case here. It's a new form of "move fast and break things." And while that can have net positives, we're also very aware it has many net negatives.


My major open source projects get a lot of use. I wouldn't classify them as "just random things".

I don't think you're familiar with my work at all.


In my view, your open source projects don't align with what you've mostly written about, namely your AI posts. I am familiar with your work, and my perspective is still the same. You can think whatever you want, but let's call a spade a spade: I'm not a lone wolf with this perspective. As can be seen in this thread, others share a similar opinion.


I'm very happy with the quality of my writing on LLMs.

I finally turned a corner last year where I’m generally pleased with how well my older posts hold up rather than wishing I’d done better.

