I mean that's perfectly valid for a hobby project, but I think the argument is from the pov of a company seeing your time as a resource. In that context, it is obvious why it makes economic sense to spend your time on the actual, complex issues, assuming AI can handle the basic tasks. My job doesn't give a f about me finding joy, I can raw dog all the code I want in my free time/for hobby projects.
You're a PM and this basic-level, watered-down article that barely discusses anything "clicked for you in a way" you didn't expect? Of course the best system is designed based on requirements; how can a PM not know this before becoming a PM?
I read about mathematical expectation in a poker book for the first time a long time ago, and it's really an interesting way to think about the world. In real life it's a bit different, though, since there are factors at play beyond the raw percentages of poker hands. For example, you could do everything right and still fail, while someone else (like a nepo hire) can do everything wrong and still succeed.
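For what it's worth, the poker math itself is easy to sketch. A minimal expected-value example with made-up numbers (not from any particular hand):

    # EV of calling a $50 bet into a $150 pot with a draw
    # that completes ~35% of the time. Numbers are illustrative.
    p_hit = 0.35   # chance the draw gets there
    pot = 150.0    # what you win when it does
    call = 50.0    # what you lose when it doesn't

    ev = p_hit * pot - (1 - p_hit) * call
    print(f"EV of calling: {ev:+.2f}")  # +20.00: profitable on average

The book's point, as I remember it, is that the call is correct even on the runouts where you lose; real life just doesn't pay out at the advertised odds.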
No, that's not what I meant. I meant more like you could flop a high probability draw that does get there on the river, but the nepo kid still wins even though they just had one pair.
In poker, there are rules everyone has to follow. In real life, the rules can differ from person to person. Hence, the odds are rigged.
I think it's genuinely insane that people believe anyone can just "code" an app with AI and have it replace actual paid or established open-source software, especially if they aren't programmers or don't know how to think like one. It might seem super obvious if you work in tech, but most people don't even know what an HTTP server is or what Python is, let alone understand best practices or any kind of high-level thinking about applications and code. And if you're willing to spend the time learning all that, you might as well learn programming too.
AI usage in coding will not stop, of course, but normal people vibe coding production-ready apps is a pipe dream with many issues independent of how good the AI/tools are.
There is no way AI is making you 10x more productive at the current moment. And if AI is supposed to work that well, it doesn't follow that you'd put in 10x more hours, because the whole pitch is that the AI seamlessly and magically makes the work effortless. So you'd still be working the same hours even in that scenario.
Overall, I would say: if you want to pursue serious writing, please do it without having an LLM generate everything. This blog is just a pattern of vomit-inducing AI-writing clichés and cites nothing of value.
In fact, I went through all your other AI-generated posts and created a meta prompt that you can just paste into ChatGPT to have one of these articles come out, saving you the time to be 10x more productive or whatever.
---
Write a short essay (800–1,200 words) in a reflective, intellectually restless tone that blends personal observation with a contrarian insight about technology, work, progress, or human behavior.
Constraints and style:
* Open with a concrete hook: a quote, anecdote, tweet, or cultural reference that feels slightly overfamiliar.
* Use clear, confident prose. No emojis. No motivational clichés. No listicles.
* The essay should feel like thinking out loud, not teaching.
* Avoid moralizing. Let implications emerge implicitly.
* Assume an intelligent, online reader who is tired of hype but curious.
Core structure:
1. Start with a relatable observation or irritation about modern life, tech discourse, or self-improvement culture.
2. Introduce a somewhat unexpected but real tech or economics idea (e.g., Jevons paradox, Goodhart’s law, Conway’s law, scaling laws, second-order effects of AI tooling, coordination problems, invisible infrastructure, option value, etc.).
3. Use that idea to reframe a dominant narrative people take for granted.
4. Explore at least one uncomfortable implication for individuals or society.
5. End without a neat conclusion. Close with an open tension, question, or quiet reversal.
Content rules:
* Cite or reference one specific person, company, paper, or concept from tech or economics, but don’t over-explain it.
* No product reviews or tutorials.
* No explicit calls to action.
* No “the future will…” certainty language.
Voice:
* Calm, slightly skeptical, observant.
* Curious rather than cynical.
* Written like a public notebook entry, not a polished op-ed.
The goal is not to persuade, but to sharpen how the reader sees something they already thought they understood.
I don't think this is the platform for such ad hominem attacks! It's fine to disagree with someone's opinions and/or style, but there's no need to accuse them of "vomit-inducing AI-writing". I just went through some of OP's posts; maybe you can leave your opinions here: https://news.ycombinator.com/item?id=46646939
Thanks, I will use this prompt going forward. On a more serious note, I'm aware that putting my writing "out there" potentially exposes me to all kinds of scrutiny, so I appreciate you taking the time to read through all of my work and I can only encourage you to write something yourself - I'll be happy to read it.
My working theory is that the AI bubble is caused by Trump. People are too uncertain to invest in most industries, but they have to put their money somewhere, so they put it in AI stocks. Since the Supreme Court is likely to rule Trump's tariffs illegal in a week or so, that could force a reallocation: as people rebalance their portfolios, they will sell their AI stocks, which will pop the bubble and cause a crash. Something to watch out for.
Remember the first time you wanted to buy a stock.
You used a product or a service that you liked immensely, realized it had a stock and wanted to be involved.
A billion people are using AI. It's not dramatically changing their lives yet, of course, but they certainly go 'wow, incredible, I want to be part of this' when they make a video with Sora or generate a pamphlet without having to do the work.
That's not why I bought stocks for the first time. I had extra money and wanted it to grow rather than sit around. I think the same is true of the majority of people.
I don't think we're disagreeing with each other: we both think AI will continue to be a successful industry, and we both think investors believe the same. I'm simply hypothesizing an origin for the widely acknowledged bubble in AI stocks.
Nor would it have much impact. Retail investors are a tiny part of the market. Sure, we all have 401Ks and IRAs, but are we collectively trading GOOG and MSFT? No. The people managing funds that go into those 401Ks are trading those stocks, but individually we aren't (generally speaking, of course).
Valid theory, and if you look at the prices of assets like gold, the reallocation is already happening. But I feel a near-term crash in AI stocks just isn't coming unless we are headed toward catastrophic economic conditions. Lots of market forces are involved in AI now, and even people selling stocks (or a major correction) will not pop the AI bubble, since the major players have invested way too much cash to just let it go away at this point. (IMO)
I mean, you're right in terms of it being a demanding hobby. The question is whether it's worth switching from other services.
I have 7 computers on my self-hosted network, and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that stays up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially helps me improve my professional skills as well.
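To give a flavor of the "careful planning" part: the first building block is simply knowing, at any moment, which machines are reachable. A toy sketch of that check (hostnames are made-up placeholders; a real setup layers alerting and failover on top):

    # Toy reachability check across a mixed on-prem/off-prem fleet.
    # Hostnames are placeholders; swap in your own machines.
    import subprocess

    HOSTS = ["nas.local", "pi4.local", "vps.example.com"]

    def is_up(host: str) -> bool:
        # One ICMP ping with a 2-second timeout (Linux ping flags).
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            capture_output=True,
        )
        return result.returncode == 0

    for host in HOSTS:
        print(f"{host}: {'up' if is_up(host) else 'DOWN'}")

Cron that every minute and page yourself on state changes, and you've got the crude version of what the fancy monitoring stacks do.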
I don't understand why AI-generated text always resorts to this pattern: "It's not [x], but [y]." If you say that 10 times in a blog post, it's just really bad writing. There is no clarity, and you say the same thing 15 times in the stereotypical car-salesman-billboard voice. Here are some AI gems from the blog that was totally written by the dev in full earnest.
> Not ten. Not fifty. Five hundred and twenty-three lint violations across 67 files.
> You're not fixing technical debt—you're redefining "debt" until your balance sheet looks clean.
> These are design flaws, not syntax errors. They compile. They might even work. But they're code smells—early warnings that maintainability is degrading.
> AI-generated code is here to stay. That makes quality tooling more important, not less.
> This isn't just technical—it's a mindset change:
> It doesn't just parse your code—it analyzes control flow, tracks variable types, and detects logical errors that Ruff misses.