maybe the fact that Persians != Arabs will improve their odds. Recent uprisings had more luck (e.g. Bangladesh), even if it’s too early to fully assess their success
Bangladesh hasn't become a "democracy" in any meaningful sense. Remember that a whole host of leaders were arrested, and the most popular political party was banned from participating in the recent elections! You can claim that if they were popular there wouldn't have been any "revolutions" while they were ruling. But note that this is a country that has struggled with violence throughout its history, has seen many military coups, and has struggled to be a democracy. If they weren't popular, why were these so-called revolutionaries so hell-bent on not allowing them to participate in the "first free and fair" elections they organised? You don't become a democracy by deliberately excluding a political party that was instrumental in the founding of Bangladesh and is supported by half the country - that's how you weaken your country's unity and lay the grounds for a civil war.
I am reading a book called Accelerando (highly recommended), and there is a plotline about a collective of lobsters uploaded to the cloud. Claws reminded me of that - not sure it was an intentional reference though!
I think these companies understand that as the barrier to entry to build a frontend gets lower and lower, APIs will become the real moat. If you move away from their UI they lose ad revenue, viewer stats - in short, the ability to optimize how they harness your full attention. It would be great to have some stats on hand to see whether, and by how much, active API usage has changed over the last two years; I would not be surprised if it had increased at a much faster pace than in the past.
> the barrier to entry to build a frontend gets lower
My impression is the opposite: frontend/UI/UX is where the moat is growing because that's where users will (1) consume ads (2) orchestrate their agents.
I agree with you - we are saying the same thing: by restricting their API or making it less developer-friendly, they want to keep you captive in their UI. This might not be true for Anthropic or OpenAI - another child commenter mentioned ads in the CLI, and I would not be surprised if before long we had product placements in LLM responses exactly as we have them in movies - not a plain ad, just a not-quite-subliminal suggestion.
It’s objectively easier to build a frontend now and therefore that moat is disappearing.
What you can argue is the moat is in incumbent advantage at the UI layer, not the UI itself.
I will give you an example I heard from an acquaintance yesterday - this person is very smart but not strictly “technical”.
He is building a trading automation for personal use. In his design he gets a message on whatsapp/signal/telegram and approves/rejects the trade suggestion.
To define specifications for this, he defined multiple agents (a quant, a data scientist, a principal engineer, and trading experts - “warren buffett”, “ray dalio”) and let the agents run until they reached a consensus on what the design should be. He said this ran for a couple of hours (so not strictly overnight) after he went to sleep; in the morning he read and amended the output (the equivalent of tens of pages) and let it build.
This is not a strictly-defined coding task, but there are now many examples of emerging patterns where you have multiple agents supporting each other, running tasks in parallel, correcting/criticising/challenging each other, until some definition of “done” has been satisfied.
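To make the pattern concrete, here is a toy, runnable sketch of that “debate until consensus” loop. Real agents would be LLM calls critiquing each other’s design documents; in this stand-in, each “agent” just nudges a numeric position toward the group mean so the loop demonstrably terminates. All the names and the convergence rule are made up for illustration.

```python
def consensus_loop(positions, tolerance=0.01, max_rounds=100):
    """Run rounds of mutual revision until all positions converge.

    positions: dict mapping agent name -> its current stance (a float here;
    in the real pattern this would be a design document).
    Returns (rounds_taken, final_positions).
    """
    for round_no in range(1, max_rounds + 1):
        mean = sum(positions.values()) / len(positions)
        # Each agent revises its stance after "hearing" the others:
        # here, it moves halfway toward the group mean.
        positions = {name: p + 0.5 * (mean - p) for name, p in positions.items()}
        spread = max(positions.values()) - min(positions.values())
        if spread < tolerance:
            return round_no, positions  # consensus: everyone within tolerance
    return max_rounds, positions

# Hypothetical agent roster from the anecdote, with made-up starting stances:
agents = {"quant": 3.0, "data_scientist": 7.0, "principal_engineer": 5.0,
          "warren_buffett": 2.0, "ray_dalio": 8.0}
rounds, final = consensus_loop(agents)
```

The real version replaces the arithmetic with “each agent rewrites its section after reading the others’ critiques”, and the spread check with some definition of “done” (e.g. no agent raises an objection for a full round).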
That said, personally my usage is much like yours - I run agents one at a time and closely monitor output before proceeding, to avoid finding a clusterfuck of bad choices built on top of each other. So you are not alone my friend :-)
The higher the risk of e.g. a loan, the more interest it has to pay out to be worthwhile. The exact amount* is, as I understand it, governed by the Black–Scholes model.
* probably with some spherical-cows-in-a-vacuum assumptions given how the misuse of this model was a factor in the global financial crisis.
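For the curious: Black–Scholes strictly prices options rather than loan interest, but the risk-to-price intuition does show up in the formula through volatility. A minimal sketch of the standard textbook call-price formula, stdlib only:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black–Scholes price of a European call.

    S: spot price, K: strike, r: risk-free rate,
    sigma: volatility (the "risk"), T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Same spot/strike/rate/expiry, different risk: higher volatility costs more.
calm = black_scholes_call(100, 100, 0.05, sigma=0.2, T=1.0)   # ~10.45
risky = black_scholes_call(100, 100, 0.05, sigma=0.4, T=1.0)  # noticeably higher
```

The spherical-cow assumptions mentioned above (constant volatility, lognormal returns, no jumps) are baked into `sigma` being a single number.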
isn’t this easy for a potential attacker to defeat, i.e. by dropping everything after the plus from the address? it’s a known trick for gmail so i would not be surprised if an attacker knew how to recover the “real” address by cleaning it up.
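For illustration, stripping the tag really is a couple of lines of string handling - a hypothetical normalizer of the kind a dedup script over a dump might use (function name and domain handling are my own assumptions, not from any real tool):

```python
def normalize_plus(address: str) -> str:
    """Collapse a plus-tagged address to its base form.

    "Jane.Doe+shop@gmail.com" -> "janedoe@gmail.com"
    Gmail also ignores dots in the local part, so those are dropped too.
    """
    local, _, domain = address.partition("@")
    local = local.split("+", 1)[0]          # drop the "+tag" suffix
    if domain.lower() in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")      # gmail treats dots as no-ops
    return f"{local}@{domain}".lower()
```

Which is the argument for unique or masked addresses over plus-tagging: there is no mechanical rule that collapses them back to one identity.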
Yes - I’ve even noticed some attackers exclude all custom domains from their dumps, to avoid alerting individuals before they sell them. That’s why it’s better to have a fully unique email, preferably a masked one (not on a custom domain) as some email service providers offer: you get the isolation feature while also blending in, without being noticed by attackers.
I disagree with this - there have been overthrows that did not require weapons in the field (e.g. Egypt, Tunisia), while widespread weapons were likely to cause civil wars (Libya, Syria). In those cases, however, the role of the army was key in forcing the rulers out (and, in Egypt, in replacing them), which might be unlikely in the case of Iran.
I use the house analogy a lot these days. A colleague vibe-coded an app and it does what it is supposed to, but the code really is an unmaintainable hodgepodge of files. I compare this to a house that looks functional on the surface, but has the toilet in the middle of the living room, an unsafe electrical system, water leaks, etc. I am afraid people will only care that the facade of the house is beautiful, only to realize later that they accepted shaky foundations in exchange for glittery paint.
To extend your analogy: AI is effectively mass-producing 'Subprime Housing'.
It has amazing curb appeal (glittering paint), but as a banker, I'd rate this as a 'Toxic Asset' with zero collateral value.
The scary part is that the 'interest rate' on this technical debt is variable. Eventually, it becomes cheaper to declare bankruptcy (rewrite from scratch) than to pay off the renovation costs.
My experience with it is the code just wouldn't have existed in the first place otherwise. Nobody was going to pay thousands of dollars for it and it just needs to work and be accurate. It's not the backend code you give root access to on the company server, it's automating the boring aspects of the job with a basic frontend.
I've been able to save people money and time. If someone comes in later and has a more elegant solution for the same $60 effort I spent great! Otherwise I'll continue saving people money and time with my non-perfect code.
In banking terms, you are treating AI code as "OPEX" (Operating Expense) rather than "CAPEX" (Capital Expenditure).
As long as we treat these $60 quick-fixes as "depreciating assets" (use it and throw it away), it’s great ROI.
My warning was specifically about the danger of mistaking these quick-fixes for "Long-term Capital Assets."
As long as you know it's a disposable tool, not a foundation, we are on the same page.
I remember a very nice quote from an Amazon exec - “there is no compression algorithm for experience”. The LLM may well do wrong things, and you still won’t know what you don’t know. But then, iterating with LLMs is a different kind of experience; and in the future people will likely do that more than grinding through the missing-semicolon failures Simon describes below. It’s a different paradigm really
Of course there is - if you write good tests, they compress your validation work, and stand in for your experience. Write tests with AI, but validate their quality and coverage yourself.
I think the whole discussion about coding agent reliability is missing the elephant in the room - it is not vibe coding, but vibe testing. That is when you run the code a few times and say LGTM - the best recipe for shooting yourself in the foot, no matter whether the code was hand-written or made with AI. Just put the screws on the agent and let it handle a heavy test harness.
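To make “harness, not LGTM” concrete: a minimal sketch of the difference between eyeballing a couple of runs and exhaustively checking a case table. `slugify` here is just a stand-in for whatever the agent produced; the cases are made up for illustration.

```python
import re

def slugify(title: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# A case table the code must satisfy - including the empty and edge cases
# a quick manual run would never exercise.
CASES = {
    "Hello, World!": "hello-world",
    "  spaces   everywhere  ": "spaces-everywhere",
    "already-a-slug": "already-a-slug",
    "": "",
}

for raw, expected in CASES.items():
    assert slugify(raw) == expected, f"{raw!r} -> {slugify(raw)!r}, wanted {expected!r}"
```

The point is not this particular function: it’s that the agent can be told to generate and maintain the case table, and you review the table (which is readable) rather than the implementation.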
this is a very good point; however, the risk of writing bad or non-extensive tests is still there if you don’t know what good looks like! The grind will still need to happen, but it will be a different way of gaining experience
There will still be a wide spectrum of people that actually understand the stack - and don’t - and no matter how much easier or harder the tools get, those people aren’t going anywhere.
Compression algorithms for experience are of great interest to ML practitioners and they have some practices that seem to work well. Curriculum learning, feedback from verifiable rewards. Solve problems that escalate in difficulty, are near the boundary of your capability, and ideally have a strong positive or negative feedback on actions sooner rather than later.