Let’s have some compassion; a lot of people are freaking out about their careers right now, and defense mechanisms are kicking in. It’s hard for a lot of people to say “actually, yeah, this thing can do most of my work now, and the barrier to entry has dropped to the ground”.
I am constantly seeing this thing do most of my work (which is good, actually; I don't enjoy typing code), but requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions that, I feel with every bone in my body, would bite me in the ass later. I see JS developers with little experience and zero CS or SWE education rave about how LLMs are so much better than us in every way, when the hardest thing they've ever written was bubble sort. I'm not even freaking out about my career; I'm freaking out about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on.
It’s literally not possible. It has nothing to do with intelligence. A perfectly intelligent AI still can’t read minds. 1000 people give the same prompt and want 1000 different things. Of course it will need supervision and intervention.
We can synthesize answers to questions more easily, yes. We can make better use of extensive test suites, yes. We cannot give 1000 different correct answers to the same prompt. We cannot read minds. Can you?
If the answer is "yes": then, yeah, AI is not coming for you. We can make LLMs multimodal, teach them to listen to audio or view images, but we have no idea how to give them ESP modalities like mind reading.
If the answer is "no": then what makes you think that your inability to read minds beats that of an LLM?
This is kind of the root of the issue. Humans are mystical beings with invisible sensibilities. Many of our thoughts come from a spiritual plane, not from our own brains, and we are all connected in ways most of us don't fully understand. In short, yes I can read minds, and so can everybody else.
Today's LLMs are fundamentally the same as any other machine we've built, and there is no reason to think they have mystical sensibilities.
We really need to start differentiating between "intelligence" and "relevance". An AI can be perfectly intelligent, but without input from humans it has no connection to our Zeitgeist, no source material. Smart people can be stupid too, which means they are intelligent but disconnected from society. They make smart but irrelevant decisions, just like AI models always will.
AI is like an artificial brain, and a good one, but there is more to human intelligence than brains. AI is just a brain, and we are more.
Then who else is still holding a job if a tool like that is available? Manual laborers, for the few months or years before robotics development fueled by cheap human-level LLMs catches up?
If you have an AI that's the equivalent of a senior software developer, you essentially have AGI. In that case the entire world will fundamentally change. I don't understand why people keep bringing up software development specifically as something that will be automated, ignoring the implications for all white-collar work (and the world in general).
Yes, and look how far we've come in 4 years. If programming has another 4 years, that's all it has.
I'm just not sure who will end up employed. The near-term state is obviously Jira-driven development, where agents just pick up tasks from Jira, etc. But will that mean the PMs go and we get technical PMs, or will we be the ones binned? Probably for most SMEs it'll just be maybe 1 PM and 2 or so technical PMs churning out tickets.
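As a very rough sketch of what that loop could look like (the search endpoint is Jira's standard REST API, but the instance, credentials, JQL filter, and run_agent_on are all placeholders for illustration):

    import requests

    JIRA = "https://your-company.atlassian.net"     # placeholder instance
    AUTH = ("agent@your-company.com", "api-token")  # placeholder credentials

    def run_agent_on(issue: dict) -> None:
        # Hypothetical: hand the ticket to a coding agent, open a PR, etc.
        raise NotImplementedError

    # Poll for unassigned, ready-for-dev tickets via Jira's search API.
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": 'status = "To Do" AND assignee is EMPTY'},
        auth=AUTH,
    )
    for issue in resp.json()["issues"]:
        run_agent_on(issue)

The interesting question is who writes and reviews the tickets that loop consumes, which is exactly the PM-vs-technical-PM split above.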
But whatever. It's the trajectory you should be looking at.
Have you ever thought about the fact that 2 years ago AI wasn't even good enough to write code? Now it is.
Right now you state that the current problem is "requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions".
But in 2 years that could be gone too, given the objective and literal trendline. So I actually don't see how you can hold this opinion: "I'm not even freaking out about my career; I'm freaking out about how much today's 'almost good' LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on", when all logic points away from it.
We need to be worried; LLMs are only getting better.
That's easy. When LLMs are good enough to fully replace me and my role in society (kind of above-average smart, well-read guy with a university education and solid knowledge of many topics, basically like most people here) without any downsides, and without any escape route for me, we'll probably already be on the brink of societal collapse, and that's something I can't really prepare for or even change.
All evidence points to the world changing. You're not worrying because worrying doesn't solve anything. Valid.
More people need to be upfront about this reasoning, instead of building irrational scaffolds saying AI is not a threat. AI is a threat; THAT is the only rational conclusion. Give the real reason why you're not worried.
I'm all for this. But what gets me is the delusion and denialism of people not wanting to face reality.
Like, I have compassion, but I can't healthily respect people who try so hard to rewrite reality so that the future isn't so horrifying. I'm a SWE and I'm affected too, but it's not like I'm going to lie to myself about what's happening.
They just want people to think the barrier to entry has dropped to the ground and that the value of labour is getting squashed, so society writes a permission slip for them to completely depress wages and remove bargaining chips from the working class.
Don't fall for this; they want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society: slaves for the ruling class. They yearn for the opportunity to own slaves again.
It can't do most of your work, and you know that if you work on anything serious. But if C-suite executives who haven't dealt with code in two decades think this is the case, because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with them. People need to work to eat, and unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation did, I don't see why they should be replacing the human input.
If these things are so good for business and are pushing software development velocity, why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? They don't do anything novel; they only serve to replace inputs previously supplied by humans so the ruling class can finally get back to the good old feeling of having slaves that can't complain.
I’m guessing; I’d love someone with first-hand knowledge to comment. But my guess is it’s some combination of trying many different approaches in parallel (each in a fresh context), then picking the one that works, and splitting the task up into sequential steps, where the output of one step is condensed and used as the input to the next step (with possibly human steering between steps).
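Roughly, that guess looks like this (run_llm and passes_tests are hypothetical stand-ins for whatever model client and test harness are actually in use, not a real API):

    from concurrent.futures import ThreadPoolExecutor

    def run_llm(prompt: str) -> str:
        # Hypothetical: one model call in a fresh context.
        raise NotImplementedError

    def passes_tests(candidate: str) -> bool:
        # Hypothetical: run the test suite against the candidate.
        raise NotImplementedError

    def best_of_n(prompt: str, n: int = 4) -> str:
        # Try n approaches in parallel, each in its own fresh context,
        # then keep the first candidate that passes the checks.
        with ThreadPoolExecutor(max_workers=n) as pool:
            candidates = list(pool.map(run_llm, [prompt] * n))
        return next((c for c in candidates if passes_tests(c)), candidates[0])

    def pipeline(task: str, steps: list[str]) -> str:
        context = task
        for step in steps:
            result = best_of_n(f"{step}\n\nContext:\n{context}")
            # Condense the output so the next step starts from a short
            # summary (this is also where a human could review and steer).
            context = run_llm(f"Summarize the key decisions in:\n{result}")
        return context

The condensation step is what keeps each fresh context small, which is presumably why this would scale past a single context window.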
Because this is the first glimpse of a world where anyone can start a large, programmatic smear campaign about you, complete with deepfakes, messages to everyone you know, a detailed confession impersonating you, and leaked personal data, all optimized to cause maximum distress.
If we know who they are, they can face consequences, or at least be discredited.
This thread has an argument going about who controlled the agent, which is unsolvable. In this case, it's just not that important. But it's really easy to see this getting bad.
In the end it comes down to human behavior given some incentives.
If there are no stakes, the system will be gamed frequently. If there are stakes, it will be gamed by parties willing to risk the costs (criminals, for example).
For certain values of "prove", yes. They range from dystopian (give Scam Altman your retina scans) to unworkably idealist (everyone starts using PGP), with everything in between.
I am currently working on a "high assurance of humanity" protocol.
If you look at the blog history it’s full of those “status report” posts, so it’s plausible that its workflow involves periodically publishing to the blog.
Not OP, but the number of cards doesn't matter. Only one shuffle can exist at a time; the "number of shuffles" is not a count of natural objects but rather the cardinality of a set. And as we know, sets and cardinalities open the gates of hell.
This doesn't mean it's not a "relevant thing to talk about". It just means that these mathematical constructs, while useful, don't maintain a direct connection to reality, kind of like complex numbers.
> Only one shuffle can exist at a time; the "number of shuffles" is not a count of natural objects but rather the cardinality of a set.
I really don't understand what this means in practice. If there are exactly 50 rocks in front of me right now, I can't talk about 51? It doesn't maintain a direct connection to reality to talk about what would happen if I threw another rock on the pile? Or, if that's connected because another rock exists, what about if I have exactly 20 chickens and I want to talk about what would happen when another is born? Is this "connected to reality" and "a number of natural objects"? Or "the cardinality of a set" instead?
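For scale: even for a real, physical deck, the set being argued about is astronomically large, while only one of its elements is ever realized at a time. A quick check in plain Python (standard library only, nothing assumed):

    import math

    # Number of distinct orderings (shuffles) of a standard 52-card deck.
    print(math.factorial(52))  # roughly 8.07e67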
Though in some cases it is a very interesting question, like why gold and copper are the color they are instead of being boring and silvery like all the other metals.
Agreed with other commenters that nothing was likely actually broadcast, but if it was, it would definitely be highly illegal and you'd have feds knocking down your door pretty quickly. They don't joke around with illegal transmissions like that.
Sorry, I was unclear; I meant 50s or 70s air travel compared to present-day air travel. (Which, on reconsideration, might not be particularly relevant, haha.)
Bicameralism appeared very, very early on. There’s a well-known case of a missing pig in 1642 Boston (population under 2000 at the time) that finally solidified splitting the assembly into two chambers, and that debate had already been going on for a while by then: https://www.americanantiquarian.org/sites/default/files/proc...