Hacker News | pfannkuchen's comments

I don’t understand where Denmark’s claim to Greenland comes from in the first place.

Can someone who is unhappy about this fill me in on why Denmark should have it?


Fairly simple really. Greenland decided they wanted it and that they still want it. They are sovereign and free to decide otherwise. It is not a claim.

https://en.wikipedia.org/wiki/Greenland_and_the_European_Uni...


Okay China decided they want Taiwan and they still want it, but for some reason they don’t have it.

I think your framing may not model how territory ownership actually works…


What does the Chinese civil War have to do with this?

China (PRC) wants to be one with China (RoC)


I don't think China decided they want to be a part of Taiwan.

Ha, I misread because what was written made even less sense than flipping the order.

Greenland is clearly not sovereign! What does that even mean? They are a colonial territory of Denmark.


Did you follow the link?

“Greenland joined the then European Community in 1973 with Denmark, but after gaining autonomy in 1979 with the introduction of home rule within the Kingdom of Denmark, Greenland voted to leave in 1982 and left in 1985, to become an OCT.”

It is very easy to read about Greenland gaining autonomy in more detail as well. I don’t know why you’re having trouble with it.


Yes, absolutely.

To add more on it: Greenland right now is not self-sustaining and is funded by the Danish govt., which is willing to fund it because it sees potential in the area.

But it's still Greenland's choice, since Greenland is sovereign/autonomous. Trump/America right now are the opposite of that.

Also, all the points Trump makes are bullshit; the Danish govt. and everyone else have always been willing to help, since they don't want Russia in the area either. But yeah, I do feel as if this is just a smokescreen.


> willing to fund it because it sees potential in the Area.

This is a very transactional point of view that you put on them. I would rather guess that Denmark funds Greenland because its citizens are part of the kingdom, not because they "see potential".

Not everything is about money. Money is just a means.


I am not saying that they are doing it for the money. Of course national identity is something which can't be expressed in words, but my point was that Greenland and Denmark have a healthy relationship (unlike other colonies): Greenland is happy being part of Denmark for a multitude of reasons, and Denmark is happy too.

So the sovereign people of Greenland chose Denmark and think it's right for their country (and they are given autonomy by Denmark as well).

I just don't see how America gets any right to Greenland, and I wanted to rebut the questioning of Denmark's claim as well.

Y'know, the thing is America effectively tried to bribe the average Greenlander to get away from Denmark, but they still don't think it's worth it to get into that mess. That's how happy Greenland is with Denmark; Greenlanders pride themselves on being part of Danish culture. Nor is Denmark interested in selling Greenland (quite the contrary).

So in all of these cases America has literally zero argument, and that was what I was trying to say. Its argument amounts to "mine," or just bullying.

Denmark did sell the Virgin Islands to America once, and as for Greenland, America could've had the complete support of the Danish and Greenland governments to make bases or mineral deals through diplomatic channels; they literally said as much.

Up until now, I don't see why America would want to commit such a blunder, unless they just want the map to show Greenland as part of America for the sake of it, which is a crazy thing when you think about it.

Also, Greenland is about as close to Denmark (around 2,000 miles) as it is to America, so saying it's in America's backyard because the map makes one think so is a crazy statement too.

Are we 100% sure these guys didn't just look at a Mercator map and decide all this stupidness with zero reason? This seems so silly that even a teenager could tell.


> America effectively tried to bribe the average Greenlander to get away from Denmark.

When did they do that? I believe approaching individual citizens and paying them to oppose the country they belong to would be seen as an act of war.


It was settled by Scandinavians in 986, and with a bit of reorganisation it ended up under Denmark; Norway's competing claim was finally settled in Denmark's favour in 1933.

They signed an agreement with the US in 1951, that the US military could freely use and move between defence areas, but was not to infringe upon Danish sovereignty.



The Danes should ask Trump if they get the Virgin Islands back if Trump reneges on the treaty.

Check out the treaty of the Danish West Indies.

https://en.wikipedia.org/wiki/Treaty_of_the_Danish_West_Indi...


Why should USA have Hawaii or California?

You can read up on it yourself.


Well personally I don’t think the USA has a strong claim to Hawaii. It’s basically a strategic military outpost that got retconned into being a state after the war.

It’s kind of weird, like sure Wikipedia articles exist but it’s not as if people who have some position are basing their position entirely on the Wikipedia article and I can infer the structure of their position from the Wikipedia article. So I’m asking about what people think, and all I hear back is “reeeeeee”.


It caused reactions because it was the wrong thing to ask in this thread.

"Woman raped" - Tell me again, why did she like miniskirts?

"American citizen killed by ICE" - Tell me again, why did he look like he came from Nicaragua?

By themselves they may have been somewhat OK, but in this context, asking these questions gives them a whole different meaning.


That would only justify Greenland’s independence, not Trump’s bullying to get Greenland.

What next? Sicily?


Does anyone else feel like we are moving in the wrong direction?

Like every discussion I’ve seen about childcare takes the 1950s as the baseline for some reason. Like being a housewife in the 1950s sucked and it was unfair that the women had to do it and the men didn’t have to. Like people don’t explicitly say this, but this is what it boils down to.

And being a housewife in the 1950s (or 1970s or whatever) did suck. But why did it suck?

It sucked (and still does) because of the breakdown of the extended clan. A long time ago there would be a ton of family very close by to mutually spread the load.

So why did clan breakdown happen, and can we reverse that instead of pushing further and further into more and more atomization? I don’t really see that being discussed, it’s just like “1950s house wifing bad” and the analysis stops there.

One thing people are going to say is that family members are too different from each other now, or that they have economic incentives to scatter. Well, can we make them stop becoming so different? Can we delete the economic incentives? Etc.


It sucked because society back then (and currently in some cultures) was structured in such a way that women were de facto forced into marriage and motherhood, even if they didn't want these things. Women couldn't easily open bank accounts or buy cars without their husbands present; consequently, leaving bad marriages was considered very risky, to say nothing of the social ostracization that would ensue.

I thought that AskHistorians would have a more eloquent answer to your question. As expected, they do: https://old.reddit.com/r/AskHistorians/comments/16xsyoi/in_m...


Marriage is made up though, it’s a tool for structuring human groups that are a big jumble of individuals. In a clan structure you wouldn’t need marriage, necessarily, and “motherhood” would look entirely different from how it does today.

My point is that lots of women do reject what exists today for those, and the conservative reaction to this may be wrong. But just because the reaction is wrong doesn’t mean the “progress” is correct. We may be doing the wrong kind of progress, and the “conservatives” may be trying to conserve an overly recent and short lived model. They should instead be trying to conserve (or restore, really) a much older model, one that would resonate better with women and humans in general. IMO anyway.

And replies to me are all stuck in the modern progressive/conservative dialectic, which is not useful or interesting to discuss. We need to break out of that structure.


There is probably a soup kitchen in your town. Nothing obligates you to use it. That doesn’t make it repressive on your diet.

It’s fair to reject state-provided childcare. It’s mean to deny that to everyone else.


I have no clue how your response relates to what I said. Either I’m not understanding you or you’re not understanding me. Since my comment is much longer I’m going to wager the latter.

It isn’t genetic, it’s moral programming.

> I didn't pay hundreds of dollars for a mechanical keyboard, tuned to make every keypress a joy, to push code around with a fucking mouse

Can’t you use vim controls?


> It goes in one direction, turns around, and goes in the other direction.

To be fair, the peninsula is basically a long hallway. I’m not really sure where else it would go?


I would expect a regional system to connect an entire regional area.

Caltrain connects two parts of the Bay Area: San Francisco and the South Bay. BART connects the entire East Bay to San Francisco. In a functioning system, they would both just be legs and not two completely separate systems.

The only place they connect appears to be in Millbrae and not near any large hubs.


They will soon connect in San Jose.

I wouldn't consider "soon" to mean ten years.

Six miles, 12 billion dollars, opening in 2036.

https://en.wikipedia.org/wiki/Silicon_Valley_BART_extension


What's the holdup? Do they need to source more 5.25 inch floppy drives?

Don’t forget the SF Downtown Rail Extension, planned since the 1990s supposedly.

https://www.tjpa.org/portaldtx/about-portal

https://www.caltrain.com/media/17998/download


Is the argument just that the MNR and NYC subway, or Boston's T and commuter rail, are better integrated than BART and the Caltrain? Seems pretty great now but then I remember before the renovations at 4th and King.

My argument is that Caltrain mainly connects the two largest and richest cities in the SF Bay Area, which are both population and job centers.

It would be like calling the Google private shuttles a model for public buses to follow.


Long Island is even more of a long hallway than the peninsula. The LIRR manages to have multiple trunks and something like 10 different branch lines. One thing that made it possible is that LI is much flatter terrain than the peninsula.

The main trunk lines in Long Island are about 3-4 miles apart. Northwest of around Cupertino or so, the mountains edge too close to the bay shoreline for a second trunk line to be viable. Your best bet would be plonking a line along about 85, but the right-of-way doesn't exist to actually hook that line up to the existing line in any useful way.

And outside of that, basically everywhere else you'd consider plonking another path already has some service: BART runs up the east shore of the bay, as it does west of San Bruno Mountain. You have two mountain crossings covered by BART and one by ACE. The main missing things are curving BART back into San Jose and reactivating the Dumbarton Bridge.


I've wondered about running BART from Fremont to East Palo Alto and Redwood City via Dumbarton. Not sure what the ridership would be though. I looked at the Dumbarton bridge traffic and it's the least of the three bridges and pales in comparison to the bay bridge.

Still, if you built that, the gap between Millbrae and Redwood City is 12 miles.


Your last sentence was going to be my reply. The peninsula is really linear along 101 / the historic El Camino. There really isn’t anything else to connect to.

The LIRR still had to do plenty of tunneling to build the East Side Access station, though. Still, it opened in 2023! NYC is also still building the Second Avenue Subway --- slowly, haltingly, and at near-ruinous expense, but a real expansion to the network is actually happening. By US standards, that's a miracle.

There used to be a rail line that went closer to the base of the mountains but they tore that down to build Foothill Expressway and other roads.

I’m confused why it won’t clear an existing infection while still working on future infections.

Here is what I know (which may be limited, I’m not a biologist) and also what I’m assuming:

1) The body apparently doesn’t eliminate the virus on its own when it picks up the virus unvaccinated. I’m assuming that this is because it isn’t registered by the immune system as being harmful, for whatever reason.

2) The attenuated virus in the vaccine would not produce an immune response without the adjuvant, because even viruses that are registered as harmful are not reliably registered as harmful when attenuated. This is where the adjuvant packaged with the attenuated virus comes in - it is registered by the body as harmful, and in its confusion the immune system also adds the virus to the registry.

So, naively, if the immune system previously didn’t register the natural infection as harmful, and if it does register the virus in the vaccine as harmful, why doesn’t the registry entry for the vaccine also get applied to the natural infection, the same way as it does for a person who wasn’t previously infected?

Is there some kind of specificity hierarchy, along with a “not harmful” registry alongside the “harmful” registry, such that the natural infection continues to get its previous classification of “not harmful” because the “not harmful” registry entry is more specific than the “harmful” registry entry? That’s the only explanation I can (naively) think of.

And if that’s the case, could we first wipe out the registry by infecting the person with measles, and then give them the HPV vaccine? Just kidding about this part!


I am assuming they meant it won't clear one strain that you already have but may protect against another one you don't


Yes, I understand that. Would you mind reading my comment above? The thing I’m confused about is why it won’t protect you against one you already have.

Like for viruses that have a vaccine, normally you wouldn’t vaccinate someone who had the virus already because the vaccine would be redundant - they already have natural immunity.

But in the case of HPV, apparently people don't acquire effective natural immunity; the naturally acquired immunity is worse than the vaccine-induced one. So why can't the vaccine-induced immunity take effect when the natural one is absent (or at least ineffective)? That's what I don't understand. It seems like the natural immunity prevents the vaccine-induced immunity from developing, but the natural immunity in this case doesn't seem to work, while the vaccine-induced immunity does. Why…?


> we didn't speed up the priority orders, we just purposefully delayed non-priority orders by 5 to 10 minutes to make the Priority ones "feel" faster by comparison

This in particular sounds very fake to me.

If they are delaying the regular orders, then they are either a) having drivers sit idle or b) freeing up resources for the priority orders.

In the (b) case this would just deliver the promised prioritization behavior, not evil and not what OP is claiming.

In the (a) case where they are actually having drivers sit idle, then they are reducing the throughput of their system significantly. Which might be fine for a quick A/B test on a subset of customers, but as framed this is basically a psy op to trick customers en masse into thinking the priority order is faster when it really isn’t. To have that effect, you would need to deploy this to all customers long enough for them to organically switch back and forth between priority and regular enough times to notice the difference. That seems like it couldn’t possibly be better than the much simpler option of just implementing the priority behavior and reducing its effect down to zero slowly over time (which would be evil but isn’t the claim).


Beyond an initial effect when the delay is first implemented, adding a delay would increase the latency (waiting time) but not reduce the throughput (orders completed per unit time). It's queuing theory.

A way to think of it is that the drivers that are made idle by adding a delay will be kept busy delivering previously delayed orders.


Consider that demand for food delivery is not constant throughout the day and night.

I believe throughput would actually be reduced every time demand increases, which would happen in the morning, at various meal times, around the different opening and closing times of various restaurants, etc.

I do agree that the throughput reduction would be more complex than “1 driver sits idle for <delay time> every 1 order”.


Throughput would still remain unchanged. Suppose that the "lunch rush" is from 11AM to 1PM, and imagine that it's uniform for the sake of simplicity. Then drivers would end up being fully utilized from 11:10AM to 1:10PM, instead of 11 to 1. The 10 minute lag at the end where drivers are still finishing the queue makes up for the 10 minute delay at the start.
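
This can be sketched with a toy single-driver simulation (all numbers made up, and `completion_times` is a hypothetical helper, not anything from a real delivery system): every completion time shifts by exactly the delay, but the number of orders delivered is unchanged.

```python
# Toy single-driver delivery queue: a fixed dispatch delay shifts every
# completion time by the delay, but the count of completed orders
# (throughput) is unchanged. All numbers are made up for illustration.

def completion_times(arrivals, service_min, delay_min=0):
    """Completion time (minutes) of each order: one driver, FIFO.

    Each order only becomes dispatchable `delay_min` after it arrives.
    """
    free_at = 0  # time the driver next becomes free
    done = []
    for t in arrivals:
        start = max(t + delay_min, free_at)
        free_at = start + service_min
        done.append(free_at)
    return done

# "Lunch rush": an order every 5 minutes for two hours, 5 min per delivery.
arrivals = list(range(0, 120, 5))
base = completion_times(arrivals, service_min=5)
delayed = completion_times(arrivals, service_min=5, delay_min=10)

# Every order finishes exactly 10 minutes later (latency up by the delay)...
assert all(d - b == 10 for b, d in zip(base, delayed))
# ...but the same number of orders gets delivered (throughput unchanged).
assert len(base) == len(delayed) == 24
```

The driver is never artificially idled between orders; the whole busy period just slides 10 minutes later, which is the "fully utilized from 11:10 to 1:10 instead of 11 to 1" picture.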


Oh yeah that makes sense actually. Thanks for explaining.


Exactly. The delay would be a one-time event.


> In the (a) case where they are actually having drivers sit idle, then they are reducing the throughput of their system significantly.

If there are 50 deliveries per driver per shift and I want to deliver everything 5 minutes later, I don't need the driver to idle for 50 × 5 minutes.

The driver only needs to start the first delivery 5 minutes later, at a time cost of 1 × 5 minutes. Then they finish it 5 minutes later, and hence start the second delivery 5 minutes later, without standing idle between deliveries.

And if I pay the workers per delivery, that 1 × 5 minutes of initial delay doesn't cost me anything except worker morale.


> If they are delaying the regular orders, then they are either a) having drivers sit idle or b) freeing up resources for the priority orders.

Delay would be easy. Just delay placing the order at the restaurant, or delay sending the order to the driver. That won’t introduce any NOP wait states.

This is all quite possible.

That said, I’m skeptical that this is a real story, but I guess time will tell.

I do have one prediction, though: this story will die down pretty quickly. I don’t think because evil. It’s just too unsupportable.


I think it's (b), but I still think it's a dirty trick. It's like when budget airlines prioritise the customers that pay to board first. If no one paid for it, or if everyone paid for it, the effect would be the same, except in the latter case the airline makes extra money.


Or option (c), they offer their drivers less for the job for 5 minutes to see if they'll take it anyway; if not then they pay a more reasonable amount.


I think they all do that to an extent. But they also kinda force drivers to take lower-value orders to keep their accept rate up. A few food delivery drivers have made videos on it.

No matter what all of the apps are designed to screw the customers, the restaurants, and the drivers.

I’ve stopped using them stateside ever since that California law went into effect that basically jacked prices up even more, plus tips on top. It’s pretty ridiculous. So glad my local pizza place still has their own drivers.


With guys who are in prestigious/powerful corporate positions, I wonder if there is a fundamental issue where everybody tends to brown nose them, but female brown nosing sometimes gets misinterpreted as flirtation and interest.

And because guys in these sorts of positions actually do get an overpowered amount of real interest from women, they may have a harder time detecting inauthentic interest than, say, a random janitor guy whom a woman is being artificially nice to for some reason.

And then if the guy mistakenly thinks the woman is interested and makes a move, the woman may then in the moment feel unsure about what to do, because an abrupt rejection that contradicts their earlier outward behavior may make them feel not good, they might feel like they caused it, etc (which I think lines up with accounts I’ve read, except they don’t mention the brown nosing part of the theorized pattern).

This doesn’t excuse anything, necessarily, I just wonder if there are some complex dynamics at play. This setup we have where sexual relations are at will, subject only to consent, is not that old, so it wouldn’t be surprising if the system as-is still produces very bad outcomes at times even if the parties involved are all behaving in a non-psychopathic way.


You might want to go read the actual accusations. One woman said Lasseter felt her up under the table at meeting(s).


He was known (women warned each other about him) to hug pretty women against their will and try to kiss them on the mouth.

https://variety.com/2017/film/news/john-lasseter-pixar-disne...

This is unacceptable, period.


Nobody is saying it’s acceptable. Are we unable to have discussions about the causes of bad things, or do we just have to frown and say they are bad over and over again?


I’m not sure how that goes against what I said? If the man is confused and thinks the woman is very interested in him (again because he is confused), that could absolutely happen. I guarantee it’s happened in other cases where the two have gone on to happily date or marry. The only difference would be that in those cases the man wasn’t confused about the woman’s interest.


There's a reason this stuff is illegal. The person being groped is effectively helpless since the person doing the groping is in a position of power over their life.

It really doesn't matter if the boss is confused. Someone with a position of power over another should not do this period. The person being groped risks their job if they speak up.


The problem with your hypothesis is that 'a woman being nice to you' (brown-nosing or otherwise) is in absolutely no way whatsoever flirting. Flirting is an entirely different way of behaving.


Thank you for verifying how all women behave and act, as if they are all identical.

Then, using that stereotypical behaviour to chastise others.

You also presume that "brown nosing" is the same category as "being nice". It's not. Brown nosing is a non-genuine, fabricated expression.

So is the woman faking "being nice" due to brown nosing, or faking "being nice" due to sexual interest?

The parent poster was merely wondering if this is hard to discern, and even indicated that it "doesn't make it right".

Your response is part of what is wrong with such dynamics today. Knee-jerk reactions to speculation are not called for.


Did you miss where I said “misinterpreted”?

Men also sometimes misinterpret waitresses as flirting with them when they aren’t, which is another common entry point for sexual harassment. What I’m describing would be similar to that. Would you say that doesn’t happen either, or are they somehow completely different?


Yeah, sure, the plot of The Hot Chick is totally imaginary and not used as a satire on the behaviour of some people. Especially in those scenes at the cafe. Yes.

https://www.imdb.com/title/tt0302640/?ref_=sr_t_3


Adjusted for inflation?


From the article:

> The compensation figures have been adjusted into 2025 dollars to account for inflation.


I skimmed but missed that, thanks! I have seen so many times where comparisons are breathlessly made without adjustment in the media, so I’m pleasantly surprised it was done here.


I think part of what is happening here is that different developers on HN have very different jobs and skill levels. If you are just writing a large volume of code over and over again to do the same sort of things, then LLMs probably could take your job. A lot of people have joined the industry over time, and it seems like the intelligence bar moved lower and lower over time, particularly for people churning out large volumes of boilerplate code. If you are doing relatively novel stuff, at least in the sense that your abstractions are novel and the shape of the abstraction set is different from the standard things that exist in tutorials etc online, then the LLM will probably not work well with your style.

So some people are panicking and they are probably right, and some other people are rolling their eyes and they are probably right too. I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed. I am skeptical this will happen though, as there doesn’t seem to be a way around the problem of the giant indigestible hairball (I.e as you have more and more boilerplate it becomes harder to remain coherent).


Indeed, discussions on LLMs for coding sound like what you would expect if you asked a room full of people to snatch up a 20 kg dumbbell once and then tell you if it's heavy.

> I think the real risk is that dumping out loads of boilerplate becomes so cheap and reliable that people who can actually fluently design coherent abstractions are no longer as needed.

Cough front-end cough web cough development. Admittedly, original patterns can still be invented, but many (most?) of us don't need that level of creativity in our projects.


Absolutely this, and TFA touches on the point about natural language being insufficiently precise:

AI can write you an entire CRUD app in minutes, and with some back-and-forth you can have an actually-good CRUD app in a few hours.

But AI is not very good (anecdotally, based on my experience) at writing fintech-type code. It's also not very good at writing intricate security stuff like heap overflows. I've never tried, but would certainly never trust it to write cryptography correctly, based on my experience with the latter two topics.

All of the above is "coding", but AI is only good at a subset of it.


Generating CRUD is like solving cancer in mice: we already have a dizzying array of effective solutions… Ruby on Rails, Access 97, model-first ORMs with GUI mappers. SharePoint lets anyone do all the things easily.

The issue is and always has been maintenance and evolution. Early missteps cause limitations, customer volume creates momentum, and suddenly real engineering is needed.

I’d be a lot more worried about our jobs if these systems were explaining to people how to solve all their problems with a little Emacs scripting. As is they’re like hyper aggressive tech sales people, happy just to see entanglements, not thinking about the whole business cycle.


Go with Laravel and some admin packages and you generate CRUD pages in minutes. And I think with Django, that is builtin.

But I don’t think I’ve seen pure CRUD on anything other than prototype. Add an Identity and Access Management subsystem and the complexity of requirements will explode. Then you add integration to external services and legacy systems, and that’s where the bulk of the work is. And there’s the scalability issue that is always looming.

Creating CRUD app is barely a level over starting a new project with the IDE wizard.


>Creating CRUD app is barely a level over starting a new project with the IDE wizard.

For you, maybe. But for a non-programmer who's starting a business or just needs a website, it's the difference between hiring some web dev firm and doing it themselves.


> it's the difference between hiring some web dev firm and doing it themselves.

anecdote but i've had a lot of acquaintances who started at both "hiring some web dev firm" and "doing it themselves" with results largely being the same: "help me fix this unmaintainable mess and i will pay you x"...

jmo but i suspect llms will allow for the latter to go further before the "help me" phase but i feel like that aint going away completely...


Just like my previous comments, much depends on the specifics.

My wife's sister and her husband run a small retail shop in $large_city. My sister-in-law taught herself how to set up and modify a website with a shopify storefront largely with LLM help. Now they take online orders. I've looked at the code she wrote and it's not pretty but it generally works. There will probably never be a "help me fix this unmaintainable mess and I will pay you" moment in the life of that business.

The crux of my point is this: In 2015 she would have had to hire somebody to do that work.

This segment of the software industry is where the "LLMs will take our jerbs" argument is coming from.

The people who say "AI is junk and it can't do anything right" are simply operating in a different part of the industry.


> and with some back-and-forth you can have an actually-good CRUD app in a few hours

Perhaps the debate is on what constitutes "actually-good". Depends where the bar is I suppose.


Beauty is in the eye of the beholder. Litigating our personal opinions about "actually-good" is irrelevant and pointless.


> different developers on HN have very different jobs and skill levels.

Definitely this. When I use AIs for web development they do an ok job most of the time. Definitely on par with a junior dev.

For anything outside of that they're still pretty bad. Not useless by any stretch, but it's still a fantasy to think you could replace even a good junior dev with AI in most domains.

I am slightly worried for my job... but only because AI will keep improving and there is a chance it will be as good as me one day. Today it's not a threat at all.


Yea, LLMs produce results on par with what I would expect out of a solid junior developer. They take direction, their models act as the “do the research” part, and they output lots of code: code that has to be carefully scrutinized and refined. They are like very ambitious interns who never get tired and want to please, but often just produce crap that has to be totally redone or refactored heavily in order to go into production.

If you think LLMs are “better programmers than you,” well, I have some disappointing news for you that might take you a while to accept.


> LLMs produce results on par with what I would expect out of a solid junior developer

This is a common take but it hasn't been my experience. LLMs produce results that vary from expert all the way to slightly better than markov chains. The average result might be equal to a junior developer, and the worst case doesn't happen that often, but the fact that it happens from time to time makes it completely unreliable for a lot of tasks.

Junior developers are much more consistent. Sure, you will find the occasional developer that would delete the test file rather than fixing the tests, but either they will learn their lesson after seeing your wth face or you can fire them. Can't do that with llms.


I think any further discussion about quality just needs to have the following metadata:

- Language

- Total LOC

- Subject matter expertise required

- Total dependency chain

- Subjective score (audited randomly)

And we can start doing some analysis. Otherwise we're pissing into ten kinds of winds.

My own subjective experience: earth-shattering for webapps in HTML and CSS (because I'm terrible and slow at that), annoyingly good but usually a bit wrong for planning and optimization in Rust, and horribly lost at systems design or debugging a reasonably large Rust system.


I agree in that these discussions (this whole hn thread tbh) are seriously lacking in concrete examples to be more than holy wars 3.0.

Besides one point: junior developers can learn from their egregious mistakes, llms can't no matter how strongly worded you are in their system prompt.

In a functional work environment, you will build trust with your coworkers little by little. The pale equivalent in LLMs is improving system prompts and writing more and more ai directives that might or might not be followed.


This seems to be one of the huge weaknesses of current LLMs: Despite the words "intelligence" and "machine learning" we throw around, they aren't really able to learn and improve their skills without someone changing the model. So, they repeat the same mistakes and invent new mistakes by random chance.

If I were tutoring a junior developer and he accidentally deleted the whole source tree or did something similarly egregious, that would be a milestone learning point in his career, and he would never ever do it again. But if the LLM does it accidentally, it will be apologetic, yet after the next context-window clear it has the same chance of doing it again.


> Besides one point: junior developers can learn from their egregious mistakes, llms can't no matter how strongly worded you are in their system prompt.

I think if you set an LLM off to do something and it makes an "egregious mistake" in the implementation, and then you adjust the system prompt to explicitly guard against that or steer toward a different implementation and restart from scratch, yet it makes the exact same "egregious mistake", then you need to try a different model/tool than the one you've been using.

It's common with smaller models, or bigger models that are heavily quantized, that they aren't great at following system/developer prompts, but that really shouldn't happen with the available SOTA models. I haven't had something ignored like that in years.


And honestly, this is precisely why I don't fear unemployment, but I do fear less employment overall. I can learn, get better, and use LLMs as a tool, so there's still a "me" there steering. Eventually this might not be the case. But if automating things has taught me anything, it's that removing the person is usually such a long-tail cost that it's cheaper to keep someone in the loop.

But is this like steel production or piloting (a few highly trained experts stay in the loop), or more like warehouse work (lots of automation removed skills like driving, inventory work, etc.)?


I can in fact fire an LLM. It's even easier than firing a junior developer.

Or rather, it's more like a contractor. If I don't like the job they did, I don't give them the next job.


You say this as if web development isn't 90% of software.


> If you are just writing a large volume of code over and over again

But why would you do that? Wouldn't you just have your own library of code eventually that you just sell and sell again with little tweaks? Same money for far less work.


People, at least novice developers, tend to prefer fast, quick boilerplate that makes them look effective over spending an hour just thinking and designing, then implementing some simple abstraction. This is true today, and it has been true for as long as I've been in programming.

Besides, not all programming work can be abstracted into a library and reused across projects: not because it's technically infeasible, but because the client doesn't want it, can't allow it for legal reasons, or the development process at the client's organization simply doesn't support that workflow. Those are just the reasons off the top of my head that I've encountered before, and I'm sure there are more.


But people don't stay novices after years or decades. Of course, when you write the boilerplate for the 20th time maybe you still accept it, but by the 2000th time, I bet you do the lazy thing and just copy it.

> cannot for legal reasons or ...

Sure, you can't copy trade secrets, but that's also not the boilerplate part. Copying e.g. a class hierarchy, renaming everything, and replacing the class contents that represent the domain won't be a legal problem, because it isn't original in the first place.
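A hedged sketch of that "copy and rename" reuse: the snippet, the class, and the domain words below are all invented for illustration, not taken from any real project.

```python
# A boilerplate fragment carried over from an earlier (fictional) project.
TEMPLATE = '''
class InvoiceRepository:
    def save(self, invoice): ...
    def find_by_id(self, invoice_id): ...
'''

def retarget(snippet: str, old: str, new: str) -> str:
    """Swap both the CamelCase and lowercase forms of a domain word."""
    snippet = snippet.replace(old.capitalize(), new.capitalize())
    return snippet.replace(old.lower(), new.lower())

# Reuse the same shape for a new domain: Invoice -> Order.
print(retarget(TEMPLATE, "invoice", "order"))
```

The structure (a repository class with `save` and `find_by_id`) survives unchanged; only the domain vocabulary is swapped, which is exactly the non-original part.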


> But people don't stay novices after years/decades

Some absolutely do. I know programmers who entered web development at the same time as me, and now after decades they're still creating typical CRUD applications for whatever their client today is, using the same frameworks and languages. If it works, makes enough money and you're happy, why change?

> Copying e.g. a class hierarchy and renaming all the names and replacing the class contents that represent the domain, won't be a legal problem, because this is not original in the first place.

Some code you produce for others definitely falls under their control, though it obviously depends on the contracts and the laws of the country you're in. But I've written code for others that I couldn't just "abstract into a FOSS library and use in this project", even if it wasn't trade secrets or whatnot, just some utility for reducing boilerplate.


> "abstract into a FOSS library and use in this project"

That is not what I meant. My idea was more like "copy ten lines from this project, then lines from that project, the class from here, but replace every line before the commit ...".

I shouldn't have used the word library, as I did not mean output from the linker, but rather a colloquial meaning: a loose collection of snippets.


That’s a very good point I hadn’t heard explained that way before. Makes a lot of sense and explains a lot of the circular debates about AI that happen here daily.


>at least in the sense that your abstractions are novel and the shape of the abstraction set is different from the standard things that exist

People shouldn't be doing this in the first place. Existing abstractions are sufficient for building any software you want.


> Existing abstractions are sufficient for building any software you want.

Software that doesn't need new abstractions also already exists. Everything you would need already exists and can be bought much more cheaply than you could build it yourself. Accounting software exists, Unreal Engine exists and many games use it, so why would you ever write something new?


>Software that doesn't need new abstractions is also already existing

This isn't true, due to the exponential growth in the number of ways you can compose existing abstractions. The chance that a specific permutation already has existing software is small.
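A rough back-of-the-envelope sketch of that growth: even with a fixed toolbox of n existing abstractions, the number of distinct ordered pipelines built from k of them (k-permutations of n) explodes quickly. The numbers below are illustrative, not from the thread.

```python
from math import factorial

def ordered_compositions(n: int, k: int) -> int:
    """Ways to choose k distinct abstractions from n and arrange
    them in a specific order: n! / (n - k)!"""
    return factorial(n) // factorial(n - k)

# A toolbox of 30 libraries, composed into 5-step pipelines:
print(ordered_compositions(30, 5))  # 17100720 distinct pipelines
```

Even before accounting for configuration choices within each component, the odds that any one specific composition has already been built and shipped are tiny.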


I suspect nobody with a job produces abstractions that are always novel. But there may be people who find abstractions that are novel to their particular field, because most people in that field aren't familiar with them, or who (infrequently) come up with novel abstractions that improve on existing ones.


The new abstraction is “this corporation owns this IP and has engineers who can fix and extend it at will”. You can’t git clone that.

But if there is something off the shelf that you can use for the task at hand? Great! The stakeholders want it to do these other 3000 things before next summer.


Software development is a bit like chess. 1. e4 is an abstraction available to all projects, 3. Nc3 is available to 20% of projects, while 15. Nxg5 is unique to your own project.

Or: the abstractions in your project form a dependency tree, where the nodes near the root are universal, e.g. C, Postgres, JSON, while the leaf nodes are abstractions peculiar to your own project.


The possible chess moves are already known ahead of time. Just because an AI can't make up a move like Np5 the way a human could, that doesn't mean an AI can't play chess. It will be fine just using the existing moves that have been found so far. The idea that we still need humans to come up with new chess moves is not a requirement for playing chess.

