To piggyback on this question, I've been looking for a replacement for The Dinner Party Download - there hasn't been a podcast like it since they moved on. Any suggestions?
Lots more but not because of the benchmark - I live in Half Moon Bay, CA which turns out to have the second largest mega-roost of the California Brown Pelican (at certain times of year) and my wife and I befriended our local pelican rescue expert and helped on a few rescues.
From the subreddit I linked in another comment, there did seem to be some "magic" that 4o had for these kinds of "relationships". I'm not sure how much of it is placebo, but there does seem to be a strong preference in that user group.
4o was very sycophantic, so it was very willing to play along with and validate users' roleplay. OpenAI even noticed enough to talk about it in a blog post: https://openai.com/index/sycophancy-in-gpt-4o/
I suspect that OpenAI knew that their product was addictive, potentially dialed up the addictiveness as a business strategy, and is playing dumb about the whole thing.
That's an actively harsh response, pushing these people away from the idea that GPT is in a relationship with them. So even if the initial tuning was meant to increase attachment and retention rates, their actions show they don't like how it turned out to influence people who were using it as a friend/lover bot.
Then why would they have toned it down in future releases? If they really wanted to make it addictive they'd have turned it up, like social media companies do with their algorithms.
It probably is placebo. Character AI, for example, used DeepSeek, and I'm sure many grew attachments to that model. Ultimately, though, I don't even get it: models lose context very quickly, so it's hard to have long-running conversations with them, and they talk to you very sycophantically. I guess this gets fixed by implementing a good harness and memories, which I assume is what these companies did.
I don’t have any evidence but I always get a strong suspicion that a very large % of what happens on this subreddit is fake. I don’t know what the exact motives are, but just something about it isn’t right to me.
I sort of agree. I don't know if it's "fake" so much as the members of that community use it as a place to extend their private role play into public.
On the one hand they're "mourning" their AI partners, but on the other hand they have intelligent and rational conversations about the practicalities of maintaining long running AI conversations. They talk about compacting vs pruning, they run evals with MRCR, etc. These are not (all) crazy people.
Well. Huh. Without regard to whether or not it was basically healthy to get that emotionally dependent on the bot… you’d think that if they could manipulate people into being so attached to the things, they’d also be able to manipulate people into accepting the end of the situation.
Go look at any tweet by sama, or Twitter generally: it's full of pretty angry people who feel like something tangible in their life has been ripped away. I read someone posting about how they got an email from OAI saying they'd been concerned about the user's usage of the service, so they'd "upgraded them" to the "newest model". This whole situation has been really distressing for me and I'm not even involved in it. I'm SO glad they're getting rid of 4o; that thing is genuinely a scourge on our societies.
They didn't intentionally manipulate these people, though, or let's say they didn't intend for it to go as far as some of the more /intense/ users took it. It was just a byproduct of making the bot way too agreeable and follow-y. That doesn't mean they can manipulate these people into anything OpenAI wants in order to undo the issue; 4o wasn't persuading these people, it was going along with something they desperately wanted to believe.
> you’d think that if they could manipulate people into being so attached to the things, they’d also be able to manipulate people into accepting the end of the situation.
That seems like a very unlikely conclusion to me. Why is it your prior?
I agree, but not because I think that those users had stable attachment patterns and have been corrupted by an unscrupulous company, but because there is unacknowledged, often hidden, but severe pain in a large % of the population.
“Nvidia wins either way” assumes the game stays the same — but Google, Amazon, and Meta aren’t building custom silicon to beat Nvidia on price, they’re building it to never need Nvidia at all. The moat isn’t the chips, it’s CUDA lock-in, and every major player is racing to break it.
I would argue it just means the game doesn't suddenly change all at once. If the game changes slowly, the short term will still be good for Nvidia; it will take quite a while to affect them.
Google, Amazon, and Meta are to some extent solving the wrong problem, or not solving the whole problem. They're designing chips ... which they can't build, because they don't have the infrastructure and don't have long-running contracts like Nvidia does. They can't match Nvidia even at 3nm, at 10nm ... Now, maybe they can go with Intel (though several have tried and given up), but ...
Nvidia GPUs are still, at their core, reliant on the PC architecture. Inferencing on Nvidia cores will soon be like encoding an H.265 stream on CPU. I expect custom-built TPUs will get progressively more advanced hardware acceleration, while legacy aspects of the CUDA architecture (PCIe, the NVMe bus, CPU interrupts, reliance on system RAM for index tables, etc.) will eventually limit Nvidia's innovation without architecture changes, which will fill in their moat and level the playing field for Google/Amazon/eventually Apple.