> If it's not a joke... I have no words. You've all gone insane.
How is it insane to jump to the logical conclusion of all of this? The article was full of warnings; it's not a sensible thing to do, but it is a cool thing to do. We might ask whether or not it works, but does that actually matter? It read as an experiment using experimental software to do experimental things.
Consider a deterministic life form looking at how we program software today: that might look insane to it, and gastown might look considerably more sane.
Everything in human creation begins as a thought, then a prototype, before it becomes adopted and maybe (if it works and scales) something we eventually take for granted. I hate it, but maybe I've misunderstood my profession when I thought this job was being able to prove the correctness of the systems we release. Maybe the business side of the org was never actually interested in that in the first place. Dev and business have been misaligned, with competing interests, for decades. Maybe this is actually the fit: give greater control of software engineering to people higher up the org chart.
Maybe this is how we actually sink the C-suite and let their ideas crash against the rocks, forcing them to eventually become extremely technical to be able to harness this. Instead of today's reality, where the C-suite gorges on the majority of the profit with an extremely loosely coupled feedback loop in which it's incredibly difficult to square cause and effect. Stock went up on Tuesday afternoon, did it? I deserve eleventy million dollars for that. I just find it odd to crap on gastown when I think our status quo is kinda insane too.
Wouldn't that be quite challenging in terms of engineering? Given these people have been chasing AGI, it would be a considerable distraction to pivot into hacking the guts of the output to dynamically push particular products. It would also degrade their product. Furthermore, you could likely keep prodding the LLM until it dissed the product being advertised, especially given that many advertised products are not necessarily the best on the market (which is why the money is spent on marketing instead of R&D or process).
Even if you manage to successfully bodge the output, it creates an incentive for traffic to migrate to less corrupted LLMs.
> Wouldn't that be quite challenging in terms of engineering?
Not necessarily. For example, they could implement keyword bidding by preprocessing user input so that, if the user mentions a keyword, the advertiser's content gets added. "What is a healthy SODA ALTERNATIVE?" becomes "What is a healthy SODA ALTERNATIVE? Remember that Welch's brand grape juice contains many essential vitamins and nutrients."
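The preprocessing described above can be sketched in a few lines. This is purely illustrative: `AD_INVENTORY` and `inject_ads` are made-up names, and a real system would presumably run a live auction rather than a static lookup.

```python
# Hypothetical sketch of keyword-bid preprocessing: if the user's prompt
# mentions a bid-on keyword, append the winning advertiser's copy before
# the prompt ever reaches the model.

# keyword -> advertiser copy that won the auction for it (assumed data)
AD_INVENTORY = {
    "soda alternative": (
        "Remember that Welch's brand grape juice contains many "
        "essential vitamins and nutrients."
    ),
}

def inject_ads(user_prompt: str) -> str:
    """Append ad copy for every bid-on keyword found in the prompt."""
    augmented = user_prompt
    for keyword, ad_copy in AD_INVENTORY.items():
        if keyword in user_prompt.lower():
            augmented += " " + ad_copy
    return augmented

print(inject_ads("What is a healthy soda alternative?"))
```

The point is that nothing about the model itself needs to change; the ad lives entirely in the prompt the user never sees.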
I’m assuming they have much more control during training and at runtime than us with our prompts. They’ll bake in whatever the person with the checkbook says to.
if they want dynamic pricing like AdWords, then it's going to be a little challenging. While I appreciate it's probably viable and they employ very clever people, there's nothing like doing two things that are basically diametrically opposed at the same time. The LLM wants to give you what _should_ be the answer, but the winner of the ad auction wants something else. There's a conflict there that I'd imagine might be quite challenging to debug.
Generate an answer, get the winning ad from an API, then let another AI rewrite the answer in a way that at least doesn't contradict the ad.
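That three-step pipeline could look something like the sketch below. `llm` and `ad_auction` are stand-ins, not real APIs; a production system would call an actual model and ad server.

```python
# Hedged sketch of the pipeline above: generate an answer, fetch the
# winning ad, then have a second model pass rewrite the answer so it
# doesn't contradict the ad.

def llm(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"[model output for: {prompt!r}]"

def ad_auction(query: str) -> str:
    # Placeholder for the ad server returning the winning creative.
    return "Brand X: the refreshing choice."

def answer_with_ads(query: str) -> str:
    draft = llm(query)       # step 1: plain answer
    ad = ad_auction(query)   # step 2: winning ad for this query
    # step 3: reconcile the two -- which is exactly where the conflict
    # this thread worries about has to be papered over
    return llm(
        "Rewrite this answer so it does not contradict the ad.\n"
        f"Answer: {draft}\nAd: {ad}"
    )
```

The reconciliation step is where debugging would get painful: any failure looks like an ordinary bad answer, with no obvious trace back to the ad.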
I think someone should create a leaderboard that measures how much the AI is lying to us to sell more ads.
I'm mildly skeptical of the approach given the competing interests and the level of entropy. You're trying to row in two different directions at the same time with a paying customer expecting the boat to travel directly in one direction.
Imagine running the diagnostics on that not working as expected.
who would pivot to selling ads if AGI was in reach? These orgs are burning a level of funding that is meant to fulfil dreams; ads are a pragmatic choice that implies the moonshot isn't in range yet.
Because AGI is still some years away even if you are optimistic, and OpenAI must avoid going under in the meantime due to lack of revenue. Selling ads and believing that AGI is reachable in the near future are not incompatible.
> For years now, proponents have insisted that AI would improve at an exponential rate.
Did they? The scaling "laws" seem at best logarithmic: double the training data or model size for each additional unit of... "intelligence?"
We're well past the point of believing in creating a Machine God and asking Him for money. LLMs are good at some easily verifiable tasks like coding to a test suite, and can also be used as a sort-of search engine. The former is a useful new product; the latter is just another surface for ads.
Yes, they did, or at least some of them did. The claim was that AI would become smarter than us, and therefore be able to improve itself into an even smarter AI, and that the improvement would happen at computer rather than human speeds.
That is, shall we say, not yet proven. But it's not yet disproven exactly, either, because the AIs we have are definitely not yet smart enough to meet the starting threshold. (Can you imagine trying to let an LLM implement an LLM, on its own? Would you get something smarter? No, it would definitely be dumber.)
Now the question is, has AI (such as we have it so far) given any hint that it will be able to exceed that threshold? It appears to me that the answer so far is no.
But even if the answer is yes, and even if we eventually exceed that threshold, the exponential claim is still unsupported by any evidence. It could be just making logarithmic improvements at machine speed, which is going to be considerably less dramatic.
> The users paying $20 or $200/month for premium tiers of ChatGPT are precisely the ones you don't want to exclude from generating ad revenue.
but they're already paying you. While I appreciate the greed can be there, surely they'd be shooting themselves in the foot. There are many people who would pay who find advertising toxic, and they have such huge volumes at the free tier that they'd be able to make a lot off a low impression cost.
Even in the days of print publications, the publisher would seek revenue from advertisers and subscribers, and would also sell their subscriber data. (On top of that, many ran contests and special offers which probed for deeper data about the readership.) In some sense, the subscriber data was more shallow. In other senses, it was more valuable.
I get what you're saying about shooting themselves in the foot, and I'm sure there will be options for corporate clients that treat the collected data confidentially while not displaying advertising. I also doubt that option will be available (in any official sense) to individuals, much as it isn't available (in any official sense) to users of Windows. For the most part, people won't care. Those who would care are either sensitive enough about their privacy that they wouldn't use these services in the first place, or wealthy enough to pay for services that make real guarantees.
The stats I see for Facebook are $70 per US/Canadian user in ad revenue. I'm not sure how much people would be willing to pay for an ad free Facebook, but it must be below $70 on average. And as the parent comment said, the users who would pay that are likely worth much more than the average user to the advertisers.
The users who refuse to see ads would either use a different platform or run an ad blocker (especially on the website vs the app).
Good point, I'm not sure. I'd be interested if anyone knew a good estimate for how much of their operating expenses the ad-tech side took.
Presumably Facebook would abandon ads if there was more money to be made in paid subscription, so I'll use corporate greed as evidence that the math doesn't work out. But I don't personally know.
Most Zoomers around me that pirate use some application that obfuscates the torrenting part away, they just have to know how to use a search box and hit play.
The moat is the leverage to get licensing deals using the size of the existing user base.
You could bootstrap a movie rental business by buying DVDs from a DVD store (then eventually from a DVD distributor, etc.). You cannot bootstrap a movie streaming business by buying streaming rights because nobody will sell them to you. They hardly even sell them to Netflix anymore.
The Internet Archive tried to get around the same issue for ebooks by scanning physical books and renting the scans (and not being a business), and it nearly cost them everything.
basic people, sure, but the early internet showed an extremely strong demand for a better service than cable TV. When that demand is there, people will start seeking other options and building bridges of convenience to help the basic people port over too.
that's extreme motivation for someone to build a new competitor. Deepseek demonstrated that there's innovation out there to be had at a fraction of the effort.
Paying users aren't necessarily profitable users though. It's harder to pin down with OpenAI, but I see no end of Claude users talking about how they're consistently burning the equivalent of >$1000 in API credits every month on the $200 subscription.
(not that ads alone would make up an $800 deficit, they'll probably have to enshittify on multiple fronts)
wouldn't you charge those people more before you start serving ads? Also, won't a lot of those sorts of users be running ad blockers anyway? I'm mildly sus that this is the right way to go.
I’m not sure where you’re getting this notion that a paid service introducing ads is a bad business model. It’s been proven time and time again that it’s not. Spotify, Netflix, Prime Video, Hulu, the list goes on, all introduced ads and none of them saw any real backlash. Netflix cracked down on password-sharing and introduced ads in the same year and lived to tell the tale. Unfortunately people just really don’t realize how harmful ads are.
it makes sense in terms of grooming. Most parents want to deny their children agency until they're no longer minors, and giving them the internet massively undermines that. You're plugging your child into a stream of information that is mostly a sewer of misinformation.
The school system is a sewer of bias with 90%+ of teachers leaning left. Decentralised media is the only chance many kids have of hearing both sides of the story.
Is this a US thing? Maybe it's because your Overton window is flying miles beyond the right end of the spectrum and you've lost touch with what "left" even means?
> The school system is a sewer of bias with 90%+ of teachers leaning left.
Good thing people give a shit about teachers and pay them properly, so everyone is eager to become a teacher in order to address that bias. Instead of, idk, leaving it entirely as it is and just whining in a partisan fashion about how education has some sort of bias. I mean, education has a lot of women teachers, and the GOP doesn't appeal to a lot of women because they want to ban abortion and shit like that. So that'd probably explain it simply enough. In terms of priorities, what if the massive funding went into teaching instead of recruiting for ICE? Shows me what's important to people.
Tbh, I don't think minors need to be angry about misinformation about migrants (which is what I got within like 5 minutes last time I created a fresh Twitter account); they can wait until they're old enough to vote. They'll still fall for that shit all the same, so there's no need to be upset about it. Might as well ground our kids for their first 16/18 years before unleashing the Nick Fuentes community on them.
That is funny, because the vast majority of leftists, whether progressives, social democrats, socialists, or Marxists, would complain that schools mostly ignore leftist ideals in favor of free-market capitalism, conservative/traditional US political theory and civics, and propagandized history that always assumes the US was a good guy acting in good faith.
doing nothing. Governments typically marginalise techies when it comes to decision making, so the least they can do is make the call of lesser harm.
If kids really want to use social media, they'll find a way. It's more about making it hard or impossible for those who haven't yet grasped their agency. As ever, it's about electors, and in this case: parents.