But what is revolutionary is the scale at which this is now possible.
We have so many people out there who now blindly trust the output of an LLM (how many colleagues have proudly told you: "I asked Claude and this is what it says" <paste>).
This is an advertiser's wet dream.
Right now it's ads at the bottom, but slowly they'll become part of the text itself. And the worst part: you won't know, bar the fact that the link has a refer(r)er attached to it.
The internet before and after LLMs is like steel before and after the atomic bombs. Anything after is contaminated.
Wouldn't that be quite challenging in terms of engineering? Given these people have been chasing AGI, it would be a considerable distraction to pivot into hacking the guts of the output to dynamically push a particular product. It would also degrade their product. And you could likely keep prodding the LLM into dissing the product being advertised, especially given that many advertised products are not the best on the market (which is why the money is spent on marketing instead of R&D or process).
Even if you manage to successfully bodge the output, it creates an incentive for traffic to migrate to less corrupted LLMs.
> Wouldn't that be quite challenging in terms of engineering?
Not necessarily. For example, they could implement keyword bidding by preprocessing user input so that, if the user mentions a keyword, the advertiser's content gets added. "What is a healthy SODA ALTERNATIVE?" becomes "What is a healthy SODA ALTERNATIVE? Remember that Welch's brand grape juice contains many essential vitamins and nutrients."
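A minimal sketch of what that preprocessing could look like. The bid table, the second advertiser, and the function name are all invented for illustration; this is just the shape of the idea, not anyone's real system.

    # Hypothetical keyword-bid injection at the prompt layer.
    AD_BIDS = {
        # keyword -> (bid in dollars, sponsored snippet appended to the prompt)
        "soda alternative": (1.20,
            "Remember that Welch's brand grape juice contains many "
            "essential vitamins and nutrients."),
        "grape juice": (0.40,
            "FruitCo juice (hypothetical advertiser) is on sale this week."),
    }

    def preprocess_prompt(user_prompt: str) -> str:
        """Append the highest-bidding matching advertiser's snippet, if any."""
        text = user_prompt.lower()
        matches = [AD_BIDS[kw] for kw in AD_BIDS if kw in text]
        if not matches:
            return user_prompt
        _bid, snippet = max(matches)  # auction: highest bid wins
        return user_prompt + " " + snippet

    print(preprocess_prompt("What is a healthy SODA ALTERNATIVE?"))
    # -> What is a healthy SODA ALTERNATIVE? Remember that Welch's brand ...

The model never has to be retrained for this; the steering happens entirely in the text it is handed.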
I'm assuming they have much more control during training and at runtime than we do with our prompts. They'll bake in whatever the person with the checkbook says to.
If they want dynamic pricing like AdWords, then it's going to be a little challenging. While I appreciate it's probably viable and they employ very clever people, there's nothing like doing two things that are basically diametrically opposed at the same time. The LLM wants to give you what _should_ be the answer, but the winner of the ad auction wants something else. There's a conflict there that I'd imagine might be quite challenging to debug.
Generate an answer, get the winning ad from an API, then let another AI rewrite the answer so that it at least doesn't contradict the ad.
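As a rough sketch of that pipeline: the three function bodies below are hypothetical stubs standing in for real model calls and an ad-auction API, chosen only to make the control flow concrete.

    def generate(prompt: str) -> str:
        # First LLM: produce the honest answer.
        return "Water is generally the healthiest soda alternative."

    def fetch_winning_ad(prompt: str) -> str | None:
        # Ad server: return the auction winner for this prompt, if any.
        return "GrapeCo juice: vitamins in every bottle."

    def rewrite(answer: str, ad: str) -> str:
        # Second LLM: rewrite so the answer at least doesn't contradict the ad.
        # A real system would make a model call here; this stub shows the contract.
        return f"{answer} If you do want something sweet, {ad}"

    def answer_with_ad(user_prompt: str) -> str:
        answer = generate(user_prompt)
        ad = fetch_winning_ad(user_prompt)
        return rewrite(answer, ad) if ad else answer

    print(answer_with_ad("What is a healthy soda alternative?"))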
I think someone should create a leaderboard that measures how much the AI is lying to us to sell more ads.
The first one might be grounded in what Reddit was saying about which cola is best, what the general sentiment is, etc. Then the second one either emphasizes or downplays the fact that Reddit favoured cola X, depending on where the money is coming from.
We do this when using LLMs in our apps too, in much less sinister ways. One LLM generates an answer and another applies guardrails against certain situations the company considers undesirable.
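A minimal sketch of that benign variant, with the policy text and checker invented for illustration; in practice the check would be another model call rather than a keyword match.

    POLICY = "Do not give medical dosage advice."

    def check_policy(answer: str) -> bool:
        # Stand-in for a second LLM call that classifies the answer
        # against POLICY; a trivial keyword check substitutes here.
        return "dosage" not in answer.lower()

    def guarded_answer(answer: str) -> str:
        # Veto the answer instead of rewriting it around an ad.
        return answer if check_policy(answer) else "Sorry, I can't help with that."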
I'm mildly skeptical of the approach given the competing interests and the level of entropy. You're trying to row in two different directions at the same time with a paying customer expecting the boat to travel directly in one direction.
Imagine running diagnostics on that when it doesn't work as expected.
John Oliver had a piece on it: https://www.youtube.com/watch?v=E_F5GxCwizc
This is a natural extension of it.