
I stumbled upon this one as well, but I don't really understand it: why is my job safe if ads prove there is no AGI?

Because even if there were AGI, they could (and would?) serve ads anyway?





If your job is gone forever, with what money are you going to buy the thing in the advert? If nobody can buy the thing in the advert, the value of the ad slot itself is zero.

This is silly: why does ChatGPT ask for monthly payments if AGI is imminent? With what money would we pay them?

My argument and yours both agree that it is *not* imminent.

(Where the "it" that isn't, is AGI that takes your job, rather than any other definition of AGI).


To bring it to fruition.

They had claimed money would be worthless, that all investors should consider it a donation in a post-AGI world, and so on.


Who would pivot to selling ads if AGI were in reach? These orgs are burning a level of funding that is meant to fulfil dreams; ads are a pragmatic choice that implies the moonshot isn't in range yet.

Because AGI is still some years away even if you are optimistic, and OpenAI must avoid going under in the meantime due to lack of revenue. Selling ads and believing that AGI is reachable in the near future are not incompatible.

>Because AGI is still some years away

For years now, proponents have insisted that AI would improve at an exponential rate. I think we can now say for sure that this was incorrect.


> For years now, proponents have insisted that AI would improve at an exponential rate.

Did they? The scaling "laws" seem at best logarithmic: double the training data or model size for each additional unit of... "intelligence?"

We're well past the point of believing in creating a Machine God and asking Him for money. LLMs are good at some easily verifiable tasks like coding to a test suite, and can also be used as a sort-of search engine. The former is a useful new product; the latter is just another surface for ads.
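Loosely on the "logarithmic" point above: under a Chinchilla-style power-law fit, L(N, D) = E + A/N^α + B/D^β (the constants below are the published Hoffmann et al. 2022 fits, but treat this as a rough sketch of diminishing returns, not a prediction), each doubling of parameters and data buys a smaller absolute drop in loss:

```python
# Sketch: Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta.
# Constants are the published Chinchilla fits (Hoffmann et al. 2022);
# this only illustrates diminishing returns per doubling.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Double both model size and data twice, starting near Chinchilla scale:
l1 = loss(70e9, 1.4e12)
l2 = loss(140e9, 2.8e12)   # 2x params, 2x tokens
l3 = loss(280e9, 5.6e12)   # 4x params, 4x tokens
print(round(l1, 3), round(l2, 3), round(l3, 3))
# The first doubling improves loss more than the second: sublinear returns
# in compute, which is the opposite of "exponential improvement".
```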


Yes, they did, or at least some of them did. The claim was that AI would become smarter than us, and therefore be able to improve itself into an even smarter AI, and that the improvement would happen at computer rather than human speeds.

That is, shall we say, not yet proven. But it's not yet disproven exactly, either, because the AIs we have are definitely not yet smart enough to meet the starting threshold. (Can you imagine trying to let an LLM implement an LLM, on its own? Would you get something smarter? No, it would definitely be dumber.)

Now the question is, has AI (such as we have it so far) given any hint that it will be able to exceed that threshold? It appears to me that the answer so far is no.

But even if the answer is yes, and even if we eventually exceed that threshold, the exponential claim is still unsupported by any evidence. It could be just making logarithmic improvements at machine speed, which is going to be considerably less dramatic.


The original AGI timeline was 2027-2028, ads are an admission that the timeline is further out.

If AGI, or something truly amazingly novel and useful for the market, were about to get here, that'd be an unbelievable amount of cash on the horizon. The promises they've made, if even half met, would be revolutionary. Ad placement means the business likely needs ads to work because the product isn't near what they have been promising and won't generate the revenue (at least not on its own) that investors have been promised, because the hype was way too big.

Yes, I don't understand it either. I think the opposite is true. If AGI happens and it becomes immensely successful, it would be the best medium to deliver ads and at the same time our jobs wouldn't be safe.

Perhaps the people who like that quote can elaborate why that quote makes sense and why they like it?


AGI would be able to exponentially improve at much, much better money-making schemes, like high-frequency trading. It would beat every online business. It would run 24/7/365.26, farming out tens of thousands of conversations at once, customized to the person it's talking to, for sales, supplier negotiations, marketing, press, etc.

Its costs would be frighteningly low compared to human employees, so its margins could remain fine at lower prices.


Nobody will have jobs and nobody will be able to buy the stuff offered in the ads

Because it shows that it’s just yet another ad delivery vehicle.

Once you go ads, that’s pretty much it, you start focusing on how to deliver ads rather than what you claim your core competency is.


If AGI were around the corner, they wouldn't have to resort to what some consider a scummy way to make money. They'd become the most valuable company on the planet, winning the whole game. Ads show you they don't know what else to do, but they desperately need money.

This doesn’t answer the actual question: why wouldn’t they just do both?

There are costs to doing ads (e.g. it burns social/political capital that could be used to defuse scandals or slow down hostile legislation, it consumes some fraction of your employees’ work hours, it may discourage some new talent from joining).

You have AGI, why do you care about new talent? You have AGI to do the ads. You have AGI to pick the best ads.

Isn't that the pitch of AGI? Solving any problem?


Yes. Infinite low cost intelligence labor to replace those pesky humans!

Really reminds me of the economics of slavery. Best way for line to go up is the ultimate suppression and subjugation of labor!

Hypothetically this can lead to a society free to not waste its life on work but to pursue its passions. More likely it’ll lead to a plantation-style, hungry-hungry-hippo ruling class taking the economy away from the rest of us.


Yes, but if AGI is around the corner, how would they make money then?

Selling this AGI to a state actor? OK, this seems realistic, but for how many billions then? 100B per year?

That's what I meant.


> consider a scummy way to make money.

How should ChatGPT survive then?


Well, the obvious answer is none of these companies should.

OK, but the real world does not work like that.



