
- Which might be a different matter: that of SE specifically declining. (A very different, and long-running, tragedy, but one that began long before the current AI boom and was prompted by very different, non-technical issues.)

- That said, traffic will surely decline for Q&A sites. "How do I connect tab A to slot B" is exactly the kind of thing people will query LLMs for; the response will certainly sound authoritative, and may even be correct. That's a task where LLMs genuinely help: common questions that have been asked many times (and are therefore likely to be well answered in the human-made training data). The 20,001st "how do I right-align a paragraph in HTML" question doesn't get posted? Good. Rote tasks are well suited to automation. (Which, again, brings us back to the question of how to distinguish response quality.)



But what happens with the next generation of questions? The reason LLMs can answer how to right-align a paragraph in HTML is, at least in part, that the question has been asked and answered publicly so many times.

Now imagine that HTMZ comes along and people just go straight to asking how to full justify text in HTMZ for their smart bucket. What happens? I doubt we’ll get good answers.

It feels like the real test of whether LLMs can stay useful is whether we can stop them from hallucinating API endpoints. If we could feed the rules of a language or API into the LLM and have it actually reason from those rules to code, then my posed problem would be solved. But I don't think that's how they fundamentally work.
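
The closest thing to this today (not necessarily what the parent has in mind) is to put the API's actual rules into the model's context and then check the output against them, rather than trusting whatever the training data happened to contain. A minimal sketch in Python, assuming a hypothetical call_llm() helper and a made-up attribute list for the imaginary "HTMZ" language; none of this is a real API:

  import re

  # Hypothetical: the authoritative spec for the imaginary "HTMZ" language.
  # In practice this would be loaded from real documentation.
  HTMZ_SPEC = {
      "allowed_attributes": {"align", "justify", "bucket-target"},
  }

  def call_llm(prompt: str) -> str:
      """Placeholder for whatever LLM client you use; returns generated markup."""
      raise NotImplementedError("wire up your model of choice here")

  def generate_grounded(question: str) -> str:
      # Put the rules directly in the context instead of relying on training data.
      prompt = (
          "Answer using ONLY the attributes listed below. "
          f"Allowed attributes: {sorted(HTMZ_SPEC['allowed_attributes'])}\n\n"
          f"Question: {question}"
      )
      answer = call_llm(prompt)

      # Reject output that references attributes the spec doesn't define,
      # i.e. catch the "hallucinated endpoint" case instead of passing it on.
      used = set(re.findall(r'(\w[\w-]*)=', answer))
      unknown = used - HTMZ_SPEC["allowed_attributes"]
      if unknown:
          raise ValueError(f"model invented attributes not in the spec: {unknown}")
      return answer

Of course this only checks names, not semantics, so it narrows the hallucination problem rather than solving it; the model still has to reason correctly from the spec.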


>Now imagine that HTMZ comes along and people just go straight to asking how to full justify text in HTMZ for their smart bucket. What happens? I doubt we’ll get good answers.

So, I think the answer is that since all useful data is already in an LLM somewhere, all new data will be scraped (or outright stolen) and inserted in near real time. So if real people are answering the question, it will work as before. The real question is what happens when people try to mine karma by answering questions with an LLM that is hallucinating. We've already seen that with the bug bounty silliness going on.



