
>Google says "this site claims Joe Bloggs is a depraved lunatic"

Google says nothing of the sort. It just presents the snippet as if it were "fact," or rather, "as is."

>ChatGPT says "Joe Bloggs is a depraved lunatic"

There is literally a block of text that says "ChatGPT Mar 23 Version. Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts" right under the response.

It objectively gives more warning about "bad information" than Google does for snippets.



"Inaccurate" means "the mayor is correctly identified as a whistleblower but the bank is incorrectly identified" or "if asked to correct a true statement ChatGPT may go off the weeds". Not "today I am elected mayor, tomorrow 1000 people in my town ask ChatGPT about me and they're told that I am a convicted felon". The former is a nuisance, the latter is reckless and dangerous.

Replace "inaccurate" with "completely wrong" and then we can talk about disclaimers.


No, it presents the snippet as "that's what page http://etc says". That's why they can get away with avoiding any kind of responsibility. They're not (considered) a publisher.


>No, it presents the snippet as "that's what page http://etc says".

Most tech-savvy people could certainly infer that... but try it for yourself. It doesn't say anywhere on the page anything like "Google says 'this site claims Joe Bloggs is a depraved lunatic'". It just has a link to the page the text comes from. The result even tends to blend in with the rest of the results.

https://external-preview.redd.it/hqLyrdGiwnxXNk-60nQy3Ubw4qa...

In the context of this thread, I can't imagine how the above screenshot is better than a disclaimer right under the text box that explicitly states it can generate false information.



