
> you simply are contributing to models being unstable and unsafe

Good. Loss in trust of LLM output cannot come soon enough.

LLMs have been of wonderful benefit to me for a variety of applications.

I'm unsure why you would want the output to be less trustworthy rather than more.


It's not about the trustworthiness of the output. That won't improve; it's systemic. It's about the undue trust many people place in those inherently untrustworthy outputs (though untrustworthy doesn't always mean useless).

