Is it really that hard to believe? I continue to be amazed that any of these systems work at all. People sure stopped being impressed by AI pretty quickly. Now we apparently think that LLMs are perfect and that there must be a wicked human to blame every time an LLM produces a weird output.


If the author of a system writes in every blog post that they tested their system to remove/manipulate things, and the skewing of the results fits extremely well with what they, in their own words, deemed as things to remove, then .. yeah: it's probably a (wicked) human to blame.


If there's evidence of malicious intent behind this, then just link to it.