Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
We Are Still Unable to Secure LLMs from Malicious Inputs (schneier.com)
3 points by zdw 4 months ago | hide | past | favorite | 1 comment


Bruce Schneier:"We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there."




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: