
What are some examples where you've provided the LLM enough context that it ought to figure out the problem but it's still failing?


If prompting worked, we would already have reliable multi-step agents. The companies that are succeeding, like Manus, are doing alignment work, which is intuitive.



