Hacker News
arkmm | 7 months ago | on: The new skill in AI is not prompting, it's context...
What are some examples where you've provided the LLM enough context that it ought to figure out the problem but it's still failing?
mountainriver | 7 months ago
If prompting alone worked, we would already have reliable multi-step agents. The companies that are succeeding, like Manus, are doing alignment work, which makes intuitive sense.