
The problem of AI being wrong is probably not insurmountable.

Humans have meta-cognition that helps them judge whether they're acting on a pile of assumptions or following an approach that's already been validated.

Humans also decouple planning from execution, right? Not fully, but we choose when to separate the two and when not to.

If we had enough data of the form "here's a good plan given this user context, here's a bad plan," it doesn't seem unreasonable to train a pretty reliable meta-cognitive judgment of a plan's quality.
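To make that concrete, a minimal sketch of the framing: treat "is this a good plan given this context?" as supervised classification over labeled (context, plan) pairs. The examples and the TF-IDF + logistic regression model below are toy placeholders of my own, not anything an actual lab uses; a real system would need vastly more data and a stronger scorer.

    # Toy plan-quality classifier (illustrative only, not a real system).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical examples: "context || plan", labeled 1 (good) or 0 (bad).
    examples = [
        ("rename a column || back up the table, run one ALTER TABLE, verify", 1),
        ("rename a column || drop the table and recreate it from memory", 0),
        ("fix a flaky test || reproduce the failure locally, then bisect", 1),
        ("fix a flaky test || delete the test", 0),
    ]
    texts, labels = zip(*examples)

    # Fit a scorer; its output probability is a crude "goodness" signal
    # the planner could consult before committing to execution.
    scorer = make_pipeline(TfidfVectorizer(), LogisticRegression())
    scorer.fit(texts, labels)

    candidate = "rename a column || drop the table first"
    print(scorer.predict_proba([candidate])[0][1])  # P(plan is good)

The point isn't the model class; it's that "judge the plan" can be a separate learned component from "produce the plan," which is the decoupling described above.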



Depending on your definitions, either:

* there are already lots of "reasoning" models attempting meta-cognition, while still getting simple things wrong

or:

* the models aren't doing cognition in the first place, so meta-cognition seems very far away



