Hacker News

> At its core, Moravec's paradox is the observation that reasoning takes much less computation compared to sensorimotor and perception tasks. It's often (incorrectly) described as tasks that are easy for humans are difficult for machines and visa versa.

The author states multiple times that this description would be incorrect, but never gives a reason why.

Then he tries to reduce the paradox to a question of degree: "hard" problems for computers just have a larger search space and require more compute.

But wasn't a big part of the paradox also that we didn't even have insight into how the problems could be solved?

E.g. if you play chess or do math as a human, you're consciously aware of the patterns, strategies and "algorithms" you use - and there is a clear path to formalizing them so a computer can recreate them.
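To make the "clear path to formalize" concrete: a game-playing strategy like "consider each move, assume the opponent replies optimally, pick the best outcome" translates almost word-for-word into code. Here is a minimal, illustrative minimax for tic-tac-toe (a toy stand-in, not anything from the article; boards are 9-character strings with "." for empty):

```python
# Lines of three indices that decide a tic-tac-toe game.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`; 'X' maximizes, 'O' minimizes."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, s in enumerate(board) if s == "."]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, "O" if player == "X" else "X")
        # Keep the move if it strictly improves the current player's outcome.
        if best is None or (player == "X") == (score > best[0]):
            best = (score, m)
    return best
```

The point is not the game itself but that the whole "algorithm" a human could describe from introspection fits in a few dozen lines - exactly what never worked for vision or walking.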

However, with vision, walking, "thinking", etc., the processes are entirely subconscious, and introspection gives us very little information about the "algorithms" involved. Additionally, not just the environment and the input data are chaotic and "messy" - so is the goal of what we want to achieve in the first place. If you have ever hand-labeled a classification corpus, you have experienced this firsthand: if the classification criteria are even moderately abstract, labelers will often disagree on how to label individual examples.
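That labeler disagreement is measurable. A common way (not mentioned in the comment, just a standard technique) is Cohen's kappa, which corrects raw agreement between two annotators for the agreement you'd expect by chance:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(a) == len(b) and a
    n = len(a)
    # Fraction of items the two annotators labeled identically.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected if each annotator labeled at random with
    # their own label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)
```

On a "moderately abstract" task it is common to see kappa well below 1 even between careful annotators - which is the fuzziness of the objective showing up as a number.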

Machine learning didn't really solve this problem; it just routed around it and swept it under the rug: instead of trying to formulate a clear objective, you come up with a million examples and have the algorithm guess the objective from them.
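A toy version of "specify the objective by examples" (my own illustrative sketch, not any particular library's method): instead of writing rules for which class a point belongs to, we hand over labeled 2-D points and let a nearest-centroid classifier infer the boundary from them.

```python
def fit_centroids(points, labels):
    """Learn one mean point (centroid) per label from labeled examples."""
    sums, counts = {}, {}
    for (x, y), lab in zip(points, labels):
        sx, sy = sums.get(lab, (0.0, 0.0))
        sums[lab] = (sx + x, sy + y)
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, p):
    """Classify p as the label whose centroid is nearest (squared distance)."""
    return min(centroids, key=lambda lab: (p[0] - centroids[lab][0]) ** 2
                                        + (p[1] - centroids[lab][1]) ** 2)
```

No one ever states what makes a point "class a" - the criterion is implicit in the examples, which is exactly the move the comment describes, scaled down from a million examples to four.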

I think this is the kind of thing that is meant by "the hard problems are easy and the easy problems are hard".


