
Yeah, I'd agree: we learn addition, multiplication, etc. as processes built from smaller problems. If you gave an LLM a prompting framework to do addition, I'm sure the results would be better (add the units, add the tens, and so on).
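A minimal sketch of what such a prompting framework would walk the model through, written as plain Python (no LLM involved, just the step-by-step procedure itself, for non-negative integers):

```python
def add_stepwise(a: int, b: int) -> int:
    """Digit-by-digit addition: add the units, carry, add the tens, ..."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        da, db = a % 10, b % 10      # current digit of each number
        s = da + db + carry          # the "add this column" step
        result += (s % 10) * place   # write down the digit
        carry = s // 10              # remember the carry for the next column
        a, b, place = a // 10, b // 10, place * 10
    return result

print(add_stepwise(478, 256))  # → 734
```

Each loop iteration is the kind of small, checkable step a re-prompting loop would force the model to produce one at a time.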

Food for thought: would a savant use the same process? Or have they somehow recruited more of their brain, to the point where they can memorize much larger problems?



So first of all, prompting and re-prompting an LLM is basically forcing it to deduce rather than induce: using millions of gates to get from 1+1 to 1+2. That's what our brain does too (uses millions of gates for dumb stuff), but we designed computers to do that with 4 bits, so it's ironic that we're now writing scripts to force something with 60 billion parameters to do the same thing.

I think savants usually solve problems in the most deductive way, using reasoning that leads to further reasoning. I went to an elementary school in the 80s where more than half the kids would now be labeled autistic; some got into college math programs by age 12. I believe it's all pure reasoning, not some magical calculator that spat out answers they didn't understand the reasons for.

[edit] If you meant: do savants solve problems by recursively breaking them into smaller and smaller problems, then yes. But the breaking-apart of problems is actually the hard part, not the solving.
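A toy illustration of that point, assuming a simple divide-and-conquer example: once a decomposition is chosen (here, multiplication by halving and doubling), writing the recursion is easy; choosing the decomposition is where the work was.

```python
def mult(a: int, b: int) -> int:
    """Multiply by recursively halving b: the decomposition does the work."""
    if b == 0:
        return 0
    half = mult(a, b // 2)               # the smaller subproblem
    return half + half + (a if b % 2 else 0)  # recombine, plus odd leftover

print(mult(37, 41))  # → 1517
```

The recursive step is three lines; the insight that multiplication *can* be split this way is the hard part.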



