
> [2]

In the comments, zero_k posted a link to the SAT competition's parallel track. The 2025 results page is here: https://satcompetition.github.io/2025/results.html Parallel solvers consistently achieve lower scores (i.e., less total solving time) than single-threaded solvers, and solve more instances within the time limit. The speedup is probably nowhere near proportional to the degree of parallelism, but if you just want results a bit faster, throwing more cores at the problem does generally seem to work.
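One standard way to reason about why speedup is sublinear is Amdahl's law: if only a fraction p of the work parallelizes, the speedup on n cores is bounded by 1 / ((1 - p) + p / n). This is a generic illustration, not a model of any particular SAT solver; the fraction used below is an arbitrary assumption for the example.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup when a fraction p of the work
    parallelizes perfectly across n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical workload where 95% of the time parallelizes:
# even on 64 cores, the bound is only ~15x, far from 64x.
print(round(amdahl_speedup(0.95, 64), 1))   # ~15.4
print(round(amdahl_speedup(0.95, 1), 1))    # 1.0 (baseline)
```

This matches the observed pattern in the parallel track: more cores help, but the sequential portion of the search (and communication between solver threads) caps the gain well below the core count.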



> The solvers participating in this track will be executed with a wall-clock time limit of 1000 seconds. Each solver will be run on a single AWS machine of the type m6i.16xlarge, which has 64 virtual cores and 256GB of memory.

For comparison, an H100 has 14,592 CUDA cores, and GPU clusters are measured in exaflops. The scaling exponents are clearly favorable for LLM training and inference, but it is unclear whether the algorithms used for parallel SAT would benefit from compute scaling in the same way. I maintain that either (1) SAT researchers have not yet learned the bitter lesson, or (2) it is not applicable across all of AI as Sutton claims.



