
Same 4-year age for the i7-8700K. It's true that it's about half as fast as a modern Ryzen 7 5800X or a brand-new Intel i7-11700K, and if you could get a new Nvidia RTX 3080 or AMD RX 6900 XT you'd see a similar doubling in speed, but it's not ancient.

Regardless, does the difference between 5 and 10 minutes for 10,000 pixels really matter? Either way you're running on the order of a hundred thousand operations per pixel; what could you possibly need to do that requires that much processing?



It's probably doing something equivalent to approximately solving a hard inverse problem with numerical methods, likely with at least as many unknowns as there are pixels in the image, in a noisy domain, and with an expensive cost function for the optimization.

Not saying they're doing exactly that, but something in that realm/scale. 100k ops per pixel really isn't that much in those kinds of problems.
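
To make that concrete (a hypothetical sketch, not the paper's method): an inverse-problem reconstruction typically iterates a forward model and its adjoint over every pixel, thousands of times. Everything here is an illustrative assumption: the Gaussian-blur forward model, the step size, and the iteration count.

    # Hypothetical sketch: recover an image x from a noisy blurred measurement y
    # by gradient descent on a least-squares cost. Per-pixel op count is roughly
    # (ops per pixel per iteration) * (number of iterations), which adds up fast.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def forward(x):
        # Assumed forward model: optical blur (symmetric point-spread function).
        return gaussian_filter(x, sigma=2.0)

    def reconstruct(y, n_iters=2000, step=0.1, reg=1e-3):
        x = np.zeros_like(y)
        for _ in range(n_iters):
            residual = forward(x) - y            # data-fit term
            grad = forward(residual) + reg * x   # adjoint of a symmetric blur is the blur itself
            x -= step * grad
        return x

    y = gaussian_filter(np.random.rand(100, 100), sigma=2.0)
    x_hat = reconstruct(y)

A couple of thousand iterations of a forward plus adjoint pass (each a few dozen multiplies per pixel) already puts you in the hundred-thousand-ops-per-pixel range.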


I haven't read this paper, but extrapolating from my experience working with other super-resolution scopes: reconstruction. Instead of measuring the pixels directly, you measure some projection of them and then have to solve an inference problem to recover the image.
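
For illustration only (my own toy, not the paper's pipeline): a compressed-sensing-style setup where you observe random projections y = Phi @ x instead of the pixels themselves, then recover x by iterative soft-thresholding (ISTA). Phi, lam, and the iteration count are made-up assumptions.

    # Hypothetical reconstruction sketch: fewer measurements than pixels,
    # recovered by exploiting sparsity. Each recovery runs many iterations.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 400, 200                                # pixels, measurements
    x_true = np.zeros(n)
    x_true[rng.choice(n, 20, replace=False)] = 1.0 # sparse ground truth
    Phi = rng.standard_normal((m, n)) / np.sqrt(m) # random projection operator
    y = Phi @ x_true + 0.01 * rng.standard_normal(m)

    step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # safe ISTA step size
    lam = 0.01
    x = np.zeros(n)
    for _ in range(2000):
        x = x - step * Phi.T @ (Phi @ x - y)                       # gradient step on data fit
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # sparsity prox (soft threshold)

The point is just that each recovered pixel is the output of a long iterative solve over the whole measurement, not a single direct readout.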



