
It's not 0.85 per file. It's 0.85 for the 50k files combined.


No, because the program has to be run by slurm over several compute nodes, so it can't process them all at once.


As long as you don't have 50k compute nodes, it still should be 0.85 seconds per node, which is still tiny.
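The arithmetic behind that claim can be sketched with hypothetical numbers: the 0.85 s startup cost is paid once per process (i.e. once per node), so it is amortized across all the files that node handles. The node count below is an assumption for illustration, not from the thread.

```python
# Back-of-the-envelope check: the one-time startup cost is paid once per node,
# not once per file, so it amortizes over the files each node processes.
STARTUP_S = 0.85      # assumed one-time startup cost per node (from the thread)
N_FILES = 50_000      # total files in the job (from the thread)
N_NODES = 100         # hypothetical size of the Slurm allocation

files_per_node = N_FILES // N_NODES
total_overhead_s = STARTUP_S * N_NODES          # startup paid once per node
overhead_per_file_ms = total_overhead_s / N_FILES * 1000

print(files_per_node)                  # 500
print(round(total_overhead_s, 2))      # 85.0
print(round(overhead_per_file_ms, 2))  # 1.7
```

Even at 100 nodes the startup overhead works out to under 2 ms per file, which is the "still tiny" point being made.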


> it still should be 0.85 seconds per node

No it isn't, because I will not architect my whole pipeline and program around Julia's inability to start in under a second (maybe a second a year from now, 1.7 s today); I will just use another language.


You'd be in good company; that is what Python folks do all the time.


Python does not take 1.7 s to load pandas.
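That claim is easy to measure: time a fresh interpreter starting up and importing the module, since a warm in-process import would just hit `sys.modules` and report near zero. A minimal sketch, using a stdlib module so it runs anywhere; substitute `"pandas"` to reproduce the comparison from the thread:

```python
import subprocess
import sys
import time

def cold_import_time(module: str) -> float:
    """Time a fresh interpreter starting up and importing `module`."""
    start = time.perf_counter()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return time.perf_counter() - start

# Measured with a stdlib module here so the sketch is self-contained;
# pass "pandas" (if installed) to check the thread's 1.7 s figure.
print(f"cold import of json: {cold_import_time('json'):.3f} s")
```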


> I will just use another language.

Hello C, C++ and Fortran.




