
These are all legitimate reasons, but my personal experience (and perhaps preference?) is to use Docker for anything that is more complex than pip can handle.

> conda-forge managing builds of some really flaky binary python packages that are sometimes a nightmare to build locally

Yeah this is fair. Fortunately it's becoming rarer.



Well, to each their own; the use cases differ so wildly that it's hard to compare them.

The key audience for conda is the ML/DS space, where most if not all packages are written in C/C++/Rust/Fortran and have to be compiled, while also requiring a consistent set of external C libraries like libblas, etc. As I said, some of those packages are a complete nightmare to build locally. Conda simplifies this by a lot: you can just run 'conda create -n myenv some=1.0 crazy=2.0 deps=2.0' and in a few seconds (if you use mamba rather than conda) you have a working Python environment, so off you go; no Docker, no local builds, etc.
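A minimal sketch of the workflow described above, using mamba as the drop-in, faster conda frontend (the package names and versions here are illustrative, not from the original comment):

```shell
# Create an environment from conda-forge with prebuilt binary packages;
# numpy/scipy arrive as compiled artifacts linked against a consistent BLAS,
# so nothing is built locally.
mamba create -n myenv -c conda-forge numpy=1.26 scipy=1.11

# Activate it and use it like any other Python environment.
conda activate myenv
python -c "import numpy; print(numpy.__version__)"
```

The point is that the solver picks binaries that were all compiled against the same external C libraries, which is exactly the part that is painful to reproduce with local builds.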


Honestly, I've found that conda has made operationalizing code very difficult. We've found it much easier to simply switch back to using pip, poetry, Docker, and the standard OS package management tools rather than conda. Conda's dependency resolution is also quite slow and causes our builds and CI to time out unless we drop in mamba.


Yes, precisely this. (I only do ML/DS work)


Can these hard-to-build-locally lib just upload a wheel to PYPI?
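Many projects do exactly this nowadays by building portable "manylinux" wheels in CI. A hedged sketch using the real cibuildwheel tool (the project layout is assumed; the commands themselves are standard):

```shell
# Build wheels for a package with C extensions across CPython versions.
# cibuildwheel runs each build inside a manylinux Docker container so the
# resulting wheels work on most Linux distributions.
pip install cibuildwheel
cibuildwheel --platform linux   # wheels land in ./wheelhouse/

# Upload the built wheels to PyPI.
pip install twine
twine upload wheelhouse/*.whl
```

The catch, per the parent comments, is shared native dependencies: a wheel bundles its own copies of libraries like BLAS, whereas conda can guarantee one consistent set across all packages in an environment.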


lol, even with mamba the dependency hell can be large on old things; I have waited more than an hour for dependency resolution using both.


Switching to Docker seems like going from the frying pan into the fire. Have they added 'resume download' yet to Docker? Over my slow DSL I can't stand how Docker makes me download 7G of images when I want to install something very simple. Frequently the download fails and I have to redo it several times, so it adds up to more like 28G of downloading and all that waiting.

I worked at one place where management was shocked when I told them the image build process would take 20 minutes on gigabit fiber up in Canada; we agreed to time it and I measured 18 minutes. Docker slows down "dev" to the speed of "ops."

I don’t know how they did it but the data scientists could always find f-ed up Python images, you never got the same default character encoding twice, one time the default character set was Hungarian and I wonder how that happens…



