
Startup time for a docker container is way faster than a VM's. Also, you can run the exact binary state of production, which is helpful if you run into "works on my machine" types of problems.


Agreed, but my point was that Docker doesn't reduce dev setup time. Give a dev a good vagrant config file, tell them to run vagrant up, and you get the same result as what you're describing. You can replicate production state with vagrant too (and bash scripts, if we stretch it) and avoid "works on my machine" problems.
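To be concrete, the "good vagrant config file" can be tiny. A minimal Vagrantfile sketch (the box name and the setup.sh provisioning script are just placeholders):

    # Minimal Vagrantfile sketch (box name and provisioning script are placeholders)
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"              # whatever base box you standardize on
      config.vm.network "forwarded_port", guest: 80, host: 8080
      config.vm.provision "shell", path: "setup.sh"  # installs/configures your stack
    end

Check that into the repo and vagrant up gives every dev the same environment.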

I'm not saying that vagrant > docker. The way I see it, docker is great if your infrastructure is using it all the way. If your prod setup is not dockerized, using docker in dev seems counterproductive compared to spinning up a VM and provisioning it with ansible or puppet to replicate production. As @netcraft said, I don't see why I should "change my server architecture" to use docker in dev.


If you have a complex stack (multiple services, different versions of Ruby/Python/etc, DB, search engine, etc), it's a real pain to shove them all into a single VM. Once you have 2 VMs running, you have already lost to Docker on memory/space efficiency and start-up time.


I have yet to see a real, complex, distributed application that shares the exact same config in dev and production. I know that having the same versions of system libs in dev and prod can be a problem in some contexts, and docker can help with that, but it's not the only solution and it doesn't cover the whole landscape (e.g., npm's package.json, pip's requirements.txt, etc.).

I totally agree that the startup time of a container is far less than a VM's, but I don't see how docker "removes all the trouble of running applications that you need for your development: databases, application servers, queues".

You still need to install and configure these services, make sure the containers can talk to each other in a reliable and secure way, and so on.
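To be fair, the container-to-container wiring itself is only a couple of commands. A rough sketch using docker's link feature (the image and container names here are made up):

    # start a database container (names and images are just examples)
    docker run -d --name appdb postgres
    # start the app container linked to it; docker exposes the DB via
    # DB_* environment variables and an /etc/hosts entry for "db"
    docker run -d --name app --link appdb:db myorg/myapp

But the install/configure part, i.e. building images so those services come up with the right settings, is still work you have to do somewhere.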


First, I'm a dilettante. I haven't used docker in production. I've really only set up a handful of containers.

That said, all of those fiddly library dependencies are where I struggle the most at work. If I could just build a docker image and hand that off, it would save me a lot of grief with regard to getting deployment machines just right.
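Roughly what I imagine handing off is something like this toy sketch (the base image, packages, and paths are made up):

    # Dockerfile sketch; base image and package names are placeholders
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y libxml2 libpq5   # the fiddly libraries
    COPY . /app
    WORKDIR /app
    CMD ["./run.sh"]

Then docker build -t myteam/app . produces an image the deployment machine just runs, instead of a setup document someone has to follow by hand.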

I do have a great deal of experience with legacy environments, and it seems like the only way to actually solve problems is to run as much as possible on my machine. Lowering that overhead would be valuable. Debugging a simple database interaction is fine on a shared dev machine; a WebLogic server that updates Oracle, which is polled by some random server that kicks off a shell script... ugh. Even worse when you can't log into those machines and inspect what a dev did years ago.

If you've got a clean environment, there's probably not as much value to you.


I hear you about legacy systems. Two years ago, I had to support a Python 2.4 system that used a deprecated C crypto library, and I did not want to "pollute" my clean production infrastructure. Containers would definitely help with this scenario. The thought never occurred to me that docker could be used to reproduce/encapsulate legacy systems, thanks!
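If I ever revisit that system, the sketch I'd try would be something like this (purely hypothetical; it assumes a centos:5 base image, which ships Python 2.4, and that the old C library still builds there):

    # Hypothetical Dockerfile for the legacy setup (everything here is an assumption)
    FROM centos:5                          # CentOS 5 ships Python 2.4
    RUN yum install -y gcc make            # toolchain to build the old crypto library
    COPY legacy-crypto/ /opt/legacy-crypto/
    RUN cd /opt/legacy-crypto && make && make install
    COPY app/ /opt/app/
    CMD ["python", "/opt/app/main.py"]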


At the company I work for, we went through all the trouble of getting our distributed backend application running in Vagrant using Chef so that we could have identical local, dev, and production environments.

In the end, it's just so slow that nobody uses it locally. Even on a beefy MacBook Pro, spinning up the six VMs it needs takes nearly 20 minutes.

We're looking at moving towards docker, both for local use and production, and so far I'm excited by what I've seen but multi-host use still needs work. I'm evaluating CoreOS at the moment and I'm hopeful about it.


I don't see how Docker solves the speed problem without a workflow change that could already be accomplished with Vagrant.

* Install your stack from scratch in 6 VMs: slow
* Install your stack from scratch via 6 Dockerfiles: slow
* Download prebuilt vagrant boxes with your stack installed: faster
* Download prebuilt docker images with your stack installed: fastest

The main drawback of Vagrant is that afaik it has to download the entire box each time instead of fetching just the delta. That may not matter much on a fast network.
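Concretely, the two prebuilt workflows look like this (the box/image names are placeholders):

    # Vagrant: fetch a prebuilt box (downloads the whole box file when it changes)
    vagrant box add myteam/app-stack
    vagrant up

    # Docker: fetch a prebuilt image (only the layers you don't already have are pulled)
    docker pull myteam/app-stack
    docker run -d myteam/app-stack

As far as I understand Docker's layer model, that per-layer caching is where the "fastest" comes from.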


Running 6 VMs has non-trivial overhead though. That just isn't there using containers.


I have to disagree, although I'll admit that what's "trivial" is subjective. Sure, a container means you don't have to run another kernel. If the container is single-purpose, as docker encourages, you also skip running some other software like ssh and syslog. That software doesn't use much CPU or memory, though. I just booted Ubuntu 12.04 and it's using 53MB of memory; multiplied across 6 VMs, that's 318MB, not quite 4% of the 8GB my laptop has. I'd call that trivial.

On the last project where I had to regularly run many VMs on my laptop, the software being tested used more than 1GB. Calling it 1GB total per VM and sticking with the 53MB overhead, switching to containers would have reduced memory usage by about 5%. Again, to my mind that's trivial.



