They claim "Podman provides a Docker-compatible command line front end and one can simply alias the Docker cli, `alias docker=podman`".
That claim alone shows they don't know or don't care how people use container engines outside the k8s space.
I know a lot of orgs that simply use docker-compose files to spin up simple setups. There is podman-compose[1], but it's "still under development".
Then there is software using the docker socket.
Portainer? No Podman support [2]
Testcontainers? No Podman support [3]
Traefik? No Service discovery for you [4]
Should I go on?
Yeah, it's rootless, and I like the idea podman and buildah represent.
What I don't like is the way they break at least part of the ecosystem.
Podman supports pods, hence the name. It follows the k8s approach, and it’s nowhere close to a drop-in replacement for docker-compose, but managing multiple related containers is supported as a core feature.
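For anyone who hasn't seen the pod workflow, a rough sketch (image and pod names here are just examples, not anything from a real setup):

```shell
# Create a pod; containers added to it share a network namespace,
# so they can reach each other over localhost
podman pod create --name myapp -p 8080:80

# Add related containers to the pod
podman run -d --pod myapp --name db postgres:12
podman run -d --pod myapp --name web nginx:alpine

# Manage them as a unit
podman pod stop myapp
podman pod rm -f myapp
```

It's a different mental model from a compose file — imperative commands rather than one declarative YAML — which is part of why it isn't a drop-in replacement.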
To your point about the docker socket: I tried to set up Gitlab using podman, mainly for CI/CD, but it ended up being too much effort for a spontaneous hobby project. This aspect is severely lacking, I agree.
The lack of Compose and Traefik support is the main reason I don't switch my personal projects to podman (work stuff is all K8s).
That, and neither podman nor buildah being at least as easy to install on Ubuntu LTS as Docker.
Compose is just lovely for personal stuff. Swarm is also pretty decent (and easier to run for small clusters, if somewhat unstable networking-wise). Podman can't compare.
I really feel like Docker Compose and Swarm are underrated.
For running groups of related containers during development, Compose is wonderful - Compose files are simple and easy to understand even if you've never seen one before.
Swarm is also good for small-scale production deployments - it's just so simple to deploy and update. "secrets" and "configs" are also really useful, but of course I see the appeal of a centralised system such as Vault for complex deployments.
I've never tried to use Swarm at scale, so I don't know what kind of issues you might face that k8s would solve.
> Can’t I just do all what docker-compose does with a Makefile
There's a lot of value in just being able to run a set of unified Docker Compose commands and have things work the same in every case with the same YAML configuration.
But technically, yes, you could replicate that behavior. However, Docker Compose does a bunch of pretty nice things; for example, when you run docker-compose up it will intelligently recreate the containers that changed but leave the others untouched. Then there's the whole concept of override files, etc.
It would take a fair amount of scripting to emulate all of that behavior, along with getting all of the signal processing and the piping of multiple services to one terminal output working well. Even today Docker Compose has issues with that, after years of development.
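To make the override-file point concrete, here's roughly what that looks like (docker-compose.override.yml is the conventional default name; docker-compose.prod.yml is just an example):

```shell
# docker-compose merges docker-compose.override.yml automatically if present;
# additional files can be stacked explicitly, with later files winning:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

# "up" compares the desired config/images against the running containers
# and only recreates the services that actually changed
docker-compose up -d
```

Emulating just that diff-and-recreate behavior in a Makefile is already a nontrivial amount of scripting.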
You should run docker-compose with --verbose one day just to see what really happens under the hood. Compose is doing a lot. For example I wrote about this a while back. It also includes an example output of running a single service app with --verbose: https://nickjanetakis.com/blog/docker-tip-60-what-really-hap...
The benefit is standardization. If every team you join has a `docker-compose.yml` to run a local development environment, you know exactly what it is, how to add things to it, etc. If every team you join has a custom solution for an extremely lightweight development environment, then you need to be brought up to speed every single time.
It's a similar argument to "Why do you need Makefiles when you can just write a bunch of bash files and a meta-bash file to execute each sub script based on the stat results of a list of files?"
The recommended approach for managing groups of multiple containers with podman seems to be using pods (hence the name), which can be managed by Makefiles as you describe.
This is like asking "can't I just use a cellar + ice instead of using a fridge?".
Yes, through a lot of scripting, you can emulate docker-compose. But compose actually handles a lot of things, to the point that manual scripting doesn't really make any sense.
I don't understand how people manage multiple related containers -without- compose. It makes things so much easier.
Among other responsibilities, I currently manage a Docker Swarm deployment that serves between 10k and 100k users. I can’t compare to k8s since I lack experience with k8s, but I can say that ops on this service take approximately 1% of my time per month. After the initial learning curve, Swarm has proven quite stable and easy to work with. 10/10 would deploy with again (in fact, I will within the next week).
Fair points. Work is ongoing on podman-compose, though I haven't used it myself. Most of the routing work our initial users are focused on is around Kubernetes/OKD/OpenShift (and not Swarm), so that's where those routing issues are being solved.
I know someone worked on an Ubuntu build, but I'm not sure who is maintaining that. It would be a great place for people to contribute.
I found kompose didn't give me a fluid experience with any project I tried using it with. I got endless API incompatibility issues for some reason. I'd have to dig through the docs and rewrite the files kompose generated. Even then they wouldn't always work.
I attempted to use this for a couple of very simple projects and the conversion completely fell on its face. I tried four projects and none of them converted successfully.
We use it for production, spinning up hundreds of identical setups. We just baked an AMI with a docker-compose.yml with restart: always and it is very dependable.
It’s not even dependable for us on local setups; a “docker-compose up” that has to pull images will almost always throw an HTTP timeout exception from inside the Python client.
Hard to say. Our whole team experiences it, there are a dozen related issues in the issue tracker, and the docker-cli and Go bindings work fine while docker-compose is timing out. Based on the issues we’ve had and those filed in their issue tracker, there are lots of related problems, including some where the docker daemon (or perhaps the Docker for Mac VM) is unresponsive. We’ve definitely seen that, but as previously mentioned, we’ve also seen this issue while the daemon responds to other clients.
I think that removing the docker socket is a feature of podman. I never really understood the need for the docker daemon anyway, especially when you are building containers.
If you are only building a container, then you can use BuildKit. But there are plenty of instances where having the socket is extremely useful. For example, I've seen plenty of test suites run in a container on a dedicated host that use the Docker socket to build containers on many other hosts.
The Docker daemon used to be required, and it has always done a lot of things. I don't understand your comment, given that minimizing Docker today is easy because the functions of container operations have been broken out into separate tools over time. Part of Docker's success, I believe, is that you could do everything in a single install.
> But there are plenty of instances where having the socket is extremely useful.
Of course there are plenty of cases where the socket is useful, but you can have the daemon on top of tools that don't require one. Which, BTW, is the direction Docker is going in now that it is an industry standard.
>The Docker daemon used to be required. It has always done a lot of things.
But initially Docker did only two things: build a container and run a container. And neither requires a daemon.
> Part of Dockers success, I believe, is that you could do everything in a single install.
A single install has to do only with packaging, not with the architecture of Docker.
And I'm not sure why the OP's link is relevant. If you look at the blog post, it's all old architecture, and most of the first round of "why not Docker" points (a favorite way for RHEL employees to start a conversation about containers) are completely outdated and wrong as things stand today. It's actually a pretty entertaining read given that most of the points no longer apply.
I don't disagree with you, but two things worth considering:
1. Perhaps it's blog posts like these and innovation in RH tools that spurred docker into action making things better?
2. You've got to remember the target market where RH operates: huge scale, and often extremely high security. To you a daemon may not be a big deal, but to a bank that's an attack surface that is scary. You might not find the "docker has an unnecessary daemon" argument compelling, but many in the high security field definitely do.
Docker's architecture isn't reflective of that blog post anymore. Considering containerd is now the runtime, the arguments initially outlined for "why Docker isn't good in Red Hat's eyes" are no longer valid as the argument is laid out.
I don't feel Docker was incentivized to do anything based on RH's blog posts. I worked for Docker, and the joke internally was how hard RH went out of their way in a "me too" fashion to compare their homegrown products, which weren't offering any upside and offered a lot of replacements nowhere near feature parity. Dan Walsh was the butt of many jokes with his tirades about how one should never say Docker, and in turn probably exposed more people to Docker than he turned away.
High security? As I understand it RH doesn't have a way to guarantee containers that are requested to be run have been signed by appropriate release control. Docker can.
Since you're a RH employee maybe you can share the innovative reasoning why RH removed all Docker bits (that they could) from the OS and started telling customers Docker on RHEL wasn't supported? Sounds very anti-innovative to me, but would love your feedback.
I regret to inform you that you are not understanding this point regarding the daemon correctly. The containerd process is a replacement for dockerd; it's just a new daemon. I apologize that I didn't address this clearly in the blog - I was a bit flippant in mentioning only dockerd when I should have mentioned containerd.
I agree that when you're working within the Docker Swarm type community there was probably little incentive to do anything based on RH's blog post.
I'm very sorry to hear that Dan Walsh was the butt of many jokes at Docker. Dan Walsh did so much for the Docker community at the start - SELinux work being just one area of his many important contributions to making Docker successful.
We removed it because we decided to support one OCI-based container toolset in RHEL. Just as we dropped our homegrown open source cartridge approach in favor of Docker's approach in RHEL and OpenShift, we had to do the same here. We did this with our Gear versus Kubernetes versus Swarm versus Mesos decision too. All of these are really good technologies; I myself advocated some support for Mesos at one time. It wasn't about killing innovation - it was about engineering and support focus. This is certainly my personal opinion of our business decision, but it is shared by others I work with.
Thanks for sharing your perspective, I appreciate it!
> Since you're a RH employee maybe you can share the innovative reasoning why RH removed all Docker bits (that they could) from the OS and started telling customers Docker on RHEL wasn't supported? Sounds very anti-innovative to me, but would love your feedback.
I don't work on any of the podman/buildah/etc stuff so I can't speak about any of that as an employee since I have no special insight. (I just disclosed my employer previously so people can judge for themselves if my opinions/thoughts reflect a conflict of interest).
But speaking just as a fan of docker (and of podman too), I actually agree that removing the docker bits from RHEL isn't cool. I've used Fedora as my workstation for over 10 years now and were I still working for various start ups I'd have a harder time since I used Docker for containerizing and deploying our apps. I'd probably have to run Ubuntu in a VM so I could use "real" docker, which would make me sad.
As I understand it, though, it wasn't a decision they reached lightly. There were compatibility issues (such as cgroups v2) that would have held back other things. Sorry I can't be more specific; I just don't know enough about the background.
I do think the real or perceived animosity is counter-productive, though, and I agree the Dan Walsh stuff was a bit on the silly side. I mean, "docker" and "containers" really did become synonymous in many developer minds, but I don't see why that was a big deal (and it certainly wasn't unfairly earned).
Can you comment about the Docker closed API stuff? As I understand it, Docker (the company) never released the API to the community (or CNCF, etc) and that is part of the reason why RH didn't want to support it for reasons of legal ambiguity and not wanting to be on the hook for a specific vendor's implementation.
I think it's a shame there isn't more collaboration between the RH people and Docker people. The rift may not be reparable now, but I hope we don't end up with a huge divide in the community where all RH/Fedora people have to use podman and Ubuntu/Debian people use Docker, and each OS doesn't support the other. That would be bad for everyone.
Sorry this comment is going on forever, but lastly since you worked for docker, thank you! Docker revolutionized the way I did things and has made my life better. Regardless what happens, Docker will have a warm place in my heart.
The entire architecture in the blog post is wrong and thereby most of the bulleted reasons the author claims. The container runtime is now containerd and, yes, there have been many strides to run rootless [0].
Was just about to post a comment pointing this out, autoconfiguration based on information obtained from the docker socket is a killer feature for a lot of software like Traefik.
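To illustrate what gets lost without the socket: Traefik's Docker provider watches the socket and picks up routing rules directly from container labels, something like the following (Traefik v2 label syntax; the container name and hostname are made-up examples):

```shell
# Start a container that Traefik discovers automatically via the Docker
# socket - no Traefik config change or restart needed
docker run -d --name whoami \
  -l 'traefik.enable=true' \
  -l 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  containous/whoami
```

Without an API-compatible socket on the podman side, that whole discovery mechanism simply has nothing to talk to.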
- not when you're using docker-compose and have custom scripts for docker-compose
- lack of support from Linux distributions - only Fedora switched to Podman a few weeks ago, and a few people at work were abandoned with their problems, so they switched to Ubuntu and Arch
- still don't know how to self-host a podman/quay repository locally - only paid services are available, even for a homelab - https://quay.io/plans/
- k8s/k3s integration - broken (for me?), never managed to make it work locally in homelab
I see absolutely no reason to ever switch to podman and quay in their current state.
I work for Red Hat, but I find myself pretty centrist on this issue. There's good arguments on all sides. Red Hat isn't doing podman and buildah etc because they want to crush or destroy docker. There are legitimate arguments and they have been open about them.
One big one is security. You may think it's overly paranoid to be concerned about having a daemon (especially one running as root, tho rootless docker is either here or near), but keep in mind different people have different requirements. If you're a bank securing billions of dollars, that attack surface is scary.
I find this argument not very convincing, given that I have been in the audience when a Red Hat engineer giving an overview of OpenShift and related technologies opened the presentation by making the audience “swear” not to call things Docker containers, but just containers.
Are there things that would be better with a different model than runc:containerd? Sure. But is that really the primary factor here? I very much doubt it.
Red Hat and Google wanted Docker gone, and have spent the capital to do so. Good business move for OpenShift, GKE, and RHEL, but not necessarily in the long term interest of the open source community.
> Good business move for OpenShift, GKE and RHEL, but not necessarily in the long term interest of the open source community.
But same applies to Docker (the company). In the end, their decisions are also business moves, and they are not necessarily better for the open source community. And they have also proven that they don't always work in the interest of the community - remember the "I don't accept systemd patches" ?
Redhat continues to try and eliminate Docker as a competitive threat. Water is wet. They are winning, but it still leaves a bad taste in my mouth. This won't be good over the long run for the Linux community.
Huh. I think that Redhat's work here is a boon to the community. Docker was trying to become synonymous with containerization (they would have me say "dockerization"). We benefit from open specs and multiple implementations.
They are certainly trying to compete with Docker, but are doing so by implementing a (subjectively) superior architecture. The community is winning here.
I'd like to reiterate that the Podman and Buildah projects were started in order to address business needs for specific risk averse customers. The upstream work we had started with Docker still could not meet those requirements and so investment was started in alternative OCI based approaches that were essentially daemonless and were taking advantage of other enhancements in namespaces and CGroups.
> If you are a Docker user, you understand that there is a daemon process that must be run to service all of your Docker commands. I can’t claim to understand the motivation behind this but I imagine it seemed like a great idea, at the time, to do all the cool things that Docker does in one place and also provide a useful API to that process for future evolution.
I guess that would be for the Windows and macOS support, as it makes things easier to implement in a cross-platform way when you can just proxy the CLI commands to a daemon running in a Linux VM, even when you are on Windows or macOS.
There were a few reasons for this. The original golang version of docker required root, full stop. There was no difference between the client and server.
The first reason was to reduce the privileges of the client interface. This provided the possibility to reduce privileges and restrict what unprivileged users could do later on. The communication over the socket is just HTTP, which allows for remote management of Docker containers.
A second reason was to build a strong contract between the client and server. The client became syntactic sugar for the REST calls, which helped stabilize Docker. This would ultimately lead to enabling macOS and Windows support via a VM on the host.
Another major goal was to enable the docker in docker use case which helped significantly when developing on docker itself.
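The "it's just HTTP" part is easy to see for yourself (assuming a local daemon at the default socket path; the API is also versioned, so a `/v1.xx/` prefix works too):

```shell
# The docker CLI is sugar over this HTTP API - list containers by
# talking to the daemon directly over the unix socket:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# The same API can be bound to TCP for remote management
# (never do this on a real network without TLS):
# dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
```

That contract is also what made remote management and docker-in-docker feasible: any client that speaks the API works, whether it's next to the daemon or on another host.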
Since you were there, can you comment on discussions you might have had where tradeoffs were chosen which would have led to buildah instead of Docker, e.g. the prioritization of cross-platform support? Your comment seems written in a very after-the-fact way, but surely someone was agonizing over these choices before they became history.
Sorry for the slow response, just got off an airplane. These were not after the fact. Instead, we knew these were the potential benefits and we moved over. I personally wasn't happy about having to run a daemon to get a container, but we saw the benefits far outweighed the downside for the approach. The other approach on the table was for docker (pre client/server split) to continue requiring sudo. I don't recall anyone suggesting an alternative approach though.
Oh, on multi-arch support: this was purely about keeping focus and reducing the original scope. I shared the same concerns and brought up this exact topic with Solomon. A change here meant changes in Docker Hub and other key places, including devops, which would further burden an already stretched team.
I should review the comment a bit more carefully sometimes.
I've started using podman for my personal projects where I want to deploy just a single service in perhaps a couple of containers on a VM.
I rebooted a box one time, and some sort of tracking for the podman networking decided that my listening port was still in use even though no container was running. The only way I fixed it was to uninstall the networking tool podman uses (I forget its name; slirp4ns or something) and podman itself, then reinstall and start again.
I use Docker daily, professionally. It has its issues, but given the lack of docker-compose-like files and the anecdotal reliability issues I've seen, I can't see podman taking over from, or even competing with, Docker for some time yet.
I'm sure that project would benefit from your feedback and information on what looks like a bug. What did upstream Podman community say? Did they understand your issues? Were they able to reproduce the error? Were they able to fix it?
Honestly, I haven't been in touch with the community yet. I needed it working quickly so I solved my issue and I moved on. If it happens again, I'll be expecting it and I'll collect data to report it.
Does Red Hat have a solution for using Kubernetes (or OpenShift) in development? Something similar to Skaffold or Tilt? AFAIK, those tools depend on Docker to do builds.
What difficulty (or subpar experience) are you running into with CRC? Initially it was a bit weird, but getting up and running is fairly simple.
1) Go to Red Hat’s cloud site (linked on GitHub) and download the latest CRC release and your pull secret.
This does require having at least an empty Red Hat account, which may bother some.
2) Extract the release, run `crc setup`, and let it do its thing.
3) Run `crc start -p /path/to/pull/secret` and wait for it to finish getting up and running. The first time this may take 10 minutes; follow-up starts take about 4. You can pass other options as needed.
4) Run `eval $(crc oc-env)` and start working with developer:developer credentials (or the provided kubeadmin creds). Use `crc console` to get to the WebUI.
We use this on Linux and macOS here just fine for local development and testing. As always, though, YMMV. CRC still has some other quirks that need to be ironed out, but it is generally useable. So far for us the weird parts are the limited self-signed certs and lack of cluster metrics, but both are known issues to the project. And that you can’t have a VPN process running as the startup will restart network services on your system.
The article discusses this, but the big win is that podman runs "rootless" (as a regular, non-root user), which can reduce system security exposure when building images. It also doesn't require a daemon, which could be considered a single point of failure.
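Rootless here means the whole engine runs under your own UID, using user namespaces to remap IDs. A quick way to see it in action (assuming podman is installed and your user has subuid/subgid ranges configured, which distro packages normally set up):

```shell
# No sudo anywhere: the container process runs entirely under your user.
# Inside the container it still looks like root, thanks to UID remapping.
podman run --rm alpine id

# Inspect the user-namespace UID mapping podman set up; container UID 0
# maps to your unprivileged host UID
podman unshare cat /proc/self/uid_map
```

So a container escape lands an attacker in an unprivileged account rather than in a root-owned daemon, which is the risk-reduction argument in a nutshell.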
Thanks for all the feedback. I will try to address many of the comments in a follow up blog. I will also try to address some of them here in reply to the comments.
First I want to mention a couple of things:
1) My blog didn't recognize the diversity of meanings "Docker" has for the Docker community. This is a problem when there is confusion between Docker the company, the Docker community, Docker as a collection of products, and Docker as a single command line project/product. My blog was specifically focused on Docker CLI users - the Docker command line tool that so many of us grew to love. To say I "don't know or care how people use containers" is an unfortunate conclusion to draw. I'm sorry if my restricted usage made it seem that way. I will say I wrote all of the original Docker CLI manual pages, so I can claim a very deep knowledge of the Docker CLI. I had to test almost every aspect of the CLI in order to write those manual pages, and as a result I filed several bugs too. But I do understand that the Docker CLI is just one part of the tooling that many Docker community users take advantage of. My definition of Docker was limited in my blog; it did not address projects like docker-compose etc.
2) It is unfair to say that Red Hat employees set out to destroy/ruin/whatever Docker. Very early on we wanted to help the Docker community. Red Hat provided a lot of validation to the Docker community by jumping on board, providing a lot of technical expertise, and including it in RHEL and OpenShift. People like Dan Walsh and others tried very hard to explain both the enterprise features required by risk averse users and how to build a sustainable, inclusive community model. Unfortunately much of our enthusiasm to help make Docker successful, based on our proven track record, fell on deaf ears. Perhaps there was a suspicion that we were looking after our own self interests, but it really was a genuine effort to share our experiences in the community. Our open source first approach is always in the interest of the community and our customers, and we know that benefits us too. We know that strong, inclusive communities benefit everyone. We sometimes get this wrong, but most times it works out - consider our move from our OpenShift cartridges technology to Docker. We didn't try to kill Docker; we knew it had the right approach. We wanted to make it better through open source community contributions, and we invested in Docker very heavily. Eventually some of our customers' concerns with security could not be met with Docker's daemon approach (be it dockerd or containerd), and so we had to address those requirements.
I have continued to talk about the value of Docker to the container community and how they revolutionized the industry because of their unique value add on Linux containers.
3) There are areas that Podman still needs to address. Some have been worked on - podman-compose, a Mac client, etc. Plenty of work remains to be done. If you're interested, please consider contributing to Podman (libpod), podman-compose, etc. at https://github.com/containers
Podman doesn't support macOS, via a VM layer or natively, so that's a deal breaker for me. Docker is really about the developer desktop experience, which Red Hat (IMO) is not great at.
I wonder if users of non-RPM distros would be eager to use these tools. Even if these tools are a little better (maybe they are), my guess would be that only medium-sized K8s users may find it worthwhile to switch.
Large users usually build their own, and small users are lazy to chase incremental improvements (if they do change they'll move to public build services).
It looks like at least two people want buildah/podman on non-RPM distros. buildah recently entered Debian unstable and podman is in the process of being packaged:
(without first-hand experience with it) I could see it being popular with people not going all-in with containers. We have a few machines with services that are conventionally managed and a few services being stuffed into docker containers, and it feels slightly odd that some services are managed through systemd and some through docker commands.
I ended up writing a mostly-declarative tool for managing my mix of docker-composed and init-system-ed services at home when that got too unmanageable.
[1] https://github.com/containers/podman-compose
[2] https://github.com/portainer/portainer/issues/2991
[3] https://www.testcontainers.org/supported_docker_environment/
[4] https://github.com/containous/traefik/issues/5730