If you're not wedded to docker-compose, with podman you can instead use the podman kube support, which provides roughly docker-compose equivalent features using a subset of the Kubernetes pod deployment syntax.
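For example (container and file names here are just illustrative), you can export a running container or pod to kube YAML and replay it:

  # export a running container/pod as Kubernetes-style YAML
  podman kube generate my-container > my-app.yaml
  # recreate it from that YAML (on this or any other machine)
  podman kube play my-app.yaml

(On older podman these were spelled `podman generate kube` and `podman play kube`.)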
Additionally, podman has nice systemd integration for such kube services: you just write a short systemd config snippet and then you can manage the kube service just like any other systemd service.
Altogether a very nice combination for deploying containerized services if you don't want to go the whole hog to something like Kubernetes.
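If I remember right, recent podman even ships a podman-kube@.service template unit, so the snippet part is mostly written for you; something like this (the YAML path is illustrative):

  systemctl --user enable --now podman-kube@$(systemd-escape ~/my-app.yaml).service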
This is sort of "fixed" by using a Quadlet ".kube" file, but IMO that's a pretty weak solution and removes the "here's your compose file, run it" aspect.
Recently (now that Deb13 is out with Podman 5) I have started transitioning to Podman's Quadlet files, which have been quite smooth so far. As you say, it's great to run things without all the overhead of Kubernetes.
Docker has one of the most severe cases of not-invented-here. All solutions require a new DSL, a new protocol, a new encryption scheme, a new daemon, or some combination thereof. People are sleeping on using buildah directly, which OP alluded to with Bakah (but fell short of just using it directly).
Ever wish you could run multiple commands in a single layer? Buildah lets you do that. Ever wish you could loop or use some other branching in a Dockerfile? Buildah lets you do that. Why? Because they didn't invent something new, so the equivalent of a Dockerfile in buildah is just a script in whatever scripting language you want (probably sh, though).
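For example, a build is just a shell script (a minimal sketch; the image name and copied files are made up):

  #!/bin/sh
  set -eu

  ctr=$(buildah from docker.io/library/alpine:3.20)

  # run as many commands as you like; nothing is committed yet,
  # so all of this ends up in a single layer
  buildah run "$ctr" -- apk add --no-cache curl ca-certificates
  buildah run "$ctr" -- adduser -D app

  # ordinary shell loop, no DSL required
  for f in entrypoint.sh config.toml; do
    buildah copy "$ctr" "$f" /app/
  done

  buildah config --user app --entrypoint '["/app/entrypoint.sh"]' "$ctr"
  buildah commit "$ctr" localhost/myapp:latest
  buildah rm "$ctr"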
I came across this when struggling and repeatedly failing to get multi-arch containers built in CircleCI a few years ago. You don't have access to an arm64 docker context on their x86 machines, so you are forced to orchestrate that manually (unless your arm64 build is fast enough under qemu). Things rapidly fall apart once you are off the blessed Docker happy path because of their NIH obsession. That's when I discovered buildah, and it made the whole thing a cinch.
Buildah is elite tooling. It enables you to build with devices, caps, and kernel modules. Buildx acts like you should sign a waiver first, and the documentation for what you are trying to do is really weak, if it exists at all.
Multiple commands in a layer have been possible in a Dockerfile for a long time, since syntax 1.4(?), using a heredoc, which is just a script, netting you loops and branches etc.
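Roughly like this (illustrative):

  # syntax=docker/dockerfile:1.4
  FROM alpine:3.20
  RUN <<EOF
  set -eu
  apk add --no-cache curl
  # plain shell, so loops and branches work
  for d in /app /app/data; do
    mkdir -p "$d"
  done
  EOF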
On the QEMU thing... the only time I tried to cross-build arm containers from an x86 server was using whatever servers Github Actions supports... the x86_64 build was pretty normal for the project, but the qemu/buildx/arm64 build was about the same speed as an 8GB Raspberry Pi 4 building the same project... pretty disappointing.
I do a combo, sometimes even asking the LLM and starting a DDG search in parallel. It speeds me up. Sometimes the LLM is right, sometimes it's not. No problem, I'll get it to work. One should never do anything that one does not understand, but I get to the understanding faster since I can ask more in-depth follow-up questions of the LLM.
It is very stupid and is usually wrong in some meaningful way, but it can help break logjams in my thinking, giving me clues that I might be missing. Sort of like how writing gibberish is sometimes effective for writers to break writer's block.
It is also nice for generating boilerplate code in languages that I am not super familiar with.
The biggest problem I have with the current state-of-the-art LLMs is that errors compound, meaning I only really get somewhat useful answers with the first few questions or the first couple of times I ask it to review some code. The longer the session lasts, the more la-la-land answers I get.
It is a game of odds. I expect that with systemd and quadlets it is going to be particularly useless, because there just aren't that many examples out there. It can only regurgitate what it is trained with, so if something isn't widely used and checked into the codebases it was trained on, it can't really do anything with it.
Which is why it is nice for a lot of common coding tasks: a lot of code is just the same thing tens of thousands of people did before, in only slightly different contexts, and is mostly boilerplate.
yeah Quadlets are a pretty reasonable improvement.
It was introduced in Podman 4.4 which is circa 2023.
And it takes a while for podman to get up to date in non-Redhat-related distributions; Debian Stable was stuck on 4.3 until the Trixie release this month.
So unless you are using Fedora and friends, or something like Arch, podman users have a kinda hard time of it. Which is unfortunate.
Docker has a bit of an advantage here because they encourage you to use their packages, not the distribution's.
Here is an example Quadlet configuration I use for syncthing, which I run out of my home:
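It looks something like this (the exact image and volume lines are my approximation of the setup described below):

  [Unit]
  Description=Syncthing

  [Container]
  Image=docker.io/syncthing/syncthing:latest
  # re-pull a newer image on each start
  Pull=newer
  # bypass the slow rootless networking
  Network=host
  # config and the default Sync dir land under ~/.syncthing
  Volume=%h/.syncthing:/var/syncthing
  # make the real home directory reachable inside the container
  Volume=/var/home/:/var/home/

  [Service]
  Restart=always

  [Install]
  WantedBy=default.target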
This then gets dropped into ~/.config/containers/systemd/syncthing.container
And it is handled automatically.
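That is, after a daemon-reload quadlet generates a syncthing.service you can drive like any other unit:

  systemctl --user daemon-reload
  systemctl --user start syncthing.service
  systemctl --user status syncthing.service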
This configures the syncthing container to always get updated on each startup and bypasses the "rootless" networking by using host networking (rootless networking is limited and slow). The default Sync dir ends up in ~/.syncthing, and I can add more synced directories in my real home directory by pointing the syncthing web UI at /var/home/.
As you can see, the keys under [Container] are really just capitalized versions of docker/podman arguments.
Also, if you like GUIs, Podman Desktop has support for helping to generate quadlets. Although I haven't tried it out yet.
They both use all the Linux cgroup magic to containerize, so performance is roughly the same.
Incus is an LXD fork, and focuses on "system" containers. You basically get a full distro, complete with systemd, sshd, etc. etc. so it is easy to replace a VM with one of these.
podman and docker are focused on OCI containers which typically run a single application (think webserver, database, etc).
I actually use them together. My host machine runs both docker and incus. Docker runs my home server utilities (syncthing, vaultwarden, etc) and Incus runs a system container with my development environment in it. I have nested cgroups enabled so that the Incus container actually runs another copy of docker _within itself_ for all my development needs (redis, postgres, etc).
What's nice about this is that the development environment can easily be backed up, or completely nuked without affecting my host. I use VS Code remote SSH to develop in it.
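Roughly how such a container gets created, if you want to try it (image and names illustrative; security.nesting is the knob that lets docker run inside):

  incus launch images:debian/12 dev -c security.nesting=true
  incus exec dev -- apt-get install -y docker.io
  # cheap restore point before experimenting
  incus snapshot create dev clean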
The host typically uses < 10GB RAM with all this stuff running... about half what it did when I was using KVM instead of Incus.
That feature might be able to replace my docker usage on the host, so I don't need it and incus side by side. Which would be pretty neat.
Within the incus dev environment container though I'm pretty sure I want to keep docker, as I have a lot of tooling that expects it for better or worse (docker compose especially). It also doesn't appear incus integrates buildkit etc. so even if I used it here, I'd still need something else to _build_ OCI images.
If you are using podman "rootless" mode prior to 5.3, then typically you are going to be using the rootless networking, which is based around slirp4netns.
That is going to be slower and more limited compared to rootful solutions like incus. The easy workaround is to use 'host' networking.
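E.g. (image is just for illustration):

  # share the host's network namespace instead of slirp4netns
  podman run --rm --network host docker.io/library/alpine ip addr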
If you are using rootful podman, then the normal Linux network stack gets used.
Otherwise they are all going to execute at native speed since they all use the same Linux facilities for creating containers.
Note that from Podman 5.3 (Nov 2024) and newer they switched to "pasta" networking for rootless containers, which is a lot better performance-wise.
edit:
There are various other tricks you can use for improving podman "rootless" networking, like using systemd socket activation. That way, if you want to host services, you can set up a reverse proxy and similar things that run at native speed.
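The pattern is roughly this (names made up; the containerized app has to support systemd's LISTEN_FDS socket passing): you pair the quadlet with a .socket unit of the same name, systemd owns the listening socket, and podman hands the fd into the container, so traffic on it skips the user-mode network stack.

  # ~/.config/systemd/user/proxy.socket -- must match the proxy.service
  # that quadlet generates from proxy.container
  [Socket]
  ListenStream=8080

  [Install]
  WantedBy=sockets.target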
How would you configure a cluster? I’m trying to explore lightweight alternatives to kubernetes, such as docker swarm, but I think that the options are limited if you need clusters with at least the equivalent of pods and services.
I've found you can get pretty far with a couple of fixed nodes and scaling vertically before bringing in k8s these days.
Right now I'm running,
- podman, with quadlet to orchestrate both single containers and `pods` using their k8s-compatible yaml definition (a minimal .kube unit sketch follows after this list)
- systemd for other services - you can control and harden services via systemd pretty well (see https://news.ycombinator.com/item?id=44937550 from the other day). I prefer using systemd directly for Java services over containers, seems to work better imo
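For reference, the quadlet side of a pod can be as small as this (paths and names illustrative):

  # ~/.config/containers/systemd/myapp.kube
  [Kube]
  Yaml=%h/.config/containers/myapp-pod.yaml

  [Install]
  WantedBy=default.target

where the Yaml= file is an ordinary pod definition that `podman kube play` would accept.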
So, unless you have a service that requires a fixed number of running instances that is not the same count as the number of servers, I would argue that maybe you don't need Kubernetes.
For example, I built up a Django web application and a set of Celery workers, and just have the same pod running on 8 servers; an Ansible playbook creates the podman pod and runs the containers in it.
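The playbook boils down to something like this sketch using the containers.podman collection (hosts, image, and ports here are placeholders, not my actual setup):

  - hosts: appservers
    tasks:
      - name: create the application pod
        containers.podman.podman_pod:
          name: webapp
          state: started
          publish:
            - "8000:8000"

      - name: django container inside the pod
        containers.podman.podman_container:
          name: django
          pod: webapp
          image: registry.example.com/webapp:latest
          state: started

      - name: celery worker inside the same pod
        containers.podman.podman_container:
          name: celery
          pod: webapp
          image: registry.example.com/webapp:latest
          command: celery -A webapp worker
          state: started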
In the off chance your search didn't expand to k3s, I can semi-recommend it.
My setup is a bit clunky (a Hetzner cloud instance as controller and a local server as a node, connected through Tailscale). I get an occasional strange error where k3s pods fail to resolve another pod's domain until I re-create the DNS resolver system pod, and I have so far failed at getting Velero backups to work with k3s's local storage providers, but otherwise it is pretty decent.
K3s is light in terms of resources but heavy in operational complexity. I’m not looking for a smaller version of kubernetes, but for a simple way to run container-backed services when you’re not google but a small company: something that has few moving parts but is very reliable and low maintenance.
I've been back and forth on this for a long time, but I've just decided at this point that I either settle for podman or docker on a single host, or go to Talos / k3s / k8s. There's a lot of tools there, a lot of inertia, and eventually it's likely that I will need to solve the problems that k8s does.
I recall seeing a couple of blog posts lately about docker swarm and how it's better now. I can see a few references to it in the latest release notes, so I guess it's still getting some love.
I've been reading and watching videos about how you can use Ansible with Podman as a simpler alternative to Kubernetes. Basically Ansible just SSHs into each server and uses podman to start up the various pods / containers etc. that you specify. I have not tried this yet though so take this idea with a grain of salt.
ansible-playbook -i server1,server2,server3 deploy_fake_pods.yaml
ssh server1 sudo shutdown -h now
# aww, too bad, now your pods on server1 are no longer
With
kubectl apply -f deployment.yaml
for i in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
ssh $i sudo shutdown -h now
sleep 120
done
# nothing has changed except you have fresh Nodes
If you don't happen to have a cluster autoscaler available, feel free to replace the for loop with a | head -1 or a break, but the point is that the overall health and availability of the system is managed by Kubernetes; Ansible is not that.
Nomad is weird. Its OSS version is like a very limited trial of the paid version, at least the last time I tried it. To the point that it was more productive for me to install k3s instead.
That is what I do as well. I'd rather not have to remember more than one way of doing things so 'podman play kube' allows me to use Kubernetes knowledge for local / smaller scale things as well.