Why use docker for a home server?

Home server so far

For quite a while I’ve been using docker for my home server (and some other servers). It’s there for a number of reasons: software isolation, data isolation, security, network setup.

The software isolation is the common reason for using docker. If you want to install some software, ideally you don’t want it to interact with other software. You don’t want potential version confusion in libraries for example. It is often fairly easy to avoid these issues with software installed on the host, but containers help ensure the issues can’t arise at all.

The data isolation is quite similar. You can limit access to various paths using system permissions and users, but docker instead allows mapping only a specific path into the container. So again, making a mistake impossible seems like a better idea than hoping you don’t misconfigure something.

Keeping with the theme of avoiding mistakes, the network separation is quite nice. I’m sure that the database will not get exposed to the internet or another service unexpectedly, because docker-compose ensures the traffic for it will only ever come from that one project. This is a very nice property.

Finally, all of this together, combined with limiting the exposure through seccomp, helps with the overall security posture. People often talk about whether or not docker is a security boundary, without realising things are not black and white. We traded a small increase in the namespace handling risk for a set of effective barriers, and in many environments it’s worth it.

There are some things worth remembering though. In this case: namespaces are available outside of containers, seccomp is common, and docker is really an api + 3 namespaces in a trench coat. Most of what it provides you can construct through unshare yourself, or through other tools targeting specific or generic isolation.

Still needed?

With all that context summarised, what’s the role of docker in my use case and do I still need it? I’m running some services with attached databases, some with file storage, some talking to each other on their own network. Most of them can be trivially moved to a simpler system. Specifically, most features I care about can be provided by systemd parameters and the reproducibility by NixOS.

It wasn’t always that way though. In the old days we had lxc containers and openvz, and if you wanted something that looked like a container, you needed to actually run a whole secondary system. Docker definitely helped by minimising that scope to just the app and its dependencies (more than a chroot, less than a VM). It enabled a single description for running a single app. Then docker-compose allowed spawning the whole mini-environment with all the required blocks. It was an amazing change and it made hosting multiple things on one host really nice - perfect for home usage.

Alternative approach

But I believe new tools make this even nicer, even if their interface is not as clean / accessible yet. The power-users should be happy though. First, for the isolation, systemd exposes the most interesting namespace features as a few easy settings. If I can prevent any escalation from a dedicated user (NoNewPrivileges), ensure no collisions with other apps (DynamicUser), and limit what networking can be done (RestrictAddressFamilies), then a lot of the local lockdown can be achieved with simple flags. Then if I can prevent private data from being visible (ProtectHome) and limit the state changes (ReadWritePaths), I don’t really need a full system packaged in an image. With a locked-down service I’m happy to run everything from the “host” (if you can still call it that).
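Put together, those flags amount to a fairly short unit file. A minimal sketch - the service name and paths are made up, but the hardening directives are the real systemd ones:

```ini
# /etc/systemd/system/myapp.service - "myapp" is a placeholder
[Unit]
Description=My self-hosted app

[Service]
ExecStart=/usr/bin/myapp
# Allocate a throwaway user at start; no collisions with other apps
DynamicUser=yes
# Block privilege escalation via setuid binaries and similar
NoNewPrivileges=yes
# Only plain IPv4/IPv6 and unix sockets - no raw packets, no netlink
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# Hide /home, /root and /run/user from the service
ProtectHome=yes
# Mount most of the filesystem read-only, except the state directory
ProtectSystem=strict
ReadWritePaths=/var/lib/myapp

[Install]
WantedBy=multi-user.target
```

`systemd-analyze security myapp.service` will score how locked down the result actually is, which is a nice feedback loop the docker approach doesn’t give you.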

Second, the software distribution - docker definitely made things easier to ship. No library collisions, isolated configuration, a single source of updates - it all made ops smoother. On the other hand, having docker as a layer that exists on its own is weird outside of a cluster setup. Many images ship with their own init system to correctly manage multiple processes and switch users, which you still need to configure. (looking at you, linuxserver) On a server with limited resources, I don’t want to run s6 or supervisord and a bash wrapper script for each service. This all adds up and most often is not shared between images.

The alternative we get today is either running full NixOS or the software itself from nixpkgs. This still allows installation of any software without version collisions. It allows (relatively) easy upgrades and rollbacks. It makes sure that services share all the libraries they can, so they can all use a single copy of libc. And everything is available directly, with full configuration through files, environment variables, or any other way the software allows. Each service can then be spawned directly from systemd without worrying about the process ownership moving from init to the docker daemon and all the fun that causes. (yes, podman solves that one too, go try it)
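As a sketch of what that looks like in practice - a NixOS configuration fragment. The hostnames and the app are placeholders, but the option structure is standard NixOS: a packaged service is one `enable` line, and custom software becomes a plain systemd service with the hardening flags from earlier:

```nix
{ pkgs, ... }:
{
  # A service packaged in nixpkgs: systemd units,
  # users and state directories are generated for you.
  services.nginx = {
    enable = true;
    virtualHosts."myapp.home.example" = {
      locations."/".proxyPass = "http://127.0.0.1:8080";
    };
  };

  # Custom software runs straight from nixpkgs as a systemd unit.
  systemd.services.myapp = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.hello}/bin/hello";  # placeholder package
      DynamicUser = true;
      NoNewPrivileges = true;
      ProtectHome = true;
    };
  };
}
```

A `nixos-rebuild switch` applies the whole thing atomically, and `nixos-rebuild switch --rollback` undoes it - which covers most of what image tags were doing for you.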

Both the “I want to run a specific version” and “I just want to play around” use cases are handled by either installing a service more permanently, or by running nix-shell -p something and getting whatever the latest version is for your environment.

Finally, getting rid of the docker layer makes some networking use cases simpler. Broadcasting and listening on raw interfaces “just works”, so upnp, vpn, ad-hoc port binding, or mdns are possible without any special configuration or workarounds.

What’s lost

Now with all the things that get better, there are some things which don’t have a great interface yet. The major one is a replacement for docker-compose. You can group the services, make them talk to each other, and so on. But, for example, setting up multiple databases is more manual with the systemd/nix combo. You’ll need to either ensure correct privileges on a shared instance, or do extra configuration to run two daemons in parallel. With multiple instances you also need to keep the ports consistent for the backend services - no more “just connect to hostname db on the default port”. It’s not a showstopper though, and I expect that some tool will appear to deal with this scenario in the future.
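For the shared-instance route, NixOS at least makes the privilege setup declarative. A sketch using the postgresql module options (the service names are made up; `ensureDBOwnership` is available on recent NixOS releases):

```nix
{ ... }:
{
  services.postgresql = {
    enable = true;
    # One shared daemon; one database and a matching
    # system user per service, created on activation.
    ensureDatabases = [ "wiki" "rss" ];
    ensureUsers = [
      { name = "wiki"; ensureDBOwnership = true; }
      { name = "rss"; ensureDBOwnership = true; }
    ];
  };
}
```

Each service then connects over the local unix socket as its own user, so the isolation is per-database rather than per-daemon.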

Another nice thing lost is the container labels. For example, this prevents traefik from configuring itself automatically through the docker configuration. Then again, systemd socket/service descriptions are still just text. With more sockets handled on the systemd side, we may still see a systemd-based auto-configuration module for traefik. In the meantime, traefik, caddy and others accept text configuration which is not complicated.
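For comparison, the static equivalent of a docker-label setup is short. A sketch of a traefik file-provider fragment (the hostname and port are made up):

```yaml
# dynamic.yml - loaded via traefik's file provider
http:
  routers:
    myapp:
      rule: "Host(`myapp.home.example`)"
      service: myapp
  services:
    myapp:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8080"
```

One such block per service replaces the labels, at the cost of editing a file instead of restarting a container.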

Conclusions

Personally, moving to a systemd/nix setup made my life easier than dealing with docker/docker-compose. One less layer to deal with is nice. I suspect this approach is only going to get more popular and easier with time for home server self-hosters, and some nice tools will appear in the ecosystem.

If you haven’t tried it yet, maybe give it a go.
