docker-compose is a very handy tool when you want to run multi-container setups. Using a very simple YAML description, you can develop stuff locally and then push upstream whenever you feel something needs to enter the CI/CD cycle.
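For context, a docker-compose.yaml for a hypothetical web + db pair (the service names and images here are just placeholders) looks something like this:

```yaml
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"   # expose the web container on localhost:8080
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential for local use only
```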
Sometimes I’ve even used it in production, when I wanted to coordinate certain containers on a single VM. But lately I’ve stopped doing so. The reason is simple:
All that we do is ultimately going to be deployed in a Kubernetes cluster somewhere.
Given the above, there’s no need to maintain two sets of YAMLs, one for the docker-compose.yaml and one for the Kubernetes / Helm manifests. Just go with Kubernetes from the beginning. Run a cluster on your local machine (Docker Desktop, microk8s, or other) and continue from there. Otherwise you risk running into the variation of “works on my machine” that is phrased as “but it works with docker-compose”. Well, there’s no docker-compose in production, so why should there be one on your machine? Plus, you’ll get a sense of how things look in production.
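To make that concrete, here is a minimal sketch of what the hypothetical web service from above could look like as the Kubernetes manifests you would maintain instead (the names, labels, and images are illustrative, not a prescribed layout):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# A Service playing the role of the ports: mapping in the compose file
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 8080
      targetPort: 80
```

Apply it with `kubectl apply -f` against your local cluster and you are exercising the same objects you will ship to production.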
If you’re really used to working with docker-compose, you can start with a very crude transition by assuming that you have a single Deployment and that every container you were going to deploy is a sidecar container in a single Pod. After all, just like a Pod, a docker-compose execution cannot escape a single machine (yes, I know about Swarm). Then you can break it down into separate Deployments, one per container you want to run.
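A rough sketch of that first step, reusing the hypothetical web + db services from earlier (all names and images are placeholders):

```yaml
# Crude first pass: every compose service becomes a container in one Pod,
# which, like a compose project, cannot span more than one machine.
apiVersion: v1
kind: Pod
metadata:
  name: compose-equivalent
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: db
      image: postgres:16
      env:
        - name: POSTGRES_PASSWORD
          value: example   # placeholder credential for local use only
```

One caveat: containers in a Pod share a network namespace, so web would reach db on localhost:5432 rather than via the db hostname that docker-compose’s default network provides.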
The above occurred to me when I was trying to deploy some software locally, before deploying it on Kubernetes, and tried to follow the vendor’s docker-compose instructions. They failed, I lost quite some time trying to fix the provided YAML, and then it dawned on me: I do not need it. I need to test in Kubernetes anyway.
So there: stop using docker-compose when you can. Everyone will be happier.