There have been a few posts on Hacker News and Lobsters recently about how people host their own services. Some of them used Kubernetes, some plain Docker, and others are into Nix (which I will probably check out soon, too). This post is about how I manage my services on virtual machines using Docker and a few other tools. More specifically, I run most of my services on virtual machines I rent from a provider and some at home on a rather old server. Although I have to manage the virtual machines myself, I still use containers to deploy the actual services on them. This post covers how I deploy and manage these containers on virtual machines.
You might ask yourself: "Why not use Kubernetes?". Well, the answer is simple: I don't need any of its features and, most importantly, I don't want to deal with the operational and financial overhead of managing a Kubernetes cluster.
Overview of Components on a Single Machine
In the following I will describe the overall idea of my setup. The VM obviously runs a Linux distribution, in my case Debian in the cloud or Arch at home.
As can be seen in the image above, the setup consists of four major parts:
- A Docker Compose file for each service.
- Caddy as a reverse proxy to handle TLS termination.
- upstream-watch to check for updates to the service definitions.
- watchtower to update the images of the individual services.
Service Definitions
All services run in containers. I use a single git repository to manage all services of a single VM, where each service has its own folder. The following is an example directory structure:
.
├── service-1
│   ├── docker-compose.yml
│   └── README.md
├── service-2
│   ├── docker-compose.yml
│   └── README.md
└── README.md
There are two services, each in its own subfolder. Each of these services holds a README.md (which is not interesting) and a docker-compose.yml that defines and configures the containers.
An example service could look like this:
services:
  service-1:
    image: my.registry.dev/service-1:production
    container_name: andre-blog
    restart: always
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    ports:
      - 127.0.0.1:1337:1337
Note that the com.centurylinklabs.watchtower.enable label is used to enable automated updates of the service via watchtower, which will be covered later. Additionally, the service is only accessible from the local machine on port 1337. Again, this will be covered later.
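A second service follows the exact same pattern and just binds to a different local port. The following is a hypothetical example (the service name and image are placeholders; the port matches the reverse proxy configuration below):
services:
  service-2:
    # hypothetical second service, not one of my real deployments
    image: my.registry.dev/service-2:production
    restart: always
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    ports:
      - 127.0.0.1:1338:1338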
Reverse Proxy
As can be seen in the overview image, I use Caddy as a reverse proxy. I prefer Caddy over nginx or Apache because it is very easy to configure and handles automatic HTTPS via Let's Encrypt out of the box. All services are therefore only accessible via HTTPS.
Caddy is configured via a Caddyfile that looks like this:
service1.my-domain.dev {
    reverse_proxy 127.0.0.1:1337
}

service2.my-domain.dev {
    reverse_proxy 127.0.0.1:1338
}
In my setup, Caddy is installed directly on the host system and not running in a container. It is managed via a systemd service and the configuration file is located at /etc/caddy/Caddyfile. Both the systemd service and the Caddyfile are managed via Ansible. It should be mentioned that Caddy could also run as a container, but that would require a shared network between the Caddy container and the services, which I wanted to avoid to keep the setup a bit simpler.
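The Ansible part is nothing special. A minimal sketch of the relevant tasks could look like the following (the template name and the handler are placeholders, not my actual playbook):
- name: Deploy Caddyfile
  ansible.builtin.template:
    src: Caddyfile.j2            # hypothetical template in the playbook/role
    dest: /etc/caddy/Caddyfile
    owner: caddy
    group: caddy
    mode: "0644"
  notify: Reload caddy

# handler, e.g. in handlers/main.yml
- name: Reload caddy
  ansible.builtin.systemd:
    name: caddy
    state: reloaded
This way a change to the Caddyfile is rolled out and Caddy reloads its configuration gracefully.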
Upstream Watch
Upstream watch is a small tool I wrote to check for changes in an upstream git repository and pull them if there are any.[1] With this tool I can update the service definitions in the git repository and upstream-watch will automatically pull the changes and restart the services if necessary. To use it, you must provide two kinds of configuration files:
.
├── .upstream-watch.yaml
├── README.md
├── service-1
│   ├── .update-hooks.yaml
│   ├── docker-compose.yml
│   └── README.md
├── service-2
│   ├── .update-hooks.yaml
│   ├── docker-compose.yml
│   └── README.md
└── upstream-watch
The .upstream-watch.yaml is the main configuration file. You can set the retry interval (in seconds) and folders that should be ignored, i.e. changes in them will not trigger the hooks configured later.
single_directory_mode: false
retry_interval: 10
ignore_folders: [".git", ".test"]
Compared to the directory structure presented earlier, each service now additionally contains an .update-hooks.yaml, which is the configuration file of upstream-watch for this specific service. In case of an update to any of the files in a subfolder, upstream-watch will execute the hooks defined in the corresponding .update-hooks.yaml.
An example .update-hooks.yaml:
pre_update_commands: ["docker compose down"]
update_commands: ["docker compose pull"]
post_update_commands: ["docker compose up -d"]
upstream-watch will stop all containers, pull updates from the registry and start them again afterwards. Of course, you can do almost anything in these hooks, depending on the needs of your service.
Or the other way around to reduce the downtime:
pre_update_commands: ["docker compose pull"]
update_commands: ["docker compose down"]
post_update_commands: ["docker compose up -d"]
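Since the hooks are just shell commands, you can also use them for housekeeping. A hypothetical variant (assuming multiple commands per hook are allowed) that prunes old images once the new containers are up:
pre_update_commands: ["docker compose pull"]
update_commands: ["docker compose down"]
# the prune step is a hypothetical addition, not part of my actual setup
post_update_commands: ["docker compose up -d", "docker image prune -f"]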
Currently, upstream-watch must be started manually, but I plan to containerize it and remove that manual step.
Watchtower
Watchtower handles updating the running containers if there is a new version available under the same tag.[2] I decided to configure the services that should be updated via the label "com.centurylinklabs.watchtower.enable=true" in the docker-compose.yml files, allowing me to enable or disable automatic updates per service.
The full configuration of watchtower looks like this:
services:
  watchtower:
    image: containrrr/watchtower
    command: --label-enable --interval 30
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/config.json:/config.json
In this case it will check for updates every 30 seconds and only update the services carrying the label com.centurylinklabs.watchtower.enable=true. Additionally, it mounts the Docker socket and the Docker configuration file (which contains the registry credentials) to be able to interact with the container runtime.
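For a service that should not be updated automatically, for example because it is pinned to a specific version, you can simply omit the label or set it explicitly to false. A hypothetical example:
services:
  service-3:
    # hypothetical pinned service, excluded from automatic updates
    image: my.registry.dev/service-3:v1.2.3
    restart: always
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
    ports:
      - 127.0.0.1:1339:1339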
Backups
With this setup you automatically get backups of the service definitions and configuration, since they live in a git repository. But you still need to back up the data of each service. In my case that is done via a separate backup script that is run via a systemd timer. I've written a post about the backup setup here.
Conclusion
In my opinion this setup is a good compromise between the flexibility and isolation of using containers and the ease of use of a single virtual machine running Linux. The setup is independent of any cloud provider or its APIs and provides close to bare-metal performance. On the other hand, it provides no automatic scaling or failover, and you need to learn multiple tools like Ansible and Docker (Compose) to manage it. So if you need a simple setup that is easy to manage, and you don't need any of the features of Kubernetes, this setup might be for you.