When you go down the rabbit hole of trying to self-host your internet life, two concerns arise quite rapidly:

  • How to respawn everything quickly from scratch if it fails
  • How to make sure you won’t lose your data

The first part is mainly covered by your use of Docker Compose, but the second one can be a bit tedious to think through.
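To give some context, this article assumes a setup along these lines: a single docker-compose.yml where every piece of state lives in a named volume. The service and volume names below are hypothetical, just to illustrate the shape of things:

```yaml
version: "3"

services:
  nextcloud:
    image: nextcloud
    volumes:
      # Named volume: this is what we will want to back up
      - nextcloud_data:/var/www/html

volumes:
  nextcloud_data:
```

With this layout, recreating the stack is a `docker-compose up -d` away; the named volumes are the only thing that truly needs backing up.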

In this article I will propose a backup scenario. Beware: this is in no way a guaranteed approach to backups, but rather an idea of how one could do this. I take no responsibility for any issue that may arise on your side while implementing this scenario.

Description of the Backup Scenario

In this scenario we will try to cover ourselves by following the 3-2-1 Rule:

  • 3 copies at least
  • 2 locations
  • 1 off-site

In order to follow this rule we have basically two needs:

  • A backup tool that we can trust
  • A way to synchronize the backup repository to another site

For the backup part we will be using Restic, a nice backup tool that encrypts the backups.

As for synchronizing the backup repository with an off-site location, we will use RClone. RClone is like rsync, but designed to talk to cloud backends like Backblaze, Mega, S3, and so on.

Initialization of the Restic repository

The first thing to do is to initialize our Restic repository. As per usual, we are going to use Docker images to run both tools. To do so, we are going to bind mount a place on the host file system where we will store the repository. So go to where you want to store it and run:

docker run --rm -ti -v $(pwd):/data restic/restic init --repo /data/my-restic-repo

You will be asked for a password to encrypt the repository (do not lose it), and you are then left with a new folder on your file system: my-restic-repo

Initialization of the RClone configuration

As with the Restic repository, we are going to use a Docker image for this, and we will also bind mount the host file system. The command to use is:

docker run --rm -ti -v $(pwd):/data rclone/rclone --config=/data/rclone-config config

I will not tell you what to configure for RClone, as it highly depends on where you want to store your backups, which cloud backend you use, and so on; you will find everything you need in the RClone documentation. That said, as a hint, in this article we will use the crypt backend in combination with another cloud backend. So it is the crypt remote that will be targeted, which itself targets the cloud backend I want to use, like crypt -> S3.
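For illustration, the config file generated for such a crypt -> S3 chain could look roughly like this. The remote names, bucket, keys, and obscured passwords below are all placeholders:

```
[s3-backend]
type = s3
provider = AWS
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
region = eu-west-1

[crypt-remote]
type = crypt
remote = s3-backend:my-backup-bucket/restic
password = <obscured password>
password2 = <obscured salt>
```

Anything synced to crypt-remote: is encrypted client-side by RClone before it ever reaches the S3 bucket.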

Configure volumes and exclusions

Now that our tools are configured, we want to set the list of Docker volumes that will be backed up. To do that, we will simply create a file named volumes and list in it the volumes from our docker-compose.yml file that we want backed up. We chose this approach because, as of today, it is not possible to add labels to already created Docker volumes. There is work in progress on that front; once it is released, we will be able to tag the volumes directly in the docker-compose.yml file and simply list all Docker volumes filtered by that tag, which would be more convenient.
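For example, if our docker-compose.yml declares volumes named nextcloud_data and nextcloud_db (hypothetical names), the volumes file is simply one volume name per line:

```
nextcloud_data
nextcloud_db
```

Note that the names are the ones from the volumes: section of the compose file, without the project-name prefix; the backup script will add that prefix itself.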



Now that we listed the volumes to be backed up, we will configure exclusions as there are probably files or folders in those volumes that we don’t really want to store (tmp files, transfer files, whatever bulky unimportant files).

In order to do this, we will create an exclusions file and list all exclusions following the Restic exclusion format.
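As an illustration, an exclusions file could look like this. The patterns below are only examples, adapt them to your own data:

```
# one Restic exclude pattern per line, comments start with #
*.tmp
**/cache
**/*.log
```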



Writing the passwords

In order to provide the encryption passwords, both for Restic and for the RClone configuration file, we will write them down in two different files. This is a very simple way to deal with the matter; one could use Docker Secrets, Vault, or something similar to store those passwords more securely. For the example we will store them in a restic-password and an rclone-password file respectively.
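A minimal sketch of writing those files (the password values here are obviously placeholders, pick strong ones):

```shell
# Write each password to its own file, without a trailing newline
# (placeholder values, replace with your real passwords)
printf '%s' 'my-restic-password' > restic-password
printf '%s' 'my-rclone-password' > rclone-password

# Restrict read access to the owner only
chmod 600 restic-password rclone-password
```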

The backup script

Okay, we are good to take it to the next step. We have our volumes and exclusions set up, the passwords, the RClone config and the Restic repository. Now we need a script. Here is an example bash script that will:

  1. Stop all our containers
  2. Backup the listed volumes
  3. Prune the old backups following a given retention policy (see the Restic documentation on forget and prune). We are going to keep 7 daily, 4 weekly, and 12 monthly snapshots.
  4. Clone the Restic repository to an off-site location with RClone
  5. Start all the containers back up


#!/usr/bin/env bash

# Adjust these variables to your setup (the values below are examples)
COMPOSE_FILE_PATH="/opt/stack/docker-compose.yml"
COMPOSE_PROJECT_NAME="stack"
MACHINE_NAME="$(hostname)"
VOLUME_LIST_PATH="/opt/backup/volumes"
RESTIC_REPOSITORY_PATH="/opt/backup/my-restic-repo"
RESTIC_EXCLUSION_FILE_PATH="/opt/backup/exclusions"
RESTIC_PASSWORD_PATH="/opt/backup/restic-password"
RCLONE_CONFIG_PATH="/opt/backup/rclone-config"
RCLONE_REMOTE="crypt-remote"

function stop_all_containers {
  echo "Stopping all running containers"
  COMPOSE_HTTP_TIMEOUT=200 docker-compose -f $COMPOSE_FILE_PATH stop
}

function list_all_volumes {
  # Build the list of -v flags from the volume names listed in the volumes file
  local result=$1
  local list=""
  local volumes=($(cat $VOLUME_LIST_PATH))
  for volume_name in "${volumes[@]}"; do
    list="${list} -v ${COMPOSE_PROJECT_NAME}_${volume_name}:/source/${volume_name}"
  done
  eval $result="'$list'"
}

function backup_volumes {
  echo "Retrieving volumes list"
  list_all_volumes volumes_list

  echo "Starting backup"
  docker run --rm \
          -v $RESTIC_REPOSITORY_PATH:/data/restic_repository \
          -v $RESTIC_EXCLUSION_FILE_PATH:/data/excludes \
          -v $RESTIC_PASSWORD_PATH:/data/password \
          $volumes_list \
          restic/restic -r /data/restic_repository backup /source --exclude-file=/data/excludes --password-file=/data/password --host $MACHINE_NAME
}

function prune_old_backups {
  echo "Pruning old backups if needed"
  docker run --rm \
          -v $RESTIC_REPOSITORY_PATH:/data/restic_repository \
          -v $RESTIC_PASSWORD_PATH:/data/password \
          restic/restic -r /data/restic_repository forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --password-file=/data/password --host $MACHINE_NAME --prune
}

function clone_restic_repository {
  echo "Syncing the Restic repository on the cloud"
  docker run --rm \
          -v $RCLONE_CONFIG_PATH:/data/config \
          -v $RESTIC_REPOSITORY_PATH:/data/repo \
          rclone/rclone --config=/data/config --stats-log-level NOTICE --stats 45m sync /data/repo $RCLONE_REMOTE:
}

function start_all_containers {
  echo "Starting back all containers"
  COMPOSE_HTTP_TIMEOUT=200 docker-compose -f $COMPOSE_FILE_PATH start
}

echo "Starting backup process"
stop_all_containers
backup_volumes
prune_old_backups
clone_restic_repository
start_all_containers
echo "Done"

At this point, we have our entire backup process set up. Just try to run it to see whether you misconfigured anything; it should go through smoothly, and if not, make the needed adjustments. Once everything is working, congratulations: you are now able to back up your server and have the repository mirrored off-site!

We can now proceed to the last part!

Automating the backup process

Right now, in order to launch a backup, we have to call our script manually. But what we really want is for it to be run automatically by the server, say once a day. For this, we are going to use systemd timers. Long story short, we are going to create a systemd service that runs the backup script, and a timer associated with it to run it daily.

For that, all we need is a service file like this (backup.service; adjust ExecStart to wherever you stored the script):

[Unit]
Description=Run the backup of the server

[Service]
Type=oneshot
ExecStart=/path/to/backup.sh


And the associated timer (backup.timer):

[Unit]
Description=Daily run of backup script

[Timer]
OnCalendar=*-*-* 04:00:00

[Install]
WantedBy=timers.target


And we are done! All we have to do is to enable and start the timer (the service itself does not need to be enabled, as it is only ever triggered by the timer):

  • sudo systemctl enable backup.timer
  • sudo systemctl start backup.timer

And you are now done! Your backup script will run every day at 4am. If you want further information on how to see the logs and so on, look into systemctl and journalctl; you will find all you need there. Quick hint: journalctl -u backup.service --since today will give you everything that happened today for your backup.

– Amike