Incremental Backups For Frappe Using Restic


Built-in backup and restore functionality is definitely one of Frappe's nicer features. Having used bench restore to move servers and, once, to recover from a failed server, I can vouch for its efficacy.

A Frappe site is fully backed up if you have backups for the application code, DB, files and the site_config.json. All our application code is on GitHub and a couple of computers, so we will focus on the others.

The default bench backup creates snapshot tar.gz files for the DB, public files and private files. Depending on the selected routine, it will also take snapshots of the site_config.json file. You can even configure Frappe to store the backups on AWS S3 - this works really well and will email you if the backup fails for some reason.

site_config.json is quite important to back up as it contains the DB password as well as the encryption key needed to decode passwords stored in the DB.
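For illustration, here is how the interesting values can be pulled out with jq. The file below is a hypothetical, trimmed-down site_config.json - real files carry more keys, and these values are obviously made up:

```shell
# Hypothetical, trimmed-down site_config.json (real files contain more keys)
cat > /tmp/site_config.json <<'EOF'
{
  "db_name": "_a1b2c3d4e5f6",
  "db_password": "not-a-real-password",
  "encryption_key": "not-a-real-key"
}
EOF

# jq extracts individual values, e.g. the database name:
jq -r '.db_name' /tmp/site_config.json
```

Lose the encryption_key and every password stored inside the DB becomes unrecoverable, which is why this file belongs in the backup.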

However, this approach also results in a lot of wasted disk space and cost on S3 (or wherever you are keeping your backups). Each set of tar.gz files is a standalone backup and hence contains all the data needed to restore your site. Do this once a day and soon your S3 bill has 2 digits on it. Do this 4 times a day and you will be paying more for S3 than your servers. Sure, you could rotate the backups (manually, Frappe doesn't have built-in support) but we can do better - and spend fewer CPU cycles.

Now, with our servers seeing a lot more traffic and databases extending into the 10s of GB, I felt we needed a slightly more custom solution. I have used Tarsnap in the past and that was my preferred solution, but then I came across this podcast interview with the creator of Restic. A quick search on the Frappe forum led me to this post, which covers much the same ground as this blog post.

The main difference between the forum post and my setup is that we don't use bench backup at all. Instead, we use mysqldump to create the DB snapshot (same as bench backup, without the tar.gz) and back up the site directory in its entirety - files and config.

Specifically, the timestamped tar.gz files created by bench backup are seen as brand-new files by Restic on every run, which would defeat the purpose of using Restic.

Our approach is simpler and has the advantage of being better suited to Restic, which can be smart about changes in the files and do proper incremental backups.

The Restic docs do a great job explaining how to get started - from initialising a repo to doing backups and restores. Read that to get a better understanding of Restic before using/adapting this script.
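As a rough sketch of that workflow - the repository URL, bucket name and paths below are placeholders, not our actual setup:

```shell
# One-time: create the repository (an S3 bucket here; any supported backend works)
restic -r s3:s3.ap-south-1.amazonaws.com/my-backup-bucket init

# Take a backup - on later runs only changed chunks are uploaded
restic -r s3:s3.ap-south-1.amazonaws.com/my-backup-bucket backup ~/frappe-bench/sites/site_1

# Inspect and restore
restic -r s3:s3.ap-south-1.amazonaws.com/my-backup-bucket snapshots
restic -r s3:s3.ap-south-1.amazonaws.com/my-backup-bucket restore latest --target /tmp/restore
```

Restic reads the repository password from RESTIC_PASSWORD_FILE (or prompts for it) and, for S3 backends, the usual AWS credential environment variables.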

Backup Script

#!/usr/bin/env bash
set -e
# Keep your exports in /etc/profile or similar
export AWS_DEFAULT_REGION=ap-south-1
export RESTIC_PASSWORD_FILE=/path/to/restic.password
# DB_USER, DB_PASSWORD and RESTIC_URL are expected to be set in the environment
SITES=( site_1 site_2 site_3 )
for FRAPPE_SITE in "${SITES[@]}"; do
	echo "Backing Up $FRAPPE_SITE"
	DIR_PATH=$PWD/frappe-bench/sites/$FRAPPE_SITE
	DB_NAME=$(jq -r '.db_name' "$DIR_PATH/site_config.json")
	mysqldump --single-transaction --quick --lock-tables=false \
        -u "$DB_USER" -p"$DB_PASSWORD" "$DB_NAME" > "$DIR_PATH/private/backups/$DB_NAME.sql"
	# Initialise the repository on first run, then back up the whole site directory
	restic -r "$RESTIC_URL" snapshots || restic -r "$RESTIC_URL" init
	restic -r "$RESTIC_URL" backup "$DIR_PATH"
	touch "last_backup_$FRAPPE_SITE.timestamp"
done
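To schedule the script above, an hourly crontab entry might look like this - the script path and log location are placeholders, assuming the script was saved as /usr/local/bin/frappe-restic-backup.sh:

```shell
# m h dom mon dow  command
0 * * * * /usr/local/bin/frappe-restic-backup.sh >> /var/log/frappe-restic-backup.log 2>&1
```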

Here's a brief explainer:

I add this script to my cron with crontab -e as an hourly task. This means I have more frequent backups and pay less than with the default bench backup approach. Admittedly, I can no longer use bench restore, but on the other hand, all I have to do is restic -r <repo> restore <snapshot> --target <target-dir> followed by mysql <db-name> < backup.sql.
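Spelled out, a restore might look something like this - the repository URL, snapshot ID, site name, paths and database name are all placeholders for your own values:

```shell
# Pull the site directory out of the repository
restic -r s3:s3.ap-south-1.amazonaws.com/my-backup-bucket \
    restore a1b2c3d4 --target /tmp/restore

# Restic recreates the absolute path under the target; copy the site back into the bench
cp -r /tmp/restore/home/frappe/frappe-bench/sites/site_1 ~/frappe-bench/sites/

# Load the SQL dump that the backup script placed inside the site directory
mysql -u root -p _a1b2c3d4e5f6 < ~/frappe-bench/sites/site_1/private/backups/_a1b2c3d4e5f6.sql
```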