Once everything is up and running smoothly on the Drupal site, it is time for the warm fuzzy feeling of backups. Naturally, everyone has their own way of doing things; mine is to use a script called AutoMySQLBackup in conjunction with Duplicity and Amazon S3.
So what exactly does this do?
AutoMySQLBackup takes a dump of the database every day and stores the resulting files on the server. This gives me a complete snapshot of the database sitting on the server, waiting to get backed up.
Duplicity takes care of compressing, encrypting, and transferring these database snapshots (as well as all the other website files) offsite. In this case I decided to use Amazon S3 for offsite storage; Duplicity has built-in support for transferring files directly to S3, which makes it pretty simple.
AutoMySQLBackup
The setup of the MySQL backup script is pretty straightforward, and there is good documentation for it. The only little tweak I made was to create a new MySQL user called backupuser and grant it the permissions it needs on all the databases it should back up.
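Something along these lines should do it. This is just a sketch: backupuser and the password are placeholders, and I'm assuming SELECT and LOCK TABLES (plus SHOW VIEW if you use views) are enough for mysqldump-style backups of your databases.
# Create a read-only MySQL user for the backup script
mysql -u root -p <<'SQL'
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'backupuser'@'localhost';
FLUSH PRIVILEGES;
SQL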
Once you have tested the script manually and made sure that it is backing up your databases properly, you'll probably want to set it up as a cron job so you don't have to worry about it. This is the cron job I used to run the backup script once a day at 2:45 in the morning:
# Run MySQL backup job once a day at 2:45a.m.
# Backs up databases to /home/user/www/backups/mysql
45 2 * * * sh /home/user/scripts/automysqlbackup.sh.2.5 >/dev/null 2>&1
Duplicity
I asked a question on ServerFault about best practices for the duplicity script, and here is what I'm currently using:
#!/bin/sh
# Export some ENV variables so you don't have to type anything
export AWS_ACCESS_KEY_ID=[your-access-key-id]
export AWS_SECRET_ACCESS_KEY=[your-secret-access-key]
export PASSPHRASE=[your-gpg-passphrase]
# Your GPG key
GPG_KEY=[your-gpg-key]
# The source of your backup
SOURCE=/
# The destination
# Note that the bucket need not exist
# but does need to be unique amongst all
# Amazon S3 users. So, choose wisely.
DEST=s3+http://[your-bucket]/backups
# The duplicity command and options
duplicity \
    --encrypt-key=${GPG_KEY} \
    --sign-key=${GPG_KEY} \
    --volsize=250 \
    --include=/home/user \
    --exclude=/** \
    ${SOURCE} ${DEST}
# Reset the ENV variables. Don't need them sitting around
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export PASSPHRASE=
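One assumption baked into the script is that a GPG key pair already exists on the server for encrypting and signing the backup volumes. If you don't have one yet, generating it is a one-off step; the key ID it reports is what goes into GPG_KEY above.
# Generate a GPG key pair for duplicity to encrypt and sign with
gpg --gen-key
# List your keys to find the ID to use for GPG_KEY
gpg --list-keys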
The main thing to note is that I'm only backing up the user directory (/home/user) that contains all of my Drupal files and database dumps. I'm not backing up any other users' directories, system files, configuration files, etc.
Another thing to note is that volsize is set to 250MB. The default when I first installed duplicity was 5MB chunks, which means a 50MB backup would be broken into 10 chunks and transferred to Amazon S3 as separate files. To save on transfer requests I prefer the larger volume size (thanks to Olly from ServerFault for that tip).
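If you want to double-check that the include/exclude rules picked up what you expected, duplicity can list the contents of the latest backup. A quick sanity check, assuming the same bucket and that the AWS and GPG environment variables from the script are exported in your shell:
# List the files contained in the most recent backup at the S3 destination
duplicity list-current-files s3+http://[your-bucket]/backups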
As with the MySQL backup, you'll probably want to set up a cron job for this as well. I wanted to make sure that my databases were backed up before running the duplicity script, so I set this cron job to run an hour later:
# Run duplicity backup once a day after the mysql backup has run
# This will backup everything in /home/user offsite to the Amazon S3 Service
45 3 * * * sh /home/user/scripts/duplicity-backup >/dev/null 2>&1
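Finally, a backup is only useful if you can actually get the files back, so it's worth doing the occasional test restore. A minimal sketch, again assuming the AWS and GPG environment variables are exported; /home/user/restore-test is just a scratch directory of my own choosing:
# Restore the most recent backup into a scratch directory for testing
duplicity restore s3+http://[your-bucket]/backups /home/user/restore-test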