Ever wondered how to efficiently back up a Drupal installation on a Hetzner server – or, more generally, on a Debian Linux server – with a secondary backup server? Here is my solution.

Problem description

  • 2 servers (one running Debian with the live system that should be backed up)
  • MySQL must be backed up
  • the file system must be backed up
  • the second server (backup server) has no shell access, but SFTP

Solution

  • write a script to create compressed file backups (daily, weekly, monthly)
  • mount the backup server via Samba/CIFS
  • use automysqlbackup to create the MySQL backup
  • use rsync to sync the MySQL backup to the backup server

How

File Backup

Since rsync does not work directly with my backup server, and Google did not come up with usable results for my search requests (ok, maybe I was too stupid to google, or too eager to write this script myself), I wrote a script with the following requirements:

  • keep daily backups for a week
  • keep weekly backups for a month
  • keep monthly backups for a year
  • compress the backups
  • copy them via SFTP

SFTP problems

As soon as I started to write the script mentioned above, I realized that this would not work out easily, because normally every secure connection requires a password, which sometimes cannot be stored. But in this case there is a solution: Hetzner themselves wrote a good how-to. The only thing they missed in their guide is that you must create the SSH key without a password (I hate things without a password and thought the password would be stored in a secure place or something, which was a silly assumption looking back). For completeness, here are the commands I executed:

# enter an EMPTY passphrase when generating this. if a key file
# already exists, make sure no existing passwordless sftp
# connections depend on it before overwriting
ssh-keygen
ssh-keygen -e -f .ssh/id_rsa.pub | grep -v "Comment:" > .ssh/id_rsa_rfc.pub

# when executing the following commands you will be asked for the
# remote password, but this should be the last time!
echo "mkdir .ssh" | sftp u15000@u15000.your-backup.de
echo "put .ssh/id_rsa_rfc.pub .ssh/authorized_keys" | sftp u15000@u15000.your-backup.de

Testing

Now you should be able to connect to your backup server via sftp without being asked for a password:

sftp u15000@u15000.your-backup.de
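
Because the backup script later pipes its commands into sftp, it is also worth checking that non-interactive use works without a password prompt:

# should print the remote working directory without asking for a password
echo "pwd" | sftp u15000@u15000.your-backup.de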

Troubleshooting

  • make sure you created the key without a password
  • make sure the remote server address is correct
  • the .ssh folder might not exist on the remote server (you might have to create it first)
  • the file authorized_keys must follow RFC 4716, which means it must not contain a comment line
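
A quick way to verify the converted key is to look at its first line; RFC 4716 files start with a fixed header:

head -n 1 .ssh/id_rsa_rfc.pub
# expected output: ---- BEGIN SSH2 PUBLIC KEY ----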

The file-backup-script

The following backup script requires a passwordless SFTP connection, so make sure you followed the steps above. I created the file backup script in /usr/local/bin and added it to crontab.

nano /usr/local/bin/autofilebackup
chmod a+x /usr/local/bin/autofilebackup

Furthermore, I created a folder named “filebackup” on my backup server, directly in the root folder of a new SFTP connection.
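
If you want to script that one-time setup as well, the folder and the daily/weekly/monthly subfolders the script expects can be created over SFTP in one go (paths matching the configuration below):

echo "mkdir /filebackup
mkdir /filebackup/daily
mkdir /filebackup/weekly
mkdir /filebackup/monthly" | sftp u15000@u15000.your-backup.de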

#!/bin/bash

# configuration
LOCAL_BACKUP_PATH='/var/www/drupal.live'
REMOTE_BACKUP_PATH='/filebackup' # remote path where backups will be stored. make sure the subfolders "daily" "weekly" and "monthly" exist!
REMOTE_SERVER=u15000@u15000.your-backup.de

# initialize variables
dc=`date +'%s'`    # current time as a unix timestamp, used for the age checks below
BACKUP_FILE=live-`date +"%Y-%m-%d"`.tar.bz2

# create daily backup and upload it (reusing $BACKUP_FILE so the tar name
# and the uploaded name cannot diverge if the date changes mid-run)
tar -cjf /tmp/$BACKUP_FILE $LOCAL_BACKUP_PATH
echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/daily/$BACKUP_FILE" | sftp $REMOTE_SERVER

# rotate daily backups (delete backups older than 8 days = 691200 seconds)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/daily" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        # skip the first 3 words: they are the echoed prompt "sftp> ls <path>"
        [ $c -le 3 ] && continue
        # extract the YYYY-MM-DD part of the filename and convert it to a timestamp
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 691200` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create weekly backup on sundays (date +%u prints 7 for sunday)
if [ `date +%u` -eq 7 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/weekly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi

# rotate weekly backups (delete backups older than 31 days = 2678400 seconds)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/weekly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue   # skip the echoed prompt, as above
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 2678400` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# create monthly backup if 1st of month
if [ `date +%e` -eq 1 ]
then
        echo "put /tmp/$BACKUP_FILE $REMOTE_BACKUP_PATH/monthly/$BACKUP_FILE" | sftp $REMOTE_SERVER
fi


# rotate monthly backups (delete backups older than 365 days = 31536000 seconds)
c=0
for i in `echo "ls $REMOTE_BACKUP_PATH/monthly" | sftp $REMOTE_SERVER`
do
        c=`expr $c + 1`
        [ $c -le 3 ] && continue   # skip the echoed prompt, as above
        d=`echo $i | sed -r 's/[^0-9]*([0-9]+-[0-9]+-[0-9]+).*/\1/'`
        d=`date -d $d +'%s'`
        echo $i
        if [ `expr $dc - 31536000` -ge $d ]
        then
                echo "delete $i" | sftp $REMOTE_SERVER
                echo 'deleted'
        fi
done

# clean up local backup
rm /tmp/$BACKUP_FILE

Configuration

There are three variables that can be configured:

  1. LOCAL_BACKUP_PATH – this is where your Drupal installation (or whatever you want to back up) lives. As this is the source argument to tar, you can add multiple paths here, separated by spaces (see the example below).
  2. REMOTE_BACKUP_PATH – this is where your compressed backup will be saved on the backup server, relative to the root folder of a new SFTP connection. The script DOES NOT create the folders automatically!
  3. REMOTE_SERVER – these are the connection details for the SFTP connection
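
For example, to include the web server configuration next to the Drupal tree (the second path is purely illustrative, adjust it to your setup):

LOCAL_BACKUP_PATH='/var/www/drupal.live /etc/apache2'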

Testing

Execute the script above; depending on how big your folder is, it will be finished sooner or later. You should see the output of the put command and how fast the backup was transferred.
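
To double-check the result on the remote side, you can list the daily folder over SFTP:

echo "ls -l /filebackup/daily" | sftp u15000@u15000.your-backup.de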

MySQL Backup

To export the database to the filesystem, from where I can copy the backups to the backup server, I use automysqlbackup. I did not find an official guide on how to install automysqlbackup correctly, but it is in fact straightforward (a sketch of the commands follows the list):

  • download it from sourceforge to your server
  • extract it
  • move the executable to /usr/local/bin
  • move the configuration file to /etc/automysqlbackup
  • insert the backup destination, username and password (and the databases you want to back up) into the configuration file
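
In shell terms, the installation boils down to something like this sketch (the download URL and the file names inside the tarball are assumptions that vary between versions, so adjust accordingly):

# download and extract (check sourceforge for the current version)
wget -O /tmp/automysqlbackup.tar.gz "https://sourceforge.net/projects/automysqlbackup/files/latest/download"
mkdir /tmp/automysqlbackup
tar -xzf /tmp/automysqlbackup.tar.gz -C /tmp/automysqlbackup

# move the executable and the configuration file into place
mv /tmp/automysqlbackup/automysqlbackup /usr/local/bin/
chmod a+x /usr/local/bin/automysqlbackup
mkdir -p /etc/automysqlbackup
mv /tmp/automysqlbackup/automysqlbackup.conf /etc/automysqlbackup/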

I personally like to have both a local and a remote backup, so I used /var/backups/db as the destination for automysqlbackup. Now you should be able to run the MySQL backup:

/usr/local/bin/automysqlbackup
ls /var/backups/db
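
For reference, the settings I mean look roughly like this (variable names as in automysqlbackup 3.0; older versions use different names, so check the comments in your copy of the file):

# /etc/automysqlbackup/automysqlbackup.conf (names from version 3.0)
CONFIG_mysql_dump_username='backupuser'
CONFIG_mysql_dump_password='PASSWORD'
CONFIG_mysql_dump_host='localhost'
CONFIG_backup_dir='/var/backups/db'
CONFIG_db_names=( 'drupal' )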

The trouble with rsync – mounting the backup server via Samba/CIFS

rsync does not support SFTP as a protocol, so I had to use a trick I do not really like (but what choice did I have?): I created a local mountpoint for the backup server and used rsync locally.

Unfortunately, mounting a remote server manually means you would have to repeat the mount every time your server reboots. This is not what we want, so we also register the mount in /etc/fstab.

Also, mount.cifs was not installed on my Debian installation, so I had to install it first using apt-get.

apt-get install cifs-utils
nano /etc/fstab

The lines i added in this file are the following:

# /mnt/backup for backup-server and rsync of mysqlfiles
//u15000.your-backup.de/backup /mnt/backup cifs iocharset=utf8,rw,credentials=/etc/backup-credentials.txt,uid=0,gid=0,file_mode=0660,dir_mode=0770 0 0

This is also taken from the Hetzner guide.

The file /etc/backup-credentials.txt (mode 0600) has the following content (oh, we all love passwords stored in plaintext, yeah):

username=USERNAME
password=PASSWORD
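
To bring everything up without a reboot, lock down the credentials file, create the mountpoint and mount it from the new fstab entry:

chmod 600 /etc/backup-credentials.txt
mkdir -p /mnt/backup
mount /mnt/backup   # reads the options from the new fstab entry
df -h /mnt/backup   # should now show the remote share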

Putting it all together

Now we are ready to install our cron jobs:

EDITOR=nano crontab -e

I added the following lines:

0 4 * * * /usr/local/bin/automysqlbackup
30 4 * * * rsync -rtv --delete /var/backups/db/ /mnt/backup/mysqlbackup/
0 5 * * * /usr/local/bin/autofilebackup

You can see that I give every process 30 minutes to execute. This might be paranoid, and you might decide differently. Another problem I want to point out here is consistency: you will not get a coherent database and filesystem backup using this method. To achieve that, you would have to set your website to maintenance mode using drush, execute the backup scripts as fast as possible, and then end maintenance mode.
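
For a Drupal 7 site, such a consistent run could look like this sketch (it assumes drush is installed; the --root path is my example path from above):

# put the site into maintenance mode (Drupal 7 style)
drush --root=/var/www/drupal.live vset maintenance_mode 1

# run the backups back to back
/usr/local/bin/automysqlbackup
rsync -rtv --delete /var/backups/db/ /mnt/backup/mysqlbackup/
/usr/local/bin/autofilebackup

# end maintenance mode
drush --root=/var/www/drupal.live vset maintenance_mode 0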

Of course, for bigger websites this is no solution. There you would create a redundant system (using MySQL replication) and back up the filesystem using virtual machines and snapshots. VMware describes and offers such solutions, but there are others as well.

A good guide about rsync is this article.

Please let me know if you have any troubles or suggestions!