Migrating a WordPress database between domains

Occasionally I find myself developing a WordPress theme that then needs moving from development into production, or otherwise having to move a blog between domains.

Apart from transferring the files, including plugins, theme and the core WordPress installation, there is only one slight gotcha: the database. I’ve never been quite sure why, but WordPress stores the URL of the blog in its database, not once, but twice, under the siteurl and home options. For the blog to be migrated the database needs a slight tweak.

As always, create a backup before doing anything. This can be done with phpMyAdmin or using the mysqldump command like so:

$ mysqldump -uUSER -pPASSWORD DATABASE > /path/to/backup.sql

If the database is to be hosted on a different machine you can then import this dump using whichever method you’re comfortable with.
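For example, the dump can be loaded back with the mysql command-line client; the credentials and path here are placeholders as above, and this assumes the target database has already been created:

$ mysql -uUSER -pPASSWORD DATABASE < /path/to/backup.sql

Finally, run the following query on the database, substituting your own domain name: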

UPDATE
	wp_options
SET
	option_value = 'http://DOMAIN.TLD'
WHERE
	option_name IN ('siteurl', 'home');
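
To confirm both options were updated, a quick sanity check (assuming the default wp_ table prefix used above):

SELECT
	option_name, option_value
FROM
	wp_options
WHERE
	option_name IN ('siteurl', 'home');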

That’s it!

MySQL database backup with remote storage

Prevent a disaster

After reading Jeff Atwood’s backup failure last month I decided to finally get around to doing something I’d been intending to do “one of these days” but had in actual fact been putting off for years.

Here are the steps I took to ensure the databases on my webserver were backed up every night and copies of the dumps stored remotely.

On the remote storage machine

Generate an ssh key pair with an empty password and put the public key on the server to be backed up. This will give our script access to that server without requiring you to enter a password each time:

$ ssh-keygen -t rsa -f /home/steve/code/db_backup/id_rsa
$ scp /home/steve/code/db_backup/id_rsa.pub REMOTEHOST:

This script will fetch all the backups, logging in as the rsync user and using the private key just generated. It’s located at /home/steve/code/db_backup/sync_backups.sh:

#!/usr/bin/env bash
# pull last night's dumps from the webserver into local backup storage
rsync -e "ssh -l rsync -i /home/steve/code/db_backup/id_rsa" -avz REMOTEHOST:mysql/ /data/primary/backup/mysql/
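
Before handing the script to cron it's worth making it executable and doing one manual run to check the paths (these are the same paths assumed above):

$ chmod +x /home/steve/code/db_backup/sync_backups.sh
$ /home/steve/code/db_backup/sync_backups.sh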

Have this happen automatically daily at 12:20am:

$ crontab -l
# m h  dom mon dow   command
20	0	*	*	*	/home/steve/code/db_backup/sync_backups.sh
$

On the machine to be backed up

Create a new user and allow ssh access with the previously generated key:

# adduser rsync
# mkdir ~rsync/.ssh
# mv ~steve/id_rsa.pub ~rsync/.ssh/authorized_keys
# chown rsync:rsync ~rsync/.ssh/authorized_keys
# chmod 400 ~rsync/.ssh/authorized_keys
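
At this point the key should be accepted. From the storage machine, a test login like the following (using the same key path as before) should give a shell as the rsync user with no password prompt:

$ ssh -i /home/steve/code/db_backup/id_rsa rsync@REMOTEHOST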

This script will dump all available databases and is located at /root/bin/backup_databases.sh:

#!/usr/bin/env bash

# dump all available databases
# SJW

AUTH='-uroot -pROOTPASSWORD'
BACKUPS='/home/rsync/mysql/'

# list every database, skipping the virtual schema databases,
# which mysqldump cannot dump like ordinary databases
DBS=$(mysql $AUTH --skip-column-names -e 'SHOW DATABASES;' | grep -Ev '^(information_schema|performance_schema)$')

# one timestamped dump file per database
for DB in $DBS
do
	mysqldump $AUTH "$DB" > "$BACKUPS$(date +%Y%m%d%H%M)_$DB.sql"
done

# delete backups older than 5 days
find "$BACKUPS" -mtime +5 -type f -delete
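
As on the storage side, make the script executable and run it once by hand; a dated .sql file per database should appear in the rsync user's home:

# chmod +x /root/bin/backup_databases.sh
# /root/bin/backup_databases.sh
# ls /home/rsync/mysql/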

Have the script run nightly at 12:10am via cron:

# crontab -l
# m h  dom mon dow   command
10  0 * * * /root/bin/backup_databases.sh
#

Closing thoughts

This approach is relatively straightforward, everything happens automatically, and it could easily be extended to cover mailboxes, source code repositories, uploaded content and so on. However, for mission-critical databases master-slave replication may be more appropriate. For further reading you may enjoy JWZ’s thoughts on backups.