UESPWiki:Mirror Plan


This page details the plan to set up a live site mirror, both to provide a more scalable site backup and possibly to provide site content if the primary site ever becomes unavailable.

See UESPWiki:Mirrors for the current list of mirror and backup sites.

Overall Plan[edit]

  • Set up the mirror server to be capable of running the site (Apache, MySQL, PHP, MediaWiki, etc.).
  • Set up and test the database replication on the primary and mirror servers.
  • Set up an NFS (or similar) share of the Wiki images directory on the primary server (and any other directories that need to be shared).
  • Get rsync (or similar) working on the mirror server to keep an updated copy of the Wiki images path (and any others) from the primary's NFS share.

Mirror Static/Dynamic IP[edit]

The current server setup is simply a computer in my apartment connected via a Sympatico HSE DSL connection. The only issue with this is that a static IP address is not possible (the address changes whenever the computer reboots or is disconnected, and sometimes just changes suddenly).

One possible solution is to use the services of EasyDNS. If it works as advertised the cost is small ($20/year) and it would provide a way of associating the dynamic IP address of the mirror with a static domain name (mirror.uesp.net).

Note that for simple replication and backup purposes there is no need to assign a domain name to the mirror site. MySQL replication does not require the master to know the slave address and similarly for any direct file backup. All such requests originate from the slave so only the master's address/name needs to be known.
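Whatever update client is used, the core of such a scheme is just detecting that the public IP has changed and triggering a DNS update when it has. A minimal sketch (the cache path and the IP-printing command are illustrative assumptions, not EasyDNS's actual interface):

```shell
#!/bin/sh
# Sketch: detect a change in the mirror's public IP address.
# The cache path is illustrative; the actual dynamic-DNS update call
# would go where the "changed" branch is.
IP_CACHE=/var/tmp/mirror-last-ip

check_ip_change() {
    # "$@" is any command that prints the current public IP on stdout,
    # e.g. a curl fetch of an IP-checker page (hypothetical URL).
    current=$("$@")
    last=$(cat "$IP_CACHE" 2>/dev/null)
    if [ "$current" != "$last" ]; then
        echo "$current" > "$IP_CACHE"
        echo "changed to $current"   # trigger the dynamic-DNS update here
    else
        echo "unchanged"
    fi
}
```

Run from cron every few minutes, e.g. `check_ip_change curl -s http://example.com/ip` (checker URL hypothetical).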

MySQL Replication Setup[edit]

Master Setup[edit]

Edit my.cnf to set up binary logging on the master for the desired databases, for example:

  [mysqld]
  server-id     = 1
  log-bin       = /var/log/mysql/uesp-mysql-bin
  binlog-do-db  = uesp_net_wiki5

We'll manually specify which databases to replicate rather than logging everything, to prevent any test or backup databases from being logged (as long as we remember to add any new databases to the list when needed). Ensure that the output path for the log is writable by the mysql user:

  chown mysql:mysql /var/log/mysql

We'll add a new MySQL user for the slave replication:

  mysql -u root -p
  > GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'password';

Restart the DB and that's it on the master side:

  mysqladmin -u root -p shutdown
  mysqld_safe &

The master should now be set up for replication by the slave. You can check that the log file is being created in the directory specified by log-bin in my.cnf.

Slave Setup[edit]

Edit my.cnf on the slave server to enable slave replication, for example:

  [mysqld]
  server-id       = 2
  replicate-do-db = uesp_net_wiki5

Restart MySQL and the slave is ready to go.

Initial Database Setup[edit]

There are two ways to get the replication started. One is to use the SQL command on the slave:

  LOAD DATA FROM MASTER;

This causes the slave to copy the databases directly from the master. The problem with this is that the UESP databases are over 1GB in size, and doing so would cause a significant outage: the master server would be read-only during the transfer due to the table locks.

The second option is to perform a database backup on the master, noting the log file position for future use, and then restore those backups on the slave.

To create the backup on the master, first run the SQL commands:

  FLUSH TABLES WITH READ LOCK;
  SHOW MASTER STATUS;

Record the log filename and position shown by the status command. While the MySQL session is still open (do not exit or the tables will be unlocked), back up the databases from a second shell:

  mysqldump -u root -p --opt uesp_net_wiki5 > wiki5backup.sql

Note that the site will not allow any edits or statistic updates during this time (it should still display fine). Once complete, unlock the tables with the SQL command:

  UNLOCK TABLES;
Transfer the backups to the slave server and restore them:

  mysql -u root -p uesp_net_wiki5 < wiki5backup.sql

Starting the Replication[edit]

Once the initial databases have been copied to the slave from the master the replication process is ready to begin. Use the SQL command (host and credentials as created on the master; the log values here are examples):

  CHANGE MASTER TO
      MASTER_HOST='uesp.net',
      MASTER_USER='slave',
      MASTER_PASSWORD='password',
      MASTER_LOG_FILE='uesp-mysql-bin.001',
      MASTER_LOG_POS=373883456;

Exchange the log filename and position with the values previously recorded on the master during the backup. To start the replication:

  START SLAVE;
If everything is set up properly the slave should immediately start updating from the master. You can check for the existence of the files master.info and relay-log.info, which store the current replication state on the slave.
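The replication state can also be queried directly. A small helper (assuming the usual mysql client and credentials) that counts the "Running: Yes" lines in SHOW SLAVE STATUS; a healthy slave reports 2, one each for the IO and SQL threads:

```shell
# Count how many replication threads report "Running: Yes".
# A healthy slave prints 2 (Slave_IO_Running and Slave_SQL_Running).
slave_threads_running() {
    mysql -u root -p -e 'SHOW SLAVE STATUS\G' | grep -c 'Running: Yes'
}
```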


  • Full Database Backup on Master: About 30 seconds, mostly for the Wiki, not including compression. Note that the site will be mostly unavailable during this time due to the table locks and the load on the database server. The effects remain visible for a few minutes afterwards while the overloaded web server works through its backlog of waiting connections.
  • Compression of Database Backups: Around 6 minutes using gzip with default options.
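Given those timings, the dump and compression steps can also be combined into one pipeline so no uncompressed intermediate file is written (database name and credentials as in the examples above):

```shell
# Dump a database and gzip it in a single pipeline, avoiding the
# intermediate uncompressed .sql file.
dump_compressed() {
    db=$1
    outfile=$2
    mysqldump -u root -p --opt "$db" | gzip > "$outfile"
}
```

For example: `dump_compressed uesp_net_wiki5 wiki5backup.sql.gz`.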

NFS Setup[edit]


In order to set up an NFS share the partition holding the share must be set to use the acl option in fstab. The current UESP server has one hard disk with one main partition (ignoring the swap, boot and tmp partitions). While we could set the main partition to use acl, this is not recommended and could potentially render the server inaccessible.

A better, and safer, setup is to add a second partition with the acl option and serve the NFS shares from there. A second hard drive is also convenient in terms of providing space for backups and uploads, and it provides some degree of redundancy (if the main hard drive fails, backups on the secondary should still be fine).

Server Setup[edit]

The server will be the main uesp.net site which will share the Wiki images directory and any other paths that may need to be backed up.

Set up /etc/hosts.allow to only permit the loopback address to access the portmapper:

  portmap : 127. : ALLOW
  portmap : ALL : DENY

Set up /etc/sysconfig/nfs to include the following options:

  SECURE_NFS="no"

Set up /etc/idmapd.conf:

  Verbosity = 0
  Pipefs-Directory = /var/lib/nfs/rpc_pipefs
  Domain = uesp.net
  Nobody-User = nfsnobody
  Nobody-Group = nfsnobody

Ensure that the partitions to be shared have the rw and acl options set in /etc/fstab. For us this will simply be the /home2 mount, which is the second hard drive:

  LABEL=/home2       /home2     ext3    rw,acl         1 2

Remount /home2 for the settings to take effect:

  umount -v /home2
  mount -v /home2

or remount it in place:

  mount -v -o remount /home2

Ensure that the appropriate services are set up to start automatically when the server boots:

  chkconfig --level 0123456 portmap off
  chkconfig --level 345 portmap on
  chkconfig --level 0123456 rpcidmapd off
  chkconfig --level 345 rpcidmapd on
  chkconfig --level 0123456 nfslock off
  chkconfig --level 345 nfslock on
  chkconfig --level 0123456 nfs off
  chkconfig --level 345 nfs on
  chkconfig --level 0123456 rpcgssd off
  chkconfig --level 0123456 rpcsvcgssd off

To manually stop the unneeded services and start or restart the required ones, run the following commands:

  /etc/init.d/nfslock stop
  /etc/init.d/rpcgssd stop
  /etc/init.d/rpcsvcgssd stop
  /etc/init.d/portmap restart
  /etc/init.d/rpcidmapd restart

Some of these may fail depending on what is or isn't currently running. NFS should now be running on the server. To check, use the commands:

  rpcinfo -p
  netstat -tupa

Both server and client need identical usernames, UIDs, group names, and GIDs; for example:

 useradd -d /home/username -u 600 username
 passwd username

Hopefully at this point everything has been set up on the server to allow NFS shares, although no shares have yet been created.

Client Setup[edit]

For now, the client will be the mirror server, which has been set up as a MySQL replication slave.

Set up /etc/hosts.allow and /etc/idmapd.conf the same as on the server.

Create the mount points for the shares from the server:

  mkdir -m 755 /home2

Set up the boot scripts as was done on the server, except omit the nfslock on and nfs on commands. Manually start the NFS services as on the server:

 /etc/init.d/nfslock stop
 /etc/init.d/rpcgssd stop
 /etc/init.d/rpcsvcgssd stop
 /etc/init.d/portmap restart
 /etc/init.d/rpcidmapd restart

Check that NFS is operating properly and add the necessary users exactly as was done on the server. The client should now be ready to use an NFS share.

Read-Only Share[edit]

To create a read-only share on the server, modify /etc/exports to include a line like the following (substitute the client's actual address):

  /home2/wikiimages    192.168.0.2(ro,sync)
The share directory must be set with the appropriate permissions:

  chmod 1777 /home2/wikiimages

This gives the directory full read/write/execute permissions for everyone (777) as well as setting the sticky bit (1000).
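The effect of mode 1777 can be confirmed with stat on any scratch directory (GNU stat shown):

```shell
# Mode 1777: world-writable directory with the sticky bit set, so any
# user can create files there but only delete their own.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"    # prints 1777
rmdir "$d"
```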

Notify NFS about the new export:

  exportfs -rv
  exportfs -v

Check the NFS export status by:

  showmount -e

RSync Backups[edit]

Once the NFS share is set up we can start doing backups of the Wiki images directory and any other paths that are shared.

Only one command is needed:

  rsync -zrtv /mnt/wikiimages/* /home2/wikiimages

This copies all files recursively (-r) from the mounted NFS share into the given local path, preserving file modification times (-t) and compressing the transfer (-z). Verbose output is enabled with -v.

The initial backup of a 300MB image path takes only a few hours. Subsequent daily backups are much shorter at around 5 minutes as only the changed or new files need to be transferred.
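For unattended daily runs it is convenient to wrap the rsync call in a small script that records whether each run succeeded (the log path here is an illustrative choice, not part of the actual setup):

```shell
#!/bin/sh
# Wrapper for the nightly image backup: runs rsync and appends a
# timestamped result line to a log file.
LOGFILE=/var/log/wikiimages-backup.log

run_backup() {
    src=$1
    dst=$2
    if rsync -zrt "$src" "$dst"; then
        echo "$(date '+%Y-%m-%d %H:%M') OK: $src -> $dst" >> "$LOGFILE"
    else
        echo "$(date '+%Y-%m-%d %H:%M') FAILED: $src -> $dst" >> "$LOGFILE"
    fi
}
```

Called from cron as, e.g., `run_backup /mnt/wikiimages/ /home2/wikiimages`.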

NFS Setup #2[edit]

Server Setup[edit]

The server will be the machine which hosts the files; in our case this will likely be the Wiki images directory.

Modify /etc/exports to include the following line:

  /home2/wikiimages    192.168.0.2(rw,sync) 192.168.0.3(rw,sync)
Substitute the actual client IPs which will be accessing the NFS share (i.e., all content servers).

Modify /etc/hosts.deny with the following lines:

  portmap: ALL
  lockd: ALL
  mountd: ALL
  rquotad: ALL
  statd: ALL

and similarly /etc/hosts.allow (again with the actual client IPs):

  portmap: 192.168.0.2 192.168.0.3
  lockd: 192.168.0.2 192.168.0.3
  mountd: 192.168.0.2 192.168.0.3
  rquotad: 192.168.0.2 192.168.0.3
  statd: 192.168.0.2 192.168.0.3
Start the various services if not already running (paths may vary by distribution):

  /sbin/portmap
  /usr/sbin/rpc.mountd
  /usr/sbin/rpc.nfsd
  /usr/sbin/rpc.statd
  /usr/sbin/rpc.lockd
  /usr/sbin/rpc.rquotad
Run /usr/sbin/rpcinfo -p to ensure all the services are running. Add the previous programs to rc.local (or a similar script) to automatically start them on server boot.

If you change the /etc/exports file run the command /usr/sbin/exportfs -ra to update the change.

Client Setup[edit]

Start the portmap, rpc.statd and rpc.lockd services as was done on the server. The remote share on the server should now be available for mounting:

  mkdir /mnt/share1
  mount share.uesp.net:/home2/wikiimages /mnt/share1
  umount /mnt/share1

The user/group IDs on the server and all clients must match, otherwise there will be file permission errors.

Note: You will need to edit hosts.allow, as on the server, to add entries for all hosts accessing the NFS shared drive. If you don't, the lock manager may prevent other hosts from locking (and thus accessing) files on the share.

Replication Commands[edit]

Block Updates[edit]

To block updates on the master until the slave catches up, run the following on the master:

  FLUSH TABLES WITH READ LOCK;
  SHOW MASTER STATUS;
Record the log filename and position. On the slave run the command:

  SELECT MASTER_POS_WAIT('filename', position);

This will block until the slave and master are in sync. On the master, re-enable writes by running:

  UNLOCK TABLES;
RSync Over SSH[edit]

Another option for backing up files on the main site is using RSync over an SSH connection rather than setting up an explicit share.

Quick Test[edit]

A simple test for seeing if the necessary components are already setup on the server and client is to execute the command:

 rsync -avz -e ssh remoteuser@uesp.net:/home2/dhackimages /home2

from the client (backup) server. The remoteuser must have SSH access and read access to the specified directory on uesp.net, and the current user on the client must have write access to /home2. If this works, the given directory on the server (/home2/dhackimages/*) will be copied to /home2/dhackimages/ on the client.

Key Generation[edit]

Rather than using clear-text passwords for the SSH session we will use a public/private key pair for added security. On the client machine create a pair of keys:

  ssh-keygen -t dsa -b 1024 -f /home/someuser/uesp2net-rsync-key 

Do not enter a passphrase for the key or you will still be prompted when logging in using that key. Ensure that the private key file is not readable by any other users (it should be set this way automatically). Copy the public key to the primary server so it can be added to the /home/sshuser/.ssh/authorized_keys file:

  scp /home/someuser/uesp2net-rsync-key.pub sshuser@uesp.net:/home/sshuser/

Server Setup[edit]

A custom user with only read access can be created if desired:

  useradd sshuser
  passwd sshuser

Ensure that the public key previously created on the backup server is copied/appended to the /home/sshuser/.ssh/authorized_keys file.

  cd /home/sshuser
  mkdir .ssh
  chmod 700 .ssh
  mv ../uesp2net-rsync-key.pub authorized_keys
  chmod 600 authorized_keys

For additional security the authorized_keys file can be further modified to only allow that key to be used from the backup server and to only execute a specific program. Add the following to the beginning of the line in the key file (before ssh-dss), substituting the backup server's actual address:

  command="/home/sshuser/validate-rsync",from="mirror.uesp.net",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty 
The validate-rsync file ensures that only an rsync command is available using this connection and must be executable by sshuser:

  #!/bin/sh
  case "$SSH_ORIGINAL_COMMAND" in
  *\&*)
    echo "Rejected"
    ;;
  *\(*)
    echo "Rejected"
    ;;
  *\{*)
    echo "Rejected"
    ;;
  *\;*)
    echo "Rejected"
    ;;
  *\<*)
    echo "Rejected"
    ;;
  *\`*)
    echo "Rejected"
    ;;
  rsync\ --server*)
    $SSH_ORIGINAL_COMMAND
    ;;
  *)
    echo "Rejected"
    ;;
  esac

Once the above steps are complete we can attempt to rsync from the backup client to the server:

  rsync -avz -e "ssh -i /home/user/uesp2net-rsync-key" sshuser@uesp.net:/remote/dir /this/dir/ 

If the command completes successfully without prompting for a password then the key setup is working correctly.
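Once working, the transfer can be scheduled; hourly mirroring can be driven by a crontab entry on the backup server along these lines (paths and user as in the example command above):

```shell
# Illustrative crontab entry: run the rsync-over-ssh backup at the top
# of every hour.
0 * * * *  rsync -az -e "ssh -i /home/user/uesp2net-rsync-key" sshuser@uesp.net:/remote/dir /this/dir/
```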


History[edit]

4 February 2007
Initial replication test. Master log file uesp-mysql-bin.001 at position 373,883,456. 240MB compressed database backup size. Replication seems to be working fine. Will let it run for a while and see if any problems develop.

May-June 2007
Finalized the uesp2.net mirror domain with database replication and read-only display of a daily snapshot of the Wiki. Uses live database replication and hourly rsync/ssh for mirroring the necessary files.


  1. MySQL Database Replication HowTo
  2. MySQL Manual: Chapter 6: Replication
  3. Learning NFSv4 With Fedora Core 2
  4. Using RSync and SSH