The goal is to get a mail server that will never be taken offline for nightly backups (as our old system used to be). We found Linux, XFS, LVM, and snapshots to be perfect for this situation.
What is snapshotting?
Snapshotting is a way to take a "point-in-time" image of a filesystem. What this allows you to do is access files that would normally be locked so you can back them up. The process is as follows (a scripted sketch of the whole cycle follows the list):
- Freeze the file system
- Take the snapshot
- Mount the snapshot
- Unfreeze the real filesystem
- Take a backup of the snapshot
- Unmount and destroy the snapshot
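Putting these steps together, the whole cycle can be scripted. The following is a minimal sketch, assuming the volume names and mount points used later in this article plus a hypothetical rsync backup target; adjust everything to your own setup:

#!/bin/sh
# Minimal sketch of the freeze/snapshot/backup cycle described above.
# All names below are illustrative assumptions.
FS=/var/CommuniGate            # live XFS filesystem (the origin)
ORIGIN=/dev/cgpro/prod         # origin logical volume
SNAP=/dev/cgpro/snap           # snapshot device created by lvcreate
MNT=/var/CGPro-Snap            # temporary mount point for the snapshot

xfs_freeze -f "$FS"                       # 1. freeze the filesystem
lvcreate -l 500 -s -n snap "$ORIGIN"      # 2. take the snapshot
mount -o nouuid,ro "$SNAP" "$MNT"         # 3. mount the snapshot
xfs_freeze -u "$FS"                       # 4. unfreeze the real filesystem
rsync -a "$MNT"/ /backups/communigate/    # 5. back up the snapshot (hypothetical target)
umount "$MNT"                             # 6. unmount ...
lvremove -f "$SNAP"                       #    ... and destroy the snapshot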
Snapshots
A wonderful facility provided by LVM is 'snapshots'. This allows the administrator to create a new block device which presents an exact copy of a logical volume, frozen at some point in time. Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data. When the snapshot device has been finished with, the system administrator can simply remove it. This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state - the VFS-lock patch for LVM1 makes sure that some filesystems do this automatically when a snapshot is created, and many of the filesystems in the 2.6 kernel do this automatically without patching.
Note: full snapshots are automatically disabled. If the snapshot logical volume becomes full it will be dropped (become unusable), so it is vitally important to allocate enough space. The amount of space necessary depends on how the snapshot is used, so there is no set recipe to follow. If the snapshot size equals the origin size, it will never overflow.
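Because a full snapshot is silently dropped, it pays to watch how full the snapshot is while it exists. Below is a rough sketch of a cron-able check; it assumes the LVM2 lvs reporting field snap_percent (newer releases call it data_percent) and a hypothetical snapshot device, so adjust both to your setup:

#!/bin/sh
# Warn before a snapshot fills up. Field name and device path
# are assumptions; adjust to your LVM2 version and volumes.
SNAP=/dev/cgpro/snap       # hypothetical snapshot device
LIMIT=80                   # warn above this fill percentage
USED=$(lvs --noheadings -o snap_percent "$SNAP" | tr -d ' ' | cut -d. -f1)
if [ "$USED" -ge "$LIMIT" ]; then
    echo "snapshot $SNAP is ${USED}% full" | mail -s "snapshot filling up" root
fi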
In LVM2, snapshots are read/write by default. Read/write snapshots work like read-only snapshots, with the additional feature that if data is written to the snapshot, that block is marked in the exception table as used and never gets copied from the original volume. This opens up many new possibilities that were not possible with LVM1's read-only snapshots. One example is to snapshot a volume, mount the snapshot, and try an experimental program that changes files on that volume. If you don't like what it did, you can unmount the snapshot, remove it, and mount the original filesystem in its place. It is also useful for creating volumes for use with Xen: you can create a disk image, then snapshot it and modify the snapshot for a particular domU instance. You can then create another snapshot of the original volume, and modify that one for a different domU instance. Since the only storage used by a snapshot is blocks that were changed on the origin or the snapshot, the majority of the volume is shared by the domUs.
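As a sketch of the Xen scenario just described (the volume group name and sizes are hypothetical), the base image is created once and each domU gets its own writable snapshot of it:

# Base image, installed once (names and sizes are illustrative)
lvcreate -L4G -n domu-base vg0
# ...install the guest OS onto /dev/vg0/domu-base...
# One writable snapshot per guest; each stores only its own changes
lvcreate -L1G -s -n domu-web /dev/vg0/domu-base
lvcreate -L1G -s -n domu-mail /dev/vg0/domu-base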
Note: with the current LVM2/device-mapper code, the origin can be grown, but not shrunk. With LVM1, you cannot resize the origin.
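For example, growing an origin volume under LVM2 and then growing the XFS filesystem on it might look like this (the extra size is an assumption; XFS can grow while mounted):

# Add 5GB to the origin LV, then grow the filesystem to match
lvextend -L+5G /dev/cgpro/prod
xfs_growfs /var/CommuniGate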
Freezing the file system
Freezing the file system prevents writes to the disk while the snapshot is taken. This suspends all activity to and from the filesystem to reduce the risk of problems. To do this we use xfs_freeze (included in the XFS userspace tools).
# xfs_freeze -f /var/CommuniGate
Taking the snapshot
When you take the snapshot, you're essentially creating a new LVM device that appears to be a duplicate of the "real" filesystem at a point in time. To do this we create another LVM device (using lvcreate) with the -s argument to indicate we want a snapshot and the -n argument to name it.
# lvcreate -l 500 -s -n snap /dev/cgpro/prod
Mounting the snapshot
Next, we mount the snapshot somewhere else; we use /var/CGPro-Snap. Please note that since we're using XFS we have to mount the snapshot with the "nouuid" option, otherwise XFS will think it's trying to mount the same filesystem twice (which, technically, it is). We also mount it read-only just to be safe.
# mount -o nouuid,ro /dev/mapper/cgpro-snap /var/CGPro-Snap
Unfreeze the filesystem
Now, you need to unfreeze the filesystem to resume I/O:
# xfs_freeze -u /var/CommuniGate
Do the backup
Now, back up /var/CGPro-Snap like you would any other directory.
Unmount and destroy the snapshot
Now we have to unmount the snapshot and destroy it. The reason we destroy it is that any I/O on the origin uses space in the snapshot to track the differences between the real and the snapshot filesystem. Plus, we've done our job, so there's no reason to keep it around.
# umount /var/CGPro-Snap
# lvremove -f /dev/cgpro/snap
Logical Volume Manager (LVM) provides the ability to take a snapshot of any logical volume for the purpose of obtaining a backup of a partition in a consistent state. As applications may access files or databases on a partition during a backup, some files may be backed up in one state while later files are backed up after an update has been made, leading to an inconsistent backup.
Traditionally the solution has been to mount the partition read-only, apply table-level write locks to databases, or shut down the database engine entirely; all measures which adversely impact availability (though not as much as data loss without a backup would). With LVM snapshots it is possible to obtain a consistent backup without compromising availability.
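For a database, you usually also tell the engine to quiesce for the instant the snapshot is taken, so the on-disk files are consistent. Here is a sketch of that pattern using PostgreSQL's exclusive base-backup mode (an assumption for illustration; other databases have their own equivalents):

# Mark the start of a base backup, take the snapshot, mark the end
psql -U postgres -c "SELECT pg_start_backup('lvm-snapshot');"
lvcreate -L500M -s -n dbbackup /dev/ops/databases
psql -U postgres -c "SELECT pg_stop_backup();"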
Please note that this information is only valid for partitions that have been created using LVM. LVM snapshots cannot be used with non-LVM filesystems.
An LVM snapshot works by logging the changes to the filesystem to the snapshot partition, rather than mirroring the whole partition. Thus when you create a snapshot partition you do not need space equal to the size of the partition that you are taking a snapshot of, but rather enough for the amount of changes it will undergo during the lifetime of the snapshot. This is a function of both how much data is being written to the partition and how long you intend keeping the snapshot. The longer you leave it, the more changes there are likely to be on the file system and the more the snapshot partition will fill up with change information; the higher the rate of change on the partition, the shorter the lifespan of the snapshot. If the amount of changes on the LVM partition exceeds the size of the snapshot, the snapshot is released (dropped).
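If a snapshot threatens to fill up before the backup finishes, it can be extended like any other logical volume, provided the volume group still has free space. A quick sketch, using the volume names from the example below:

# Check how full the snapshot is, then add 200MB of COW space
lvs /dev/ops/dbbackup
lvextend -L+200M /dev/ops/dbbackup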
Now we will show an example of how to make an LVM snapshot. Here we create a 500MB logical volume to use for the snapshot. This will allow for 500MB of changes on the volume we are taking a snapshot of during the lifetime of the snapshot.
The following command will create /dev/ops/dbbackup as a snapshot of /dev/ops/databases.
# lvcreate -L500M -s -n dbbackup /dev/ops/databases
lvcreate -- WARNING: the snapshot must be disabled if it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/ops/dbbackup"
lvcreate -- doing automatic backup of "ops"
lvcreate -- logical volume "/dev/ops/dbbackup" successfully created
Now we create the mount point and mount the snapshot.
# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only
After performing the backup of the snapshot partition we release the snapshot. The snapshot will be automatically released when it fills up, but maintaining it incurs a system overhead in the meantime.
# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y
lvremove -- doing automatic backup of volume group "ops"
lvremove -- logical volume "/dev/ops/dbbackup" successfully removed
This tutorial shows how you can create backups of LVM partitions with an LVM feature called LVM snapshots. An LVM snapshot is an exact copy of an LVM partition that has all the data from the LVM volume from the time the snapshot was created. The big advantage of LVM snapshots is that they can be used to greatly reduce the amount of time that your services/databases are down during backups, because a snapshot is usually created in fractions of a second. After the snapshot has been created, you can back up the snapshot while your services and databases are in normal operation.
Preliminary Note
I have tested this on a Debian Etch server with the IP address 192.168.0.100 and the hostname server1.example.com. It has two hard disks:
- /dev/sda (10GB) that contains a small /boot partition (non-LVM), a / partition (LVM, a little less than 10GB), and a swap partition (LVM)
- /dev/sdb (60GB), unused at the moment; will be used to create a 30GB /backups partition (LVM) and for the snapshots of the / partition (10GB - that's enough because the / partition is a little less than 10GB).
You don't necessarily need a second HDD for the snapshots - you can use the first one, provided you have enough free (unpartitioned) space left on it to create the snapshots (you should allow as much space for the snapshots as is used by the partition that you want to back up). And as mentioned before, you can use a USB drive for backing up the snapshots.
Create The /backups LVM Partition
(If you'd like to store your backups somewhere else, e.g. on an external USB drive, you don't have to do this.) Our current situation is as follows:
#pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name server1
PV Size 9.76 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 2498
Free PE 0
Allocated PE 2498
PV UUID vQIUga-221O-GIKj-81Ct-2ITT-bKPw-kKElpM
#vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 9.76 GB
PE Size 4.00 MB
Total PE 2498
Alloc PE / Size 2498 / 9.76 GB
Free PE / Size 0 / 0
VG UUID jkWyez-c0nT-LCaE-Bzvi-Q4oD-eD3Q-BKIOFC
#lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID UK1rjH-LS3l-f7aO-240S-EwGw-0Uws-5ldhlW
LV Write Access read/write
LV Status available
# open 1
LV Size 9.30 GB
Current LE 2382
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID 2PASi6-fQV4-I8sJ-J0yq-Y9lH-SJ32-F9jHaj
LV Write Access read/write
LV Status available
# open 2
LV Size 464.00 MB
Current LE 116
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:1
#fdisk -l
server1:~# fdisk -l
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 31 248976 83 Linux
/dev/sda2 32 1305 10233405 5 Extended
/dev/sda5 32 1305 10233373+ 8e Linux LVM
Disk /dev/sdb: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/dm-0: 9990 MB, 9990832128 bytes
255 heads, 63 sectors/track, 1214 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 486 MB, 486539264 bytes
255 heads, 63 sectors/track, 59 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
So /dev/sda contains the logical volumes /dev/server1/root (/ partition) and /dev/server1/swap_1 (swap partition) plus a small /boot partition (non-LVM).
(BTW, /dev/server1/root is the same as /dev/mapper/server1-root on Debian Etch. The first is a symlink to the second; I will use both notations in this tutorial. The same goes for /dev/server1/swap_1 and /dev/mapper/server1-swap_1.)
I will now create the partition /dev/sdb1 and add it to the server1 volume group, and afterwards I will create the volume /dev/server1/backups (which will be 30GB instead of the full 60GB of /dev/sdb so that we have enough space left for the snapshots) which I will mount on /backups:
fdisk /dev/sdb
server1:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 7832.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): <-- n
Command action
e extended
p primary partition (1-4)
<-- p
Partition number (1-4): <-- 1
First cylinder (1-7832, default 1): <-- [ENTER]
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-7832, default 7832): <-- [ENTER]
Using default value 7832
Command (m for help): <-- t
Selected partition 1
Hex code (type L to list codes): <-- 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): <-- w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
pvcreate /dev/sdb1
vgextend server1 /dev/sdb1
lvcreate --name backups --size 30G server1
mkfs.ext3 /dev/mapper/server1-backups
mkdir /backups
Now let's mount our /dev/server1/backups volume on /backups:
mount /dev/mapper/server1-backups /backups
To have that volume mounted automatically whenever you boot the system, you
must edit /etc/fstab and add a line like this to it:
vi /etc/fstab
[...]
/dev/mapper/server1-backups /backups ext3 defaults,errors=remount-ro 0 1
[...]
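To test the new entry without rebooting, you can let mount process /etc/fstab directly (a quick sanity check):

umount /backups     # unmount the volume first
mount -a            # mount everything listed in /etc/fstab
df -h /backups      # confirm the volume came back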
#pvdisplay
server1:~# pvdisplay
--- Physical volume ---
PV Name /dev/sda5
VG Name server1
PV Size 9.76 GB / not usable 0
Allocatable yes (but full)
PE Size (KByte) 4096
Total PE 2498
Free PE 0
Allocated PE 2498
PV UUID vQIUga-221O-GIKj-81Ct-2ITT-bKPw-kKElpM
--- Physical volume ---
PV Name /dev/sdb1
VG Name server1
PV Size 59.99 GB / not usable 0
Allocatable yes
PE Size (KByte) 4096
Total PE 15358
Free PE 7678
Allocated PE 7680
PV UUID cvl1H5-cxRe-iyNg-m2mM-tjxM-AvER-rjqycO
vgdisplay
server1:~# vgdisplay
--- Volume group ---
VG Name server1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size 69.75 GB
PE Size 4.00 MB
Total PE 17856
Alloc PE / Size 10178 / 39.76 GB
Free PE / Size 7678 / 29.99 GB
VG UUID jkWyez-c0nT-LCaE-Bzvi-Q4oD-eD3Q-BKIOFC
lvdisplay
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID UK1rjH-LS3l-f7aO-240S-EwGw-0Uws-5ldhlW
LV Write Access read/write
LV Status available
# open 1
LV Size 9.30 GB
Current LE 2382
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID 2PASi6-fQV4-I8sJ-J0yq-Y9lH-SJ32-F9jHaj
LV Write Access read/write
LV Status available
# open 2
LV Size 464.00 MB
Current LE 116
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:1
--- Logical volume ---
LV Name /dev/server1/backups
VG Name server1
LV UUID sXq2Xe-y2CE-Ycko-rCoE-M5kl-E1vH-KQRoP6
LV Write Access read/write
LV Status available
# open 1
LV Size 30.00 GB
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:2
Create An LVM Snapshot Of /
Now it's time to create the snapshot of the /dev/server1/root volume. We will call the snapshot rootsnapshot:
lvcreate -L10G -s -n rootsnapshot /dev/server1/root
The output of lvdisplay should look like this:
server1:~# lvdisplay
--- Logical volume ---
LV Name /dev/server1/root
VG Name server1
LV UUID UK1rjH-LS3l-f7aO-240S-EwGw-0Uws-5ldhlW
LV Write Access read/write
LV snapshot status source of
/dev/server1/rootsnapshot [active]
LV Status available
# open 1
LV Size 9.30 GB
Current LE 2382
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:0
--- Logical volume ---
LV Name /dev/server1/swap_1
VG Name server1
LV UUID 2PASi6-fQV4-I8sJ-J0yq-Y9lH-SJ32-F9jHaj
LV Write Access read/write
LV Status available
# open 2
LV Size 464.00 MB
Current LE 116
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:1
--- Logical volume ---
LV Name /dev/server1/backups
VG Name server1
LV UUID sXq2Xe-y2CE-Ycko-rCoE-M5kl-E1vH-KQRoP6
LV Write Access read/write
LV Status available
# open 1
LV Size 30.00 GB
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:2
--- Logical volume ---
LV Name /dev/server1/rootsnapshot
VG Name server1
LV UUID 9zR5X5-OhM5-xUI0-OolP-vLjG-pexO-nk36oz
LV Write Access read/write
LV snapshot status active destination for /dev/server1/root
LV Status available
# open 1
LV Size 9.30 GB
Current LE 2382
COW-table size 10.00 GB
COW-table LE 2560
Allocated to snapshot 0.01%
Snapshot chunk size 8.00 KB
Segments 1
Allocation inherit
Read ahead sectors 0
Block device 254:5
Next we create a mount point for the snapshot:
mkdir -p /mnt/server1/rootsnapshot
Then we mount our snapshot:
mount /dev/server1/rootsnapshot /mnt/server1/rootsnapshot
Then we run
ls -l /mnt/server1/rootsnapshot/
This should show all directories and files that we know from our / partition:
server1:~# ls -l /mnt/server1/rootsnapshot/
total 132
drwxr-xr-x 2 root root 4096 2007-04-10 21:02 backups
drwxr-xr-x 2 root root 4096 2007-04-10 20:35 bin
drwxr-xr-x 2 root root 4096 2007-04-10 20:25 boot
lrwxrwxrwx 1 root root 11 2007-04-10 20:25 cdrom -> media/cdrom
drwxr-xr-x 13 root root 40960 2007-04-10 20:36 dev
drwxr-xr-x 57 root root 4096 2007-04-10 21:09 etc
drwxr-xr-x 3 root root 4096 2007-04-10 20:36 home
drwxr-xr-x 2 root root 4096 2007-04-10 20:26 initrd
lrwxrwxrwx 1 root root 28 2007-04-10 20:29 initrd.img -> boot/initrd.img-2.6.18-4-486
drwxr-xr-x 13 root root 4096 2007-04-10 20:34 lib
drwx------ 2 root root 16384 2007-04-10 20:25 lost+found
drwxr-xr-x 4 root root 4096 2007-04-10 20:25 media
drwxr-xr-x 2 root root 4096 2006-10-28 16:06 mnt
drwxr-xr-x 2 root root 4096 2007-04-10 20:26 opt
drwxr-xr-x 2 root root 4096 2006-10-28 16:06 proc
drwxr-xr-x 3 root root 4096 2007-04-10 20:42 root
drwxr-xr-x 2 root root 4096 2007-04-10 20:36 sbin
drwxr-xr-x 2 root root 4096 2007-03-07 23:56 selinux
drwxr-xr-x 2 root root 4096 2007-04-10 20:26 srv
drwxr-xr-x 2 root root 4096 2007-01-30 23:27 sys
drwxrwxrwt 2 root root 4096 2007-04-10 21:09 tmp
drwxr-xr-x 10 root root 4096 2007-04-10 20:26 usr
drwxr-xr-x 13 root root 4096 2007-04-10 20:26 var
lrwxrwxrwx 1 root root 25 2007-04-10 20:29 vmlinuz -> boot/vmlinuz-2.6.18-4-486
So our snapshot has successfully been created!
Now we can create a backup of the snapshot on the /backups partition using our preferred backup solution. For example, if you'd like to do a file-based backup, you can do it like this:
tar -pczf /backups/root.tar.gz /mnt/server1/rootsnapshot
And if you'd like to do a bitwise backup (i.e., an image), you can do it like this:
dd if=/dev/server1/rootsnapshot of=/backups/root.dd
server1:~# dd if=/dev/server1/rootsnapshot of=/backups/root.dd
19513344+0 records in
19513344+0 records out
9990832128 bytes (10 GB) copied, 320.059 seconds, 31.2 MB/s
You could also use both methods to be prepared for whatever might happen to your /dev/server1/root volume. In this case, you should have two backups afterwards:
ls -l /backups/
server1:~# ls -l /backups/
total 9947076
drwx------ 2 root root 16384 2007-04-10 21:04 lost+found
-rw-r--r-- 1 root root 9990832128 2007-04-10 21:28 root.dd
-rw-r--r-- 1 root root 184994590 2007-04-10 21:18 root.tar.gz
Afterwards, we unmount and remove the snapshot to prevent it from consuming system resources:
umount /mnt/server1/rootsnapshot
lvremove /dev/server1/rootsnapshot
That's it, you've just made your first backup from an LVM snapshot.
Restore A Backup
This chapter is about restoring the /dev/server1/root volume from the dd image we created in the previous chapter. Normally you can restore a backup from the same running system if the volume that you want to restore doesn't contain system-critical files. But because the /dev/server1/root volume is the system partition of our machine, we must use a rescue system or Live-CD to restore the backup. The rescue system/Live-CD must support LVM.
To restore the /dev/server1/root volume, I boot the system from the Debian Etch Netinstall CD and type in rescue at the boot prompt:
Run mount and you should see that /dev/server1/backups is mounted on /target, so the dd image of the /dev/server1/root volume should be /target/root.dd. To restore it, we simply run:
dd if=/target/root.dd of=/dev/server1/root
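After the dd finishes, it's prudent to check the restored filesystem before rebooting; a minimal sketch, assuming the root volume carries ext3 as in this setup:

e2fsck -f /dev/server1/root     # force a full check of the restored filesystem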
Taking a Backup Using Snapshots
Following on from the previous example, we now want to use the extra space in the "ops" volume group to make a database backup every evening. To ensure that the data that goes onto the tape is consistent, we use an LVM snapshot logical volume.
A snapshot volume is a special type of volume that presents all the data that was in the volume at the time the snapshot was created. For a more detailed description, see Section 3.8, Snapshots. This means we can back up that volume without having to worry about data being changed while the backup is going on, and we don't have to take the database volume offline while the backup is taking place.
Note: in LVM1, this type of volume was read-only, but LVM2 creates read/write snapshots by default.
Create the snapshot volume
There is a little over 500 megabytes of free space in the "ops" volume group, so we will use all of it to allocate space for the snapshot logical volume. A snapshot volume can be as large or as small as you like, but it must be large enough to hold all the changes that are likely to happen to the original volume during the lifetime of the snapshot. So here we allow for 500 megabytes of changes to the database volume, which should be plenty.
# lvcreate -L592M -s -n dbbackup /dev/ops/databases
lvcreate -- WARNING: the snapshot must be disabled if it gets full
lvcreate -- INFO: using default snapshot chunk size of 64 KB for "/dev/ops/dbbackup"
lvcreate -- doing automatic backup of "ops"
lvcreate -- logical volume "/dev/ops/dbbackup" successfully created
Note: full snapshots are automatically disabled. If the snapshot logical volume becomes full it will be dropped (become unusable), so it is vitally important to allocate enough space. The amount of space necessary depends on how the snapshot is used, so there is no set recipe to follow. If the snapshot size equals the origin size, it will never overflow.
Mount the snapshot volume
We can now create a mount point and mount the volume:
# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only
If you are using XFS as the filesystem, you will need to add the nouuid option to the mount command:
# mount /dev/ops/dbbackup /mnt/ops/dbbackup -o nouuid,ro
Do the backup
I assume you will have a more sophisticated backup strategy than this!
# tar -cf /dev/rmt0 /mnt/ops/dbbackup
tar: Removing leading `/' from member names
Remove the snapshot
When the backup has finished you can unmount the volume and remove it from the system. You should remove snapshot volumes when you have finished with them, because they take a copy of all data written to the original volume, and this can hurt performance.
# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove -- do you really want to remove "/dev/ops/dbbackup"? [y/n]: y
lvremove -- doing automatic backup of volume group "ops"
lvremove -- logical volume "/dev/ops/dbbackup" successfully removed