This document describes how to migrate a single disk Linux installation used for virtualization with KVM to a RAID 5 array. In this setup, two disks are added, and a three disk RAID 5 is created that includes the original disk. The goal is to complete the migration with only a single reboot of downtime. While the virtual machine data is being synced, the guests may experience reduced disk performance.

Overview: Current and target partitioning layout

This manual uses mdadm to create the software RAID. It can be adapted to any RAID level that tolerates one missing disk. The setup used in this example is a CentOS 6.5 installation, partitioned as described below:

Current server partitioning: Single disk
Description | Filename in /dev | Filesystem | Used for
Disk 1, Partition 1, 4GB | /dev/sda1 | SWAP | Virtualization host SWAP partition
Disk 1, Partition 2, 16GB | /dev/sda2 | ext4 | Virtualization host root partition
Disk 1, Partition 3, remaining free space | /dev/sda3 => /dev/mapper/VGcrypt | cryptsetup luks volume | Encryption for LVM
On top of cryptsetup luks volume VGcrypt, all space on the luks encrypted volume | /dev/mapper/VGcrypt => /dev/mapper/VGcrypt-$LVM-logical-volume-name | LVM physical volume and only LVM volume group | Storage of virtual machines

Target server partitioning: Three disk RAID
Description | Filename in /dev | Filesystem | Used for
RAID 1 of Partitions 1 of Disk 1, Disk 2, Disk 3, 1GB | /dev/md0 | ext2 | Virtualization host /boot partition
RAID 5 of Partitions 2 of Disk 1, Disk 2, Disk 3, 4GB | /dev/md1 | SWAP | Virtualization host SWAP partition
RAID 5 of Partitions 3 of Disk 1, Disk 2, Disk 3, 16GB | /dev/md2 | ext4 | Virtualization host root partition
RAID 5 of Partitions 4 of Disk 1, Disk 2, Disk 3, remaining space | /dev/md3 => /dev/mapper/Raid5Crypt | cryptsetup luks volume | Encryption for LVM
On top of cryptsetup luks volume Raid5Crypt, all space on the luks encrypted volume | /dev/mapper/Raid5Crypt => /dev/mapper/Raid5Crypt-$LVM-logical-volume-name | LVM physical volume and LVM volume group | New storage of virtual machines

Consideration: /boot partition and RAID level

The partition containing the /boot directory can only be on RAID 1. You can not boot from RAID 0, 4, 5, 6 or other RAID types. As the single disk in this example has no dedicated /boot partition, the new layout is created with four partitions: one RAID 1 array over three disk partitions for /boot, and one RAID 5 array each for SWAP, the root partition and the cryptsetup luks container for LVM.

Consideration: SWAP partition in RAID or not

Having multiple separate swap partitions, all set to pri=0 in /etc/fstab, is faster than using a SWAP partition on a RAID array. If however one disk fails while the kernel is actively swapping to a partition on it, the kernel will panic. I therefore decided not to use individual partitions for SWAP space but to use a RAID 5 partition as well.
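For illustration, the rejected alternative would look roughly like the following /etc/fstab entries (a sketch: one plain swap partition per disk, all with equal priority; the device names are examples):

# /etc/fstab (alternative NOT used in this setup):
/dev/sda2               swap                    swap    defaults,pri=0  0 0
/dev/sdb2               swap                    swap    defaults,pri=0  0 0
/dev/sdc2               swap                    swap    defaults,pri=0  0 0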

Step 1: Backup, then attach new disks

The backup system for the virtual machine host and the virtual machines themselves should be verified to be up to date. The new disks can then be installed in the machine.

Step 2: Identify operating system drive

The new drives can be identified using the Linux tools `mount` and `fdisk`. It is important to identify the /dev path of each disk, and to determine which drive the operating system is currently running from:

[root@host ~]# mount
/dev/sda2 on / type ext4 (rw)

The `mount` command shows that /dev/sda2 is mounted as the root filesystem, so /dev/sda is the drive the operating system is running from. This device should only be read from, and not written to, during all steps of this process.

Step 3: Create GPT partition table on new devices

A GPT partition table has to be created on the other drives, so the /dev paths of the new disk drives have to be identified. A good way to do so is to look at the vendor and serial numbers of the devices, and whether they contain partitions that are in use. The required information can be obtained as follows:

[root@host ~]# fdisk -l | grep Disk
# The following warning is no problem; we just want to see the drives present, not interact with them using fdisk
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes

# The model and serial numbers printed on the physical disk drives by the vendor can be obtained using hdparm:
[root@host ~]# hdparm -I /dev/sda | grep Number
	Model Number:       WDC WD1003FZEX-00MK2A0                  
	Serial Number:      WD-WCC3F2292339
[root@host ~]# hdparm -I /dev/sdb | grep Number
	Model Number:       WDC WD1003FZEX-00MK2A0                  
	Serial Number:      WD-WMC3F0853983
[root@host ~]# hdparm -I /dev/sdc | grep Number
	Model Number:       WDC WD1003FZEX-00MK2A0                  
	Serial Number:      WD-WMC3F0865165

It is already known that the operating system is using only /dev/sda, so /dev/sdb and /dev/sdc are identified as the newly added disks.

The GPT partition table can now be created. I used gdisk on CentOS, installed from an .rpm package via yum.

[root@host ~]# gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): Y

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

This procedure has to be repeated for /dev/sdc, or any other devices you might have. When this is done, the new partitions can be created. With GPT it is possible to have up to 128 partitions per device.

[root@host ~]# gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-1953525134, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-1953525134, default = 1953525134) or {+-}size{KMGTP}: +1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): n
Partition number (2-128, default 2): 
First sector (34-1953525134, default = 2099200) or {+-}size{KMGTP}: 
Last sector (2099200-1953525134, default = 1953525134) or {+-}size{KMGTP}: +4G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): n
Partition number (3-128, default 3): 
First sector (34-1953525134, default = 10487808) or {+-}size{KMGTP}: 
Last sector (10487808-1953525134, default = 1953525134) or {+-}size{KMGTP}: +16G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): n
Partition number (4-128, default 4): 
First sector (34-1953525134, default = 44042240) or {+-}size{KMGTP}: 
Last sector (44042240-1953525134, default = 1953525134) or {+-}size{KMGTP}: 
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdb.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot.
The operation has completed successfully.

If it takes too much time to repeat the partitioning on every disk, the partition table just created can be copied to the remaining devices using `sgdisk` as follows. The final `-G` call randomizes the disk and partition GUIDs so they do not collide with those of the source disk:

[root@host ~]# sgdisk --backup=table.img /dev/sdb
The operation has completed successfully.
[root@host ~]# sgdisk --load-backup=table.img /dev/sdc
The operation has completed successfully.
[root@host ~]# sgdisk -G /dev/sdc
Creating new GPT entries.
The operation has completed successfully.

After modifying the partition table, the tool `partprobe` has to be run. The new partition layout can then be viewed using `gdisk`:

[root@host ~]# partprobe

[root@host ~]# gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A97DAF20-EE92-4E4F-94FF-D1A2C30B9540
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  FD00  Linux RAID
   2         2099200        10487807   4.0 GiB     FD00  Linux RAID
   3        10487808        44042239   16.0 GiB    FD00  Linux RAID
   4        44042240      1953525134   910.5 GiB   FD00  Linux RAID

Step 4: Creating mdadm software RAID arrays

As `/dev/sda` is still in use, the RAID arrays are created with one partition missing each. Those missing partitions will be added later, after the machine has been booted from the degraded RAID array.
The flag `--metadata=0.90` has to be given to the `mdadm` command when creating the RAID array containing the `/boot` partition, otherwise grub will not be able to detect the `/boot` partition and load the kernel from it during startup of the machine.

[root@host ~]# mdadm --create --metadata=0.90 --verbose /dev/md0 --level=1 --raid-devices=3 missing /dev/sdb1 /dev/sdc1
mdadm: size set to 1048512K
Continue creating array? y
mdadm: array /dev/md0 started.

[root@host ~]# mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 missing /dev/sdb2 /dev/sdc2
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 4191744K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

This step is repeated for the remaining partitions. The process of `mdadm` syncing the RAID arrays can be observed with the command `watch cat /proc/mdstat`, which can be interrupted by pressing CTRL + C.
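For completeness, creating the two remaining degraded arrays follows the same pattern (a sketch using the device names from above):

[root@host ~]# mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 missing /dev/sdb3 /dev/sdc3
[root@host ~]# mdadm --create --verbose /dev/md3 --level=5 --raid-devices=3 missing /dev/sdb4 /dev/sdc4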

Step 5: Creating filesystems for virtualizing host operating system

The ext2 and ext4 filesystems and the SWAP space are created on the RAID arrays using the standard filesystem creation tools.

[root@host ~]# mkfs.ext2 /dev/md0
[...]

[root@host ~]# mkswap /dev/md1
[...]

[root@host ~]# mkfs.ext4 /dev/md2
[...]

Because udev can change the device filenames in `/dev` on CentOS 6.5, it is highly recommended to use UUIDs in all configuration files and commands to avoid accidents. After a reboot, udev may rename the devices, for example to `/dev/md{125,126,127}`. Mapping UUIDs to actual devices is possible with the `blkid` command.
This document will continue to use direct `/dev` paths for the sake of readability.

[root@host ~]# blkid | grep '/dev/md'
/dev/md0: UUID="62967f07-9ba9-442a-9989-0e1b7704b0b3" TYPE="ext2" 
[...]
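A UUID can then be used in place of the raw device name via the /dev/disk/by-uuid symlinks, for example (a sketch using the /dev/md0 UUID from above; the mount point is an example):

[root@host ~]# mount /dev/disk/by-uuid/62967f07-9ba9-442a-9989-0e1b7704b0b3 /mnt/md0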

Step 6: Encrypting the RAID 5 partition for LVM virtual machine store

As all virtual machine images are to be stored on an encrypted device, a luks container is created via cryptsetup. To avoid having to type a passphrase every time the partition is decrypted, a keyfile is created from random data. Using a keyfile that is too large will have a massive impact on performance; a 4096 byte file of true random data is sufficient. The keyfile should not be written to disk, as files deleted by `rm` can be recovered from some filesystems, such as ext4. Instead, a tmpfs filesystem is used to temporarily store the decryption key on the machine. The keyfile has to be saved to a secure medium that is usually stored offline, such as a removable hard drive, CD, backup tape or other backup media, and a number of copies should be created. The keyfile should be saved together with its md5 hash to ensure data integrity.
If the keyfile is lost, the luks volume can not be decrypted and there will be no hope of recovering the data.

[root@host ~]# mount
[...]
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")

[root@host ~]# dd if=/dev/random bs=4096 count=1 of=/dev/shm/keyfile.img
1+0 records in
1+0 records out
4096 bytes (4,1 kB) copied, 0,00169887 s, 2,4 MB/s

[root@host ~]# md5sum /dev/shm/keyfile.img | tee -a ~/keyfile-hash.txt
c78531da31a6d728d06386872aeafafb  keyfile.img
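Copying the keyfile and its hash file to the backup medium and verifying them there could look like this (a minimal sketch, assuming a backup drive mounted at /mnt/backup):

[root@host ~]# cp /dev/shm/keyfile.img ~/keyfile-hash.txt /mnt/backup/
[root@host ~]# cd /mnt/backup && md5sum -c keyfile-hash.txt
keyfile.img: OK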

Once the keyfile and the text file containing its hash are safely stored on backup media, the device is encrypted as follows:

[root@host ~]# cryptsetup luksFormat --key-file=/dev/shm/keyfile.img /dev/disk/by-uuid/9b9505bc-8abc-46c8-ace3-d9ec0a81e497 # A link to /dev/md3
WARNING!
========
This will overwrite data on /dev/md3 irrevocably.
Are you sure? (Type uppercase yes): YES

# Check if /dev/md3 is a valid luks partition
[root@host ~]# cryptsetup isLuks /dev/md3; echo $?
0

[root@host ~]# cryptsetup luksOpen --key-file=/dev/shm/keyfile.img /dev/md3 Raid5Crypt
[root@host ~]# echo $?
0

[root@host ~]# rm /dev/shm/keyfile.img

Step 7: Syncing LVM configuration

An LVM physical volume is created on top of the newly created luks encrypted volume as follows:

[root@host ~]# pvcreate /dev/mapper/Raid5Crypt 
  Physical volume "/dev/mapper/Raid5Crypt" successfully created

Using the live moving capabilities of `pvmove`, the virtual machine storage can be moved to the RAID array while the guest systems keep running. For this, the volume group on the single disk is extended to include the RAID array. As this command operates on critical data and processes, the `--test` switch is used first to prevent accidents.

[root@host ~]# vgdisplay | grep Name
  VG Name               VGcrypt

[root@host ~]# pvdisplay | grep Name
  PV Name               /dev/mapper/VMsCrypt
  VG Name               VGcrypt
  PV Name               /dev/mapper/Raid5Crypt
  VG Name               
[root@host ~]# vgextend --test VGcrypt /dev/mapper/Raid5Crypt
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Volume group "VGcrypt" successfully extended # It is not telling that it is joking

[root@host ~]# pvdisplay | grep Name
  PV Name               /dev/mapper/VMsCrypt
  VG Name               VGcrypt
  PV Name               /dev/mapper/Raid5Crypt
  VG Name               

[root@host ~]# vgextend VGcrypt /dev/mapper/Raid5Crypt
  Volume group "VGcrypt" successfully extended

[root@host ~]# pvdisplay | grep Name
  PV Name               /dev/mapper/VMsCrypt
  VG Name               VGcrypt
  PV Name               /dev/mapper/Raid5Crypt
  VG Name               VGcrypt

As the LVM volume group now spans both the single disk and the encrypted RAID array partition, the LVM logical volumes can be moved using `pvmove`. For each segment on the source logical volume a corresponding segment is created on the temporary pvmove logical volume, and LVM mirroring is activated for both segments.
At the time of this writing, no further information was available from the `pvmove` manual page or online documentation regarding segment sizes and the exact syncing technique used by `pvmove`.

##TODO## man pvmove 1-6 examples, explainable? http://www.redhat.com/archives/linux-lvm/2010-November/msg00072.html

The `pvmove` command should not be interrupted and is therefore executed in a detachable `screen` session. If the command is interrupted anyway, issue `pvmove` without any arguments (!) and it will pick up at the last checkpoint. This also works after a reboot or crash.
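Starting the detachable session could look like this (a sketch; the session name is arbitrary):

[root@host ~]# screen -S pvmove
# Run the pvmove command shown below inside the screen session.
# Detach with CTRL+A followed by D; reattach later with:
[root@host ~]# screen -r pvmove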

[root@host ~]# pvmove --verbose --interval 5 --name test.domain.com /dev/mapper/VMsCrypt /dev/mapper/Raid5Crypt
  /dev/mapper/VMsCrypt: Moved: 20,0%
  /dev/mapper/VMsCrypt: Moved: 100,0%

During its runtime, `pvmove` creates a temporary logical volume called “pvmove”. It is removed after the move is complete.

[root@host ~]# ls /dev/mapper/VGcrypt-pvmove0

Attempts to use `pvmove` on other logical volumes from the same source physical volume while a move is already in progress will have no effect. It is highly recommended to move the volumes by hand and NOT by script, and to move them individually to be able to respond to errors!

[root@host ~]# pvmove -n RequestTracker -i 5 /dev/mapper/VMsCrypt /dev/mapper/Raid5Crypt
  Detected pvmove in progress for /dev/mapper/VMsCrypt
  Ignoring remaining command line arguments
  /dev/mapper/VMsCrypt: Moved: 49,4%

The old physical volume can now be removed from the volume group via `vgreduce`.

[root@host ~]# pvdisplay | egrep 'Name|Allocated PE'
  PV Name               /dev/mapper/VMsCrypt
  VG Name               VGcrypt
  Allocated PE          0
  PV Name               /dev/mapper/Raid5Crypt
  VG Name               VGcrypt
  Allocated PE          125445

[root@host ~]# vgdisplay | egrep 'Name|PE'
  VG Name               VGcrypt
  PE Size               4,00 MiB
  Total PE              699583
  Alloc PE / Size       125445 / 490,02 GiB
  Free  PE / Size       574138 / 2,19 TiB

[root@host ~]# vgreduce --test VGcrypt /dev/mapper/VMsCrypt
  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
  Removed "/dev/mapper/VMsCrypt" from volume group "VGcrypt"

[root@host ~]# vgreduce VGcrypt /dev/mapper/VMsCrypt
  Removed "/dev/mapper/VMsCrypt" from volume group "VGcrypt"

[root@host ~]# pvdisplay | egrep 'Name|Allocated PE'
  PV Name               /dev/mapper/Raid5Crypt
  VG Name               VGcrypt
  Allocated PE          125445
  PV Name               /dev/mapper/VMsCrypt
  VG Name               
  Allocated PE          0

[root@host ~]# vgdisplay | egrep 'Name|PE'
  VG Name               VGcrypt
  PE Size               4,00 MiB
  Total PE              466117
  Alloc PE / Size       125445 / 490,02 GiB
  Free  PE / Size       340672 / 1,30 TiB

The logical volumes for the virtual machines have been migrated and are now stored on the new encrypted RAID 5 array.

Step 8: Sync the operating system data

The virtualization host operating system files will be copied with `rsync` while the operating system is running. There are cleaner ways to accomplish a system clone, but this way requires no downtime and is very fast. Before doing so, however, all services (except the guest systems) have to be stopped, unless their files reside on another partition. It is not advisable, for example, to copy the /var directory while a running MySQL server has files open in /var/lib/mysql.
Any changes made to the running operating system after rsync has finished will not be present on the copy on the RAID array.

To avoid this in the future, the / partition could also be put on an LVM volume and then be moved with pvmove as well.
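Before starting the copy, the services mentioned above can be stopped; on CentOS 6 this could look like the following sketch (the service names are examples, not an exhaustive list):

[root@host ~]# service mysqld stop
[root@host ~]# service httpd stop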

[root@host ~]# mount /dev/md2 /mnt/md2
[root@host ~]# rsync --exclude /boot -arv --one-file-system / /mnt/md2
sending incremental file list
./
.autofsck
[...]

sent 4601454051 bytes  received 1315508 bytes  98984291.59 bytes/sec
total size is 4596299677  speedup is 1.00

real	0m46.730s
user	0m23.945s
sys	0m17.670s

[root@host ~]# mount /dev/md0 /mnt/md0
[root@host ~]# rsync --one-file-system -arv /boot/ /mnt/md0/
[root@host ~]# umount /mnt/md0
[root@host ~]# mkdir -p /mnt/md2/boot   # ensure the mount point exists, as /boot was excluded from the rsync above
[root@host ~]# mount /dev/md0 /mnt/md2/boot/

After the host operating system and the boot partition data are copied, grub has to be reconfigured to boot the new installation when `/dev/sdb` or `/dev/sdc` is chosen as boot device by the server's BIOS. The device `/dev/sda` is supposed to still boot the old operating system configuration for now.
Within the new operating system's root directory /mnt/md2 a few more changes have to be applied. The file `/etc/fstab` has to be updated, as the new root, boot and swap devices all have new UUIDs. UUIDs can be mapped to devices with the tool `blkid` as follows:

[root@host ~]# blkid | grep '/dev/md'
/dev/md0: UUID="62967f07-9ba9-442a-9989-0e1b7704b0b3" TYPE="ext2" 
/dev/md1: UUID="66dd7acb-e7d6-440a-be8c-411f0b0d7818" TYPE="swap" 
/dev/md2: UUID="4aad8e3e-0021-443f-9942-a4c2345eb06f" TYPE="ext4" 
/dev/md3: UUID="9b9505bc-8abc-46c8-ace3-d9ec0a81e497" TYPE="crypto_LUKS" 

# /etc/fstab:
UUID=4aad8e3e-0021-443f-9942-a4c2345eb06f  /                       ext4    defaults        1 1  # /dev/md2
UUID=62967f07-9ba9-442a-9989-0e1b7704b0b3  /boot                   ext2    defaults        1 1  # /dev/md0
UUID=66dd7acb-e7d6-440a-be8c-411f0b0d7818  swap                    swap    defaults        0 0  # /dev/md1
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0

The file `/boot/grub/grub.conf` has to be edited to include the new UUID and boot device. The `kernel root=UUID` lines, the `root (hdX,X)` lines and the `splashimage` line have to be changed.
Note that boot options given to the kernel affect the presence of mdadm RAID devices in the initramfs, and therefore the ability to mount the root partition and run init. This can be the cause of being able to boot the kernel, but then only being shown a black screen with a white _ (underscore) on the monitor, KVM switch or VNC screen.
It is recommended to remove `quiet`, `rhgb`, `rd_NO_MD` and `rd_NO_DM` from the kernel boot options in `/boot/grub/grub.conf`. As this setup does not require decrypting a luks container during bootup, and as that container only holds the LVM volumes, the `rd_NO_LUKS` and `rd_NO_LVM` arguments are left in place.

#boot=/dev/sdb
default=0
timeout=5
splashimage=(hd1,1)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-431.11.2.el6.centos.plus.x86_64)
	root (hd1,1)
	kernel /vmlinuz-2.6.32-431.11.2.el6.centos.plus.x86_64 ro root=UUID=3f77ac6a-5d44-411e-9ed6-c3bdb1af0788 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us
	initrd /initramfs-2.6.32-431.11.2.el6.centos.plus.x86_64.img
title CentOS (2.6.32-431.11.2.el6.x86_64)
	root (hd1,1)
	kernel /vmlinuz-2.6.32-431.11.2.el6.x86_64 ro root=UUID=3f77ac6a-5d44-411e-9ed6-c3bdb1af0788 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us
	initrd /initramfs-2.6.32-431.11.2.el6.x86_64.img
title CentOS (2.6.32-431.el6.x86_64)
	root (hd1,1)
	kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=3f77ac6a-5d44-411e-9ed6-c3bdb1af0788 rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 crashkernel=auto  KEYBOARDTYPE=pc KEYTABLE=us
	initrd /initramfs-2.6.32-431.el6.x86_64.img

##TODO## How does grub.conf with RAID 1 over 3 hdds look? I only have hd1,1 now, but not the other 3. More entries, or can grub manage a list and choose “first present / working”?

Grub will now be installed to both physical devices `/dev/sdb` and `/dev/sdc` as follows:

[root@host ~]# grub
grub> device (hd1) /dev/sdb
grub> device (hd2) /dev/sdc
grub> root (hd1,0)
grub> setup (hd1)
grub> root (hd2,0)
grub> setup (hd2)
grub> quit

It is possible to test the installation using kvm by giving a virtual machine direct access to the plain devices `/dev/sda`, `/dev/sdb` and `/dev/sdc`. The currently running drive `/dev/sda` is included, as the presence of drives affects grub's device mapping to `(hdX,X)`.
This procedure is helpful to see whether the new boot configuration is functional.
Using this method, you can connect to the kvm virtual machine via the VNC protocol on VNC display 1234 to see the monitor output, using the tool `vncviewer` for example. This connection, however, is not encrypted. The tool `virt-manager` is an easy option to manage virtual machine hosts using kvm.
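Connecting from a workstation could look like this (a sketch; the hostname is an example, and depending on the VNC client either the display number or the TCP port, which is 5900 plus the display number, has to be given):
`vncviewer server.example.com:1234`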

[root@host ~]# kvm -smp 2 -m 2000 -hda /dev/sda -hdb /dev/sdb -hdc /dev/sdc -vnc 0.0.0.0:1234

If grub finds the new operating system during the test run in the virtual machine and is able to boot the kernel, the test can be considered accomplished. As the drives are currently in use by the running operating system, the virtual machine should then be stopped, because kernel panics will occur, reporting the partitions to be mounted as already in use.

Step 9: Reboot the host operating system

The host operating system can now be rebooted. During BIOS startup, the corresponding drive to either `/dev/sdb` or `/dev/sdc` has to be chosen as boot device. The operating system should then boot normally and the `/boot` and root partitions should be mounted from the RAID arrays.

If any errors occur, the old operating system drive should be booted. You can find more information in the troubleshooting section at the bottom of this document.

Step 10: Script to open encrypted luks container and LVM volumes after boot

As an encrypted device is used in this setup, a keyfile needs to be supplied to decrypt it. Because the keyfile is not stored on the server itself, a script has to be run manually after each boot to decrypt the luks partition and to detect and activate the LVM volumes. The keyfile first has to be copied from the backup medium back to /dev/shm. The following script can be used as a basic example:

#!/bin/bash

# Decrypt encrypted partition
cryptsetup luksOpen /dev/disk/by-uuid/9b9505bc-8abc-46c8-ace3-d9ec0a81e497 Raid5Crypt --key-file=/dev/shm/keyfile.img

# Scan for LVM physical volumes, volume groups, logical volumes
pvscan
vgscan
lvscan

# Activate all logical volumes found
vgchange -ay
echo

# Display present device mappings
dmsetup status

# Mount new partitions (optional)
mount -a

exit 0

Step 11: Add the old disk to the RAID array

For this step `mdadm` will do most of the work. The GPT partition table can be synced to the old disk via `sgdisk` from one of the other devices.

[root@host ~]# sgdisk --backup=table.img /dev/sdb
The operation has completed successfully.

[root@host ~]# sgdisk --load-backup=table.img /dev/sda
The operation has completed successfully.

[root@host ~]# sgdisk -G /dev/sda
Creating new GPT entries.
The operation has completed successfully.

[root@host ~]# gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.7
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048         2099199   1024.0 MiB  FD00  Linux RAID
   2         2099200        10487807   4.0 GiB     FD00  Linux RAID
   3        10487808        44042239   16.0 GiB    FD00  Linux RAID
   4        44042240      1953525134   910.5 GiB   FD00  Linux RAID

[root@host ~]# partprobe

The new partitions on `/dev/sda` can now be added to the RAID arrays:

[root@host ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid1] 
[...]
md3 : active (auto-read-only) raid5 sdc4[2] sdb4[1]
      1909220352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
unused devices: <none>

[root@host ~]# mdadm --manage /dev/md3 --add /dev/sda4
mdadm: added /dev/sda4

[root@host ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [raid1] 
md3 : active raid5 sda4[3] sdc4[2] sdb4[1]
      1909220352 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  0.0% (570528/954610176) finish=167.2min speed=95088K/sec
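The remaining partitions of `/dev/sda` are added to their arrays in the same way (a sketch following the same pattern):

[root@host ~]# mdadm --manage /dev/md0 --add /dev/sda1
[root@host ~]# mdadm --manage /dev/md1 --add /dev/sda2
[root@host ~]# mdadm --manage /dev/md2 --add /dev/sda3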

While the data is syncing, the RAID array will show an even greater performance drop than during operation with one drive missing, as it is now also being read from to rebuild the data on the newly added drive.

After `mdadm` has completed the recovery, the migration is complete.
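A quick final check that all arrays are clean and complete could look like this (a sketch; the State line of `mdadm --detail` should read clean):

[root@host ~]# cat /proc/mdstat
[root@host ~]# mdadm --detail /dev/md3 | grep State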

Problems: Troubleshooting host boot

If you are unable to boot from the new RAID array, try to boot the old disk. If this also does not work, boot into the rescue system provided with the CentOS (or your Linux distribution's) installation media and let it take care of assembling the drives. If it does not find the drives, try the following:

Assemble RAID:
`mdadm --assemble --scan`

Assemble LVM:
`pvscan; vgscan; lvscan; partprobe`

Open the luks encrypted device:
`cryptsetup luksOpen --key-file=/dev/shm/keyfile.img /dev/md3 RaidCrypt`
`mkdir /mnt/RaidCrypt`
`mount /dev/mapper/RaidCrypt /mnt/RaidCrypt`

If during boot you are getting error messages like kernel panics or a missing root device (UUID), you can try the following:

Disable SELinux by setting `SELINUX=disabled` in `/mnt/sysimage/etc/selinux/config`.

Check `/boot/grub/grub.conf` and remove “quiet”, “rhgb” and “rd_NO_MD” from the kernel boot options.

Make sure the initrd has support for md RAID (CentOS 6.5 way):
`mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old`
`dracut --mdadmconf --force /boot/initramfs-$(uname -r).img $(uname -r)`