Re: [mythtv-users] An LVM'd drive died! What do I do...
martin.bene at icomedias.com
Thu Oct 27 13:23:26 EDT 2005
> > Also, I wonder if there is any easy way to get the info off from my
> > current 2 drive LVM onto the raid? (I would like to use
> some of these
> > LVM drives to be put in this raid).
> This could be tricky. I can't think of a way to do this at
> the moment. If you can consolidate this data into a 1 disk LVM ...
> > Can I start with all my info on
> > one HD and turn that into a Raid, or does a raid have to be
> > partitioned, constructed and THEN have information copied
> on to it...
Well, moving stuff around on an LVM system is trivial, as LVM supports
moving physical volumes transparently.
I've just been through a disk upgrade:
Physical: 1x 300GB, 1x 400GB.
Volume Group: Linear with both PVs.
Logical: Two logical volumes (root as / and video as /var/video)
Add three 400GB disks and a Promise 4-port SATA controller (there are
only two SATA ports on the mainboard).
Finding the right way to convert to a new system is slightly tricky :-).
- want to use full capacity of all drives.
- want to use raid5 to protect against data failures.
- when creating a new raid5 it can be created in degraded mode,
temporarily leaving out exactly one disk.
- don't want to lose any data during the conversion :-)
- I don't mind a system crash if a disk goes down so I'm not putting
swap on raid1;
- I do mind if I can't boot any longer, so /boot stuff goes on raid1
(which can be booted from, while raid5 doesn't work for boot)
To make best use of the available disks, I'll need 2 Raid5 arrays:
* 5x300GB, size dictated by the 300GB disk
* 4x100GB, to make use of what's left on the 400GB disks
* a few 100 megs outside the raid5 (on raid1) for /boot, since booting
  off raid5 isn't possible
* I don't really need swap, so ~200MB swap should be enough for me.
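As a sanity check on the layout above: raid5 usable capacity is (number
of members - 1) x member size, so the two arrays end up like this (a
quick sketch, sizes in whole GB):

```shell
# raid5 usable capacity = (number of members - 1) * member size
md2=$(( (5 - 1) * 300 ))          # the 5x300GB array
md1=$(( (4 - 1) * 100 ))          # the 4x100GB array
raw=$(( 300 + 4 * 400 ))          # raw capacity of all five disks
echo "md2=${md2}GB md1=${md1}GB total=$(( md2 + md1 ))GB of ${raw}GB raw"
```

That's roughly 1500GB usable out of 1900GB raw; the "lost" 400GB is
exactly one disk's worth of parity, split across the two arrays.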
To keep setup as easy as possible, compile raid1 and raid5 support right
into the kernel (not as modules) and set the partition type for all raid
partitions to "fd" - this way, the kernel can automatically start all
the raid arrays without resorting to an initrd. On the other hand, if
you're already using root on LVM (as I do on my system) an initrd is
required anyway, and raid startup can be added before starting LVM.
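For a 2.6-series kernel of that era, the built-in support would look
like this in the kernel .config (option names can differ between kernel
versions, so check your own config):

```
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_RAID1=y
CONFIG_MD_RAID5=y
```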
/dev/sda => 300GB, old
/dev/sdb => 400GB, old
/dev/sdc, /dev/sdd, /dev/sde => 400GB, new
# Partition the 3 new 400GB disks:
/dev/sdc1 ~200MB (the small partition outside the raid5, for /boot or swap)
/dev/sdc2 300GB (get the size as near to the full size of the 300GB disk as
possible; it must be identical to or slightly larger than the existing
300GB disk's partition)
/dev/sdc3 100GB (whatever's left on the disk)
Same for sdd and sde.
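For illustration, an sfdisk dump of one of the new 400GB disks would
look roughly like this (the sector numbers here are made up; the point
is Id=fd, which marks a partition for kernel raid autodetection):

```
/dev/sdc1 : start=       63, size=    401562, Id=fd
/dev/sdc2 : start=   401625, size= 585937500, Id=fd
/dev/sdc3 : start=586339125, size= 195358140, Id=fd
```

sfdisk -d /dev/sdc | sfdisk /dev/sdd can clone the same table onto the
other new disks.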
# create /dev/md0 for /boot
mdadm --create --level 1 --raid-devices 2 /dev/md0 /dev/sdc1 /dev/sdd1
# create the raid for the 4 100GB chunks on the 400GB disks, leaving out
/dev/sdb since that's still in use in the old config
mdadm --create --level 5 --raid-devices 4 /dev/md1 /dev/sdc3 /dev/sdd3 \
  /dev/sde3 missing
The last parameter "missing" tells mdadm that the array is to be created
in degraded mode and that the final partition will be added at a later
time.
# create a physical volume on /dev/md1
pvcreate /dev/md1
# add the physical volume to my Volume group
vgextend MythVideo /dev/md1
Now I've got at least as much free space in my Volume group as
whatever's stored on /dev/sda (the 300GB disk). In my case that's
already part of the VG, so moving it is very simple:
# move logical volume off /dev/sda using the space available on /dev/md1
pvmove -v /dev/sda1 /dev/md1
This will run for quite some time; at the end /dev/sda1 doesn't hold any
more data. We can now remove /dev/sda1 from the VG
# remove 300GB disk from the VG and remove the pv signature
vgreduce MythVideo /dev/sda1
pvremove /dev/sda1
Ok, the 300GB disk is now completely unused. We can now use it to create
/dev/md2 - the 5x300GB array; again, we'll have to leave out one device
(the old 400GB disk /dev/sdb is still in use)
mdadm --create --level 5 --raid-devices 5 /dev/md2 /dev/sda1 \
  /dev/sdc2 /dev/sdd2 /dev/sde2 missing
# same procedure as before: add the new space to the Volume group and
migrate the stuff stored on /dev/sdb off that disk
vgextend MythVideo /dev/md2
pvmove -v /dev/sdb1 /dev/md2
# Remove /dev/sdb1 from the Volume group and remove the physical volume
signature to completely free the disk
vgreduce MythVideo /dev/sdb1
pvremove /dev/sdb1
Now we've gotten the stuff previously stored on /dev/sdb onto the new
raid5. Up to now the raids have been running in degraded mode; now we're
going to add the final partitions to actually get the redundancy we want.
Repartition /dev/sdb just like the other 400GB disks. Since the disk is
no longer used, repartitioning should be possible without a reboot.
# add final partitions to the raid5 arrays
mdadm --add /dev/md1 /dev/sdb3
mdadm --add /dev/md2 /dev/sdb2
You can have a look at /proc/mdstat to see when the new partitions are
completely integrated and your raid5 devices have finished syncing.
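While the rebuild runs, /proc/mdstat has a progress line per array; a
small sketch that pulls the percentage out of mdstat-shaped text (the
sample below is made up, but a real /proc/mdstat has the same shape):

```shell
# sample mdstat-style input, standing in for /proc/mdstat on a live system
cat <<'EOF' > /tmp/mdstat.sample
md2 : active raid5 sdb2[5] sda1[0] sdc2[1] sdd2[2] sde2[3]
      1171875000 blocks level 5, 64k chunk, algorithm 2 [5/4] [UUUU_]
      [=======>.............]  recovery = 37.5% (109863281/292968750) finish=120.0min
EOF
# print just the recovery percentage
awk -F'recovery = ' '/recovery/ { split($2, a, " "); print a[1] }' /tmp/mdstat.sample
```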
# prepare swapspace
mkswap /dev/sdb1
mkswap /dev/sde1
# prepare filesystem for the new /boot directory - this needs to be outside
the raid5. If you already have /boot mounted on a separate filesystem,
you'll obviously have to adjust the steps below slightly.
mkfs.ext3 /dev/md0
mkdir /newboot
mount /dev/md0 /newboot
cp -a /boot/* /newboot
umount /newboot
mv /boot /oldboot
mv /newboot /boot
mount /dev/md0 /boot
Ok, we're pretty much finished. Adjust /etc/fstab to mount /boot and add
swap from /dev/sdb1 and /dev/sde1.
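The new /etc/fstab entries might look roughly like this (the mount
options are illustrative; keep whatever your distribution already uses):

```
/dev/md0    /boot    ext3    defaults    0  2
/dev/sdb1   none     swap    sw          0  0
/dev/sde1   none     swap    sw          0  0
```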
Rerun lilo if you're using that. Sorry, no idea if you need to do
anything special with grub to make it deal with the changed disk layout.
You may want to reboot now to make sure you got the configuration of
your boot manager right, but it's not actually required.
Use lvextend to increase the size of the logical volume(s) you want to
grow. In my case the new space goes to the video logical volume, and
xfs_growfs can be used to resize the filesystem to make use of the new
space:
# extend the video LV by however much vgdisplay reports as free, e.g.:
lvextend -L +500G /dev/MythVideo/video
xfs_growfs /var/video
Oh, and in case it's not obvious:
- make sure you've got a rescue CD available that has all the device
drivers you need (sata, lvm, raid) so you can get back up and running in
case you break things somewhere along the way
- make a backup of anything you really don't want to lose :-)
Hope this gives you some ideas for upgrading an LVM system to raid.