As Linux administrators, it is inevitable that we will get a request to grow an existing filesystem for one reason or another. The lazy administrator will take the easy way out: attach a secondary disk to the server, pvcreate it, and add the extra extents to the existing logical volume. This is actually a fairly safe way to do it, it can be done online, and the end result is the same for the user. That makes it a really appealing prospect, but it is really dirty, and what do you do if that second (or third, or fourth, …) disk in the volume group fails? Then you're screwed. If this is a physical, non-SAN-connected server and your only option is to add more disks, then you're really better off adding them to your RAID stripe and expanding your filesystem from the "one" volume that the OS sees. The less disk hardware the OS sees, the better…
Hopefully you're not using a physical server with physical disks when this issue comes up (who uses physical servers anymore, anyway?). So, for the purposes of this tutorial/example, my setup was a VMware guest that started with a single 20GB OS disk attached. The end-user requirement was to add an additional 20GB of disk space to the OS partition, with zero data loss and minimal disruption to the end user. The one obvious stipulation of expanding an OS disk is, of course, the requirement for downtime.
Since this is such a sensitive process and the margin for error is absolutely zero, I have expanded my usual "bare metal" approach to include every single step of the process. You should be able to follow this process step-by-step and have it work. If not, you probably need to recheck whether you should be doing this in the first place.
Alright, the extended bare metal:
1. Here's the start of the process. The box is already up and running, so let's take a quick look at our disk layout. By default we're using LVM for our disk management, and we already know from experience that /dev/sda2 is contained in the VolGroup00-LogVol00 logical volume. You can see that /dev/sda2 is provisioned at 20GB minus 100MB (the 100MB is for the boot partition, but you already knew that). Shut down your server, grow the disk using your preferred method, and power it back up.
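If you want to capture the "before" picture from the command line rather than a screenshot, a read-only check like the one below works. This is my own sketch, not part of the original walkthrough; the device and LV names are this example's RHEL5 defaults, so substitute your own.

```shell
# Read-only snapshot of the layout before touching anything.
# /dev/sda, /dev/sda2, and VolGroup00/LogVol00 are this example's
# default RHEL5 names; substitute yours.
df -h /                 # mounted size of the root filesystem
# The following need root:
#   fdisk -l /dev/sda                     # partition table
#   pvdisplay /dev/sda2                   # PV size as LVM sees it
#   lvdisplay /dev/VolGroup00/LogVol00    # LV size as provisioned
```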
2. You want to stop the regular GRUB boot and enter into your list of available kernels. For the purposes of this example I have used a basic RHEL5.5 install. We’re going to need to take the kernel into single-user mode so that we can perform our filesystem modifications off-line, so begin the process by editing the default boot parameters.
3. We remember this from the RH302 class, right? Choose to edit the “Kernel” line.
4. Add a 1 to the end of the line to indicate that we want to boot into single-user mode, then hit enter.
5. “b” to boot.
6. Single-user mode.
7. I’m not going to cover the part where I grew the size of the disk in VMware. I hope that if you’re doing this you know how to grow the size of the disk, or have a qualified person nearby available to walk you through this part of the process, or do it for you. As you can see, I have doubled the size of the disk, and of course the /dev/sda2 partition is still provisioned at 20GB.
8. Open the disk with fdisk and display its partitions.
9. Trust me on this one… Delete the /dev/sda2 partition. Create a new partition, again as partition #2, and accept the default values to maximize the available space (the default start is the same cylinder the old partition began on, which is what makes this safe). Also use the "t" command to set the new partition's type back to 8e (Linux LVM), since fdisk resets it to 83 on creation. Display the disk's partitions again. You should see that /dev/sda2 is now (size of disk) minus 100MB. Now, I know that you're wondering about this, so let me address the concerns real quick. At this point we have not modified the filesystem in any way; all we're doing is growing the boundary in which the filesystem can extend. If you think about the filesystem as a bucket inside of a bigger bucket that is the disk, we're making the bigger bucket EVEN BIGGER, which means that when we're done with this step we can make the filesystem "bucket" extend to the boundary of the disk "bucket". Hope that's not too many buckets…
10. Write the changes with "w" and fdisk will exit.
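For reference, steps 8 through 10 boil down to the fdisk keystrokes below. This is a sketch based on my reading of the steps above, not a transcript of the original session, so review it carefully before piping anything into fdisk. The two blank lines accept the default first and last cylinders.

```shell
# fdisk keystrokes for steps 8-10, collected in a variable for review.
# The two blank lines accept the default first/last cylinders; "t 2 8e"
# restores the Linux LVM partition type, which fdisk resets on creation.
FDISK_INPUT='p
d
2
n
p
2


t
2
8e
p
w'
echo "$FDISK_INPUT"
# Only when you are certain of the target disk (destructive!):
#   echo "$FDISK_INPUT" | fdisk /dev/sda
```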
11. Now we're going to tell LVM that the physical volume has grown and that we want to use the maximum capacity of the partition; for the purposes of this example, 40GB. Issue the pvresize command on /dev/sda2 with no size parameter; the default is to consume the entire partition.
pvresize /dev/sda2
12. Display the volume group. You can clearly see now that you have 20GB of free Physical Extents that you can add to your Logical Volume.
vgdisplay
13. Display the logical volume with lvdisplay. From here you can see that the filesystem itself has not changed or grown, and it is still intact; I hope your fears about deleting the partition are gone at this point. The Logical Volume, as you can see, is still provisioned at 20GB (well, 17.91, but whatever..).
14. Ok, time to add our additional 20GB to the Logical Volume.
lvextend -L +20G /dev/VolGroup00/LogVol00
15. After you add the free physical extents to the Logical Volume, the filesystem must be "clean" before we can perform the resize. You will get a warning about the filesystem already being mounted. We're in single-user mode, so the root filesystem is quiet, but note that single-user mode normally still mounts it read-write; to be safe, remount it read-only first (mount -o remount,ro /) so the filesystem is in a consistent state for the check.
e2fsck -f /dev/VolGroup00/LogVol00
16. And finally, perform the resize2fs command on the Logical Volume; this will expand the filesystem to fit inside of the disk's "bucket".
resize2fs /dev/VolGroup00/LogVol00
Reboot, you’re done. You can see after you reboot that your OS disk has mounted with 40GB of total size. Your end-user will be most happy.
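Pulling steps 11 through 16 together, the LVM and filesystem side of the procedure looks like the sketch below. The device and LV names are this example's RHEL5 defaults, and the run wrapper is my own addition: with DRY_RUN left at 1 the script only prints the commands so you can sanity-check them before running anything for real.

```shell
#!/bin/sh
# Sketch of steps 11-16 (after the partition has been grown with fdisk).
# /dev/sda2 and /dev/VolGroup00/LogVol00 are this example's names.
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
PV=/dev/sda2
LV=/dev/VolGroup00/LogVol00

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run pvresize "$PV"           # grow the PV to fill the repartitioned /dev/sda2
run lvextend -L +20G "$LV"   # add the freed 20GB of extents to the LV
run e2fsck -f "$LV"          # force a clean check before resizing
run resize2fs "$LV"          # grow the filesystem to fill the extended LV
```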
Email me with any questions or comments — firstname.lastname@example.org