I was pondering an interesting, and likely fairly common, situation last night so this morning I was up early getting my geek on (aka, playing, testing, and troubleshooting).
If someone has a Linux VM (cloud server or whatever) with a set size but then needs to add more space to the system and make it usable, what’s the best way to do that?
The first thought was to power down the VM, increase the size of the disk, then tinker inside of Linux to get the space expanded. I didn’t spend much time messing with this because the first few resources I found sounded very risky and painful – multiple reboots, deleting and recreating existing partitions, etc.
My second thought was to add a second disk to the system and then figure out whether it could be added seamlessly into the existing space. Well, guess what, it can!
So… first an assumption. I’m starting out with a CentOS server (running in VMware, not that it matters) that has a standard /boot partition and the rest of the space in an LVM logical volume mounted at /. I won’t be surprised if someone comments to tell me that isn’t ideal, but hey, that’s how I was already set up and it works perfectly with my disk-expansion solution.
Here’s what my disk system looked like during the installation:
So you can see that I started out with a 20 GB (20,480 MiB) drive, assigned 500 MB to /boot, and set the rest up as an LVM volume group containing two logical volumes: a 2 GB swap and / with the remaining (almost) 18 GB.
Here’s what it looks like from inside the system:
$ df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   18G  3.2G   14G  20% /
tmpfs                         495M  112K  495M   1% /dev/shm
/dev/sda1                     485M   33M  427M   8% /boot
While some people might not like this, and perhaps for good reasons, I do like it because it seems to be the most flexible. With all of the space allocated to /, I don’t have to worry about the size of /home folders, or web content in /var, or logs in /var, or anything else. The space is available where it’s needed.
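By the way, df -h only shows the mounted filesystems. If you want to see the LVM side of the picture (the physical volumes, the volume group, and the logical volumes themselves), the standard LVM reporting commands should give a quick summary:

pvs
vgs
lvs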
Okay, so, I have a 20 GB drive and I want a total of 40 GB. Rather than increasing the size of the current drive I added a second drive from within VMware. The drive was added “hot” (while the VM was live online and running) but the system didn’t recognize the drive until after a reboot.
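(Side note: I’ve since read that on many systems you can get the kernel to notice a hot-added disk without rebooting by forcing a SCSI bus rescan. I haven’t re-tested it on this particular VM, and the host number varies, but it looks something like this:)

echo "- - -" > /sys/class/scsi_host/host0/scan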
Once rebooted, the system showed the new drive (via fdisk -l) but it wasn’t in any usable state. I found a few resources online that said to run fdisk in interactive mode for the first step of preparing the new disk. I’m not a fan of interactive mode on the command line, so when possible I try to avoid it. After some research I stumbled on the parted command, which allowed me to avoid fdisk entirely.
So, I’m going to just jump straight into it. Note that when I ran fdisk -l the new (unusable) disk showed up as /dev/sdb – you’ll want to double-check your own system to see how it gets assigned before running any of the following commands.
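If you’re not sure what name the new disk got, listing just the disks fdisk knows about is a quick way to double-check:

fdisk -l | grep '^Disk /dev'

The brand-new disk will also typically be the one fdisk -l complains about not containing a valid partition table.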
First I needed to create a new partition out of the idle drive that was sitting around unavailable. As mentioned earlier, my drive was showing up as /dev/sdb. Here’s the command I ran:
parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext2 1 -1
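A quick rundown of that one-liner: -s keeps parted from prompting, -a optimal aligns the new partition for best performance, mklabel gpt writes a fresh GPT partition table, and mkpart primary ext2 1 -1 creates a single partition from the 1 MB mark to the end of the disk (the -- just keeps the -1 from being read as an option, and the ext2 is only a type hint, not an actual formatted filesystem). To sanity-check it before moving on, printing the partition table should show one partition spanning the whole disk:

parted /dev/sdb print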
I then needed to create a physical volume on that new partition:
pvcreate /dev/sdb1
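If you want to confirm that worked, asking LVM about the new physical volume should show it sitting there, not yet assigned to any volume group:

pvdisplay /dev/sdb1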
Next I needed to add the newly created volume to my existing volume group – the name of which (VolGroup) was specified during the install (and is shown in the image above) and is also noted in the df -h output.
vgextend /dev/mapper/VolGroup /dev/sdb1
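(The LVM tools are also happy with just the bare group name, e.g. vgextend VolGroup /dev/sdb1.) To verify that the group actually grew, displaying it should now report roughly 40 GB total with about half of that free:

vgdisplay VolGroup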
Okay, now I need to actually extend the volume that I want to have the additional space (in this case /, which lives in the logical volume named lv_root). To know how large to make the resized volume, I had to check the volume group to see how much free space it had, and then check the existing volume to see how much space it was already using. I did this by running the two commands below and taking note of the number of extents each reported.
lvm vgdisplay|grep 'Free PE'
That showed 5119 free extents in the volume group.
lvdisplay /dev/VolGroup/lv_root | grep 'Current LE'
That showed 4498 extents currently in use by the lv_root volume.
So, 5119 plus 4498 equals 9617 – which is the new size I want the volume to be – essentially telling it to consume all the available space.
lvresize -l 9617 /dev/VolGroup/lv_root
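(If you’d rather skip the extent arithmetic entirely, lvresize also understands relative sizes expressed as a percentage of the group’s free space. I did the math by hand here, so consider this an untested-by-me shortcut:)

lvresize -l +100%FREE /dev/VolGroup/lv_root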
The volume has now been extended, but the extra space still doesn’t show up. One last step is needed: extend the file system itself so that it takes up all of the space in the volume:
resize2fs /dev/VolGroup/lv_root
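As a bonus, resize2fs can grow an ext3/ext4 filesystem while it’s mounted, which is why none of this required taking / offline. And if you’d rather collapse the last two steps into one, lvresize has a --resizefs (-r) flag that resizes the filesystem for you; something like this should be equivalent to what I did in two steps, though I haven’t tried it on this box:

lvresize -r -l 9617 /dev/VolGroup/lv_root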
After that last step I ran another df -h and got this output:
$ df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root   37G  3.2G   32G   9% /
tmpfs                         495M  276K  495M   1% /dev/shm
/dev/sda1                     485M   33M  427M   8% /boot
From that output you can see that /, which was originally 18 GB above, is now 37 GB. Yeah, I know, 37 GB minus 18 GB doesn’t equal 20 GB. That’s normal though and explainable, but beyond the scope of this post. :-)
Happy hosting!