I found that when you migrate a Windows vCenter Server to the VCSA, you can end up with a deployment size that is less than optimal. Some people say: don't worry, it's just thin provisioned. But I'm an old-school guy and I like to keep things neat and simple. The specific use case I have is a former Windows vCenter that ended up as an X-Large VCSA. At the time of the migration, no size other than X-Large was offered in the interface, even though the data was nowhere near the threshold for an X-Large deployment. As you can see in the following directory listing, the /storage/seat mount point is 1.4TB even though only 56GB is in use.
root@vcsa02 [ ~ ]# df -h
Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                   24G     0   24G   0% /dev
tmpfs                                      24G   20K   24G   1% /dev/shm
tmpfs                                      24G  676K   24G   1% /run
tmpfs                                      24G     0   24G   0% /sys/fs/cgroup
/dev/sda3                                  11G  6.0G  4.1G  60% /
tmpfs                                      24G  948K   24G   1% /tmp
/dev/sda1                                 120M   28M   87M  24% /boot
/dev/mapper/updatemgr_vg-updatemgr         99G  1.5G   92G   2% /storage/updatemgr
/dev/mapper/autodeploy_vg-autodeploy       25G   69M   24G   1% /storage/autodeploy
/dev/mapper/log_vg-log                    148G   43G   99G  31% /storage/log
/dev/mapper/dblog_vg-dblog                 25G  9.7G   14G  42% /storage/dblog
/dev/mapper/imagebuilder_vg-imagebuilder   25G   45M   24G   1% /storage/imagebuilder
/dev/mapper/netdump_vg-netdump            9.8G   23M  9.2G   1% /storage/netdump
/dev/mapper/core_vg-core                   99G   60M   94G   1% /storage/core
/dev/mapper/db_vg-db                      296G   19G  265G   7% /storage/db
/dev/mapper/seat_vg-seat                  1.4T   56G  1.3T   5% /storage/seat
tmpfs                                     1.0M     0  1.0M   0% /var/spool/snmp
And sure enough, when I did a dry run of the upgrade on this vCenter, all the provisioned space was taken into account: the upgrade process only let me select X-Large because 2.2TB of storage was supposedly needed, even though the sum of all the data is only about 150GB.
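To see where a figure like that comes from, you can tally provisioned versus used space across the /storage volumes yourself. This is a small sketch of my own, not part of the upgrade tooling; it assumes you feed it `df -BG` output so all sizes come out in whole gigabytes:

```shell
# sum_storage: read `df -BG` output on stdin and total the /storage/* volumes.
# awk's numeric coercion ($2 + 0) strips the trailing "G" from each size.
sum_storage() {
    awk '$6 ~ "^/storage/" { prov += $2 + 0; used += $3 + 0 }
         END { printf "provisioned: %dG, used: %dG\n", prov, used }'
}

# On the appliance you would run:
#   df -BG | sum_storage
```

On a deployment like the one above, the gap between the two numbers makes the sizing problem obvious at a glance.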
Product improvement tip! – If any VMware employee ever gets to read this blog post: maybe you can make future VCSA upgrades look only at used disk space instead of provisioned space. 😎
So I created a service request with VMware to see if there was a way to reduce the deployment size during the upgrade. Unfortunately, there was no documented procedure available. Even the guides and KB articles on the internet only describe how to extend VCSA storage. So I decided to investigate whether I could come up with a procedure of my own.
Turns out reducing disks – or volumes, I should say – is actually very easy because the VCSA uses LVM. The required steps are:
- Create a backup or snapshot before you begin the process
- Stop the services:
service-control --stop --all
- Unmount the mount point(s) you want to resize:
umount /storage/seat
- Do a filesystem check:
e2fsck -f /dev/mapper/seat_vg-seat
- Resize the filesystem and make it a bit smaller than the size the volume will eventually become:
resize2fs /dev/mapper/seat_vg-seat 9G
- Set the logical volume to the desired size:
lvreduce -L 10G /dev/seat_vg/seat
- Now let the filesystem fill the rest of the logical volume (resize2fs without a size argument grows the filesystem to the full size of the volume):
resize2fs /dev/mapper/seat_vg-seat
- Mount the filesystem(s) you resized:
mount /storage/seat
- And restart the services:
service-control --start --all
- Remove the snapshot if the procedure is successful.
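The steps above can be combined into a single script. This is my own sketch, not an official VMware tool: the device names and the 9G/10G sizes are the /storage/seat example from above, the `-f` on lvreduce (to skip the interactive confirmation) and the DRY_RUN guard, which defaults to on here so you can preview the commands first, are my additions.

```shell
#!/bin/bash
# Sketch: shrink one VCSA mount point (the /storage/seat example).
# Adjust DEV/LV/MNT and the sizes for the volume you are resizing.
set -euo pipefail

DEV=/dev/mapper/seat_vg-seat
LV=/dev/seat_vg/seat
MNT=/storage/seat
FS_SIZE=9G    # first shrink the filesystem a bit below the target...
LV_SIZE=10G   # ...then set the logical volume to the desired size

DRY_RUN="${DRY_RUN:-1}"   # defaults to a dry run; set DRY_RUN=0 to execute

run() {
    # Print the command in dry-run mode, otherwise execute it.
    if [ "$DRY_RUN" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

run service-control --stop --all     # stop the vCenter services
run umount "$MNT"                    # unmount before touching the filesystem
run e2fsck -f "$DEV"                 # force a filesystem check
run resize2fs "$DEV" "$FS_SIZE"      # shrink the filesystem below the target
run lvreduce -f -L "$LV_SIZE" "$LV"  # reduce the logical volume
run resize2fs "$DEV"                 # let the filesystem fill the volume
run mount "$MNT"                     # remount (picks up the /etc/fstab entry)
run service-control --start --all    # restart the services
```

Run it once as-is to review the commands, then with DRY_RUN=0 to execute. Take that backup or snapshot first either way.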
On my test VCSA I resized the /storage/seat mount point and recorded the process using asciinema. I included a few extra steps for my environment to verify that a VCSA backup actually picks up the resized mount point, but you can ignore those.
Also keep in mind that this procedure only resizes the filesystems and logical volumes; it does not change the underlying disks themselves. But as this is intended as a pre-upgrade procedure, where you move everything over to a new appliance anyway, this is not an issue.