Recently I’ve been involved with a project that required the use of physical mode raw device mappings (pRDMs) on vSphere. We weren’t thrilled with the prospect of deploying these pRDMs. Now why would that be, you might ask? Well, as a cloud service provider you want maximum agility when it comes to the underlying physical infrastructure. If you want to move storage across devices, technologies like Storage vMotion and Storage Policy Based Management (SPBM) make those migrations transparent to the customer and perform them without downtime.
Sometimes pRDMs are a requirement for solutions such as Windows Server Failover Clustering (WSFC). So, what happens when pRDMs are added to the mix? With pRDMs the storage is no longer controlled by the hypervisor but accessed directly by the VMs themselves, which looks a bit like this.
Given that VMs with pRDMs access the storage devices directly, what options do you have when you want to retire the underlying storage array? Luckily there are a couple of solutions to choose from.
Storage virtualization
You can use a technology like DellEMC’s VPLEX or IBM’s SVC to virtualize the storage layer itself. This means that you can present a virtual storage device that is backed by one array, and when it is time to retire that array you can simply migrate the blocks to another array. When this procedure is completed you basically flip the switch and move over to the new storage array. Obviously this technology is the preferred one, as you can migrate without any downtime. But what about the storage virtualization layer itself? Doesn’t that have a lifecycle too? Well, sure it does. Can you migrate seamlessly between solutions from different vendors if you choose to do so? Probably not. And to top it all off, these solutions require a significant investment. So, unless you have customers that don’t mind paying a premium for that kind of service, this option is not for you.
Storage array-based migration
Now some storage arrays allow you to copy from one device to another. This is usually a feature that supports migration between devices from the same vendor, so when using this kind of migration scenario you are bound to that vendor. And if you are really unlucky you are even confined to a certain product line of said vendor. This type of migration does require downtime, because you don’t want the original storage device to receive any writes during the migration. Maybe there are solutions that use a delta-based approach, somewhat like Storage vMotion, which would reduce the downtime significantly, but I’m not aware of any that work that way. You might also find that this feature has to be licensed separately, but I’m guessing most vendors will offer it for free.
vSphere-based migration
The vSphere documentation explains that you can use vmkfstools to clone or convert both virtual disks and RDMs. Now I haven’t found any examples on the web of someone actually using vmkfstools to do a pRDM to pRDM migration, but I suppose it would go something like this.
- Shut down all VMs that use the pRDM
- Remove all references to the primary VM’s mapping files from the other VMs in the cluster
- Present the new LUNs to the hosts and preserve the old ones
- Create the RDM disk mappings on the primary VM
- Run vmkfstools (probably adding --diskformat as a parameter; see the sketch after this list)
vmkfstools -i /vmfs/volumes/myVMFS/OLDdisk.vmdk /vmfs/volumes/myVMFS/NEWdisk.vmdk
- Remove the disks containing the old mappings. You can safely use the ‘delete from disk’ option as this only removes the mapping file.
- Change the SCSI IDs to match those of the old disks.
- Add the new mapping files to the remaining VMs of your virtual cluster
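For what it’s worth, my guess is that the clone command for a pRDM target would look something like the line below. This is an untested sketch: the rdmp: prefix of --diskformat should produce a physical-mode mapping, and the naa identifier of the new LUN as well as the file names are placeholders.

# Untested sketch: clone the old mapping to a new physical-mode RDM backed by the new LUN.
# Replace naa.600xxx with the device identifier of the new LUN and adjust the paths to your environment.
vmkfstools -i /vmfs/volumes/myVMFS/OLDdisk.vmdk -d rdmp:/vmfs/devices/disks/naa.600xxx /vmfs/volumes/myVMFS/NEWdisk.vmdk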
Now, because I did not find any blogs of anyone actually using vmkfstools for this type of migration, and because I wasn’t sure which parameters to use, I decided to use an alternative migration tool.
Clonezilla
The migration tool of choice is Clonezilla. Now what is Clonezilla and why use it? Well, Clonezilla is a disk imaging and cloning tool, meaning you can use it to save your disks to an image file for backup purposes. Chances are you have used similar tools in the past, like Norton Ghost. You can also use Clonezilla to clone disks, which is what I want to demonstrate in this post. Now why would you want to use Clonezilla?
- It’s free of charge!
- It works across storage vendors
- It’s very easy to use
Next, I’ll explain which steps are involved in migrating from one LUN to another if you are using pRDMs on vSphere.
To start off, you probably want to have an overview of the disk layout of the primary VM of the cluster. To automate this, use this PowerCLI code.
Get-VM VMName |
    Get-HardDisk -DiskType "RawPhysical","RawVirtual" |
    Select-Object Parent, Name,
        @{N='SCSIid';E={
            $hd = $_
            $ctrl = $hd.Parent.ExtensionData.Config.Hardware.Device |
                Where-Object { $_.Key -eq $hd.ExtensionData.ControllerKey }
            "$($ctrl.BusNumber):$($_.ExtensionData.UnitNumber)"
        }},
        CapacityGB, DiskType, ScsiCanonicalName, DeviceName, FileName |
    Out-GridView
If you execute this code the output will look something like this.
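If you want to keep a record of that layout so you can compare it with the situation after the migration, you could also write a trimmed-down version of the snippet to a CSV file instead of a grid view (the output path is just an example):

# Same disk overview, but saved to a CSV file so you can compare it after the migration.
Get-VM VMName |
    Get-HardDisk -DiskType "RawPhysical","RawVirtual" |
    Select-Object Parent, Name, CapacityGB, DiskType, ScsiCanonicalName, DeviceName, FileName |
    Export-Csv -Path C:\Temp\VMName-disks.csv -NoTypeInformation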
Now execute these steps to perform the actual migration.
- First you create the new LUNs on the destination storage array. Make sure to create LUNs that have the same size as the source LUNs and mask them to the same hosts.
- Shut down all VMs that are connected to the pRDMs
- Add the newly created LUNs to the primary VM of the cluster. Check out the vSphere documentation for details on this procedure. Caution! If you are using vCenter Server 6.5, DON’T use the HTML5 UI; you will encounter issues when attaching the RDMs to multiple VMs.
- Repeat the disk overview mentioned earlier and check if the new disks are identical to the old ones.
- Download the Clonezilla livecd from the website https://clonezilla.org (stable version, amd64 architecture, ISO format).
- Attach the ISO file to the primary VM and reboot it using the ISO as a boot device. Make sure you either hit ESC during the boot sequence or make the VM boot into the BIOS setup, otherwise the VM boots to the OS installed on the virtual hard disk (a PowerCLI sketch for this step follows this list).
- Now execute the steps from the screenshots below.
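Here is that PowerCLI sketch for attaching the ISO and forcing the BIOS boot. The VM name and the datastore path to the Clonezilla ISO are placeholders, and the BIOS part goes through the boot options in the vSphere API; consider it a rough sketch rather than a polished script.

# Attach the Clonezilla ISO to the VM's CD drive and connect it at power-on (path is a placeholder).
Get-CDDrive -VM "VMName" |
    Set-CDDrive -IsoPath "[datastore1] iso/clonezilla-live.iso" -StartConnected:$true -Confirm:$false

# Force the next boot into the BIOS setup screen so the VM doesn't boot straight into its OS.
$vm = Get-VM "VMName"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.BootOptions = New-Object VMware.Vim.VirtualMachineBootOptions
$spec.BootOptions.EnterBIOSSetup = $true
$vm.ExtensionData.ReconfigVM($spec)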
The screenshots show how to clone a single disk. To clone multiple disks, choose the ‘rerun’ menu item to return to the mode selection screen and repeat this process until all disks have been cloned.
To finalize the process execute these steps.
- On both the primary VM and all secondary VMs remove ALL SHARED disks. Do NOT remove the OS disk(s). You can safely choose the ‘delete from disk’ option as this only removes the mapping files from the datastore.
- Re-attach the new shared disks to all VMs and make sure they are attached to their original SCSI IDs (a rough PowerCLI sketch follows this list).
- Boot the VMs normally and all should be well.
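And here is the rough PowerCLI sketch for the re-attach step. The VM names, the device identifier and the controller bus number are placeholders, and as far as I know New-HardDisk doesn’t let you pick the exact unit number on the controller, so you may still need to correct the SCSI IDs in the vSphere client afterwards.

# Create the new physical-mode RDM on the primary node, on the shared SCSI controller (bus 1 in this example).
# The naa device identifier is a placeholder.
$primary = Get-VM "node01"
$pctrl = Get-ScsiController -VM $primary | Where-Object { $_.ExtensionData.BusNumber -eq 1 }
$rdm = New-HardDisk -VM $primary -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.600xxx" -Controller $pctrl

# Attach the mapping file that was just created to a secondary node, on its shared SCSI controller.
$secondary = Get-VM "node02"
$sctrl = Get-ScsiController -VM $secondary | Where-Object { $_.ExtensionData.BusNumber -eq 1 }
New-HardDisk -VM $secondary -DiskPath $rdm.Filename -Controller $sctrl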
All this is fine and dandy, but I must admit my background is more in the compute space than in storage. So if you happen to have more storage expertise and think my reasoning is flawed, please leave a comment so we can all learn from the feedback.