by Joe Grant (@dba_jedi), Principal Architect
This summer I was involved with a large-scale datacenter move for one of our clients. Rather than maintaining their own datacenters, they utilize a co-location facility. The datacenter where their workloads were hosted was being decommissioned, so their provider required a move.
While working with this same client several years ago, I was involved with their initial migration to a virtualized platform. At the time, their primary reasons for virtualizing were cost savings, a simplified infrastructure, and the flexibility virtualization provides. The possibility of a datacenter move was not even a consideration at that stage.
During that first migration, we moved quite a number of Oracle databases off of legacy HP-UX and AIX hardware. In addition, they had a few small vSphere implementations with quite a few VMs. As part of that effort, we placed all of the new VMs for the legacy workloads, as well as their existing VMs, onto a converged architecture from VCE.
For this move, all of the VMs on the VCE Vblock had to move from their current datacenter to the new datacenter approximately 10 miles away. We were also very fortunate that we did not have to re-IP any of the VMs; the networking team was able to extend the VLANs to the new datacenter.
Most datacenter moves take a significant amount of time, both in the planning stage and during the actual move. For this project, however, we completed all the virtualization pieces in around five months. This included approximately 500 VMs ranging in size from just 10 GB all the way up to a 50 TB VM, which had a single VMDK that was 40 TB. I will say that the 50 TB VM fought us the entire way.
New hardware was purchased for the new datacenter, so it was a plus that we didn't have to figure out how to move the existing hardware. Along with the new hardware, EMC RecoverPoint Appliances (RPAs) were purchased in order to replicate the storage.
Rather than utilizing the RPAs to replicate the existing datastores, and performing a datastore migration of sorts, we did things a little differently. We created four new datastores per vSphere cluster on the source side. We then used the RPAs to replicate these “transitional” datastores to copies at the new datacenter.
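To make the transitional-datastore idea concrete, here is a minimal Python sketch of one way to spread VMs across the four datastores by balancing total size. The VM names, sizes, and the greedy placement itself are purely illustrative assumptions, not details from the actual project:

```python
# Hypothetical sketch: distribute a group of VMs across four
# "transitional" datastores, always placing the next VM on the
# least-full datastore. Names and sizes are invented for illustration.

def assign_to_datastores(vms, datastore_count=4):
    """Greedy balance: vms is a list of (name, size_gb) tuples."""
    datastores = [{"vms": [], "used_gb": 0} for _ in range(datastore_count)]
    # Place the largest VMs first so the balance comes out more even.
    for name, size_gb in sorted(vms, key=lambda v: v[1], reverse=True):
        target = min(datastores, key=lambda d: d["used_gb"])
        target["vms"].append(name)
        target["used_gb"] += size_gb
    return datastores

vms = [("db01", 500), ("db02", 350), ("app01", 80),
       ("app02", 80), ("web01", 40), ("web02", 40)]
for i, ds in enumerate(assign_to_datastores(vms), start=1):
    print(f"transitional-{i}: {ds['vms']} ({ds['used_gb']} GB)")
```

In practice the placement would be driven by Storage vMotion and datastore capacity, but the idea is the same: the transitional datastores only need to hold one migration group at a time, not the whole estate.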
For the actual VM moves:
• We organized all of the VMs into migration groups.
• Two or three days before the VMs in a group were to be moved:
  • The VMs were Storage vMotioned onto one of the "transitional" datastores.
  • Any snapshots were removed.
  • ISO images mounted in the CD/DVD drives were either removed or copied to the new hardware.
• On the day of the migration:
  • The applications were shut down (depending on the needs of each application).
  • The VMs were then shut down.
  • Once the VMs were off, they were de-registered on the source side.
  • RPA replication was stopped, and the copy was mounted on the destination side.
  • The VMs were then registered on the destination side.
  • The network interface settings were adjusted.
  • The VMs and applications were started, and functionality was tested.
• As a last step, the VMs were Storage vMotioned off of the transitional datastores onto their permanent storage.
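The migration-day sequence above can be sketched as a simple orchestration skeleton. This is illustrative Python, not the tooling we actually used: every operation name here (`shutdown_app`, `stop_replication_and_mount`, and so on) is a hypothetical stand-in for the corresponding vSphere or RecoverPoint action, and the only thing the sketch asserts is the ordering:

```python
# Illustrative cutover skeleton for one migration group. Each entry in
# `ops` is a hypothetical callable standing in for a real vSphere or
# RecoverPoint operation; what matters is the order of the steps.

def cut_over_group(vms, ops):
    log = []
    # 1. Quiesce: stop applications, power off VMs, de-register on source.
    for vm in vms:
        for step in ("shutdown_app", "shutdown_vm", "deregister_source"):
            ops[step](vm)
            log.append((step, vm))
    # 2. Stop RPA replication and mount the copy at the destination
    #    (done once per group, after every VM in the group is down).
    ops["stop_replication_and_mount"]()
    log.append(("stop_replication_and_mount", None))
    # 3. Bring-up: register each VM, fix its NIC settings, power on, test.
    for vm in vms:
        for step in ("register_destination", "fix_networking", "power_on_and_test"):
            ops[step](vm)
            log.append((step, vm))
    return log
```

The key design point the sketch captures is that replication stops exactly once per group, only after the last VM is powered off, so the destination copy is crash-consistent across the whole group.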
Most of the groups consisted of 15-20 VMs, and the total downtime for each group was approximately one hour.
The Cool Part
The most striking thing for me was that the app owners did not really understand what we were doing. In many of our initial meetings with the application owners, they were preparing for a more traditional move. They were concerned that all OS versions and software installs on the new VMs be the same. Some were asking if they could perform upgrades along the way. At that point we had to slow everyone down, describe our methodology, and explain that (from an application perspective) the VM would simply be rebooted, and that the reboot might just take a little longer than usual.
Yes, there are a lot of other technologies out there that could have been used, and some of them would have even eliminated the reboot. But for this article, I wanted to highlight one of the many ways that virtualization makes life easier for the infrastructure geeks out there. I felt this example was worth showcasing because it was years in the making, and it's a benefit that isn't always considered when implementing virtualization in the first place.