
vMotion and Storage vMotion


vMotion is probably the most popular and most sought-after feature in the VMware infrastructure suite. The vMotion feature allows a running virtual machine to be migrated, without interruption, from one host to another, provided that some prerequisites are met on the originating and destination hosts.

Storage vMotion, on the other hand, allows you to migrate a VM's data files from one storage location to another without interruption. The vMotion suite collectively allows you to control a VM's host placement and its data file placement at any time for performance or organization purposes without downtime.


vMotion is an enterprise-level feature and thereby requires vCenter before it can be enabled. vMotion, as you see later in the section "Distributed Resource Scheduler," is used in conjunction with DRS to make sure VMs are always spread out on the most appropriate host, thereby balancing the resource availability of these hosts.

vMotion Host Prerequisites

With vMotion, for the VM to successfully port from one host to another, the following requirements must be satisfied on the source and destination hosts:

  • Access to all datastores on which the VM is configured
  • Virtual switches that are labeled the same, so that when the VM is ported from one host to another, its configuration is the same and finds the same resources
  • Access to the same physical networks for the VM to continue to function after being ported from one host to another
  • Compatible CPUs
  • Gigabit network connection
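These host-level checks can be expressed as a short sketch. The host descriptions below are hypothetical dictionaries, not a VMware API; the field names are illustrative only:

```python
# Hypothetical host descriptions; field names are illustrative only,
# not part of any VMware API.
def vmotion_compatible(src, dst, vm_datastores):
    """Return a list of unmet vMotion prerequisites (empty if compatible)."""
    problems = []
    # Both hosts must have access to every datastore the VM is configured on.
    for ds in vm_datastores:
        if ds not in src["datastores"] or ds not in dst["datastores"]:
            problems.append(f"datastore {ds} not visible on both hosts")
    # Port group labels must match so the ported VM finds the same resources.
    if not set(src["portgroups"]) <= set(dst["portgroups"]):
        problems.append("destination is missing identically labeled port groups")
    # Same physical networks, compatible CPU vendor, Gigabit vMotion NIC.
    if src["networks"] != dst["networks"]:
        problems.append("hosts are not on the same physical networks")
    if src["cpu_vendor"] != dst["cpu_vendor"]:
        problems.append("CPU vendors differ")
    if min(src["nic_mbps"], dst["nic_mbps"]) < 1000:
        problems.append("vMotion NIC is slower than Gigabit")
    return problems

src = {"datastores": {"ds1"}, "portgroups": {"Production"},
       "networks": {"vlan10"}, "cpu_vendor": "Intel", "nic_mbps": 1000}
dst = {"datastores": {"ds1"}, "portgroups": {"Production"},
       "networks": {"vlan10"}, "cpu_vendor": "Intel", "nic_mbps": 1000}
print(vmotion_compatible(src, dst, {"ds1"}))  # → []
```

An empty list means all prerequisites are satisfied; any entry in the list corresponds to a condition that would block or complicate the migration.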

When you initiate a vMotion from one host to another, the wizard that starts the process warns you if there are errors that prevent the migration from completing successfully. The vMotion wizard also provides warnings that you should take into account and possibly address after the migration is completed. Warnings do not prevent the vMotion process from completing successfully, whereas errors do. Table 8.2 outlines the different scenarios that might generate an error or a warning.

Table 8.2. vMotion Errors and Warnings

vMotion Errors

  • A VM is connected to an internal vSwitch on the source host.
  • A VM has a removable disk such as a CD/DVD-ROM or floppy connected to it.
  • A VM has CPU affinity assigned.

vMotion Warnings

  • A VM is configured for an internal vSwitch but is not connected to it.
  • A VM is configured for a removable CD/DVD-ROM or floppy but is not connected to it.
  • A VM has a snapshot.
  • A heartbeat cannot be detected from the VM to be migrated.
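Table 8.2 amounts to a simple lookup: each condition either blocks the migration or merely warns. A minimal sketch, with made-up condition names standing in for the table's rows:

```python
# Sketch of Table 8.2 as a lookup. Condition names are illustrative,
# not VMware identifiers; errors block vMotion, warnings do not.
ERRORS = {
    "connected_to_internal_vswitch",
    "removable_media_connected",
    "cpu_affinity_set",
}
WARNINGS = {
    "internal_vswitch_configured_not_connected",
    "removable_media_configured_not_connected",
    "has_snapshot",
    "no_heartbeat",
}

def check_vm(conditions):
    """Split a VM's conditions into blocking errors and non-blocking warnings."""
    errors = sorted(c for c in conditions if c in ERRORS)
    warnings = sorted(c for c in conditions if c in WARNINGS)
    return errors, warnings

errors, warnings = check_vm({"has_snapshot", "cpu_affinity_set"})
print(errors)    # → ['cpu_affinity_set']
print(warnings)  # → ['has_snapshot']
```

Here the snapshot alone would let vMotion proceed, but the CPU affinity setting would stop it until removed.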

Enabling vMotion

To enable vMotion, you need to create a VMkernel port group with vMotion enabled on all ESX/ESXi hosts that will participate in the vMotion process, as shown in Figure 8.3. The virtual switch where this port group is created should bear the same label on all ESX/ESXi hosts. Typically, vMotion is configured on a dedicated virtual switch on all ESX/ESXi hosts.

Figure 8.3

Figure 8.3 Port group with vMotion enabled.

vMotion also requires that the physical NIC servicing the virtual switch where vMotion is enabled be Gigabit or faster.

vMotion CPU Requirements

One of the main obstacles to a successful vMotion migration is the CPU; vMotion requires a strict CPU approach, so keep the following guidelines in mind:

  • vMotion does not work across CPU vendors, so if you have an ESX/ESXi host that is running an AMD processor and one that is running an Intel processor, vMotion errors out and does not work.
  • vMotion does not work across CPU families, so you are not able to migrate between a Pentium III and a Pentium 4, for example.
  • Hyperthreading, the number of CPU cores, and the CPU cache sizes are not relevant to vMotion.
  • vMotion does not work across CPUs with different multimedia instructions—for example, a CPU with Streaming SIMD Extensions 2 (SSE2) and a CPU with Streaming SIMD Extensions 3 (SSE3).
  • NX/XD hides or exposes advanced CPU features to a VM on an ESX Server. In most cases, this feature is controlled by VMware for stability reasons (see Figure 8.4). If the guest operating system requires it, however, the vSphere Client exposes this feature in the properties of a VM. If it is exposed, the NX/XD characteristics of the source and destination hosts must match; if it is hidden, a mismatch is ignored and vMotion proceeds.
    Figure 8.4

    Figure 8.4 NX/XD feature exposed in vSphere client.
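The CPU rules above can be condensed into one check. This is a sketch, not VMware's actual compatibility logic; the field names are illustrative:

```python
# Sketch of the vMotion CPU compatibility rules: vendor, family, and
# multimedia (SSE) instruction sets must match, and NX/XD must match only
# when that feature is exposed to the VM. Core count, cache size, and
# hyperthreading are deliberately ignored. Field names are illustrative.
def cpus_compatible(src, dst, nx_exposed_to_vm):
    if src["vendor"] != dst["vendor"]:    # e.g. AMD vs. Intel: error
        return False
    if src["family"] != dst["family"]:    # e.g. Pentium III vs. Pentium 4
        return False
    if src["sse"] != dst["sse"]:          # e.g. SSE2-only vs. SSE2+SSE3
        return False
    if nx_exposed_to_vm and src["nx"] != dst["nx"]:
        return False                      # NX/XD must match only if exposed
    return True

a = {"vendor": "Intel", "family": "Xeon", "sse": {"SSE2", "SSE3"}, "nx": True}
b = {"vendor": "Intel", "family": "Xeon", "sse": {"SSE2", "SSE3"}, "nx": False}
print(cpus_compatible(a, b, nx_exposed_to_vm=False))  # → True
print(cpus_compatible(a, b, nx_exposed_to_vm=True))   # → False
```

Note how the same pair of hosts passes or fails depending solely on whether NX/XD is exposed to the VM, mirroring the behavior described above.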

CPU vendors Intel and AMD now offer a technology known as virtualization assist that aids virtualization. Intel has its VT technology, and AMD has its AMD-V technology, both of which are enabled in the BIOS of a computer.

When these technologies are present, you can enable paravirtualization for VMs whose operating systems support virtualization assist, to improve their performance. To do this, right-click the VM in question and click Edit Settings. Click the Options tab, find the Paravirtualization section, and enable it. Figure 8.5 illustrates this process.

Figure 8.5

Figure 8.5 Enabling Paravirtualization.

The vMotion Stages

Because the virtual machine to be vMotioned resides on a datastore that is visible and accessible to both the source and the destination ESX/ESXi hosts, the only thing vMotion needs to do is copy the VM's memory from one host to the other. Because the VM's memory resides in the physical memory of the source host, that memory is what needs to be copied. That being said, to initiate a vMotion, you do the following:

  • Select one or more VMs, and then right-click and choose Migrate.
  • In the Migrate wizard, choose the Change host option.

When the vMotion process begins, the four stages that it goes through are as follows:

  1. Once vMotion is initiated, a memory bitmap is created to track the changes, and the process of copying the physical RAM from one host to another begins.
  2. The VM is quiesced, and the contents of the memory bitmap are copied. Quiescing can be defined in simpler terms as a cut-over. This is the only time at which the VM is unavailable, and it is a short period that is, for the most part, transparent to the user.
  3. The VM starts on the destination host, and all connectivity moves from the source host to the destination host.
  4. The VM is removed from the source host.

During your monitoring of the vMotion process, you might notice that it pauses at 10% completion as part of the identification process.
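The pre-copy behavior in stages 1 and 2 can be sketched as a toy simulation. The page counts and dirty-page set below are made up for illustration; real vMotion tracks dirtied memory with a bitmap while the bulk copy runs, then copies only the remainder during the brief quiesce:

```python
# Toy simulation of the vMotion memory pre-copy: stage 1 copies all pages
# while a bitmap records pages dirtied during the copy; stage 2 quiesces
# the VM and copies only the bitmap's pages. Numbers are illustrative.
def vmotion_precopy(total_pages, dirtied_during_copy):
    copied = total_pages                # stage 1: full copy, bitmap tracking
    dirty_bitmap = set(dirtied_during_copy)
    quiesced_copy = len(dirty_bitmap)   # stage 2: quiesce, copy dirty pages
    # Stage 3: VM starts on the destination; stage 4: removed from source.
    total_transferred = copied + quiesced_copy
    return total_transferred, quiesced_copy

total_transferred, downtime_pages = vmotion_precopy(1000, {3, 17, 42})
print(total_transferred)  # → 1003
print(downtime_pages)     # → 3
```

The point of the bitmap is visible in the numbers: only the 3 pages dirtied during the bulk copy need to move while the VM is unavailable, which is why the cut-over is so short.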

Storage vMotion

Storage vMotion is the process of migrating all of a VM's files from one storage location to another while the VM is powered on, without any interruption. Traditional vMotion moves the logical representation of a powered-on VM from one ESX/ESXi host to another while keeping the files that constitute the VM in the same storage location. Storage vMotion complements this by allowing you to move the VM's files as well, thereby enabling a complete VM migration from one location to another without an interruption in service.

Storage vMotion was introduced in Virtual Infrastructure 3.5, but only at the command line; with vSphere 4, you can now perform a Storage vMotion from a GUI. To initiate a Storage vMotion from the GUI, follow the same steps as for a normal vMotion: right-click a VM and select Migrate. The difference is that the screen shown in Figure 8.6 has been completely changed and now offers the following options:

  • Change Host: This is the traditional vMotion option, which moves the VM, whether powered on or off, from one ESX/ESXi host to another.
  • Change Datastore: This is the Storage vMotion option, which moves all the VM's files from one storage location to another while the VM is powered on or off.
  • Change Both Host and Datastore: As the name implies, this moves both the VM and its corresponding files from one host to another, with one catch: the VM has to be powered off.
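The relationship between the three wizard options and the VM power states each allows can be captured in a small table. This is a sketch of the rules as stated above, not a VMware API:

```python
# Sketch of the Migrate wizard options and the power states each allows,
# per the text: only "Change Both Host and Datastore" requires the VM
# to be powered off. Names mirror the wizard labels for readability.
ALLOWED_STATES = {
    "Change Host": {"on", "off"},
    "Change Datastore": {"on", "off"},
    "Change Both Host and Datastore": {"off"},
}

def can_migrate(option, power_state):
    """Return True if the chosen option permits the VM's power state."""
    return power_state in ALLOWED_STATES[option]

print(can_migrate("Change Datastore", "on"))                # → True
print(can_migrate("Change Both Host and Datastore", "on"))  # → False
```

This same rule is the basis of the first Cram Quiz question at the end of the section.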
Figure 8.6

Figure 8.6 Migrate Wizard.

The next screen, shown in Figure 8.7, prompts you to select the destination datastore to which you want to move the files. It is important to note that with vSphere 4, all protocols are now supported: iSCSI, Fibre Channel, Fibre Channel over Ethernet (FCoE), NFS, and RDMs.

Figure 8.7

Figure 8.7 Datastore destination.

This brings us to the last step in the Storage vMotion wizard: the disk format. Although Storage vMotion is primarily used to move VM files from one storage location to another, you might also find it useful for changing the disk format from Thin to Thick or vice versa. In Figure 8.8, note the two options for disk type: Thin and Thick. The important thing to note here is that Thick refers to Eagerzeroedthick, which means the entire VMDK is zeroed; thin provisioning is no longer possible once a disk is converted to this type of Thick.
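The practical difference between the two formats can be modeled in a few lines. This is a toy model of the allocation behavior described above, not how VMFS actually stores blocks:

```python
# Toy model of the two disk formats: thin allocates blocks only when they
# are written, while eagerzeroedthick zeroes and commits the full capacity
# up front, leaving nothing to provision lazily. Illustrative only.
class VirtualDisk:
    def __init__(self, capacity_blocks, fmt):
        self.fmt = fmt
        self.capacity = capacity_blocks
        if fmt == "eagerzeroedthick":
            # Entire capacity zeroed and committed at creation time.
            self.blocks = [0] * capacity_blocks
        else:  # thin
            self.blocks = []  # nothing allocated until the first write

    def allocated(self):
        """Number of blocks actually committed on the datastore."""
        return len(self.blocks)

thin = VirtualDisk(100, "thin")
thick = VirtualDisk(100, "eagerzeroedthick")
print(thin.allocated())   # → 0
print(thick.allocated())  # → 100
```

A freshly created thin disk of the same nominal capacity occupies no space, while the eagerzeroedthick disk commits all of it immediately, which is exactly why the conversion to Thick in the wizard is one-way with respect to thin provisioning.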

Figure 8.8

Figure 8.8 Disk format type.

Cram Quiz

Answer these questions. The answers follow the last question. If you cannot answer these questions correctly, consider reading the section again.

  1. Storage vMotion and vMotion can be run simultaneously while _______.

     A. The VM is powered on.

     B. The VM is powered off.

     C. The VM is powered on or off.

     D. They cannot be run simultaneously under any circumstance.

  2. Which virtual disk type writes zeros across all the capacity of the virtual disk?


Cram Quiz Answers

  1. B is correct. You cannot run Storage vMotion and vMotion simultaneously while the VM is powered on. You can run them while the VM is powered off, or you can schedule them to run consecutively.

  2. Eagerzeroedthick is correct. It is the virtual disk type that writes zeros across the entire capacity of the disk and commits it all, so thin provisioning is no longer possible. All other types are incorrect.
