VCP 6 Study Note – VM migration (Cold, vMotion, svMotion, EVC)

The key value of virtualizing a server in a vSphere environment is the agility to move it across the physical infrastructure with near-zero downtime. vSphere introduced vMotion and Storage vMotion to give customers the ability to minimize planned downtime for maintenance.

Several types of migration are possible, depending on which virtual machine resources are moved:

  • Change compute resource only
  • Change storage only
  • Change both (compute and storage)

With vSphere 6.0 and later, it is possible to perform some enhanced migrations:

  • Migrate to another virtual switch
  • Migrate to another datacenter
  • Migrate to another vCenter Server system
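The migration types above can be sketched as a simple decision function. This is an illustrative model of the classification, not a VMware API; the function name and parameters are assumptions made for the example.

```python
# Hypothetical sketch: classify a migration request by which resources change,
# mirroring the vSphere migration types listed above.
def classify_migration(change_host: bool, change_datastore: bool,
                       vm_powered_on: bool) -> str:
    """Return the kind of migration vSphere would use for this request."""
    if not vm_powered_on:
        return "cold migration"             # suspended or powered-off VM
    if change_host and change_datastore:
        return "vMotion + Storage vMotion"  # shared-nothing vMotion
    if change_host:
        return "vMotion"                    # change compute resource only
    if change_datastore:
        return "Storage vMotion"            # change storage only
    return "no migration needed"
```

For example, `classify_migration(True, False, vm_powered_on=True)` returns `"vMotion"`.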

 

Cold Migration

Cold migration is the way to move suspended or powered-off workloads. This process uses the management network to perform the data movement.

vMotion

Migration with vMotion allows virtual machine processes to continue working throughout a migration.

  • VM only: the entire state of the virtual machine is moved to the new host, but the associated virtual disks remain in the same location on storage that is shared between the two hosts.
  • VM + storage: the virtual machine state is moved to a new host and the virtual disks are moved to another datastore. vMotion migration to another host and datastore is possible in vSphere environments without shared storage.

 

The transferred state information includes:

  • memory content: includes transaction data and the bits of the operating system and applications that are in memory
  • identification information stored in the state: includes all the data that maps to the virtual machine hardware elements, such as BIOS, devices, CPU, MAC addresses for the Ethernet cards, chip set states, registers, and so forth

Stages in vMotion:

  • vCenter Server verifies that the virtual machine is in a stable state (if unstable, the vMotion does not proceed)
  • the VM state information is copied to the target host
  • the VM resumes its activity on the target host; if an error occurs, the VM reverts to its original state on the source host
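The three stages above, including the rollback behavior, can be modeled as a short sketch. Everything here (the dictionaries representing hosts and VMs, the `resume` helper, the exception type) is illustrative, not a VMware API.

```python
# Toy model of the vMotion stages: verify, copy state to the target,
# then resume on the target with rollback to the source on failure.
class VMotionError(Exception):
    pass

def resume(host, vm_name):
    # illustrative: mark the copied VM as running on the target host
    host["vms"][vm_name]["running"] = True

def vmotion(vm, source, target):
    # Stage 1: verify that the VM is in a stable state
    if not vm["stable"]:
        raise VMotionError("VM state unstable; vMotion will not proceed")
    # Stage 2: copy the VM state information to the target host
    target["vms"][vm["name"]] = dict(vm)
    # Stage 3: resume on the target; revert to the source host on any error
    try:
        resume(target, vm["name"])
        del source["vms"][vm["name"]]
    except Exception:
        del target["vms"][vm["name"]]  # roll back: the VM stays on the source
        raise
```

The point of the structure is that the VM exists on the source until the switchover succeeds, so a failure at any stage leaves it running where it started.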

vMotion requires:

  • Host
    • licensed for vMotion
    • shared storage
    • a VMkernel port enabled for vMotion (vMotion uses TCP port 8000)
  • Shared Storage
    • Must be accessible by both hosts (source and destination)
    • migration with RDMs is possible (make sure the LUN ID is the same on both hosts)
  • Networking
    • bandwidth >= 250 Mbps per concurrent vMotion session
    • round-trip network latency < 150 ms
    • concurrent vMotions (see the configuration maximums)
    • vMotion enabled on a VMkernel port on both hosts (source and destination)
    • Best practices:
      • use one or more physical adapters with high bandwidth, shared between hosts
      • dedicate at least one adapter to vMotion (1 GbE for small environments, 10 GbE for large environments)
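The networking requirements above lend themselves to a preflight check. The sketch below is illustrative (the function and its inputs are not a VMware API); the thresholds are VMware's documented 250 Mbps per concurrent vMotion session and 150 ms round-trip latency.

```python
# Hypothetical preflight check for the vMotion networking requirements.
def vmotion_network_ok(bandwidth_mbps: float, rtt_ms: float,
                       src_vmk_vmotion: bool, dst_vmk_vmotion: bool) -> list:
    """Return a list of problems; an empty list means requirements are met."""
    problems = []
    if bandwidth_mbps < 250:
        problems.append("need >= 250 Mbps per concurrent vMotion")
    if rtt_ms >= 150:
        problems.append("round-trip latency must be < 150 ms")
    if not (src_vmk_vmotion and dst_vmk_vmotion):
        problems.append("vMotion must be enabled on a VMkernel port on both hosts")
    return problems
```

Returning a list of findings rather than a single boolean mirrors how vCenter surfaces multiple compatibility warnings at once in the migration wizard.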

vMotion across long distances requires:

  • network latency RT < 150ms
  • long-distance vMotion licensing (both hosts)
  • place the traffic related to virtual machine files transfer to the destination host on the provisioning TCP/IP stack

vMotion Limitations

  • The source and destination management network IP address families must match (no mixed IPv4 and IPv6 environments)
  • You cannot migrate virtual machines that use RDMs for clustering
  • If virtual CPU performance counters are enabled, you can migrate virtual machines only to hosts that have compatible CPU performance counters
  • You can migrate virtual machines that have 3D graphics enabled: to migrate virtual machines with the 3D Renderer set to Hardware, the destination host must have a GPU graphics card
  • You can migrate virtual machines with USB devices that are connected to a physical USB device on the host. You must enable the devices for vMotion
  • You cannot use migration with vMotion for a virtual machine that uses a virtual device backed by a device that is not accessible on the destination host (e.g. a VM with a CD drive backed by the host's CD-ROM)
  • You cannot use migration with vMotion for a virtual machine that uses a virtual device backed by a device on the client computer
  • You can migrate virtual machines that use Flash Read Cache only if the destination host also provides Flash Read Cache
  • For the swap file location:
    • for migrations between hosts running ESX/ESXi 3.5 and later, vMotion and migrations of suspended and powered-off virtual machines are allowed
    • if the swap file location specified on the destination host differs from the one specified on the source host, the swap file is copied to the new location. This activity can result in slower migrations with vMotion

vMotion without Shared Storage

This is useful for performing cross-cluster migrations, when the target cluster machines might not have access to the source cluster’s storage.

Use case:

  • Host maintenance: move VMs off the host to allow maintenance mode
  • Storage maintenance: move VMs off a storage device to allow maintenance and reconfiguration
  • Storage load redistribution: manually redistribute VMs or VMDKs across different storage volumes to balance capacity and/or improve performance

Requirements and limitations:

  • vMotion licensing
  • ESXi host version >= 5.1
  • the networking requirements for vMotion must be met
  • the VM state and configuration requirements for vMotion must be met
  • VM disks must be in persistent mode or be RDMs
  • the destination host must have access to the destination storage
  • if you move a virtual machine with RDMs and do not convert them to VMDKs, the destination host must have access to the RDM LUNs
  • consider the limits for simultaneous migrations when you perform a vMotion migration without shared storage
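The last point matters because a vMotion without shared storage counts against both the vMotion and the Storage vMotion concurrency limits. The toy admission check below illustrates that double accounting; the limit values and data structures are illustrative only, so check the configuration maximums for your vSphere release.

```python
# Toy admission control for simultaneous migrations. A shared-nothing
# vMotion consumes both a vMotion slot and a Storage vMotion slot.
# The limits below are illustrative, not official configuration maximums.
LIMITS = {"vmotion_per_host": 8, "svmotion_per_host": 2}

def can_start(active: dict, migration_type: str) -> bool:
    """active maps the same keys as LIMITS to currently running counts."""
    needs_net = migration_type in ("vmotion", "shared_nothing")
    needs_storage = migration_type in ("svmotion", "shared_nothing")
    if needs_net and active["vmotion_per_host"] >= LIMITS["vmotion_per_host"]:
        return False
    if needs_storage and active["svmotion_per_host"] >= LIMITS["svmotion_per_host"]:
        return False
    return True
```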

 

Migration between vCenter Server Instances

Use case:

  • Balance workloads across clusters and vCenter Server instances
  • Elastically grow or shrink capacity across resources in different vCenter Server instances in the same site or in another geographical area
  • Move virtual machines between environments that have different purposes
  • Move virtual machines to meet different Service Level Agreements (SLAs)

Requirements:

  • source and destination vCenter Server instances and ESXi hosts must be >= 6.0
  • The cross vCenter Server and long distance vMotion features require an Enterprise Plus license
  • Both vCenter Server instances must be time-synchronized with each other for correct vCenter Single Sign-On token verification
  • For migration of compute resources only, both vCenter Server instances must be connected to the shared virtual machine storage
  • When using the vSphere Web Client, both vCenter Server instances must be in Enhanced Linked Mode and must be in the same vCenter Single Sign-On domain so that the source vCenter Server can authenticate to the destination vCenter Server

Because the source and destination are different environments, it is important to check network compatibility to prevent the following problems:

  • MAC address compatibility on the destination host
  • vMotion from a distributed switch to a standard switch
  • vMotion between distributed switches of different versions
  • vMotion to an internal network (eg: without physical nic)
  • vMotion to a distributed switch that is not working properly

NOTE: vCenter Server does not check whether the source and destination distributed switches are in the same broadcast domain (if they are not, the VM loses connectivity) or whether they have the same services configured

When you move a virtual machine between vCenter Server instances, the environment specifically handles MAC address migration to avoid address duplication and loss of data in the network.
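One way to picture this handling: after a cross-vCenter migration, the source vCenter keeps the departed VM's MAC addresses out of its allocation pool, so they are not handed to a new VM while still live on the destination. The sketch below is only a model of that idea; the class and its methods are invented for illustration and are not a VMware API.

```python
# Illustrative model: retired MACs (moved away with migrated VMs) are
# never reissued by this allocator, avoiding duplicates on the network.
class MacAllocator:
    def __init__(self, pool):
        self.free = list(pool)
        self.retired = set()   # MACs that left with cross-vCenter migrations

    def allocate(self):
        while self.free:
            mac = self.free.pop(0)
            if mac not in self.retired:
                return mac
        raise RuntimeError("MAC pool exhausted")

    def retire(self, mac):
        # called when a VM carrying this MAC migrates to another vCenter
        self.retired.add(mac)
```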

Storage vMotion

With Storage vMotion, it’s possible to migrate a virtual machine and its disk files from one datastore to another while the virtual machine is running. With Storage vMotion, you can move virtual machines off of arrays for maintenance or to upgrade. You also have the flexibility to optimize disks for performance, or to transform disk types, which you can use to reclaim space.

Use Case:

  • Storage maintenance and reconfiguration
  • Storage load redistribution (move disks across different storage volumes to balance capacity and improve performance)

Requirements:

  • Virtual machine disks must be in persistent mode or be raw device mappings (RDMs)
    • virtual-mode RDMs (vRDM): you can migrate the mapping file or convert it to a thick-provisioned or thin-provisioned disk during migration, provided the destination is not an NFS datastore. If you convert the mapping file, a new virtual disk is created and the contents of the mapped LUN are copied to this disk
    • physical-mode RDMs (pRDM): you can migrate the mapping file only
  • Migration of virtual machines during VMware Tools installation is not supported
  • Because VMFS3 datastores do not support large capacity virtual disks, you cannot move virtual disks greater than 2 TB from a VMFS5 datastore to a VMFS3 datastore
  • The host on which the virtual machine is running must have a license that includes Storage vMotion
  • ESXi 5.0 and later hosts do not require vMotion configuration in order to perform migration with Storage vMotion
  • The host on which the virtual machine is running must have access to both the source and target datastores
  • See the configuration maximums for the limits on the number of simultaneous Storage vMotion migrations.

 

EVC

vCenter Server performs compatibility checks before it allows migration of running or suspended virtual machines to ensure that the virtual machine is compatible with the target host. Live migration and suspended migration require that the processors of the target host be the same vendor and family to provide the same instructions to the virtual machine after migration that the processors of the source host provided before migration. Clock speed, cache size and number of cores may be different.

When you attempt to migrate a virtual machine with vMotion, one of the following scenarios applies:

  • The destination host feature set matches the virtual machine’s CPU feature set. CPU compatibility requirements are met, and migration with vMotion proceeds
  • The virtual machine’s CPU feature set contains features not supported by the destination host. CPU compatibility requirements are not met, and migration with vMotion cannot proceed
  • The destination host supports the virtual machine’s feature set, plus additional user-level features (such as SSE4.1) not found in the virtual machine’s feature set. CPU compatibility requirements are not met, and migration with vMotion cannot proceed
  • The destination host supports the virtual machine’s feature set, plus additional kernel-level features (such as NX or XD) not found in the virtual machine’s feature set. CPU compatibility requirements are met, and migration with vMotion proceeds; the virtual machine acquires the new feature set after a reboot.
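The four scenarios reduce to set operations on CPU feature names: the destination must provide everything the VM uses, and extra features on the destination are tolerated only if they are kernel-level (like NX/XD). The sketch below models that rule; the feature classification and function are illustrative, not how ESXi actually encodes CPUID data.

```python
# Sketch of the vMotion CPU compatibility scenarios as set operations.
KERNEL_LEVEL = {"NX", "XD"}   # kernel-level examples from the text

def vmotion_allowed(vm_features: set, dest_features: set) -> bool:
    if not vm_features <= dest_features:
        return False               # destination lacks features the VM uses
    extra = dest_features - vm_features
    user_level_extra = extra - KERNEL_LEVEL
    return not user_level_extra    # only kernel-level extras are acceptable
```

User-level extras (such as SSE4.1) block the migration because guest applications can detect and use them directly, whereas kernel-level extras can be hidden from the guest.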

To improve compatibility between varying CPU instruction sets, it is possible to hide some CPU features (typical of a given family) in order to establish a baseline of common CPU instructions and remain compatible across different CPU models. This feature is called Enhanced vMotion Compatibility (EVC); in practice, it presents only a common subset of CPU features to the VMs.
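Conceptually, the EVC baseline behaves like the intersection of the hosts' feature sets, and every VM in the cluster sees only that baseline. Note this is a simplification for illustration: real EVC uses vendor-defined baselines (e.g. per-generation Intel or AMD modes), not a free-form intersection.

```python
# Sketch of the EVC masking principle: the cluster exposes only the
# CPU features common to all hosts.
def evc_baseline(host_feature_sets):
    """host_feature_sets: non-empty list of per-host CPU feature sets."""
    baseline = set(host_feature_sets[0])
    for features in host_feature_sets[1:]:
        baseline &= set(features)   # keep only features every host has
    return baseline
```

For example, a cluster mixing hosts with `{"SSE2", "SSE4.1", "AES"}` and `{"SSE2", "SSE4.1"}` would expose `{"SSE2", "SSE4.1"}` to its VMs, so a VM never depends on a feature some host lacks.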

Requirements:

  • ESXi version >= 5.0
  • vCenter Server
  • CPUs from a single vendor, either AMD or Intel
  • Advanced CPU features:
    • AMD-V or Intel VT
    • AMD No eXecute (NX)
    • Intel eXecute Disable (XD)
  • CPUs supported by EVC
  • vMotion configured on all hosts

See the following KB for EVC processor support: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212
