Distributed Resource Scheduler
- Distributed Resource Scheduler (DRS) Cluster
- Affinity Rules
- VMware EVC
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in this section and then completing the Cram Quiz at the end of the section.
What color is assigned to a DRS cluster that is overcommitted?
How do you configure two VMs so that they are never present on the same host at the same time?
D is correct. A DRS cluster that is overcommitted is assigned the color yellow; therefore, answers A, B, and C are incorrect.
D is correct. Configuring an Anti-Affinity rule ensures that the two VMs are never placed on the same host at the same time, whereas Affinity rules force VMs to stay together on the same host; therefore, answers A, B, and C are incorrect.
VMware DRS is an enterprise-level feature that uses vMotion to load balance the CPU and memory resources of all ESX/ESXi hosts within a given DRS cluster. DRS is also used to enforce resource policies and respect placement constraints.
DRS functions using clusters. A cluster is an implicit aggregation of the CPU and memory resources of all ESX/ESXi hosts that are members of that cluster; it is the foundation for both VMware DRS clusters and VMware High Availability (HA) clusters. A cluster is an object that appears in the vCenter inventory and, like all other objects, can be assigned permissions. It can have a maximum of 32 nodes, 320 VMs per host, or 3000 VMs per cluster, whichever maximum is reached first.
In other words, you can have 32 hosts in the cluster, but you are then limited to roughly 93 VMs per host (3000 ÷ 32); alternatively, you can run 300 VMs per host on 10 hosts, 150 VMs per host on 20 hosts, and so on.
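The interplay of these maximums can be sketched as a small helper. This is plain Python for illustration only, with the vSphere 4.x limits from the text hard-coded; it is not part of any VMware tool.

```python
# vSphere 4.x DRS cluster maximums, as stated in the text.
MAX_HOSTS_PER_CLUSTER = 32
MAX_VMS_PER_HOST = 320
MAX_VMS_PER_CLUSTER = 3000

def effective_vms_per_host(host_count: int) -> int:
    """Return the practical per-host VM limit for a cluster of the given size.

    Whichever maximum is reached first wins: the per-host cap of 320,
    or the per-cluster cap of 3000 spread evenly across all hosts.
    """
    if not 1 <= host_count <= MAX_HOSTS_PER_CLUSTER:
        raise ValueError("a DRS cluster supports 1-32 hosts")
    return min(MAX_VMS_PER_HOST, MAX_VMS_PER_CLUSTER // host_count)
```

For example, `effective_vms_per_host(32)` yields 93 and `effective_vms_per_host(10)` yields 300, matching the figures in the paragraph above; with 9 or fewer hosts, the per-host cap of 320 becomes the binding limit.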
After you add ESX/ESXi hosts as nodes in a DRS cluster, DRS monitors those hosts continuously. If DRS detects high CPU or memory utilization on a particular host, it uses vMotion to migrate some VMs from the constrained host to a host that is not experiencing resource constraints. DRS performs this role constantly to keep the ESX/ESXi hosts in the cluster from becoming resource constrained.
DRS Automation Process
The DRS automation process involves the initial placement of virtual machines when they are first powered on and, later, the dynamic load balancing of VMs onto the best-suited hosts for performance. As shown in Figure 8.9, the automation process options are as follows:
- Manual: If you select this option, vCenter suggests which VM needs to be initially placed on which host at power on and later suggests which VM should be migrated to a different host; however, vCenter does not perform either task automatically.
- Partially Automated: If you select this option, VMs are automatically placed at power on; however, for future load balancing, vCenter only suggests the migration but does not perform it.
- Fully Automated: If you select this option, vCenter both performs the initial placement of VMs at power on and automatically migrates them later to maintain proper load balancing.
Figure 8.9 DRS cluster automation.
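The three levels differ only in which of the two DRS tasks (initial placement and load-balancing migration) vCenter performs automatically. The mapping can be summarized in a short sketch; the level names come from the text, but the table and helper are purely illustrative, not a vSphere API.

```python
# Which DRS tasks vCenter performs automatically at each automation level:
# (automatic initial placement, automatic migration)
AUTOMATION_LEVELS = {
    "Manual":              (False, False),  # vCenter only suggests both tasks
    "Partially Automated": (True,  False),  # placement automatic, migration suggested
    "Fully Automated":     (True,  True),   # both tasks automatic
}

def is_automatic(level: str, task: str) -> bool:
    """Return True if vCenter performs the task ('placement' or 'migration')
    automatically at the given DRS automation level."""
    placement, migration = AUTOMATION_LEVELS[level]
    return placement if task == "placement" else migration
```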
When set to Manual or Partially Automated, DRS recommends VMs that need to be migrated to improve performance and maintain proper load balancing in the cluster. To view these recommendations, you can select the DRS cluster in the vCenter inventory and click the DRS Recommendations tab, as shown in Figure 8.10.
Figure 8.10 DRS recommendations.
If you choose fully automated load balancing, you can also control how aggressively migrations occur. DRS rates each migration recommendation on a five-star scale: five stars means the VM must move from one host to another, and one star means the VM does not necessarily need to move or, if moved, the improvement is not significant. Your options are as follows:
- Most Conservative: This option means DRS migrates VMs very infrequently and only when it must (that is, when VMs have five stars).
- Moderately Conservative: This option means that DRS migrates VMs with four stars or more. This option promises significant improvement.
- Default: This option moves VMs with three stars or more and promises good improvement.
- Moderately Aggressive: This option moves VMs with two stars or more and promises moderate improvement.
- Aggressive: This option migrates VMs with one star or more and promises slight improvement.
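The five settings above reduce to a single rule: each setting defines the minimum star rating a recommendation must carry before DRS acts on it. A minimal sketch of that mapping follows; the setting names come from the text, but the helper itself is hypothetical.

```python
# Minimum star rating a recommendation needs before DRS migrates the VM,
# per migration threshold setting (5 stars = mandatory, 1 star = slight gain).
MIGRATION_THRESHOLDS = {
    "Most Conservative":       5,
    "Moderately Conservative": 4,
    "Default":                 3,
    "Moderately Aggressive":   2,
    "Aggressive":              1,
}

def should_migrate(threshold: str, stars: int) -> bool:
    """Return True if a recommendation with this star rating is applied."""
    return stars >= MIGRATION_THRESHOLDS[threshold]
```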
DRS automation levels can also be managed at the virtual machine level, where you assign an automation level to each VM in the cluster individually. To do so, right-click the cluster where the VM is a member and go to Edit Settings. In the left pane, select Virtual Machine Options. You are then presented with a list of the cluster's member VMs on the right, where you can change each VM's automation level manually. Figure 8.11 shows an example.
Figure 8.11 VM level automation.
DRS Cluster Validity
Monitoring a DRS cluster to ensure that there are no errors is critical. A resource pool can be in one of three states: valid, overcommitted, or invalid. A DRS cluster is considered valid, functioning, and healthy when resource availability satisfies all reservations and supports all running VMs. When a DRS cluster is not valid, the vSphere Client notifies you of the problem by changing the color of the resource pool as follows:
- Yellow means that the resource pool is overcommitted in terms of resources.
- Red means that the resource pool has violated the DRS cluster rules or high-availability rules and is thereby considered invalid.
DRS enables you to set rules that govern whether VMs may run on the same ESX/ESXi host at the same time or must always be kept apart. Keeping VMs apart is useful when you want to avoid a single point of failure for a particular workload and ensure that the DRS algorithm never places the VMs named in the rule on the same host. Conversely, you can choose to keep VMs on the same host at all times, so that if one VM is migrated, the others follow. These rules are known as VM-VM Affinity rules and have two options:
- Affinity: This rule implies that VMs should be on the same ESX/ESXi host at all times.
- Anti-Affinity: This rule implies that VMs cannot exist on the same ESX/ESXi host at the same time.
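Conceptually, both rule types are constraints that a placement must satisfy. A toy checker makes the distinction concrete; this is plain Python for illustration, not how DRS is implemented.

```python
def placement_valid(placement: dict, rule_type: str, vms: set) -> bool:
    """Check a VM-VM rule against a {vm_name: host_name} placement.

    'affinity'      -> all VMs covered by the rule must share one host.
    'anti-affinity' -> no two VMs covered by the rule may share a host.
    """
    hosts = [placement[vm] for vm in vms]
    if rule_type == "affinity":
        return len(set(hosts)) == 1
    if rule_type == "anti-affinity":
        return len(set(hosts)) == len(hosts)
    raise ValueError(f"unknown rule type: {rule_type}")
```

For example, with `{"vm1": "esx1", "vm2": "esx1"}`, an affinity rule over vm1 and vm2 is satisfied, while an anti-affinity rule over the same pair is violated.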
The release of vSphere 4.1 introduced a new Affinity rule known as VM-Host Affinity Rules. These rules determine whether groups of VMs can or cannot exist on groups of ESX/ESXi hosts. With these rules, you can build groups of specific VMs and groups of specific ESX/ESXi hosts and then implement Affinity or Anti-Affinity rules. VM-Host affinity rules have the following options:
- Must run on hosts in group: The VMs in the VM group are required to run on hosts in the specified host group.
- Should run on hosts in group: The VMs in the VM group should preferably run on hosts in the specified host group.
- Must not run on hosts in group: The VMs in the VM group are required to never run on hosts in the specified host group.
- Should not run on hosts in group: The VMs in the VM group should preferably not run on hosts in the specified host group.
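The four options combine two axes: requirement versus preference ("must" versus "should") and inclusion versus exclusion. A sketch of how the mandatory variants filter candidate hosts follows; the "should" variants are preferences that DRS may violate, so they would influence scoring rather than filtering and are deliberately left out of this illustration.

```python
def host_allowed(host: str, host_group: set, rule: str) -> bool:
    """Evaluate a mandatory VM-Host rule for one candidate host.

    'must run'     -> the host must be inside the host group.
    'must not run' -> the host must be outside the host group.
    """
    if rule == "must run":
        return host in host_group
    if rule == "must not run":
        return host not in host_group
    raise ValueError(f"unknown rule: {rule}")
```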
You can access these rules by right-clicking your cluster and pointing to Edit Settings. You then see the Rules section on the left. Select it and click Add. Figure 8.12 shows an example of how you can set a rule to never allow two VMs to be on the same host at the same time.
Figure 8.12 DRS rules.
As we have been discussing in this chapter, vMotion has certain CPU requirements that must be met before a live migration of VMs between hosts can succeed. Because OEM server manufacturers constantly upgrade the CPUs that ship with their server models, purchasing servers at different times becomes challenging: sooner or later, you are bound to have hardware with CPUs from different families.
VMware Enhanced vMotion Compatibility (EVC) is similar in function to the NX/XD feature, except that EVC is configured on a cluster basis and affects all hosts in the cluster, whereas the NX/XD feature is implemented at the VM level. When creating an EVC cluster, you instruct vSphere to find the lowest common denominator among all the hosts' CPUs, thereby allowing the highest level of vMotion compatibility.
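That lowest common denominator is, in effect, the intersection of the CPU feature sets of every host in the cluster: EVC exposes only the common baseline to VMs, so vMotion works in any direction regardless of which host carries the newer CPU. A simplified sketch follows; the feature names are invented for illustration and do not correspond to real EVC baseline names.

```python
def evc_baseline(host_features: list) -> set:
    """Return the CPU features common to every host in the cluster.

    EVC presents only this baseline feature set to VMs, which is what
    makes live migration possible across mixed CPU generations.
    """
    return set.intersection(*host_features)

# Hypothetical feature sets for three hosts of different CPU generations:
old_host = {"sse2", "sse3"}
mid_host = {"sse2", "sse3", "ssse3"}
new_host = {"sse2", "sse3", "ssse3", "sse4.1"}
```

Here the baseline would be the two features shared by all three hosts; the newer hosts' extra instructions are masked from the VMs.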
As you can see in Figure 8.13, creating a VMware EVC cluster is easy. Choose Edit Settings on your existing DRS cluster and select VMware EVC from the left pane. You can then configure the options appropriately.
Figure 8.13 VMware EVC enabled cluster.
Answer these questions. The answers follow the last question. If you cannot answer these questions correctly, consider reading the section again.
Which setting is an invalid level when Fully Automated DRS cluster load balancing is selected?
Which of the following is not a DRS cluster automation level? (Select all that apply.)
How many cluster nodes are supported for each DRS cluster?
Cram Quiz Answers
D is correct. Low is not a valid frequency level when Fully Automated is selected; therefore, answers A, B, and C are incorrect.
B and D are correct. Semi Manual and Semi Automated are invalid and do not exist. The three levels of automation are Manual, Partially Automated, and Fully Automated; therefore, answers A and C are incorrect.
C is correct. VMware DRS clusters support up to 32 ESX/ESXi hosts or nodes per cluster; therefore, answers A, B, and D are incorrect.