- Native Multipathing
- Storage Array Type Plug-in (SATP)
- Path Selection Plugin (PSP)
- Third-Party Plug-ins
- Multipathing Plugins (MPPs)
- Anatomy of PSA Components
- I/O Flow Through PSA and NMP
- Listing Multipath Details
- Claim Rules
- MP Claim Rules
- Plug-in Registration
- SATP Claim Rules
- Modifying PSA Plug-in Configurations Using the UI
- Modifying PSA Plug-ins Using the CLI
I/O Flow Through PSA and NMP
To understand how I/O sent to storage devices flows through the ESXi storage stack, you first need to understand some of the terminology relevant to this chapter.
Classification of Arrays Based on How They Handle I/O
Arrays can be one of the following types:
- Active/Active—This type of array has more than one Storage Processor (SP), also known as a Storage Controller, and can process I/O concurrently on all SPs (and SP ports) with similar performance metrics. Such an array has no concept of logical unit number (LUN) ownership because I/O to any LUN can be done via any SP port by initiators given access to that LUN.
- Active/Passive—This type of array has two SPs. LUNs are distributed across both SPs in a fashion referred to as LUN ownership: one SP owns some of the LUNs, and the other SP owns the remaining LUNs. The array accepts I/O to a given LUN only via ports on the SP that “owns” it. I/O sent to the non-owner SP (also known as the Passive SP) is rejected with a SCSI check condition and a sense code that translates to ILLEGAL REQUEST. Think of this like the No Entry sign at the entrance of a one-way street, facing against the direction of traffic. For more details on sense codes, see Chapter 7’s “LUN Discovery and Path Enumeration” section.
- Asymmetric Active/Active or AAA (AKA Pseudo Active/Active)—LUNs on this type of array are owned by one SP or the other, similar to the Active/Passive concept of LUN ownership. However, the array allows concurrent I/O to a given LUN via ports on both SPs, but with different I/O performance metrics, because I/O received on the non-owner SP is sent by proxy to the owner SP. In this case, the SP providing the lower performance metric accepts I/O to that LUN without returning a check condition. You may think of this as a hybrid between the Active/Passive and Active/Active types. This can result in poor I/O performance when all paths to the owner SP are dead, whether due to poor design or owner SP hardware failure.
- Asymmetrical Logical Unit Access (ALUA)—This type of array is an enhanced version of the Asymmetric Active/Active array and also the newer generation of some Active/Passive arrays. The technology allows initiators to identify the ports on the owner SP as one group and the ports on the non-owner SP as a different group; this is referred to as Target Port Group Support (TPGS). The port group on the owner SP is identified as the Active Optimized port group, and the other group as the Active Non-Optimized port group. NMP sends I/O to a given LUN via ports in the ALUA optimized port group only, as long as any are available. If all ports in that group are identified as dead, I/O is then sent via a port in the ALUA non-optimized port group. When sustained I/O is sent to the ALUA non-optimized port group, the array can transfer LUN ownership to the non-owner SP and then transition the ports on that SP to the ALUA optimized state. For more details on ALUA, see Chapter 6.
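The ALUA path-selection behavior described above can be sketched as a short program. This is a minimal illustration, not VMkernel code; the `Path` class, field names, and `select_alua_paths` function are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str            # e.g. "vmhba1:C0:T0:L1" (runtime name)
    port_group: str      # "active_optimized" or "active_non_optimized"
    alive: bool = True

def select_alua_paths(paths):
    """Prefer paths in the Active/Optimized target port group;
    fall back to Active/Non-Optimized only if no optimized path is alive."""
    optimized = [p for p in paths if p.port_group == "active_optimized" and p.alive]
    if optimized:
        return optimized
    return [p for p in paths if p.port_group == "active_non_optimized" and p.alive]

paths = [
    Path("vmhba1:C0:T0:L1", "active_optimized", alive=False),
    Path("vmhba2:C0:T1:L1", "active_non_optimized"),
]
print([p.name for p in select_alua_paths(paths)])  # ['vmhba2:C0:T1:L1']
```

Because the optimized path is dead in this example, I/O falls back to the non-optimized port group, mirroring the failover behavior NMP applies to ALUA devices.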
Paths and Path States
From a storage perspective, the possible routes through which I/O to a given LUN may travel are referred to as paths. A path consists of multiple points that start at the initiator port and end at the LUN.
A path can be in one of the states listed in Table 5.2.
Table 5.2. Path States

| State | Description |
| --- | --- |
| Active | A path via an Active SP. I/O can be sent to any path in this state. |
| Standby | A path via a Passive or Standby SP. I/O is not sent via such a path. |
| Disabled | A path that is disabled, usually by the vSphere Administrator. |
| Dead | A path that lost connectivity to the storage network. This can be due to an HBA (Host Bus Adapter), Fabric or Ethernet switch, or SP port connectivity loss. It can also be due to HBA or SP hardware failure. |
| Unknown | The state could not be determined by the relevant SATP. |
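The path states can be modeled as a simple enumeration in which only the Active state carries I/O. The names below are illustrative stand-ins, not an ESXi API:

```python
from enum import Enum

class PathState(Enum):
    ACTIVE = "active"      # via an Active SP; I/O may be sent
    STANDBY = "standby"    # via a Passive/Standby SP; no I/O
    DISABLED = "disabled"  # disabled, usually by the administrator
    DEAD = "dead"          # lost connectivity (HBA, switch, or SP port)
    UNKNOWN = "unknown"    # the SATP could not determine the state

def can_send_io(state: PathState) -> bool:
    """I/O is dispatched only on paths in the Active state."""
    return state is PathState.ACTIVE

print(can_send_io(PathState.ACTIVE))   # True
print(can_send_io(PathState.STANDBY))  # False
```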
Preferred Path Setting
A preferred path is a setting that NMP honors only for devices claimed by the VMW_PSP_FIXED PSP. All I/O to a given device is sent over the path configured as the preferred path for that device. When the preferred path is unavailable, I/O is sent via one of the surviving paths. When the preferred path becomes available again, I/O fails back to it. By default, the first path discovered and claimed by the PSP is set as the preferred path. To change the preferred path setting, refer to the “Modifying PSA Plug-in Configurations Using the UI” section later in this chapter.
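The failover and failback behavior of the fixed policy can be sketched as follows. The function name and arguments are hypothetical, chosen only to illustrate VMW_PSP_FIXED-style selection:

```python
def choose_path_fixed(paths, preferred, alive):
    """Fixed-policy selection sketch: use the preferred path while it is up;
    otherwise fall back to any surviving path. Failback is implicit because
    the preferred path is re-evaluated on every selection."""
    if preferred in alive:
        return preferred
    for p in paths:
        if p in alive:
            return p
    raise RuntimeError("no paths available")

paths = ["vmhba1:C0:T0:L1", "vmhba2:C0:T1:L1"]
preferred = "vmhba1:C0:T0:L1"

# Preferred path down: I/O fails over to the surviving path.
print(choose_path_fixed(paths, preferred, alive={"vmhba2:C0:T1:L1"}))
# Preferred path restored: I/O fails back to it.
print(choose_path_fixed(paths, preferred, alive=set(paths)))
```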
Figure 5.9 shows an example of a path to LUN 1 from host A (interrupted line) and Host B (interrupted line with dots and dashes). This path goes through HBA0 to target 1 on SPA.
Figure 5.9. Paths to LUN1 from two hosts
Such a path is represented by the Runtime Name naming convention. (Runtime Name was formerly known as Canonical Name.) It is in the format HBAx:Cn:Ty:Lz—for example, vmhba0:C0:T0:L1—which reads as follows:
vmhba0, Channel 0, Target 0, LUN1
It represents the path to LUN 1, broken down as follows:
- HBA0—First HBA in this host. The vmhba number may vary based on the number of storage adapters installed in the host. For example, if the host has two RAID controllers installed which assume vmhba0 and vmhba1 names, the first FC HBA would be named vmhba2.
- Channel 0—The channel number is mostly zero for Fibre Channel (FC)- and Internet Small Computer System Interface (iSCSI)-attached devices. If the HBA were a SCSI adapter with two channels (for example, an internal connection and an external port for direct-attached devices), the channel numbers would be 0 and 1.
- Target 0—The target definition was covered in Chapters 3, “FCoE Storage Connectivity,” and 4, “iSCSI Storage Connectivity.” The target number is based on the order in which the SP ports are discovered by PSA. In this case, SPA-Port1 was discovered before SPA-Port2 and the other ports on SPB, so that port was given “target 0” as part of the runtime name.
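The HBAx:Cn:Ty:Lz convention above is regular enough to split mechanically. The following sketch (the function name is illustrative, not a vSphere API) breaks a runtime name into its adapter, channel, target, and LUN components:

```python
import re

def parse_runtime_name(name):
    """Split a Runtime Name like 'vmhba0:C0:T0:L1' into its components."""
    m = re.fullmatch(r"(vmhba\d+):C(\d+):T(\d+):L(\d+)", name)
    if not m:
        raise ValueError(f"not a runtime name: {name!r}")
    adapter, channel, target, lun = m.groups()
    return {"adapter": adapter, "channel": int(channel),
            "target": int(target), "lun": int(lun)}

print(parse_runtime_name("vmhba0:C0:T0:L1"))
# {'adapter': 'vmhba0', 'channel': 0, 'target': 0, 'lun': 1}
```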
Flow of I/O Through NMP
Figure 5.10 shows the flow of I/O through NMP.
Figure 5.10. I/O flow through NMP
The numbers in the figure represent the following steps:
- NMP calls the PSP assigned to the given logical device.
- The PSP selects an appropriate physical path on which to send the I/O. If the PSP is VMW_PSP_RR, it load balances the I/O over paths whose states are Active or, for ALUA devices, paths via a target port group whose AAS is Active/Optimized.
- If the array returns an I/O error, NMP calls the relevant SATP.
- The SATP interprets the error codes, activates inactive paths, and then fails over to the new active path.
- The PSP selects a new active path to which it sends the I/O.
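The five steps above can be sketched as a small dispatch loop. This is a conceptual illustration only; the class and function names (`FixedPSP`, `SimpleSATP`, `nmp_issue_io`) are hypothetical stand-ins, not VMkernel interfaces:

```python
class FixedPSP:
    """Keeps an ordered list of usable paths; always returns the first."""
    def __init__(self, paths):
        self.paths = list(paths)
    def select_path(self):
        return self.paths[0]

class SimpleSATP:
    """On error, demotes the failing path so the PSP selects another."""
    def handle_error(self, psp, failed_path):
        psp.paths.remove(failed_path)

def nmp_issue_io(psp, satp, send, retries=3):
    for _ in range(retries):
        path = psp.select_path()        # steps 1-2: PSP selects a path
        if send(path):                  # I/O is sent down that path
            return path
        satp.handle_error(psp, path)    # steps 3-4: SATP interprets the
                                        # error and fails over
    return None                         # step 5: I/O retried via new path

# Example: the first path returns errors, so NMP fails over to the second.
dead = {"vmhba1:C0:T0:L1"}
psp = FixedPSP(["vmhba1:C0:T0:L1", "vmhba2:C0:T1:L1"])
print(nmp_issue_io(psp, SimpleSATP(), lambda p: p not in dead))
# vmhba2:C0:T1:L1
```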