With iSCSI storage, you can take advantage of the multipathing support that the IP network offers. In addition, ESXi supports host-based multipathing for both hardware and software iSCSI initiators.

ESXi can use the multipathing support built into the IP network, which allows the network to route traffic between the initiator and the target around failed segments. Through dynamic discovery, iSCSI initiators obtain a list of target addresses that the initiators can use as multiple paths to iSCSI LUNs for failover purposes.
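As an illustrative sketch, dynamic discovery addresses can be configured from the ESXi command line with esxcli; the adapter name vmhba65 and the portal address 192.0.2.10:3260 below are placeholder values, not part of this guide.

```shell
# Add a Send Targets (dynamic discovery) address to an iSCSI adapter.
# vmhba65 and 192.0.2.10:3260 are placeholders for this example.
esxcli iscsi adapter discovery sendtarget add --adapter vmhba65 --address 192.0.2.10:3260

# Rescan the adapter so the initiator retrieves the advertised list of
# target addresses, each of which can serve as a path to the LUNs.
esxcli storage core adapter rescan --adapter vmhba65
```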

ESXi also supports host-based multipathing.

With hardware iSCSI, the host can have two or more hardware iSCSI adapters and use them as separate paths to reach the storage system.

As Hardware iSCSI and Failover illustrates, the host has two hardware iSCSI adapters, HBA1 and HBA2, that provide two physical paths to the storage system. Multipathing plugins on the host, whether the VMkernel NMP or any third-party MPPs, have access to the paths by default and can monitor the health of each physical path. If, for example, HBA1 or the link between HBA1 and the network fails, the multipathing plugins can switch the path over to HBA2.
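To see how the multipathing plugins view these physical paths, you can inspect them with esxcli on the host; the device identifier naa.6... shown here is a hypothetical placeholder.

```shell
# List all paths the PSA manages; each entry reports the adapter it
# traverses and its runtime state (active, standby, or dead).
esxcli storage core path list

# Show the multipathing plugin and path selection policy in effect
# for a single device. Replace naa.6... with a real device identifier.
esxcli storage nmp device list --device naa.6...
```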

Hardware iSCSI and Failover
This image shows a host with two hardware iSCSI adapters.

With software iSCSI, as Software iSCSI and Failover shows, you can use multiple NICs that provide failover and load-balancing capabilities for iSCSI connections between the host and storage systems.

For this setup, because multipathing plugins do not have direct access to the physical NICs on your host, you must first connect each physical NIC to a separate VMkernel port. You then associate all VMkernel ports with the software iSCSI initiator using a port binding technique. As a result, each VMkernel port connected to a separate NIC becomes a different path that the iSCSI storage stack and its storage-aware multipathing plugins can use.
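A minimal sketch of the port binding step with esxcli, assuming the software iSCSI adapter is vmhba65 and that vmk1 and vmk2 are VMkernel ports already backed by separate physical NICs (all three names are placeholders):

```shell
# Bind each VMkernel port to the software iSCSI adapter. After binding,
# each port becomes a distinct path that the multipathing plugins can use.
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2

# Confirm the bindings on the adapter.
esxcli iscsi networkportal list --adapter vmhba65
```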

For more information on this setup, see the iSCSI SAN Configuration Guide.

Software iSCSI and Failover
Software iSCSI multipathing using port binding.