When setting up your ESX/ESXi host for multipathing and failover, you can use multiple iSCSI HBAs with hardware iSCSI and multiple NICs with software iSCSI.

With hardware iSCSI, the host typically has two or more hardware iSCSI adapters, from which the storage system can be reached through one or more switches. Alternatively, the setup might include one adapter and two storage processors, so that the adapter can use a different path to reach the storage system.
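Before configuring failover, you can confirm from the ESXi Shell that the host sees both hardware iSCSI adapters. This is a minimal check, assuming the esxcli namespaces of ESXi 5.x and later; on ESX/ESXi 4.x the command names differ.

# List the iSCSI adapters (hardware and software) that the host has discovered.
esxcli iscsi adapter list

# Alternatively, list every storage adapter on the host, with its driver and link state.
esxcli storage core adapter list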

As Hardware iSCSI and Failover illustrates, the host has two hardware iSCSI adapters, HBA1 and HBA2, that provide two physical paths to the storage system. Multipathing plug-ins on your host, whether the VMkernel NMP or any third-party MPPs, have access to the paths by default and can monitor the health of each physical path. If, for example, HBA1 or the link between HBA1 and the network fails, the multipathing plug-ins can switch the path over to HBA2.

Hardware iSCSI and Failover
This image shows a host with two hardware iSCSI adapters.
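To see how the multipathing plug-ins view these physical paths, you can inspect the path and device lists from the ESXi Shell. The following is a minimal sketch, again assuming the esxcli namespaces of ESXi 5.x and later.

# List every path the host knows about, including its runtime state
# (active, standby, or dead) and the vmhba adapter it runs through.
esxcli storage core path list

# Show, for each device claimed by the NMP, the path selection policy
# in use and the working paths available for failover.
esxcli storage nmp device list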

With software iSCSI, as Software iSCSI and Failover shows, you can use multiple NICs that provide failover and load-balancing capabilities for iSCSI connections between your host and storage systems.

For this setup, because multipathing plug-ins do not have direct access to physical NICs on your host, you first need to connect each physical NIC to a separate VMkernel port. You then associate all VMkernel ports with the software iSCSI initiator using a port binding technique. As a result, each VMkernel port connected to a separate NIC becomes a different path that the iSCSI storage stack and its storage-aware multipathing plug-ins can use.
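As a rough illustration of the binding step itself, the following ESXi Shell commands bind two VMkernel ports to the software iSCSI adapter. The adapter name vmhba33 and the ports vmk1 and vmk2 are example names used only for this sketch, and the esxcli iscsi namespace shown is the ESXi 5.x and later form; ESX/ESXi 4.x uses the esxcli swiscsi nic commands instead.

# Bind each VMkernel port (each backed by its own physical NIC) to the
# software iSCSI adapter so that each port becomes a separate path.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify that both VMkernel ports are now bound to the adapter.
esxcli iscsi networkportal list --adapter=vmhba33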

For information on how to configure multipathing for software iSCSI, see Networking Configuration for Software iSCSI Storage.

Software iSCSI and Failover
Software iSCSI multipathing using port binding.