On each host that you want to add to a vSphere HA cluster, you must configure two different virtual switches so that the host can also support vSphere Fault Tolerance.

To enable Fault Tolerance for a host, you must complete this procedure twice, once for each port group option, to ensure that sufficient bandwidth is available for Fault Tolerance logging. Select one option, finish this procedure, and then repeat it, selecting the other port group option.

Multiple gigabit Network Interface Cards (NICs) are required. Each host that supports Fault Tolerance needs a minimum of two physical gigabit NICs: one dedicated to Fault Tolerance logging and one dedicated to vMotion. Use three or more NICs to ensure availability.

Note

The vMotion and FT logging NICs must be on different subnets, and IPv6 is not supported on the FT logging NIC.
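
The subnet requirement in this note can be checked with a short script before you assign addresses. A minimal sketch using Python's standard `ipaddress` module; the interface addresses and netmask below are illustrative, not values from this procedure:

```python
import ipaddress

def on_different_subnets(ip_a, ip_b, netmask):
    """Return True if the two IPv4 addresses fall in different subnets."""
    net_a = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{netmask}", strict=False)
    return net_a != net_b

# Illustrative addresses for the vMotion and FT logging VMkernel ports.
vmotion_ip = "10.10.1.10"
ft_logging_ip = "10.10.2.10"

# IPv6 is not supported on the FT logging NIC, so the address must be IPv4.
assert ipaddress.ip_address(ft_logging_ip).version == 4

print(on_different_subnets(vmotion_ip, ft_logging_ip, "255.255.255.0"))  # → True
```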

1

Log in to the vSphere Client and select a host in the inventory pane.

2

Click the Configuration tab.

3

Select Networking under Hardware, and click the Add Networking link.

The Add Network wizard appears.

4

Select VMkernel under Connection Types and click Next.

5

Select Create a virtual switch and click Next.

6

Provide a label for the switch.

7

Select either Use this port group for vMotion or Use this port group for Fault Tolerance logging and click Next.

8

Provide an IP address and subnet mask and click Next.

9

Click Finish.
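
On recent ESXi versions, the steps above can also be scripted from the host shell. The following is a sketch only: the switch, port group, uplink, and interface names, the IP address, and the `faultToleranceLogging` tag name are all assumptions to adapt to your environment, and the `tag` namespace is not available on older ESXi releases (verify with `esxcli network ip interface tag get`):

```shell
# Create a standard virtual switch and attach a dedicated physical NIC.
esxcli network vswitch standard add --vswitch-name=vSwitchFT
esxcli network vswitch standard uplink add --vswitch-name=vSwitchFT --uplink-name=vmnic2

# Create the port group and a VMkernel interface on it.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchFT --portgroup-name=FT-Logging
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=FT-Logging
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.2.10 --netmask=255.255.255.0 --type=static

# Tag the interface for Fault Tolerance logging (tag name assumed; check with
# "esxcli network ip interface tag get --interface-name=vmk2").
esxcli network ip interface tag add --interface-name=vmk2 --tagname=faultToleranceLogging
```

Repeat the port group, interface, and tagging commands for the vMotion network, using a separate virtual switch, NIC, and subnet.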

After you create both a vMotion virtual switch and a Fault Tolerance logging virtual switch, you can create other virtual switches as needed. Then add the host to the cluster and complete any remaining steps needed to turn on Fault Tolerance.

To confirm that you successfully enabled both vMotion and Fault Tolerance on the host, view its Summary tab in the vSphere Client. In the General pane, the vMotion Enabled and Host Configured for FT fields should both show Yes.
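
This check can also be scripted against the vSphere API. A sketch using the pyVmomi SDK (`pip install pyvmomi`); the vCenter address, credentials, and host name are illustrative assumptions, and the script requires a live vCenter connection to run:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Illustrative connection details; replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        if host.name == "esx01.example.com":
            cfg = host.summary.config
            # Mirrors the vMotion Enabled and Host Configured for FT fields.
            print("vMotion Enabled:", cfg.vmotionEnabled)
            print("Host Configured for FT:", cfg.faultToleranceEnabled)
    view.Destroy()
finally:
    Disconnect(si)
```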

Note

If you configure networking to support FT but subsequently disable the Fault Tolerance logging port, pairs of fault tolerant virtual machines that are already powered on remain powered on. However, if a failover occurs and the Primary VM is replaced by its Secondary VM, a new Secondary VM is not started, causing the new Primary VM to run in a Not Protected state.