Setup of the physical environment requires careful consideration. Follow best practices for physical switches, leaf switch connectivity, VLANs and subnets, and access port settings.

When configuring Top of Rack (ToR) switches, consider the following best practices.

Configure redundant physical switches to enhance availability.

Manually configure switch ports that connect to ESXi hosts as trunk ports. Virtual switches are passive devices and do not send or receive trunking protocols, such as the Dynamic Trunking Protocol (DTP).
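
For example, a static trunk might be configured as follows on a Cisco IOS-style switch. This is a sketch only; the interface name and VLAN list are illustrative, and the exact syntax varies by platform.

! Sample access port facing an ESXi host
interface TenGigabitEthernet1/0/1
 description ESXi host uplink
 switchport mode trunk
 switchport trunk allowed vlan 1611-1614
 ! Disable DTP; the virtual switch does not negotiate trunking
 switchport nonegotiate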

Modify the Spanning Tree Protocol (STP) settings on any port that is connected to an ESXi NIC to reduce the time it takes to transition the port to the forwarding state, for example, by using the Trunk PortFast feature on Cisco physical switches.
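
On a Cisco IOS-style switch, for example, this might look as follows (a sketch; the exact command depends on the platform and software version).

interface TenGigabitEthernet1/0/1
 ! Transition the trunk port to the forwarding state immediately on link-up
 spanning-tree portfast trunk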

Provide DHCP or DHCP Helper capabilities on all VLANs that are used by Management and VXLAN VMkernel ports. This setup simplifies the configuration by using DHCP to assign IP addresses based on the IP subnet in use.
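
As an illustration, a DHCP helper on the VXLAN SVI of a Cisco IOS-style switch might be configured as follows. The DHCP server address 172.16.1.4 is a hypothetical example.

! Sample VXLAN SVI relaying DHCP requests to a central DHCP server
interface Vlan1614
 ip address 172.16.14.2 255.255.255.0
 ip helper-address 172.16.1.4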

Configure jumbo frames on all switch ports, inter-switch links (ISLs), and switched virtual interfaces (SVIs).
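
For example, on switches with per-interface MTU settings (a Cisco NX-OS-style sketch; other platforms enable jumbo frames through a global system MTU command), the configuration might look as follows.

! Jumbo frames on a host-facing port and an SVI (9216 bytes is a common maximum)
interface Ethernet1/1
 mtu 9216
interface Vlan1614
 mtu 9216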

Each ESXi host in the compute rack is connected redundantly to the SDDC network fabric ToR switches by means of two 10 GbE ports, as shown in Leaf Switch to Server Connection within Compute Racks. Configure the ToR switches to provide all necessary VLANs via an 802.1Q trunk. 

Leaf Switch to Server Connection within Compute Racks
Two leaf switches connect to two spine switches each. Each leaf switch also connects to the server. No external connection exists.

Each ESXi host in the management/shared edge and compute rack is connected to the SDDC network fabric and also to the Wide Area Network (WAN) and to the Internet, as shown in Leaf Switch to Server Connection within Management/Shared Compute and Edge Rack.  

Leaf Switch to Server Connection within Management/Shared Compute and Edge Rack
Two leaf switches connect to two spine switches each. Each leaf switch also connects to the server. Each leaf switch also has external connections.

Each ESXi host in the compute rack and the management/edge rack uses VLANs and corresponding subnets for internal-only traffic, as shown in Sample VLANs and Subnets within a Pod.

The leaf switches of each rack act as the Layer 3 interface for the corresponding subnet.

The management/edge rack provides externally accessible VLANs for access to the Internet and/or MPLS-based corporate networks.

Sample VLANs and Subnets within a Pod
The high-level architecture of a leaf node includes its sample VLANs.

Follow these guidelines.

Use only /24 subnets to reduce confusion and mistakes when dealing with IPv4 subnetting.

Use the IP address .1 as the (floating) interface address, with .2 and .3 for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP).
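
A minimal sketch of this addressing scheme with VRRP on a Cisco IOS-style switch follows; the VRRP group number is arbitrary, and HSRP uses the analogous standby commands.

! First leaf switch: physical interface address .2, floating gateway address .1
interface Vlan1611
 ip address 172.16.11.2 255.255.255.0
 vrrp 11 ip 172.16.11.1
! The second leaf switch uses .3 with the same virtual address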

Use the RFC1918 IPv4 address space for these subnets and allocate one octet by region and another octet by function. For example, the mapping 172.regionid.function.0/24 results in the following sample subnets.

Note

The following VLANs and IP ranges are meant as samples. Your actual implementation depends on your environment.

Sample Values for VLANs and IP Ranges

Pod                      Function    Sample VLAN    Sample IP range
-----------------------  ----------  -------------  ---------------
Management               Management  1611 (Native)  172.16.11.0/24
Management               vMotion     1612           172.16.12.0/24
Management               VXLAN       1614           172.16.14.0/24
Management               vSAN        1613           172.16.13.0/24
Shared Edge and Compute  Management  1631 (Native)  172.16.31.0/24
Shared Edge and Compute  vMotion     1632           172.16.32.0/24
Shared Edge and Compute  VXLAN       1634           172.16.34.0/24
Shared Edge and Compute  vSAN        1633           172.16.33.0/24

Configure additional network settings on the access ports that connect the leaf switch to the corresponding servers.

Spanning Tree Protocol (STP)

Although this design does not use the Spanning Tree Protocol, switches usually come with STP configured by default. Designate the access ports as trunk PortFast.

Trunking

Configure the VLANs as members of an 802.1Q trunk with the management VLAN acting as the native VLAN.

MTU

Set the MTU for all VLANs and SVIs (Management, vMotion, VXLAN, and Storage) to jumbo frames for consistency.

DHCP helper

Configure the virtual interface (VIF) of the Management, vMotion, and VXLAN subnets as a DHCP proxy.

Multicast

Configure IGMP snooping on the ToR switches and include an IGMP querier on each VLAN.
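
For example, on a Cisco IOS-style switch this might look as follows (a sketch; per-VLAN querier syntax varies by platform).

! Enable IGMP snooping globally and act as the IGMP querier
ip igmp snooping
ip igmp snooping querier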

The SDDC management networks, VXLAN VMkernel ports, and the edge and compute VXLAN VMkernel ports of the two regions must be connected. These connections can be over a VPN tunnel, point-to-point circuits, MPLS, and so on. End users must be able to reach the public-facing network segments (public management and tenant networks) of both regions.

The region interconnectivity design must support jumbo frames and ensure that latency is less than 150 ms. For more details on the requirements for region interconnectivity, see the Cross-VC NSX Design Guide.

The design of a region connection solution is out of scope for this VMware Validated Design.