This NFS design does not give specific vendor or array guidance. Consult your storage vendor for the configuration settings appropriate for your storage array.

NFS (Network File System) presents file devices to an ESXi host for mounting over a network. The NFS server or array makes its local file systems available to ESXi hosts. The ESXi hosts access the metadata and files on the NFS array or server using an RPC-based protocol. On ESXi, NFS is implemented over standard network adapters and accessed through a VMkernel port (vmknic).
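As a sketch, an NFS v3 export can be mounted as a datastore with `esxcli` on the ESXi host; the server address, share path, and datastore name below are hypothetical placeholders:

```shell
# Mount an NFS v3 export as a datastore on an ESXi host.
# Server IP, export path, and datastore name are examples only.
esxcli storage nfs add \
  --host=192.168.100.10 \
  --share=/vol/nfs_datastore_01 \
  --volume-name=NFS-DS-01

# List mounted NFS datastores to verify the mount.
esxcli storage nfs list
```

For NFS v4.1 datastores, the equivalent namespace is `esxcli storage nfs41`.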

No load balancing is available for NFS/NAS on vSphere because it is based on single-session connections. You can aggregate bandwidth by creating multiple paths to the NAS array and accessing some datastores via one path and other datastores via another path. You can also configure NIC teaming so that if one interface fails, another takes its place. However, these techniques protect only against network failures and might not handle error conditions on the NFS array or the NFS server itself. The storage vendor is often the source for correct configuration and configuration maximums.
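For example, aggregate bandwidth can be achieved by mounting different datastores through different array addresses, each reachable over a separate VMkernel path; the addresses and names below are hypothetical:

```shell
# Spread datastores across two array target addresses so that each
# datastore's single NFS session uses a different network path.
# Addresses, share paths, and datastore names are examples only.
esxcli storage nfs add --host=192.168.100.10 --share=/vol/ds01 --volume-name=NFS-DS-01
esxcli storage nfs add --host=192.168.101.10 --share=/vol/ds02 --volume-name=NFS-DS-02
```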

vSphere is compatible with both NFS version 3 and version 4.1; however, not all vSphere features are available when connecting to storage arrays over NFS v4.1.

NFS Version Design Decision

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| SDDC-VI-Storage-NFS-001 | Use NFS v3 for all NFS datastores. | NFS v4.1 datastores are not supported with Storage I/O Control and with Site Recovery Manager. | NFS v3 does not support Kerberos authentication. |

NFS v3 traffic is transmitted in an unencrypted format across the LAN. Therefore, best practice is to use NFS storage on trusted networks only and to isolate the traffic on dedicated VLANs.
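A dedicated VLAN for NFS traffic can be sketched with `esxcli` as follows; the port group name, VLAN ID, interface name, and addresses are hypothetical and depend on your network design:

```shell
# Create a dedicated port group and VMkernel interface for NFS traffic
# on an isolated VLAN. All names, IDs, and addresses are examples only.
esxcli network vswitch standard portgroup add --portgroup-name=NFS-PG --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=NFS-PG --vlan-id=3020
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=NFS-PG
esxcli network ip interface ipv4 set --interface-name=vmk2 \
  --ipv4=192.168.100.21 --netmask=255.255.255.0 --type=static
```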

Many NFS arrays have some built-in security, which enables them to control the IP addresses that can mount NFS exports. Best practice is to use this feature to determine which ESXi hosts can mount the volumes that are being exported and have read/write access to those volumes. This prevents unapproved hosts from mounting the NFS datastores.

All NFS exports are shared directories that sit on top of a storage volume. These exports control the access between the endpoints (ESXi hosts) and the underlying storage system. Multiple exports can exist on a single volume, with different access controls on each.
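As an illustration of per-export access control, the fragment below uses the `/etc/exports` syntax of a generic Linux NFS server; array-specific syntax differs, so consult your storage vendor. The paths and host addresses are hypothetical. Two exports sit on the same volume, each restricted to the endpoints that need it:

```
# Example /etc/exports on a generic Linux NFS server (not array-specific).
# Two exports on the same volume with different access controls.
# Paths and client addresses are examples only.
/vol01/esxi_datastore     192.168.100.21(rw,no_root_squash,sync) 192.168.100.22(rw,no_root_squash,sync)
/vol01/loginsight_archive 192.168.100.31(rw,sync)
```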

Export Size per Region

| Export | Size |
|---|---|
| vRealize Log Insight Archive | 1 TB |

NFS Export Design Decisions

| Decision ID | Design Decision | Design Justification | Design Implication |
|---|---|---|---|
| SDDC-VI-Storage-NFS-002 | Create one export to support the vRealize Log Insight Archive management component. | The storage requirements of this management component are separate from the primary storage. | You can add exports if you expand the design. |
| SDDC-VI-Storage-NFS-003 | Place the vSphere Data Protection export on its own separate volume as per SDDC-PHY-STO-008. | Backup activities are I/O intensive. vSphere Data Protection or other applications suffer if vSphere Data Protection is placed on a shared volume. | Dedicated exports can add management overhead to storage administrators. |
| SDDC-VI-Storage-NFS-004 | For each export, limit access to only the application VMs or hosts that require the ability to mount the storage. | Limiting access helps ensure the security of the underlying data. | Securing exports individually can introduce operational overhead. |