VMware Virtual SAN 6.5 Release Notes
Updated on: 10 JANUARY
VMware Virtual SAN 6.5 | 15 November 2016 | ISO Build 4564106
Check for additions and updates to these release notes.
What's in the Release Notes
The release notes cover the following topics:
What's New
VMware Virtual SAN 6.5 introduces the following new features and enhancements:
- iSCSI target service. The Virtual SAN iSCSI target service enables physical workloads that are outside the Virtual SAN cluster to access the Virtual SAN datastore. An iSCSI initiator on a remote host can transport block-level data to an iSCSI target on a storage device in the Virtual SAN cluster.
- 2 Node Direct Connect with witness traffic separation. Virtual SAN 6.5 provides support for an alternate VMkernel interface to communicate with the witness host in a stretched cluster configuration. This support enables you to separate witness traffic from Virtual SAN data traffic, with no routing required from the Virtual SAN network to the witness host. You can simplify connectivity to the witness host in certain stretched cluster and 2 Node configurations. In 2 Node configurations, you can make one or more node-to-node, direct connections for Virtual SAN data traffic, without using a high speed switch. Using an alternate VMkernel interface for witness traffic is supported in stretched cluster configurations, but only when it is connected to the same physical switch as the interface used for Virtual SAN data traffic.
- PowerCLI support. VMware vSphere PowerCLI adds command-line scripting support for Virtual SAN, to help you automate configuration and management tasks. vSphere PowerCLI provides a Windows PowerShell interface to the vSphere API. PowerCLI includes cmdlets for administering Virtual SAN components.
- 512e drive support. Virtual SAN 6.5 supports 512e magnetic hard disk drives (HDDs) in which the physical sector size is 4096 bytes, but the logical sector size emulates a sector size of 512 bytes.
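The iSCSI target service described above is driven from the esxcli vsan iscsi namespace on an ESXi host in the cluster. The sketch below shows the typical enable/create flow; the alias, IQN, and LUN size are illustrative values, and option spellings should be verified with esxcli vsan iscsi --help on your hosts.

```shell
# Enable the Virtual SAN iSCSI target service on this host's cluster
esxcli vsan iscsi status set --enabled true
# Create a target (alias and IQN below are example values)
esxcli vsan iscsi target add --alias target1 --iqn iqn.2016-11.com.example:target1
# Back the target with a LUN carved from the Virtual SAN datastore
esxcli vsan iscsi target lun add --target-alias target1 --lun-id 0 --size 100G
```

A remote iSCSI initiator can then discover the target and transport block-level data to it, as described above.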
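The 512e emulation described above packs eight 512-byte logical sectors into each 4096-byte physical sector, which is why writes that are not aligned to a 4 KB boundary cost the drive a read-modify-write of the whole physical sector. A minimal arithmetic sketch (sector numbers are illustrative):

```shell
logical=512
physical=4096
# Eight logical sectors map onto one physical sector
echo $((physical / logical))            # prints 8
# A write to logical sector 10 lands inside physical sector 1 (10 / 8)
echo $((10 / (physical / logical)))     # prints 1
# The write covers only 512 of that sector's 4096 bytes, so the drive
# must read and rewrite the remaining bytes around it
echo $((physical - logical))            # prints 3584
```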
VMware Virtual SAN Community
Use the Virtual SAN Community Web site to provide feedback and request assistance with any problems you find while using Virtual SAN.
Upgrades for This Release
For instructions about upgrading Virtual SAN, see the VMware Virtual SAN 6.5 documentation.
Upgrading the On-disk Format for Hosts with Limited Capacity
During an upgrade of the Virtual SAN on-disk format, a disk group evacuation is performed. The disk group is removed and upgraded to on-disk format version 3.0, and the disk group is added back to the cluster. For two-node or three-node clusters, or clusters without enough capacity to evacuate each disk group, you must use the following RVC command to upgrade the on-disk format: vsan.ondisk_upgrade --allow-reduced-redundancy
When you allow reduced redundancy, your VMs are unprotected for the duration of the upgrade, because this method does not evacuate data to the other hosts in the cluster. It removes each disk group, upgrades the on-disk format, and adds the disk group back to the cluster. All objects remain available, but with reduced redundancy.
If you enable deduplication and compression during the upgrade to Virtual SAN 6.5, you can select Allow Reduced Redundancy from the vSphere Web Client.
Using VMware Update Manager with Stretched Clusters
Using VMware Update Manager to upgrade hosts in parallel might result in the witness host being upgraded in parallel with one of the data hosts in a stretched cluster. To avoid upgrade problems, do not configure VMware Update Manager to upgrade a witness host in parallel with the data hosts in a stretched cluster. Upgrade the witness host after all data hosts have been successfully upgraded and have exited maintenance mode.
Verifying Health Check Failures During Upgrade
During upgrades of the Virtual SAN on-disk format, the Physical Disk Health – Metadata Health check can fail intermittently. These failures can occur if the destaging process is slow, most likely because Virtual SAN must allocate physical blocks on the storage devices. Before you take action, verify the status of this health check after the period of high activity, such as multiple virtual machine deployments, is complete. If the health check is still red, the warning is valid. If the health check is green, you can ignore the previous warning. For more information, see Knowledge Base article 2108690.
Limitations
In an all-flash configuration, Virtual SAN supports a maximum write buffer cache size of 600 GB for each disk group.
For information about other maximum configuration limits for the Virtual SAN 6.5 release, see the Configuration Maximums documentation.
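As a quick sketch of what the write buffer cap means in practice (the device size below is illustrative): if a disk group's cache device is larger than 600 GB, only 600 GB of it serves as write buffer.

```shell
# Usable write buffer per all-flash disk group, assuming the 600 GB cap
device_gb=800    # illustrative cache device size
cap_gb=600
usable_gb=$(( device_gb < cap_gb ? device_gb : cap_gb ))
echo $usable_gb  # prints 600
```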
Known Issues
New Limited support for Improved Virtual Disks with Virtual SAN datastores
Virtual SAN 6.5 does not fully support Improved Virtual Disks in Virtual SAN datastores. You might experience the following problems if you use Improved Virtual Disks in a Virtual SAN datastore:
- Virtual SAN health service does not display the health of Improved Virtual Disks correctly.
- The Used Capacity Breakdown shows the capacity used by Improved Virtual Disks in the Other category.
- The health status of VMs that use Improved Virtual Disks is not calculated correctly.
New HA failover does not occur after setting Traffic Type option on a vmknic to support witness traffic
If you set the traffic type option on a vmknic to support witness traffic, vSphere HA does not automatically discover the new setting. You must manually disable and then re-enable HA so it can discover the vmknic. If you configure the vmknic and the Virtual SAN cluster first, and then enable HA on the cluster, HA discovers the vmknic.
Workaround: Manually disable vSphere HA on the cluster, and then re-enable it.
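For reference, tagging a VMkernel interface for witness traffic is done per host from esxcli (vmk1 below is an illustrative interface name). Per the issue above, do this before enabling HA on the cluster, or disable and re-enable HA afterward.

```shell
# Tag vmk1 to carry witness traffic instead of Virtual SAN data traffic
esxcli vsan network ip add -i vmk1 -T=witness
# Verify the traffic type assigned to each Virtual SAN vmknic
esxcli vsan network list
```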
Storage compliance for iSCSI targets and LUNs displayed in Web client does not match ESXCLI
If you create iSCSI targets and LUNs from ESXCLI, the vSphere Web Client shows their compliance status as OUT OF DATE, even if ESXCLI output shows the status as compliant. This issue can also occur if you create iSCSI targets and LUNs with the datastore default policy.
Workaround: You can select the SPBM policy in the Web Client and assign the policy to targets and LUNs that have a status of OUT OF DATE.
After you disable and delete the iSCSI target service, some iSCSI objects remain in the Virtual SAN datastore
If you use the Web Client to remove all iSCSI targets and LUNs, and disable the iSCSI target service, the iSCSI home object still exists in the Virtual SAN datastore.
Workaround: To delete the iSCSI home object and all metadata associated with the iSCSI target service, run the following command on any host in the cluster: esxcli vsan iscsi homeobject delete
iSCSI I/O operation might be interrupted during iSCSI target failover
During iSCSI target failover, the iSCSI I/O operations might be interrupted. A host failure or a host reboot might trigger an iSCSI target failover.
Workaround: Retry the session from the iSCSI initiator.
iSCSI MCS is not supported
Virtual SAN iSCSI target service does not support Multiple Connections per Session (MCS).
Any iSCSI initiator can discover iSCSI targets
Virtual SAN iSCSI target service allows any initiator on the network to discover iSCSI targets.
Workaround: You can isolate your ESXi hosts from iSCSI initiators by placing them on separate VLANs.
Physical disk capacity check in Health monitor shows negative number for free space on a disk
When the disk groups in a Virtual SAN cluster become full, with little space available, this health check might display a negative number for the free space available on a disk: Monitor > Virtual SAN > Health > Physical Disk > Disk capacity.
Workaround: You can add more capacity to the Virtual SAN cluster to increase the amount of free space.
After resolving network partition, some VM operations on linked clone VMs might fail
Some VM operations on linked clone VMs that are not producing I/O inside the guest operating system might fail. The operations that might fail include taking snapshots and suspending the VMs. This problem can occur after a network partition is resolved, if the parent base VM's namespace is not yet accessible. When the parent VM's namespace becomes accessible, HA is not notified to power on the VM.
Workaround: Power cycle VMs that are not actively running I/O operations.
When you log out of the Web client after using the Configure Virtual SAN wizard, some configuration tasks might fail
The Configure Virtual SAN wizard might require several hours to complete the configuration tasks. You must remain logged in to the Web client until the wizard completes the configuration. This problem usually occurs in clusters with many hosts and disk groups.
Workaround: If some configuration tasks failed, perform the configuration again.
After enabling Virtual SAN 6.5 through ESXCLI, automatic disk-claiming does not work
If you enable Virtual SAN 6.5 through ESXCLI, the automatic method to claim disks does not work.
Workaround: Use the vSphere Web Client to configure automatic disk claiming. You also can use the manual method to claim disks.
Create New VM Storage Policy wizard shows incorrect labels for rules
When you open the Create New VM Storage Policy wizard to define a policy based on Virtual SAN data services, the labels used to describe the policy rules might display an internal identifier instead of a user-friendly label. For example, you might see vsan.capabilitymetadata.propertymetadata.summary.replicaPreference.label instead of Number of disk stripes per object.
Workaround: Log out of the vSphere Web Client, and log in again.
New policy rules ignored on hosts with older versions of ESXi software
This might occur when you have two or more Virtual SAN clusters, with one cluster running the latest software and another cluster running an older software version. The vSphere Web Client displays policy rules for the latest Virtual SAN software, but those new policies are not supported on the older hosts. For example, RAID-5/6 (Erasure Coding) – Capacity is not supported on hosts running 6.0U1 or earlier software. You can configure the new policy rules and apply them to any VMs and objects, but they are ignored on hosts running the older software version.
Snapshot memory objects are not displayed in the Used Capacity Breakdown of the Virtual SAN Capacity monitor
For virtual machines created with a hardware version lower than 10, the snapshot memory is included in the Vmem objects in the Used Capacity Breakdown.
Workaround: To view snapshot memory objects in the Used Capacity Breakdown, create Virtual Machines with hardware version 10 or higher.
Storage Usage reported in VM Summary page might appear larger after upgrading to Virtual SAN 6.5
In previous releases of Virtual SAN, the value reported for VM Storage Usage was the space used by a single copy of the data. For example, if the guest wrote 1 GB to a thin-provisioned object with two mirrors, the Storage Usage was shown as 1 GB. In Virtual SAN 6.5, the Storage Usage field displays the actual space used, including all copies of the data. So if the guest writes 1 GB to a thin-provisioned object with two mirrors, the Storage Usage is shown as 2 GB. The reported storage usage on some VMs might appear larger after upgrading to Virtual SAN 6.5, but the actual space consumed did not increase.
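The new accounting reduces to simple arithmetic, assuming RAID-1 mirroring: reported usage is the space the guest wrote multiplied by the number of copies, which is one plus the number of failures to tolerate (FTT).

```shell
written_gb=1   # space the guest actually wrote
ftt=1          # failures to tolerate; RAID-1 keeps ftt+1 full copies
# Virtual SAN 6.5 reports the total across all copies
echo $(( written_gb * (ftt + 1) ))   # prints 2
```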
Cannot place a witness host in Maintenance Mode
When you attempt to place a witness host in Maintenance Mode, the host remains in the current state and you see the following notification: A specified parameter was not correct.
Workaround: When placing a witness host in Maintenance Mode, choose the No data migration option.
Moving the witness host into and then out of a stretched cluster leaves the cluster in a misconfigured state
If you place the witness host in a Virtual SAN-enabled vCenter cluster, an alarm notifies you that the witness host cannot reside in the cluster. But if you move the witness host out of the cluster, the cluster remains in a misconfigured state.
Workaround: Move the witness host out of the Virtual SAN stretched cluster, and reconfigure the stretched cluster. For more information, see Knowledge Base article 2130587.
When a network partition occurs in a cluster which has an HA heartbeat datastore, VMs are not restarted on the other data site
When the preferred or secondary site in a Virtual SAN cluster loses its network connection to the other sites, VMs running on the site that loses network connectivity are not restarted on the other data site, and the following error might appear: vSphere HA virtual machine HA failover failed.
This is expected behavior for Virtual SAN clusters.
Workaround: Do not select HA heartbeat datastore while configuring vSphere HA on the cluster.
Unmounted Virtual SAN disks and disk groups displayed as mounted in the vSphere Web Client Operational Status field
After Virtual SAN disks or disk groups are unmounted, either by running the esxcli vsan storage diskgroup unmount command or by the Virtual SAN Device Monitor service when disks show persistently high latencies, the vSphere Web Client incorrectly displays the Operational Status field as mounted.
Workaround: Use the Health field to verify disk status, instead of the Operational Status field.
On-disk format upgrade displays disks not on Virtual SAN
When you upgrade the disk format, Virtual SAN might incorrectly display disks that were removed from the cluster. The UI also might show the version status as mixed. This display issue usually occurs after one or multiple disks are manually unmounted from the cluster. It does not affect the upgrade process. Only the mounted disks are checked. The unmounted disks are ignored.
All Virtual SAN clusters share the same external proxy settings
All Virtual SAN clusters share the same external proxy settings, even if you set the proxy at the cluster level. Virtual SAN uses external proxies to connect to Support Assistant, the Customer Experience Improvement Program, and the HCL database, if the cluster does not have direct Internet access.
Multicast performance test of Virtual SAN health check does not run on Virtual SAN network
In some cases, depending on the routing configuration of ESXi hosts, the network multicast performance test does not run on the Virtual SAN network.
Workaround: Use the Virtual SAN network as the only network setting for the ESXi hosts, and conduct the network multicast performance test based on this configuration.
If ESXi hosts have multiple network settings, you also can follow the steps listed in this example. Assume that Virtual SAN runs on the 192.168.0.0 network.
- Bind the multicast group address to this network on each host:
$ esxcli network ip route ipv4 add -n 224.1.2.3/32 -g 192.168.0.0
- Check the routing table:
$ esxcli network ip route ipv4 list
10.160.63.253 vmk0 DHCP
10.160.32.0 255.255.224.0 0.0.0.0
192.168.0.0 255.255.255.0 0.0.0.0
224.1.2.3 255.255.255.255 192.168.0.0
- Run the proactive multicast network performance test, and check the result.
- After the test is complete, recover the routing table:
$ esxcli network ip route ipv4 remove -n 224.1.2.3/32 -g 192.168.0.0
VMs in a stretched cluster become inaccessible when preferred site is isolated, then regains connectivity only to the witness host
When the preferred site becomes unavailable or loses its network connection to the secondary site and the witness host, the secondary site forms a cluster with the witness host and continues storage operations. Data on the preferred site might become outdated over time. If the preferred site then reconnects to the witness host but not to the secondary site, the witness host leaves the cluster it is in and forms a cluster with the preferred site, and some VMs might become inaccessible because they do not have access to the most recent data in this cluster.
Workaround: Before you reconnect the preferred site to the cluster, mark the secondary site as the preferred site. After the sites are resynchronized, you can mark the site you want to use as the preferred site.
Storage Consumption Model for VM Storage Policy wizard shows incorrect information
If one or more hosts in a Virtual SAN cluster are not running software version 6.0 Update 2 or later, the Storage Consumption Model for the VM Storage Policy wizard might show incorrect information when you select RAID 5/6 as the failure tolerance method.
Workaround: Upgrade all hosts to the latest software version.