VMware Virtual SAN 6.1 Release Notes

Updated on: 14 OCTOBER 2015

VMware Virtual SAN 6.1 | 10 September 2015 | ISO Build 3029758

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of Virtual SAN
  • VMware Virtual SAN Community
  • Upgrades for This Release
  • Limitations
  • Known Issues

What's New

Virtual SAN 6.1 introduces the following new features and enhancements:

  • Stretched clusters: Virtual SAN 6.1 supports stretched clusters that span two geographic locations to protect data from site failures or loss of network connection.

  • The VMware Virtual SAN Witness Appliance 6.1 is a virtual witness host packaged as a virtual appliance. It functions as an ESXi host configured as a witness host for a Virtual SAN stretched cluster. You can download the Virtual SAN Witness Appliance 6.1 OVA from the VMware Virtual SAN download Web site.

  • New on-disk format. Virtual SAN 6.1 supports upgrades to new on-disk virtual file format 2.0 through the vSphere Web Client. This log-based file system, based on Virsto technology, provides highly scalable snapshot and clone management support per Virtual SAN cluster.

  • Hybrid and all-flash configurations. Virtual SAN 6.1 supports both hybrid and all-flash clusters. To configure an all-flash cluster, click Create a new disk group under Virtual SAN Disk Management (Manage > Settings), and select Flash as the Capacity type. When you claim disk groups, you can select flash devices for both capacity and cache. A command-line sketch for identifying eligible flash devices appears after this list.

  • Improved upgrade process. Upgrade supports direct upgrades from Virtual SAN 5.5 and 6.0 to Virtual SAN 6.1.

  • Virtual SAN 6.1 includes an integrated Health Service that monitors the cluster health and enables you to diagnose and fix issues with the Virtual SAN cluster. The Virtual SAN Health Service provides several checks on hardware compatibility, networking configuration and operations, advanced configuration options, storage device health, and Virtual SAN object health. If the Health Service detects any health issues, it triggers vCenter events and alarms. To view the health checks for a Virtual SAN cluster, click Monitor > Virtual SAN > Health.

  • Virtual SAN monitors solid state drive and magnetic disk drive health and proactively isolates unhealthy devices by unmounting them. It detects gradual failure of a Virtual SAN disk and isolates the device before congestion builds up within the affected host and the entire Virtual SAN cluster. An alarm is generated from each host whenever an unhealthy device is detected and an event is generated if an unhealthy device is automatically unmounted.
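
For the all-flash configurations item above, the following is a minimal command-line sketch (run from the ESXi Shell) for checking which devices a host reports as flash before claiming them; the device name is a placeholder.

  # List disks and their Virtual SAN eligibility; the IsSSD flag indicates a flash device.
  $ vdq -q

  # Alternatively, check a specific device (placeholder name).
  $ esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep "Is SSD"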

Earlier Releases of Virtual SAN

Features and known issues of Virtual SAN 6.0 are described in the VMware Virtual SAN 6.0 release notes.

VMware Virtual SAN Community

Use the Virtual SAN Community web site to provide feedback and request assistance with any problems you encounter while using Virtual SAN.

Upgrades for This Release

For instructions about upgrading Virtual SAN, see the VMware Virtual SAN 6.1 documentation.

Note: Using VMware vSphere Update Manager (VUM) to upgrade hosts in parallel might result in the witness host being upgraded at the same time as one of the data hosts in a stretched cluster. To avoid upgrade problems, do not configure VUM to upgrade the witness host in parallel with the data hosts of a stretched cluster. Upgrade the witness host only after all data hosts have been successfully upgraded and have exited maintenance mode.
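
As an additional check before the witness host is remediated, you can verify from each data host (ESXi Shell or SSH) that it already reports the upgraded build. This is only an illustrative sketch; the output below is abbreviated and shows the ISO build listed at the top of these release notes.

  $ esxcli system version get
     Product: VMware ESXi
     Version: 6.0.0
     Build: Releasebuild-3029758
     Update: 1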

Limitations

In an all-flash configuration, Virtual SAN supports a maximum write buffer cache size of 600 GB for each disk group.

For information about other maximum configuration limits for the Virtual SAN 6.1 release, see the Configuration Maximums documentation.

Known Issues

  • Attempts to configure an all-flash disk group on the witness host for a stretched cluster fail
    When you attempt to add a witness host with an all-flash disk group to a stretched cluster, the task fails and no disk group is added to the host.

    Workaround: After configuring the stretched cluster, manually add the disk group to the witness host.

    1. In the Virtual SAN cluster, click Manage > Settings > Disk Management, and select the witness host.

    2. Click the Create a new disk group icon, and select Flash as the Capacity type.

    3. Click OK.
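
    The same disk group can also be created from the ESXi Shell of the witness host. This is a minimal sketch rather than the documented procedure; the device names are placeholders, and it assumes that flash devices used for capacity must be tagged as capacityFlash in this release.

      # Tag the flash device intended for capacity (placeholder device name).
      $ esxcli vsan storage tag add -d naa.cccccccccccccccc -t capacityFlash

      # Create the disk group with one cache device (-s) and one or more capacity devices (-d).
      $ esxcli vsan storage add -s naa.ssssssssssssssss -d naa.cccccccccccccccc
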
  • When a network partition occurs in a cluster that has an HA heartbeat datastore, VMs are not restarted on the other data site
    When the preferred or secondary site in a Virtual SAN cluster loses its network connection to the other sites, VMs running on the site that loses network connectivity are not restarted on the other data site, and the following error might appear: vSphere HA virtual machine HA failover failed.

    This is expected behavior for Virtual SAN clusters.

    Workaround: Do not configure an HA heartbeat datastore when you configure vSphere HA on the cluster.

  • Unmounted Virtual SAN disks and disk groups displayed as mounted in the vSphere Web Client Operational Status field
    After Virtual SAN disks or disk groups are unmounted, either by running the esxcli vsan storage diskgroup unmount command or automatically by the Virtual SAN Device Monitor service when disks show persistently high latencies, the vSphere Web Client incorrectly displays the Operational Status field as mounted.

    Workaround: Use the Health field to verify disk status, instead of the Operational Status field.
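
    To confirm the actual state of a device from the ESXi Shell, you can inspect the per-device output of esxcli vsan storage list; this is a minimal sketch, and the exact field names can vary slightly by build.

      # A device that has been unmounted is no longer reported as participating in Virtual SAN.
      $ esxcli vsan storage list | grep -E "Display Name|In CMMDS"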

  • On-disk format upgrade displays disks not on Virtual SAN
    When you upgrade the disk format, Virtual SAN might incorrectly display disks that have been removed from the cluster. The UI also might show the version status as mixed. This display issue usually occurs after one or multiple disks are manually unmounted from the cluster. It does not affect the upgrade process. Only the mounted disks are checked. The unmounted disks are ignored.

    Workaround: None.

  • Adding a host to a Virtual SAN cluster triggers an installer error
    When you add an ESXi host to a cluster on which vSphere HA and the Virtual SAN health service are enabled, you might encounter one or both of the following errors due to a VIB installation race condition:

    • In the task view, the Configuring vSphere HA task might fail with an error message similar to the following: Cannot install the vCenter Server agent service. Unknown installer error

    • The Enable agent task might fail with an error message similar to the following: Cannot complete the operation, see event log for details

    Workaround:

    • To resolve the HA configuration failure, reboot the host and reconfigure HA: go to the Hosts and Clusters view, select the cluster, and click Manage > vSphere HA.

    • To resolve the Enable agent task failure, reenable the Virtual SAN health service: go to the Hosts and Clusters view, select the cluster, click Manage > Virtual SAN > Health, and click Retry.
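
    To check from the ESXi Shell whether the agent VIBs were installed before retrying, you can list the installed VIBs; vmware-fdm is the vSphere HA agent VIB, and the exact name of the Virtual SAN health VIB can vary by build, so the filter below is only an assumption.

      # List installed VIBs and filter for the HA agent and Virtual SAN health components.
      $ esxcli software vib list | grep -iE "fdm|vsan"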

  • During rolling site failure in a large stretched cluster (such as 15:15:1), where each node in a fault domain fails in succession with several seconds between each failure, VMs might become inaccessible or orphaned
    To prevent this issue, follow the best practices when working with a large cluster:

    • Before you take a preferred site down for maintenance, first configure the secondary site to be the preferred site, and wait about a minute for the change to take effect.

    • Before you power off all nodes in the preferred site at the same time, first configure the secondary site to be the preferred site, and wait about a minute for the change to take effect.

    • After powering off each host, wait 1-2 minutes before powering off the next host.

    Workaround: Bring the original host back online. If you cannot bring the original host online, use the recovery tool, but note that you might lose the last few data transactions in progress at the time the issue occurred.

  • Cannot enter a Virtual SAN fault domain name that is longer than 256 bytes
    When you attempt to assign a fault domain name longer than 256 bytes in the vSphere Web Client, the system displays an error: A specified parameter was not correct: faultDomainInfo.name. Note that when you use multi-byte Unicode characters, you can reach the 256-byte limit with fewer than 256 characters.

    Workaround: None.
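
    To check whether a candidate name fits within the limit, you can measure its UTF-8 byte length in any shell; this is only an illustrative sketch with a made-up name.

      # Prints the UTF-8 byte count of the candidate fault domain name.
      $ printf '%s' "RackA-PreferredSite" | wc -c
      19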

  • When you place a host into a Virtual SAN cluster to be used as a witness, and then move the host out of the cluster, the health check VIB is removed from the host
    If you move an ESXi host out of the Virtual SAN cluster, its health check VIB is removed. Therefore, if the host is a witness for the cluster, the installation status of the witness is red.

    Workaround: Manually install the health check VIB onto the host. See Administering VMware Virtual SAN.
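
    A minimal sketch of installing a VIB offline bundle from the ESXi Shell follows; the datastore path and file name are placeholders, and the correct bundle to use is described in Administering VMware Virtual SAN.

      # Install from an offline bundle previously copied to a datastore (placeholder path).
      $ esxcli software vib install -d /vmfs/volumes/datastore1/vsan-health-offline-bundle.zip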

  • When upgrading to Virtual SAN 6.1, an error message appears: Unable to access agent offline bundle
    This error might occur when you are upgrading from Virtual SAN 6.0 (with the health check enabled) to Virtual SAN 6.1. During the upgrade process, the health check VIB is replaced and its service is temporarily stopped. In some cases, the health check might generate an error message.

    Workaround: Go to the Hosts and Clusters view, click Manage > Settings > Virtual SAN > Health, and click Retry.

  • All Virtual SAN clusters share the same external proxy settings
    All Virtual SAN clusters share the same external proxy settings, even if you set the proxy at the cluster level. Virtual SAN uses external proxies to connect to Support Assistant, Customer Experience Improvement Program, and the HCL database, if the cluster does not have direct Internet access.

    Workaround: None

  • Modifying the vCenter HTTPS port or certificate settings might cause Virtual SAN health checks to malfunction
    The Virtual SAN Health Service uses the default HTTPS port 443 and requires that the certificate files (/etc/vmware-vpx/ssl/rui.crt and /etc/vmware-vpx/ssl/rui.key) remain readable by non-root users. If you change the default port or modify the certificate files so that they are no longer readable by non-root users, the Virtual SAN Health Service does not recognize the change, and the health checks do not function properly.

    Workaround: Restore the port setting and certificate file permissions to their default values.

  • Multicast performance test of Virtual SAN health check does not run on Virtual SAN network
    In some cases, depending on the routing configuration of ESXi hosts, the network multicast performance test does not run on the Virtual SAN network.

    Workaround: Use the Virtual SAN network as the only network setting for the ESXi hosts, and conduct the network multicast performance test based on this configuration.

    If the ESXi hosts have multiple network configurations, you can also follow the steps in this example. Assume that Virtual SAN runs on the 192.168.0.0 network.

    1. Bind the multicast group address to this network on each host:

      $ esxcli network ip route ipv4 add -n 224.2.3.4/32 -g 192.168.0.0

    2. Check the routing table:

      $ esxcli network ip route ipv4 list
      default      0.0.0.0          10.160.63.253  vmk0       DHCP
      10.160.32.0  255.255.224.0    0.0.0.0        vmk0       MANUAL
      192.168.0.0  255.255.255.0    0.0.0.0        vmk3       MANUAL
      224.2.3.4    255.255.255.255  192.168.0.0    vmk3       MANUAL

    3. Run the proactive multicast network performance test, and check the result.

    4. After the test is complete, recover the routing table:

      $ esxcli network ip route ipv4 remove -n 224.2.3.4/32 -g 192.168.0.0
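
    Optionally, while the test runs, you can confirm that multicast traffic is flowing on the Virtual SAN VMkernel interface by using the packet capture tool included with ESXi; this is only a sketch, and vmk3 matches the interface in the example above.

      # Observe multicast packets on the Virtual SAN VMkernel interface during the test.
      $ tcpdump-uw -i vmk3 -n ip multicast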

  • VMs in a stretched cluster become inaccessible when preferred site is isolated, then regains connectivity only to the witness host
    When the preferred site goes offline or loses its network connection to both the secondary site and the witness host, the secondary site forms a cluster with the witness host and continues storage operations. Data on the preferred site might become outdated over time (stale). If the preferred site then reconnects to the witness host but not to the secondary site, the witness host leaves the cluster it is in and forms a cluster with the preferred site, and some VMs might become inaccessible because they do not have access to the most recent data in this cluster.

    Workaround: Before you reconnect the preferred site to the cluster, mark the secondary site as the preferred site. After the sites are resynchronized, you can mark the site you want to use as the preferred site.
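
    The preferred fault domain can also be changed from the ESXi Shell of a host in the cluster; this is a hedged sketch (verify the esxcli vsan cluster preferredfaultdomain namespace and its options against your build), and the fault domain name is a placeholder.

      # Show the current preferred fault domain.
      $ esxcli vsan cluster preferredfaultdomain get

      # Make the secondary site the preferred fault domain (placeholder name).
      $ esxcli vsan cluster preferredfaultdomain set -n SecondarySite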

  • Virtual SAN Witness Host OVA does not support DVS configuration
    The VMware Virtual SAN Witness Host OVA package does not support Distributed Virtual Switch (DVS) network configuration.

    Workaround: Use a standard virtual switch instead.

  • New Virtual SAN health service malfunctions when HTTPS port and certificate settings are changed from the default values
    The Virtual SAN health service supports only the default HTTPS port 443 and the default certificate files under /etc/vmware-vpx/ssl/rui.crt and /etc/vmware-vpx/ssl/rui.key. If you change the default port or modify the certificate, the Virtual SAN health service cannot function properly, and requests might be rejected or fail with a status code 400 (Bad Request).

    Workaround: Verify that Virtual SAN health service uses the default HTTPS port and certificate.