VMware ESXi 6.0 Update 3 Release Notes

Updated on: 14 MARCH 2017

ESXi 6.0 Update 3 | 24 FEB 2017 | ISO Build 5050593

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

  • Updated ESXi Host Client: VMware ESXi 6.0 Update 3 includes an updated version of the ESXi Host Client, version 1.14.0. The updated Host Client includes bug fixes and brings it much closer to the functionality provided by the vSphere Client. If you updated the Host Client through ESXi 6.0 patch releases, then install version 1.14.0 provided with ESXi 6.0 Update 3. In addition, new versions of the Host Client continue to be released through the VMware Labs Flings Web site. However, these Fling releases are not officially supported and not recommended for production environments.

  • Support for TLS: Support for TLSv1.0, TLSv1.1, and TLSv1.2 is enabled by default and configurable for ESXi 6.0 Update 3. Learn how to configure TLSv1.0, TLSv1.1, and TLSv1.2 from VMware Knowledge Base article 2148819. For a list of VMware products supported for TLSv1.0 disablement and the use of TLSv1.1/1.2, consult VMware Knowledge Base article 2145796.
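
    The exact procedure is in KB 2148819; as a hedged sketch, the protocols can be controlled on the host through an advanced option (the option name UserVars.ESXiVPsDisabledProtocols and the value format are assumptions here — confirm them against the Knowledge Base article before use):

    # Show the currently disabled protocols (assumed option name)
    esxcli system settings advanced list -o /UserVars/ESXiVPsDisabledProtocols
    # Disable everything except TLSv1.2, then restart management services or reboot the host
    esxcli system settings advanced set -o /UserVars/ESXiVPsDisabledProtocols -s "sslv3,tlsv1,tlsv1.1"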

  • Virtual SAN Performance: Multiple fixes are introduced in this VMware ESXi 6.0 Update 3 release to optimize the I/O path for improved Virtual SAN performance in All Flash and Hybrid configurations:

    • Log management and storage improvements enable more logs to be stored per byte of storage. This should significantly improve performance for write-intensive workloads. Because Virtual SAN is a log-based file system, efficient management of log entries is key to preventing unwarranted build-up of logs.

    • In addition to increasing the packing density of the log entries, for scenarios involving large files being deleted while data services are turned on, Virtual SAN preemptively de-stages data to the capacity tier, which efficiently manages log growth.

    • The checksum code path is now more efficient.

Earlier Releases of ESXi 6.0

Features and known issues of ESXi 6.0 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.0 are:

Internationalization

VMware ESXi 6.0 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Spanish
  • Traditional Chinese

Components of VMware vSphere 6.0, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client is packaged with the vCenter Server. You can install the vSphere Client from the VMware vCenter autorun menu that is part of the modules ISO file.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 6.0. During the upgrade process, the device driver is installed on the ESXi 6.0 host. The device driver might still function on ESXi 6.0, but the device is not supported on ESXi 6.0. For a list of devices that are deprecated and no longer supported on ESXi 6.0, see KB 2087970.

Third-Party Switch Compatibility for ESXi

VMware now supports Cisco Nexus 1000V with vSphere 6.0. vSphere requires a minimum NX-OS release of 5.2(1)SV3(1.4). For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Nexus 1000V AVS mode is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.0. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.0, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

Installation and Upgrades for This Release

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

vSphere 6.0 Recommended Deployment Models

VMware recommends only two deployment models:

  • vCenter Server with embedded Platform Services Controller. This model is recommended if you need to deploy one or more standalone vCenter Server instances in a data center. Replication between vCenter Server instances with embedded Platform Services Controllers is not recommended.

  • vCenter Server with external Platform Services Controller. This model is recommended only if multiple vCenter Server instances need to be linked, or if you want a reduced Platform Services Controller footprint in the data center. Replication between vCenter Server instances with external Platform Services Controllers is supported.

Read the vSphere Installation and Setup documentation for guidance on installing and configuring vCenter Server.

Read the Update sequence for vSphere 6.0 and its compatible VMware products for the proper sequence in which vSphere components should be updated.

Also, read KB 2108548 for guidance on installing and configuring vCenter Server.

vCenter Host OS Information

Read the Knowledge Base article KB 2091273.

Backup and Restore for vCenter Server and the vCenter Server Appliance Deployments that Use an External Platform Services Controller

Although statements in the vSphere Installation and Setup documentation restrict you from attempting to back up and restore vCenter Server and vCenter Server Appliance deployments that use an external Platform Services Controller, you can perform this task by following the steps in KB 2110294.

Migration from Embedded Platform Services Controller to External Platform Services Controller

vCenter Server with embedded Platform Services Controller cannot be migrated automatically to vCenter Server with external Platform Services Controller. Testing of this migration utility is not complete.

Before installing vCenter Server, determine your desired deployment option. If more than one vCenter Server instance is required for a replication setup, always deploy vCenter Server with an external Platform Services Controller.

Migrating Third-Party Solutions

For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 6.0 supports only processors available after June (third quarter) 2006. Compared with the processors supported by vSphere 5.x, vSphere 6.0 no longer supports the following processors:

  • AMD Opteron 12xx Series
  • AMD Opteron 22xx Series
  • AMD Opteron 82xx Series

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 6.0. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and the vSphere 6.0 installation process stops.

Upgrade Notes for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Open Source Components for VMware vSphere 6.0

The copyright statements and licenses applicable to the open source software components distributed in vSphere 6.0 are available at http://www.vmware.com. You need to log in to your My VMware account. Then, from the Downloads menu, select vSphere. On the Open Source tab, you can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vCenter Server database. Oracle 11g and 12c as external databases for vCenter Server Appliance are deprecated in the vSphere 6.0 release. VMware continues to support Oracle 11g and 12c as external databases in vSphere 6.0. VMware will drop support for Oracle 11g and 12c as external databases for vCenter Server Appliance in a future major release.

  • vSphere Web Client. The Storage Reports selection from an object's Monitor tab is no longer available in the vSphere 6.0 Web Client.

  • vSphere Client. The Storage Views tab is no longer available in the vSphere 6.0 Client.

  • Site Recovery Manager: Site Recovery Manager (SRM) versions older than SRM 6.5 do not support IP customization and in-guest callout operations for VMs that are placed on ESXi 6.0 and use VMware Tools version 10.1 and above. For further details, see VMware Tools Issues.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi600-Update03 contains the following individual bulletins:

Patch Release ESXi600-Update03 (Security-only build) contains the following individual bulletins:

Patch Release ESXi600-Update03 contains the following image profiles:

Patch Release ESXi600-Update03 (Security-only build) contains the following image profiles:

For information on patch and update classification, see KB 2014447.

Resolved Issues

The resolved issues are grouped as follows.

CIM and API Issues
  • The VMware provider method used to validate user permissions does not work for username and password after you exit lockdown mode
    After the server is removed from lockdown mode, the VMware provider method returns a different value that is not compatible with the value before entering lockdown mode. As a result, the VMware provider method that validates user permissions does not work with the same user name and password as it did before lockdown mode.

    This issue is resolved in this release.

Miscellaneous Issues
  • Upgrading VMware Tools on multiple VMs might fail
    Attempts to upgrade VMware Tools on multiple VMs simultaneously through Update Manager might fail. Not all VMs complete the upgrade process.

    This issue is resolved in this release.

  • High read load of VMware Tools ISO images might cause corruption of flash media
    In a VDI environment, the high read load of the VMware Tools images can result in corruption of the flash media.

    This issue is resolved in this release.

    You can copy all the VMware Tools data into its own ramdisk. As a result, the data can be read from the flash media only once per boot. All other reads will go to the ramdisk. vCenter Server Agent (vpxa) accesses this data through the /vmimages directory which has symlinks that point to productLocker.

    To activate this feature, follow these steps:

    1. Set the advanced ToolsRamdisk option to 1:

       esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1

    2. Reboot the host. You can verify the setting after the reboot as shown below.
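
    A minimal check, assuming shell access to the host, to confirm that the option took effect:

    # Display the current value of the ToolsRamdisk advanced option
    esxcli system settings advanced list -o /UserVars/ToolsRamdisk
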
  • The syslog.log file might get flooded with Unknown error messages
    In ESXi 6.0 Update 2, hosts with the Dell CIM provider can have their syslog.log file flooded with Unknown error messages if the Dell CIM provider is disabled or in an idle state. Also, when the ESXi 6.0 Update 2 host reboots, the syslog.log file might intermittently log error messages with Unknown entries.

    This issue is resolved in this release.

  • Userworld core dump failure
    A userworld dump might fail when a user process runs out of memory. The error message, Unable to allocate memory, is displayed.

    This issue is resolved in this release. The fix provides a global memory for heap allocation of userworld core dump, which is used when any process runs out of memory.

  • Attempts to run failover for a VM fail with an error when synchronizing storage
    Attempts to run failover for a VM might fail with an error message similar to the following during the synchronize storage operation:

    An error occurred while communicating with the remote host.

    The following messages are logged in the HBRsrv.log file:

    YYYY-MM-DDT13:48:46.305Z info hbrsrv[nnnnnnnnnnnn] [Originator@6876 sub=Host] Heartbeat handler detected dead connection for host: host-9
    YYYY-MM-DDT13:48:46.305Z warning hbrsrv[nnnnnnnnnnnn] [Originator@6876 sub=PropertyCollector] Got WaitForUpdatesEx exception: Server closed connection after 0 response bytes read; 171:53410'>, >)>


    Also on the ESXi host, the hostd service might stop responding with messages similar to the following:

    YYYY-MM-DDT13:48:38.388Z panic hostd[468C2B70] [Originator@6876 sub=Default]
    -->
    --> Panic: Assert Failed: "progress >= 0 && progress <= 100" @ bora/vim/hostd/vimsvc/HaTaskImpl.cpp:557
    --> Backtrace:
    -->

    This issue is resolved in this release.

  • Log messages persistently reported in the hostd.log file every 90 seconds
    Log messages related to Virtual SAN similar to the following are logged in the hostd.log file every 90 seconds even when the Virtual SAN is not enabled:

    { YYYY-MM-DDT06:50:01.923Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd2fe8] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: (vim.vsan.host.VsanRuntimeInfo) {
    YYYY-MM-DDT06:51:33.449Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd3009] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: (vim.vsan.host.VsanRuntimeInfo) {
    YYYY-MM-DDT06:53:04.978Z info hostd[nnnnnnnn] [Originator@6876 sub=Hostsvc opID=21fd3030] VsanSystemVmkProvider : GetRuntimeInfo: Complete, runtime info: (vim.vsan.host.VsanRuntimeInfo) {

    This issue is resolved in this release.

  • Enumeration of SMX provider classes might fail
    When you compile HPE ESXi WBEM provider with the 6.0 CIMPDK and install it on an ESXi 6.0 U3 system, enumeration of SMX provider classes might fail.

    This failure might result from enumeration of the following SMX classes, among others:

    • SMX_EthernetPort
    • SMX_Fan
    • SMX_PowerSupply
    • SMX_SAMediaAccessStatData

    The following error is displayed by the sfcbd for these SMX classes:

    # enum_instances SMX_EthernetPort root/hpq
    error: enumInstances Server returned nothing (no headers, no data)

    Providers respond to enumeration queries and successfully deliver responses to the sfcbd. There are no provider restarts or provider core dumps. Each enumeration produces sfcbd CIMXML core dumps, such as sfcb-CIMXML-Pro-zdump.000.

    This issue is resolved in this release.

Networking Issues
  • ARP request packets might drop
    ARP request packets between two VMs might be dropped if one VM is configured with guest VLAN tagging and the other VM is configured with virtual switch VLAN tagging, and VLAN offload is turned off on the VMs.

  • ESXi firewall configuration might get disabled due to scripted upgrade
    The ESXi firewall configuration might be disabled after a scripted upgrade of ESXi 6.0 Update 1 or later using a kickstart file over NFS or FTP.

    This issue is resolved in this release.
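
    If you encounter this on a host that does not yet have the fix, you can check the firewall state and re-enable it; a minimal sketch, assuming shell access to the host:

    # Show the current firewall state
    esxcli network firewall get
    # Re-enable the firewall if it is disabled
    esxcli network firewall set --enabled true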

  • The virtual MAC address of 00:00:00:00:00:00 is used during communication for a newly added physical NIC even after a host reboot
    A newly added physical NIC might not have an entry in the esx.conf file after a host reboot, resulting in a virtual MAC address of 00:00:00:00:00:00 being listed for the physical NIC during communication.

    This issue is resolved in this release.

  • Error message displayed during the boot stage
    Under certain conditions while the ESXi installer reads the installation script during the boot stage, an error message similar to the following is displayed:

    VmkNicImpl::DisableInternal:: Deleting vmk0 Management Interface, so setting advlface to NULL

    This issue is resolved in this release.

  • Physical switch flooded with RARP packets when using Citrix VDI PXE boot
    When you boot a virtual machine for Citrix VDI, the physical switch is flooded with RARP packets (over 1000), which might cause network connections to drop and result in a momentary outage.

    This release provides the advanced option /Net/NetSendRARPOnPortEnablement. To resolve this issue, set /Net/NetSendRARPOnPortEnablement to 0, as shown below.
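
    A minimal sketch of setting the option from the command line, assuming shell access to the host (the option name comes from this release note; the command uses the standard advanced-settings syntax):

    # Disable sending RARP packets on port enablement
    esxcli system settings advanced set -o /Net/NetSendRARPOnPortEnablement -i 0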

  • An ESXi host might fail with purple diagnostic screen
    An ESXi host might fail with a purple diagnostic screen. This happens when DVFilter_TxCompletionCB() is called to complete a dvfilter shared-memory packet and frees the I/O completion data stored inside the packet. In some cases, this data member becomes 0, which causes a NULL pointer exception. An error message similar to the following is displayed:

    YYYY-MM-DDT04:11:05.134Z cpu24:33420)@BlueScreen: #PF Exception 14 in world 33420:vmnic4-pollW IP 0x41800147d76d addr 0x28
    PTEs:0x587e436023;0x587e437023;0x587e438023;0x0;
    YYYY-MM-DDT04:11:05.134Z cpu24:33420)Code start: 0x418000800000 VMK uptime: 23:18:59:55.570
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461bdd0:[0x41800147d76d]DVFilterShmPacket_TxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0x3d sta
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461be00:[0x41800146eaa2]DVFilterTxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0xbe stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461be70:[0x418000931688]Port_IOCompleteList@vmkernel#nover+0x40 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bef0:[0x4180009228ac]PktListIOCompleteInt@vmkernel#nover+0x158 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bf60:[0x4180009d9cf5]NetPollWorldCallback@vmkernel#nover+0xbd stack: 0x14
    YYYY-MM-DDT04:11:05.137Z cpu24:33420)0x43915461bfd0:[0x418000a149ee]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0

    This issue is resolved in this release.

Security Issues
  • Update to Likewise Kerberos
    Likewise Kerberos is updated to version 1.14.
  • Update to OpenSSL
    OpenSSL is updated to version 1.0.2j.
  • Update to PAM
    PAM is updated to version 1.3.0.
  • Update to the libPNG library
    The libPNG library is updated to libpng-1.6.26.
  • Update to the NTP package
    The ESXi NTP package is updated to version 4.2.8p9.
  • Update to the libcurl library
    The ESXi userworld libcurl library is updated to libcurl-7.51.0.
Server Configuration Issues

  • Connectivity to ESXi host is lost from vCenter Server when host profile is reapplied to a stateless ESXi host
    When a host profile with vmknic adapters in both vSphere Standard Switch and vSphere Distributed Switch is applied to an ESXi host, it might remove the vmknic adapter vmk0 (management interface) from vSphere Standard Switch which could result in the host being disconnected from vCenter Server.

    This issue is resolved in this release.

  • The hostd service might fail when taking quiesced snapshot
    The hostd service might fail when performing a quiesced snapshot operation during the replication process. An error message similar to the following appears in the hostd.log file:

    2016-06-10T22:00:08.582Z [37181B70 info 'Hbrsvc'] ReplicationGroup will retry failed quiesce attempt for VM (vmID=37)
    2016-06-10T22:00:08.583Z [37181B70 panic 'Default']
    -->
    --> Panic: Assert Failed: "0" @ bora/vim/hostd/hbrsvc/ReplicationGroup.cpp:2779

    This issue is resolved in this release.

  • ESXi 6.0 Update 1 hosts might fail with a purple diagnostic screen when collecting statistics
    ESXi hosts with a large number of physical CPUs might stop responding during statistics collection. This issue occurs when the collection process attempts to access pages that lie beyond the range initially assigned to it.

    This issue is resolved in this release.

  • ESXi patch update might fail with a warning message if the image profile size is larger than set limit
    An ESXi patch update installation might fail if the size of the target image profile is larger than 239 MB. This can happen when you upgrade the system by using an ISO, which can result in an image profile larger than 239 MB without any warning message. This prevents any additional VIBs from being installed on the system.

    This issue is resolved in this release.

  • The vmkernel.log file is spammed with multiple USB suspend and resume events
    The vmkernel.log file is spammed with multiple USB resumed and suspended events similar to the following:

    YYYY-MM-DDT

    This issue is resolved in this release.

  • Unable to see the user or group list for assigning permissions in the Permission tab
    Unable to see the users or groups list for assigning permissions in the Permission tab and authentication might fail for the trusted domain's user. The issue occurs when the DNS domain name of a machine is different from the DNS name of the AD domain.

    This issue is resolved in this release. However, after you upgrade the ESXi host to ESXi 6.0 Update 3, you must remove it from the AD domain and re-add it to the same domain.

  • ESXi host might stop responding and display a purple diagnostic screen
    When the dump file is set by using esxcfg-dumppart or other commands multiple times in parallel, an ESXi host might stop responding and display a purple diagnostic screen with entries similar to the following, as a result of a race condition while the dump block map is freed:

    @BlueScreen: PANIC bora/vmkernel/main/dlmalloc.c:4907 - Corruption in dlmalloc
    Code start: 0xnnnnnnnnnnnn VMK uptime: 234:01:32:49.087
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PanicvPanicInt@vmkernel#nover+0x37e stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic_NoSave@vmkernel#nover+0x4d stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DLM_free@vmkernel#nover+0x6c7 stack: 0x8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Heap_Free@vmkernel#nover+0xb9 stack: 0xbad000e
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Dump_SetFile@vmkernel#nover+0x155 stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SystemVsi_DumpFileSet@vmkernel#nover+0x4b stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x41f stack: 0x4fc
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@#+0x394 stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@#+0xb4 stack: 0xffb0b9c8
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0

    This issue is resolved in this release.

Storage Issues
  • Unable to remove stale Virtual Volume volumes and VMDK files using esxcli vvol abandonedvvol command
    Attempts to use the esxcli storage vvol storagecontainer abandonedvvol command to clean up a stale Virtual Volume and the VMDK files that remain on the Virtual Volumes datastore are unsuccessful.

    This issue is resolved in this release.
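
    For reference, the command takes the Virtual Volumes container as input; a hedged sketch — the scan and cleanup sub-commands and the -l (container label) option are assumptions here, so verify them with --help on your host:

    # List abandoned Virtual Volumes in a container (assumed syntax)
    esxcli storage vvol storagecontainer abandonedvvol scan -l <container_label>
    # Remove the abandoned Virtual Volumes that the scan found (assumed syntax)
    esxcli storage vvol storagecontainer abandonedvvol cleanup -l <container_label>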

  • Snapshot creation task cancellation for Virtual Volumes might result in data loss
    Attempts to cancel snapshot creation for a VM whose VMDKs are on Virtual Volumes datastores might result in virtual disks not getting rolled back properly and consequent data loss. This situation occurs when a VM has multiple VMDKs with the same name and these come from different Virtual Volumes datastores.

    This issue is resolved in this release.

  • VMDK does not roll back properly when snapshot creation fails for Virtual Volumes VMs
    When snapshot creation attempts for a Virtual Volumes VM fail, the VMDK is tied to an incorrect data Virtual Volume. The issue occurs only when the VMDK for the Virtual Volumes VM comes from multiple Virtual Volumes datastores.

    This issue is resolved in this release.

  • VM I/O operations stall or cancel when the underlying storage erroneously returns a miscompare error during periodic VMFS heartbeating.
    VMFS uses the SCSI compare-and-write command, also called ATS, for periodic heartbeating. Any miscompare error during ATS command execution is treated as a lost heartbeat and the datastore initiates a recovery action. To prevent corruption, all I/O operations on the device are canceled. When the underlying storage erroneously reports miscompare errors during VMFS heartbeating, the datastore initiates an unnecessary recovery action.

    This issue is resolved in this release.

  • ESXi 6.x hosts stop responding after running for 85 days
    When this problem occurs, the /var/log/vmkernel log file displays entries similar to the following:

    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved a PUREX IOCB woh oo
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:34255)qlnativefc: vmhba2(5:0.0): Recieved the PUREX IOCB.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): sizeof(struct rdp_rsp_payload) = 0x88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674qlnativefc: vmhba2(5:0.0): transceiver_codes[0] = 0x3
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): transceiver_codes[0,1] = 0x3, 0x40
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Stats Mailbox successful.
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)qlnativefc: vmhba2(5:0.0): Sending the Response to the RDP packet
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 53 01 00 00 00 00 00 00 00 00 04 00 01 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) c0 1d 13 00 00 00 18 00 01 fc ff 00 00 00 00 20
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 88 00 00 00 b0 d6 97 3c 01 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0 1 2 3 4 5 6 7 8 9 Ah Bh Ch Dh Eh Fh
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674)--------------------------------------------------------------
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 02 00 00 00 00 00 00 80 00 00 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 18 00 00 00 00 01 00 00 00 00 00 0c 1e 94 86 08
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 0e 81 13 ec 0e 81 00 51 00 01 00 01 00 00 00 04
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 2c 00 04 00 00 01 00 02 00 00 00 1c 00 00 00 01
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 00 00 00 40 00 00 00 00 01 00 03 00 00 00 10
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 50 01 43 80 23 18 a8 89 50 01 43 80 23 18 a8 88
    YYYY-MM-DDTHH:MM:SS.833Z cpu58:33674) 00 01 00 03 00 00 00 10 10 00 50 eb 1a da a1 8f

    This issue is caused by a qlnativefc driver bug that sends a Read Diagnostic Parameters (RDP) response to the HBA adapter with an incorrect transfer length. As a result, the HBA adapter firmware does not free the buffer pool space. Once the buffer pool is exhausted, the HBA adapter cannot process any further requests, causing the HBA adapter to become unavailable. By default, the RDP routine is initiated by the FC switch and occurs once every hour, resulting in the buffer pool being exhausted in approximately 80 to 85 days under normal circumstances.

    This issue is resolved in this release.

  • In vSphere 6.0, the HostMultipathStateInfoPath object of the Storage Policy API provides path value as Run Time Name vmhbaX:CX:TX:LX
    In ESXi 5.5, HostMultipathStateInfoPath provided path information in this format: HostWWN-ArrayWWN-LUN_ID. For example, sas.500605b0072b6550-sas.500c0ff1b10ea000-naa.600c0ff0001a20bd1887345701000000. However, in ESXi 6.0, the path value appears as vmhbaX:CX:TX:LX, which might impact users who rely on the HostMultipathStateInfoPath object to retrieve information such as HostWWN and ArrayWWN.

    This issue is resolved in this release. The HostMultipathStateInfoPath object now displays the path information as Run Time Name and HostWWN-ArrayWWN-LUN_ID.

    You can also use the esxcli storage core path list command to retrieve the path related information. This command provides the HostWWN and ArrayWWN details. For more information, see the Knowledge Base article 1003973.
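
    For example, to inspect the paths of a single device (the -d filter and the NAA identifier below are illustrative):

    esxcli storage core path list -d naa.600c0ff0001a20bd1887345701000000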

  • An ESXi host might fail with a purple diagnostic screen
    An ESXi host with vFlash configured might fail with a purple diagnostic screen and an error message similar to PSOD: @BlueScreen: #PF Exception 14 in world 500252:vmx-vcpu-0:V.

    This issue is resolved in this release.

  • ESXi host fails with a purple diagnostic screen due to path claiming conflicts
    An ESXi host displays a purple diagnostic screen when it encounters a device that is registered, but whose paths are claimed by two multipath plugins, for example, EMC PowerPath and the Native Multipathing Plugin (NMP). This type of conflict occurs when a plugin claim rule fails to claim the path and NMP claims the path by default. NMP tries to register the device, but because the device is already registered by the other plugin, a race condition occurs and triggers an ESXi host failure.

    This issue is resolved in this release.

  • File operations on large files fail as the host runs out of memory
    When you perform file operations such as mounting large files present on a datastore, these operations might fail on an ESXi host. This situation can occur when a memory leak in the buffer cache causes the ESXi host to run out of memory, for example, when a non-zero copy of data results in buffers not being freed. An error message similar to the following is displayed on the virtual machine:

    The operation on file /vmfs/volumes/5f64675f-169dc0cb/CloudSetup_20160608.iso failed. If the file resides on a remote file system, make sure that the network connection and the server where this disk resides are functioning properly. If the file resides on removable media, reattach the media. Select Retry to attempt the operation again. Select Cancel to end this session. Select Continue to forward the error to the guest operating system.

    This issue is resolved in this release.

  • Horizon View recompose operation might fail for desktop VMs residing on an NFS datastore
    A Horizon View recompose operation might fail for a few desktop VMs residing on an NFS datastore with a Stale NFS file handle error.

    This issue is resolved in this release.

Upgrade and Installation Issues
  • Upgrading ESXi with vSphere Update Manager fails if ESXi was deployed using dd image on USB and /altbootbank contains BOOT.CFG in upper case
    An ESXi dd image generated on certain versions of RHEL by using the esxiso2dd utility can contain BOOT.CFG in upper case in /altbootbank. If BOOT.CFG is in upper case, vSphere Update Manager fails to upgrade the host because the upgrade pre-checker accepts boot.cfg in lowercase only.

    This issue is resolved in this release.

  • Hostd fails when you upgrade ESXi 5.5.x hosts to ESXi 6.0.x with the ESXi 6.0 patch ESXi600-201611011 or higher
    You can observe this issue when you have installed an asynchronous HPSA driver that supports HBA mode. Although ESXi supports getting HPSA disk location information in HBA mode, problems might occur when one of the following conditions is met:

    • You installed an old hpssacli utility, version 2.10.14.0 or older.
    • You used an external array to connect the HPSA controller.

    These problems lead to hostd failures and the host becoming unreachable by vSphere Client and vCenter Server.

    This issue is resolved in this release. When you now use the esxcli command to get the disk location information, hostd does not fail. The esxcli command returns an error message similar to the following:

    # esxcli storage core device physical get -d naa.500003963c888808
    Plugin lsu-hpsa-plugin cannot get information for device with name naa.500003963c888808.
    Error was: Invalid output for physicaldrive.

  • vSphere Update Manager upgrade of ESXi booted with dd image on USB might fail when /altbootbank contains BOOT.CFG (uppercase) instead of boot.cfg (lowercase)
    An ESXi dd image generated on certain versions of RHEL by using the esxiso2dd utility contains BOOT.CFG (in uppercase) in /altbootbank. The presence of BOOT.CFG causes the vSphere Update Manager upgrade of ESXi to fail because the upgrade pre-check looks for boot.cfg in lowercase only.

    This issue is resolved in this release.

  • After upgrade to 6.0, the Image Profile name in the summary tab of the host is not updated properly
    When you use the esxcli software profile update command to apply a new Image Profile, the image profile name does not change to the new image profile name. Also when you use the ISO to perform the upgrade, the new image profile name is not marked as Updated.

    This issue is resolved in this release.

vCenter Server, vSphere Web Client, and vSphere Client Issues
  • New Issue The vSphere Web Client completes certain operations slowly
    Certain operations performed in vSphere Web Client take a long time to complete and display configuration changes. This issue might occur if storage I/O control is enabled on some or all datastores in a medium to large sized vCenter Server inventory. For more details on the issue, see KB 2146935.

    This issue is resolved in this release.

Virtual Machine Management Issues
  • vSphere Update Manager sends reboot reminders for VMware Tools when reboot already occurred after installation
    The VMware Tools installation error code indicates that a reboot is required even after the reboot has already occurred following the VMware Tools installation. The guestInfo.toolsInstallErrCode variable on the virtual machine executable (VMX) side is not cleared when VMware Tools is installed successfully and the reboot occurs. This causes vSphere Update Manager to send incorrect reminders to reboot VMware Tools.

    This issue is resolved in this release.

  • Hostd fails when ListProcesses run on guest operating system
    When a large number of processes are present in a guest operating system, the ListProcesses operation is invoked more than once and the data from VMware Tools arrives in multiple chunks. When the multiple ListProcesses calls to the guest OS (one for every chunk) are assembled together, the implementation creates a conflict: each call tries to determine when all the data has arrived and then calls an internal callback handler. Calling the handler twice results in the failure of hostd.

    This issue is resolved in this release.

  • Possible data corruption or loss when a guest OS issues SCSI unmap commands and an IO filter prevents the unmap operation
    When a VM virtual disk is configured with IO filters and the guest OS issues SCSI unmap commands, the SCSI unmap commands might succeed even when one of the configured IO filters failed the operation. As a result, the state reflected in the VMDK diverges from that of the IO filter and data corruption or loss might be visible to the guest OS.

    This issue is resolved in this release.

  • An ESXi host might fail with purple diagnostic screen
    When a DVFilter_TxCompletionCB() operation attempts to complete a dvfilter shared-memory packet, it frees the I/O completion data member stored inside the packet. In some cases, this data member becomes 0, causing a NULL pointer exception. An error message similar to the following is displayed:

    YYYY-MM-DDT04:11:05.134Z cpu24:33420)@BlueScreen: #PF Exception 14 in world 33420:vmnic4-pollW IP 0x41800147d76d addr 0x28
    PTEs:0x587e436023;0x587e437023;0x587e438023;0x0;
    YYYY-MM-DDT04:11:05.134Z cpu24:33420)Code start: 0x418000800000 VMK uptime: 23:18:59:55.570
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461bdd0:[0x41800147d76d]DVFilterShmPacket_TxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0x3d sta
    YYYY-MM-DDT04:11:05.135Z cpu24:33420)0x43915461be00:[0x41800146eaa2]DVFilterTxCompletionCB@com.vmware.vmkapi#v2_3_0_0+0xbe stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461be70:[0x418000931688]Port_IOCompleteList@vmkernel#nover+0x40 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bef0:[0x4180009228ac]PktListIOCompleteInt@vmkernel#nover+0x158 stack: 0x0
    YYYY-MM-DDT04:11:05.136Z cpu24:33420)0x43915461bf60:[0x4180009d9cf5]NetPollWorldCallback@vmkernel#nover+0xbd stack: 0x14
    YYYY-MM-DDT04:11:05.137Z cpu24:33420)0x43915461bfd0:[0x418000a149ee]CpuSched_StartWorld@vmkernel#nover+0xa2 stack: 0x0

    This issue is resolved in this release.

  • ESXi host with PCI passthru might stop responding and display a purple diagnostic screen
    When you reboot a VM with PCI Passthru multiple times, the ESXi host might stop responding and display a purple diagnostic screen with messages similar to the following in the vmware.log file:

    XXXXXXXXXXXXXXX| vcpu-0| W110: A core file is available in "/vmx-debug-zdump.000"
    XXXXXXXXXXXXXXX| vcpu-0| I120: Msg_Post: Error
    XXXXXXXXXXXXXXX| vcpu-0| I120: [msg.log.error.unrecoverable] VMware ESX
    XXXXXXXXXXXXXXX| vcpu-0| unrecoverable error: (vcpu-0)
    XXXXXXXXXXXXXXX| vcpu-0| I120+ vcpu-7:ASSERT vmcore/vmm/intr/intr.c:459

    This issue is resolved in this release.

  • The hostd service might fail during replication process
    The hostd service might fail when a quiesced snapshot operation fails during the replication process. An error message similar to the following might be written to the hostd.log file:

    YYYY-MM-DDT22:00:08.582Z [37181B70 info 'Hbrsvc'] ReplicationGroup will retry failed quiesce attempt for VM (vmID=37)
    YYYY-MM-DDT22:00:08.583Z [37181B70 panic 'Default']
    -->
    --> Panic: Assert Failed: "0" @ bora/vim/hostd/hbrsvc/ReplicationGroup.cpp:2779

    This issue is resolved in this release.

  • The hostd service might stop responding if it encounters I/O failures for a VM provisioned with an LSI virtual SCSI controller
    An ESXi host might stop responding if it encounters storage I/O failures for a VM provisioned with an LSI virtual controller and memory is overcommitted on the ESXi host.

    This issue is resolved in this release.

Virtual SAN Issues
  • New Issue A virtual machine that uses Virtual SAN storage might become slow or unresponsive and a Virtual SAN host management failure might occur as the host enters maintenance mode or a policy is reconfigured in a Virtual SAN cluster
    This issue can occur when zero data accumulates in the cache tier of the Virtual SAN disk group and processing the zero data results in log congestion. Such congestion might cause I/O slowdown for virtual machines that use Virtual SAN storage and a Virtual SAN host management failure. This issue happens only on vSAN clusters with deduplication and compression enabled.

    This issue is resolved in this release.

  • Intermittent failures in Virtual SAN cluster operations related to provisioning or resulting in new object creation
    A memory leak in the Cluster Level Object Manager Daemon (CLOMD) results in memory exhaustion over a long runtime causing the daemon to become temporarily unavailable.

    This issue is resolved in this release.

  • DOM module fails to initialize
    The Cluster Level Object Manager Daemon (CLOMD) might not be able to use Virtual SAN on an ESXi host with a large number of physical CPUs. This issue can occur if the Virtual SAN DOM module fails to initialize when joining a cluster.

    An error message similar to the following is displayed in the clomd.log file:

    2016-12-01T22:34:49.446Z 2567759 Failed to run VSI SigCheck: Failure
    2016-12-01T22:34:49.446Z 2567759 main: Clomd is starting
    2016-12-01T22:34:49.446Z 2567759 main: Is in stretched cluster mode? No
    2016-12-01T22:34:49.446Z 2567759 CLOMSetOptions: Setting forground to TRUE
    2016-12-01T22:34:49.446Z 2567759 CLOMSetOptions: No default configuration specified.
    2016-12-01T22:34:49.447Z 2567759 main: Starting CLOM trace
    2016-12-01T22:34:49.475Z 2567759 Cannot open DOM device /dev/dom: No such file or directory
    2016-12-01T22:34:49.475Z 2567759 Cannot connect to DOM: Failure
    2016-12-01T22:34:49.475Z 2567759 CLOM_CleanupRebalanceContext: Cleaning up rebalancing state
    2016-12-01T22:34:49.481Z 2567759 Failed to dump data
    2016-12-01T22:34:49.481Z 2567759
    2016-12-01T22:34:49.481Z 2567759 main: clomd exit

    This issue is resolved in this release.

  • ESXi host fails to rejoin VMware Virtual SAN cluster after a reboot
    Attempts to rejoin the VMware Virtual SAN cluster manually after a reboot might fail with the following error:

    Failed to join the host in VSAN cluster (Failed to start vsantraced (return code 2)

    This issue is resolved in this release.
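
    On releases without the fix, the manual rejoin is typically attempted from the command line; a hedged sketch, assuming the sub-cluster UUID option (-u) of the join command — verify the option names with esxcli vsan cluster join --help:

    # Show the current Virtual SAN cluster membership and sub-cluster UUID
    esxcli vsan cluster get
    # Rejoin the host to the cluster by UUID (assumed option)
    esxcli vsan cluster join -u <sub_cluster_UUID>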

  • Constant calling of VSAN API might result in display of a misleading task message
    In an environment with vCenter Server 6.0 Update 2 and Virtual SAN 6.2, calling the VSAN API constantly results in creation of tasks for registering a ticket to Virtual SAN VASA provider and a message similar to the following is displayed:

    Retrieve a ticket to register the Virtual SAN VASA Provider

    This issue is resolved in this release.

  • Virtual SAN Disk Rebalance task halts at 5% for more than 24 hours
    The Virtual SAN Health Service reports Virtual SAN Disk Balance warnings in the vSphere Web Client. When you click Rebalance disks, the task appears to halt at 5% for more than 24 hours.

    This issue is resolved in this release and the Rebalance disks task is shown as completed after 24 hours.

  • ESXi host might stop responding and display a purple diagnostic screen
    An ESXi host might stop responding and display a purple diagnostic screen with messages similar to the following:

    YYYY-MM-DDT22:59:29.686Z cpu40:84493)@BlueScreen: #PF Exception 14 in world 84493:python IP 0xnnnnnnnnnnnn addr 0xfffffffffffffff0 PTEs:0x0;
    YYYY-MM-DDT22:59:29.686Z cpu40:84493)Code start: 0xnnnnnnnnnnnn VMK uptime: 7:15:08:48.373
    YYYY-MM-DDT22:59:29.686Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DOMClient_IsTopObject@com.vmware.vsan#0.0.0.1+0x18 stack: 0xnnnnnnnn
    YYYY-MM-DDT22:59:29.687Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DOMListUuidTopHierarchyCbk@com.vmware.vsan#0.0.0.1+0x69 stack: 0x900
    YYYY-MM-DDT22:59:29.687Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSANUUIDTable_Iterate@com.vmware.vsanutil#0.0.0.1+0x4b stack: 0x139d
    YYYY-MM-DDT22:59:29.687Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]DOMVsi_ListTopClients@com.vmware.vsan#0.0.0.1+0x5a stack: 0x66
    YYYY-MM-DDT22:59:29.688Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_GetListInfo@vmkernel#nover+0x354 stack: 0xnnnnnnnnnnnn
    YYYY-MM-DDT22:59:29.688Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_GetList@#+0x216 stack: 0xnnnnnnnnn
    YYYY-MM-DDT22:59:29.688Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@#+0xb4 stack: 0xnnnnnnn
    YYYY-MM-DDT22:59:29.689Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x0
    YYYY-MM-DDT22:59:29.689Z cpu40:84493)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry_@vmkernel#nover+0x0 stack: 0x0

    This issue is resolved in this release.

  • ESXi hosts might fail with a purple diagnostic screen
    ESXi hosts in a Virtual SAN Cluster might fail with a purple diagnostic screen when a Virtual SAN resync operation is paused.

    This issue is resolved in this release.

VMware HA and Fault Tolerance Configuration Issues
  • ESXi host might fail when enabling fault tolerance on a VM
    An ESXi host might fail with a purple diagnostic screen when a Fault Tolerance Secondary VM fails to power on.

    This issue is resolved in this release.

  • vSphere Guest Application Monitoring SDK fails for VMs with vSphere Fault Tolerance enabled
    When vSphere FT is enabled on an vSphere HA-protected VM where the vSphere Guest Application Monitor is installed, the vSphere Guest Application Monitoring SDK might fail.

    This issue is resolved in this release.

  • Increased latency when SMP Fault Tolerance is enabled on a VM
    When symmetric multiprocessor (SMP) Fault Tolerance is enabled on a VM, the VM network latency might go up significantly in both average and variations. The increased latency might result in significant performance degradation or instability for VM workloads that are sensitive to such latency increases.

    This release significantly reduces the increase in the VM network latency when Fault Tolerance is enabled.

Known Issues

The known issues existing in ESXi 6.0 are grouped as follows:

New known issues documented in this release are highlighted as New Issue.

Installation Issues
  • DNS suffix might persist even after you change the default configuration in DCUI
    An ESXi host might automatically be configured with the default DNS + DNS suffix on first boot if it is deployed on a network served by a DHCP server. When you attempt to change the DNS suffix, the DCUI does not remove the existing DNS suffix; it just adds the new suffix that you provide.

    Workaround: When configuring DNS hostname of the witness OVF, set the FULL FQDN name in the DNS Hostname field to append the correct DNS suffix. You can then remove unwanted DNS suffixes in the Custom DNS Suffix field.
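
    Alternatively, the configured search suffixes can be inspected and trimmed from the command line; a minimal sketch, assuming shell access to the host and the esxcli network ip dns search namespace:

    # List the configured DNS search suffixes
    esxcli network ip dns search list
    # Remove an unwanted suffix (replace with the actual suffix)
    esxcli network ip dns search remove -d <unwanted_suffix>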

  • The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after installing the latest VMware Tools package. The issue occurs if your glibc version is older than 2.5, as on SLES 10 SP4.

    Workaround: Upgrade the Linux glibc to version 2.5 or above.

Upgrade Issues

Review also the Installation Issues section of the release notes. Many installation issues can also impact your upgrade process.

  • Attempts to upgrade from ESXi 6.x to 6.0 Update 2 and above with the esxcli software vib update command fail
    Attempts to upgrade from ESXi 6.x to 6.0 Update 2 and above with the esxcli software vib update command fail with error messages similar to the following:

    [DependencyError]
    VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan << 6.0.0-2.35, but the requirement cannot be satisfied within the ImageProfile.
    VIB VMware_bootbank_esx-base_6.0.0-2.34.xxxxxxx requires vsan >= 6.0.0-2.34, but the requirement cannot be satisfied within the ImageProfile.


    The issue occurs because of the introduction of a new Virtual SAN VIB that is interdependent with the esx-base VIB, while the esxcli software vib update command only updates the VIBs already installed on the system.

    Workaround: To resolve this issue, run the esxcli software profile update command as shown in the following example:

    esxcli software profile update -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update02.zip -p ESXi-6.0.0-20160302001-standard
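
    After the update completes and the host reboots, you can confirm which image profile is active; a minimal check:

    esxcli software profile get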

  • SSLv3 remains enabled on Auto Deploy after upgrade from earlier release of vSphere 6.0 to vSphere 6.0 Update 1 and above
    When you upgrade from an earlier release of vSphere 6.0 to vSphere 6.0 Update 1 and above, the SSLv3 protocol remains enabled on Auto Deploy.

    Workaround: Perform the following steps to disable SSLv3 by using PowerCLI commands:

    1. Run the following command to Connect to vCenter Server:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Connect-VIServer -Server <FQDN_hostname or IP Address of vCenter Server>

    2. Run the following command to check the current sslv3 status:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-DeployOption

    3. Run the following command to disable sslv3:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Set-DeployOption disable-sslv3 1

    4. Restart the Auto Deploy service to update the change.

  • Fibre Channel host bus adapter device number might change after ESXi upgrade from 5.5.x to 6.0

    During an ESXi upgrade from 5.5.x to 6.0, the Fibre Channel host bus adapter device number sometimes changes. The device number reported by the esxcli storage core adapter list command might change to another number.

    For example, the device numbers for a Fibre Channel host bus adapter might look similar to the following before ESXi upgrade:

    HBA Name
    ––––––––
    vmhba3
    vmhba4
    vmhba5
    vmhba6

    The device numbers for the Fibre Channel host bus adapter might look similar to the following after an ESXi upgrade to 6.0:

    HBA Name
    ––––––––
    vmhba64
    vmhba65
    vmhba5
    vmhba6

    The example illustrates the random change that might occur if you use the esxcli storage core adapter list command: the device alias numbers vmhba3 and vmhba4 change to vmhba64 and vmhba65, while device numbers vmhba5 and vmhba6 are not changed. However, if you use the esxcli hardware pci list command, the device numbers do not change after the upgrade.

    This problem is external to VMware and may not affect you. ESXi displays device alias names but it does not use them for any operations. You can use the host profile to reset the device alias name. Consult VMware product documentation and knowledge base articles.
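
    To compare the aliases before and after the upgrade, you can capture the output of both commands on the host; a minimal sketch, assuming shell access:

    # Lists HBAs by their vmhba alias, which can change across the upgrade
    esxcli storage core adapter list
    # Lists devices by PCI address, which remains stable across the upgrade
    esxcli hardware pci list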

    Workaround: None.

  • Active Directory settings are not retained post-upgrade
    The Active Directory settings configured in the ESXi host before upgrade are not retained when the host is upgraded to ESXi 6.0.

    Workaround: Add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is 5.1 or later. Do not add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is ESXi 5.0.x.

  • After ESXi upgrade to 6.0 hosts that were previously added to the domain are no longer joined to the domain
    When upgrading from vSphere 5.5 to vSphere 6.0 for the first time, the Active Directory configuration is not retained.

    Workaround: After upgrade, rejoin the hosts to the vCenter Server domain:

    1. Add the hosts to vCenter Server.

    2. Join the hosts to domain (for example, example.com)

    3. Upgrade all the hosts to ESXi 6.0.

    4. Manually join one recently upgraded host to domain.

    5. Extract the host profile and disable all other profiles except Authentication.

    6. Apply the manually joined host profile to the other recently upgraded hosts.

  • Previously running VMware ESXi Dump Collector service resets to default Disabled setting after upgrade of vCenter Server for Windows
    The upgrade process installs VMware vSphere ESXi Dump Collector 6.0 as part of a group of optional services for vCenter Server. You must manually enable the VMware vSphere ESXi Dump Collector service to use it as part of vCenter Server 6.0 for Windows.

    Workaround: Read the VMware documentation or search the VMware Knowledge Base for information on how to enable and run optional services in vCenter Server 6.0 for Windows.

    Enable the VMware vSphere ESXi Dump Collector service in the operating system:

    1. In the Control Panel menu, select Administrative Tools and double-click on Services.

    2. Right click VMware vSphere ESXi Dump Collector and Edit Startup Type.

    3. Set the Start-up Type to Automatic.

    4. Right Click VMware vSphere ESXi Dump Collector and Start.

    The Service Start-up Type is set to automatic and the service is in a running state.

vCenter Single Sign-On and Certificate Management Issues
  • Cannot connect to VM console after SSL certificate upgrade of ESXi host
    A certificate validation error might result if you upgrade the SSL certificate that is used by an ESXi host, and you then attempt to connect to the VM console of any VM running when the certificate was replaced. This is because the old certificate is cached, and any new console connection is rejected due to the mismatch.
    The console connection might still succeed, for example, if the old certificate can be validated through other means, but is not guaranteed to succeed. Existing virtual machine console connections are not affected, but you might see the problem if the console was running during the certificate replacement, was stopped, and was restarted.

    Workaround: Place the host in maintenance mode or suspend or power off all VMs. Only running VMs are affected. As a best practice, perform all SSL certificate upgrades after placing the host in maintenance mode.

Networking Issues

  • Certain vSphere functionality does not support IPv6
    You can enable IPv6 for all nodes and components except for the following features:

    • IPv6 addresses for ESXi hosts and vCenter Server that are not mapped to fully qualified domain names (FQDNs) on the DNS server.
      Workaround: Use FQDNs or make sure the IPv6 addresses are mapped to FQDNs on the DNS servers for reverse name lookup.

    • Virtual volumes

    • PXE booting as a part of Auto Deploy and Host Profiles
      Workaround: PXE boot an ESXi host over IPv4 and configure the host for IPv6 by using Host Profiles.

    • Connection of ESXi hosts and the vCenter Server Appliance to Active Directory
      Workaround: Use Active Directory over LDAP as an identity source in vCenter Single Sign-On.

    • NFS 4.1 storage with Kerberos
      Workaround: Use NFS 4.1 with AUTH_SYS.

    • Authentication Proxy

    • Connection of the vSphere Management Assistant and vSphere Command-Line Interface to Active Directory.
      Workaround: Connect to Active Directory over LDAP.

    • Use of the vSphere Client to enable IPv6 on vSphere features
      Workaround: Use the vSphere Web Client to enable IPv6 for vSphere features.

  • Recursive panic might occur when using ESXi Dump Collector
    A recursive kernel panic might occur when the host is in a panic state while it displays the purple diagnostic screen and writes the core dump over the network to the ESXi Dump Collector. A VMkernel zdump file might not be available for troubleshooting on the ESXi Dump Collector in vCenter Server.

    In the case of a recursive kernel panic, the purple diagnostic screen on the host displays the following message:
    2014-09-06T01:59:13.972Z cpu6:38776)Starting network coredump from host_ip_address to esxi_dump_collector_ip_address.
    2014-09-06T01:59:13.980Z cpu6:38776)WARNING: Net: 1677: Check what type of stack we are running on
    Recursive panic on same CPU (cpu 6, world 38776, depth 1): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Secondary panic trap frame registers:
    RAX:0x0002000001230121 RCX:0x000043917bc1af80 RDX:0x00004180009d5fb8 RBX:0x000043917bc1aef0
    RSP:0x000043917bc1aee8 RBP:0x000043917bc1af70 RSI:0x0002000001230119 RDI:0x0002000001230121
    R8: 0x0000000000000038 R9: 0x0000000000000040 R10:0x0000000000010000 R11:0x0000000000000000
    R12:0x00004304f36b0260 R13:0x00004304f36add28 R14:0x000043917bc1af20 R15:0x000043917bc1afd0
    CS: 0x4010 SS: 0x0000 FS: 0x4018 GS: 0x4018 IP: 0x0000418000f0eeec RFG:0x0000000000010006
    2014-09-06T01:59:14.047Z cpu6:38776)Backtrace for current CPU #6, worldID=38776, rbp=0x43917bc1af70
    2014-09-06T01:59:14.056Z cpu6:38776)0x43917bc1aee8:[0x418000f0eeec]do_free_skb@com.vmware.driverAPI#9.2+0x4 stack: 0x0, 0x43a18b4a5880,
    2014-09-06T01:59:14.068Z cpu6:38776)Recursive panic on same CPU (cpu 6, world 38776): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Halt$Si0n5g# PbC8PU 7.

    Recursive kernel panic might occur when the VMkernel panics while heavy traffic is passing through the physical network adapter that is also configured to send the core dumps to the collector on vCenter Server.

    Workaround: Perform either of the following workarounds:

    • Dedicate a physical network adapter to core dump transmission only to reduce the impact from system and virtual machine traffic.

    • Disable the ESXi Dump Collector on the host by running the following ESXCLI console command:
      esxcli system coredump network set --enable false
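
    If you need to confirm the current state of the network core dump configuration, or re-enable it after the underlying issue is addressed, you can use the following ESXCLI commands (shown as an illustrative sketch; the interface and collector settings previously configured on the host are reused):

      esxcli system coredump network get
      esxcli system coredump network set --enable true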

Storage Issues

NFS Version 4.1 Issues

  • Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share recovers from an all paths down (APD) state
    When the NFS 4.1 storage enters an APD state and then exits it after a grace period, powered on virtual machines that run on the NFS 4.1 datastore fail. The grace period depends on the array vendor.
    After the NFS 4.1 share recovers from APD, you see the following message on the virtual machine summary page in the vSphere Web Client:
    The lock protecting VM.vmdk has been lost, possibly due to underlying storage issues. If this virtual machine is configured to be highly available, ensure that the virtual machine is running on some other host before clicking OK.
    After you click OK, crash files are generated and the virtual machine powers off.

    Workaround: None.

  • NFS 4.1 client loses synchronization with server when trying to create new sessions
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server when trying to create new sessions. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that an NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED.

    Workaround: Perform the following sequence of steps.

    1. Attempt to unmount the affected file systems. If no files are open when you unmount, this operation succeeds and the NFS client module cleans up its internal state. You can then remount the file systems that were unmounted and resume normal operation (see the example commands after these steps).

    2. Take down the NICs connecting to the mounts' IP addresses and leave them down long enough for several server lease times to expire. Five minutes should be sufficient. You can then bring the NICs back up. Normal operation should resume.

    3. If the preceding steps fail, reboot the ESXi host.
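
    For step 1, unmounting and remounting an NFS 4.1 datastore can be done from the ESXi Shell, for example as follows (the volume label, server address, and share path are placeholders, not values from this article):

      esxcli storage nfs41 list
      esxcli storage nfs41 remove -v nfs41_datastore
      esxcli storage nfs41 add -H 192.0.2.10 -s /export/vol1 -v nfs41_datastore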

  • NFS 4.1 client loses synchronization with an NFS server and connection cannot be recovered even when session is reset
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server, and the synchronized connection with the server cannot be recovered even if the session is reset. This problem is caused by an EMC VNX server issue. When this occurs, the vmkernel.log file contains a throttled series of warning messages similar to the following: NFS41: NFS41ProcessSessionUp:2111: resetting session with mismatched clientID; probable server bug.

    Workaround: To end the session, unmount all datastores and then remount them.

  • ONTAP Kerberos volumes become inaccessible or experience VM I/O failures
    A NetApp server does not respond when it receives RPCSEC_GSS requests that arrive out of sequence. As a result, the corresponding I/O operation stalls unless it is terminated, and the guest OS can stall or encounter I/O errors. Additionally, according to RFC 2203, the client can have only seq_window outstanding requests (32 in the case of ONTAP) per RPCSEC_GSS context and must wait until the lowest of these outstanding requests is completed by the server. Because the server never replies to the out-of-sequence RPCSEC_GSS request, the client stops sending requests to the server after it reaches the seq_window maximum of outstanding requests. This causes the volume to become inaccessible.

    Workaround: None. Check the latest Hardware Compatibility List (HCL) to find a supported ONTAP server that has resolved this problem.

  • You cannot create a larger than 1 TB virtual disk on NFS 4.1 datastore from EMC VNX
    NFS version 4.1 storage from EMC VNX with firmware version 7.x supports only 32-bit file formats. This prevents you from creating virtual machine files that are larger than 1 TB on the NFS 4.1 datastore.

    Workaround: Update the EMC VNX array to version 8.x.

  • NFS 4.1 datastores backed by EMC VNX storage become inaccessible during firmware upgrades
    When you upgrade EMC VNX storage to a new firmware, NFS 4.1 datastores mounted on the ESXi host become inaccessible. This occurs because the VNX server changes its major device number after the firmware upgrade. The NFS 4.1 client on the host does not expect the major number to change after it has established connectivity with the server, which causes the datastores to become permanently inaccessible.

    Workaround: Unmount all NFS 4.1 datastores exported by the VNX server before upgrading the firmware.

  • When ESXi hosts use different security mechanisms to mount the same NFS 4.1 datastore, virtual machine failures might occur
    If different ESXi hosts mount the same NFS 4.1 datastore using different security mechanisms, AUTH_SYS and Kerberos, virtual machines placed on this datastore might experience problems and failure. For example, your attempts to migrate the virtual machines from host1 to host2 might fail with permission denied errors. You might also observe these errors when you attempt to access a host1 virtual machine from host2.

    Workaround: Make sure that all hosts that mount an NFS 4.1 volume use the same security type.

  • Attempts to copy read-only files to NFS 4.1 datastore with Kerberos fail
    The failure might occur when you attempt to copy data from a source file to a target file. The target file remains empty.

    Workaround: None.

  • When you create a datastore cluster, uniformity of NFS 4.1 security types is not guaranteed
    While creating a datastore cluster, vSphere does not verify and enforce the uniformity of NFS 4.1 security types. As a result, datastores that use different security types, AUTH_SYS and Kerberos, might be a part of the same cluster. If you migrate a virtual machine from a datastore with Kerberos to a datastore with AUTH_SYS, the security level for the virtual machine becomes lower.
    This issue applies to such functionalities as vMotion, Storage vMotion, DRS, and Storage DRS.

    Workaround: If Kerberos security is required for your virtual machines, make sure that all NFS 4.1 volumes that compose the same cluster use only the Kerberos security type. Do not include NFS 3 datastores, because NFS 3 supports only AUTH_SYS.

Virtual SAN Issues

  • New Issue Virtual SAN Health UI does not show because of a timeout
    When you access the Virtual SAN Health UI under Virtual SAN cluster > Monitor > Virtual SAN > Health, the UI does not show. A possible cause is that the vSphere ESX Agent Manager hangs, resulting in a timeout. To confirm, open the Virtual SAN health log located at /var/log/vmware/vsan-health/vmware-vsan-health-service.log and search for calls to the vSphere ESX Agent Manager service by using the string VsanEamUtil.getClusterStatus: (see the example after the workaround).

    Workaround: Restart the vSphere ESX Agent Manager service by using the vSphere Web Client and refresh the Virtual SAN health UI.
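
    For example, the log search described above can be done with a simple grep on the vCenter Server Appliance:

      grep VsanEamUtil.getClusterStatus: /var/log/vmware/vsan-health/vmware-vsan-health-service.log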

  • New Issue The limit health check of a two or three-node Virtual SAN cluster shows red
    A health check of a two or three-node Virtual SAN cluster under Virtual SAN cluster > Monitor > Virtual SAN > Health > Limit Health > after 1 additional host failure shows red and triggers a false vCenter Server event or alarm when the disk space usage of the cluster exceeds 50%.

    Workaround: Add one or more hosts to the Virtual SAN cluster or add more disks to decrease the disk space usage of the cluster under 50%.

  • New Issue Virtual SAN disk serviceability does not work when you use third-party lsi_msgpt3 drivers
    The plugin for Virtual SAN disk serviceability, lsu-lsi-lsi-msgpt3-plugin, supports the operation to get the device location and turn on or off the disk LED. The VMware lsi_msgpt3 inbox driver supports the serviceability plugin. However, if you use a third-party asynchronous driver, the plugin does not work.

    Workaround: Use the VMware inbox lsi_msgpt3 driver version 06.255.10.00-2vmw or later.
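
    To check which lsi_msgpt3 driver version is installed on a host, you can, for example, query the installed VIBs from the ESXi Shell (an illustrative check, not a step from this article):

      esxcli software vib list | grep -i msgpt3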

Virtual Volumes Issues

  • Failure to create virtual datastores due to incorrect certificate used by Virtual Volumes VASA provider
    Occasionally, a self-signed certificate used by the Virtual Volumes VASA provider might incorrectly define the KeyUsage extension as critical without setting the keyCertSign bit. In this case, the provider registration succeeds. However, you are not able to create a virtual datastore from storage containers reported by the VASA provider.

    Workaround: Make sure that the self-signed certificate used by the VASA provider at the time of provider registration does not define the KeyUsage extension as critical without setting the keyCertSign bit.
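
    One way to inspect the KeyUsage extension of the certificate before registering the provider is with openssl (a hedged example; the certificate file name is a placeholder):

      openssl x509 -in vasa-provider-cert.pem -noout -text | grep -A1 "X509v3 Key Usage"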

General Storage Issues

  • ESXi 6.0 Update 2 hosts connected to certain storage arrays with a particular version of the firmware might see I/O timeouts and subsequent aborts
    When ESXi 6.0 Update 2 hosts connected to certain storage arrays with a particular firmware version send requests for SMART data to the storage array, and the array responds with a PDL error, the PDL response behavior in 6.0 Update 2 might result in a condition where these failed commands are continuously retried, blocking other commands. This error results in widespread I/O timeouts and subsequent aborts.

    Also, the ESXi hosts might take a long time to reconnect to the vCenter Server after reboot or the hosts might go into a Not Responding state in the vCenter Server. Storage-related tasks such as HBA rescan might take a very long time to complete.

    Workaround: To resolve this issue, see Knowledge Base article 2133286.

  • vSphere Web Client incorrectly displays Storage Policy as attached when new VM is created from an existing disk
    When you use the vSphere Web Client to create a new VM from an existing disk and specify a storage policy when setting up the disk, the filter appears to be attached when you select the new VM and click VM policies > Edit VM storage policies. However, the filter is not actually attached. You can check the .vmdk file or run vmkfstools --iofilterslist <vmdk-file> to verify whether the filter is attached (see the example below).

    Workaround: After you create the new VM, but before you power it on, add the filter to the .vmdk file by clicking VM policies > Edit VM storage policies.
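
    For example, to check whether an I/O filter is attached to a disk (the datastore and file paths are hypothetical):

      vmkfstools --iofilterslist /vmfs/volumes/datastore1/myVM/myVM.vmdk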

  • NFS Lookup operation returns NFS STALE errors
    When you deploy a large number of VMs in the NFS datastore, the VM deployment fails with an error message similar to the following due to a race condition:

    Stale NFS file handle

    Workaround: Restart the Lookup operation. See Knowledge Base article 2130593 for details.

  • Attempts to create a VMFS datastore on Dell EqualLogic LUNs fail when QLogic iSCSI adapters are used
    You cannot create a VMFS datastore on a Dell EqualLogic storage device that is discovered through QLogic iSCSI adapters.
    When your attempts fail, the following error message appears on vCenter Server: Unable to create Filesystem, please see VMkernel log for more details: Connection timed out. The VMkernel log contains continuous iscsi session blocked and iscsi session unblocked messages. On the Dell EqualLogic storage array, monitoring logs show a protocol error in packet received from the initiator message for the QLogic initiator IQN names.

    This issue is observed when you use the following components:

    • Dell EqualLogic array firmware: V6.0.7

    • QLogic iSCSI adapter firmware version: 3.00.01.75

    • Driver version: 5.01.03.2-7vmw-debug

    Workaround: Enable the iSCSI ImmediateData adapter parameter on QLogic iSCSI adapter. By default, the parameter is turned off. You cannot change this parameter from the vSphere Web Client or by using esxcli commands. To change this parameter, use the vendor provided software, such as QConvergeConsole CLI.

  • ESXi host with Emulex OneConnect HBA fails to boot
    When an ESXi host has the Emulex OneConnect HBA installed, the host might fail to boot. This failure occurs due to a problem with the Emulex firmware.

    Workaround: To correct this problem, contact Emulex to get the latest firmware for your HBA.

    If you continue to use the old firmware, follow these steps to avoid the boot failure:

    1. When ESXi is loading, press Shift+O before booting the ESXi kernel.

    2. Leave the existing boot option as is, and add a space followed by dmaMapperPolicy=false.

  • Flash Read Cache does not accelerate I/Os during APD
    When the flash disk configured as a virtual flash resource for Flash Read Cache is faulty or inaccessible, or the disk storage is unreachable from the host, the Flash Read Cache instances on that host are invalid and do not accelerate I/Os. As a result, the caches do not serve stale data after connectivity is re-established between the host and storage. The connectivity outage might be temporary, an all paths down (APD) condition, or permanent, a permanent device loss (PDL). This condition persists until the virtual machine is power-cycled.

    Workaround: Power-cycle the virtual machine to restore I/O acceleration using Flash Read Cache.

  • All Paths Down (APD) or path-failovers might cause system failure
    In a shared SAS environment, APD or path-failover situations might cause system failure if the disks are claimed by the lsi_msgpt3 driver and they are experiencing heavy I/O activity.

    Workaround: None.

  • Frequent use of SCSI abort commands can cause system failure
    With heavy I/O activity, frequent SCSI abort commands can cause a very slow response from the MegaRAID controller. If an unexpected interrupt occurs with resource references that were already released in a previous context, system failure might result.

    Workaround: None.

  • iSCSI connections fail and datastores become inaccessible when IQN changes
    This problem might occur if you change the IQN of an iSCSI adapter while iSCSI sessions on the adapter are still active.

    Workaround: When you change the IQN of an iSCSI adapter, no session should be active on that adapter. Remove all iSCSI sessions and all targets on the adapter before changing the IQN.
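
    For example, the sessions on an iSCSI adapter can be listed and removed, and the IQN then changed, with ESXCLI (a sketch only; the adapter name and IQN are placeholders, and target cleanup depends on your configuration):

      esxcli iscsi session list -A vmhba37
      esxcli iscsi session remove -A vmhba37
      esxcli iscsi adapter set -A vmhba37 -n iqn.1998-01.com.vmware:new-host-name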

  • nvmecli online and offline operations might not always take effect
    When you perform the nvmecli device online -A vmhba* operation to bring an NVMe device online, the operation appears to be successful. However, the device might still remain in the offline state.

    Workaround: Check the status of NVMe devices by running the nvmecli device list command.
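
    For example (the adapter name is a placeholder), check the device state and, if it is still offline, retry the online operation:

      nvmecli device list
      nvmecli device online -A vmhba2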

Server Configuration Issues
  • Remediation fails when applying a host profile from a stateful host to a host provisioned with Auto Deploy
    When applying a host profile from a statefully deployed host to a host provisioned with Auto Deploy (stateless host) with no local storage, the remediation attempt fails with one of the following error messages:

    • The vmhba device at PCI bus address sxxxxxxxx.xx is not present on your host. You must shut down and then insert a card into PCI slot yy. The type of card should exactly match the one in the reference host.

    • No valid coredump partition found.

    Workaround: Disable the plug-in that is causing the issue (for example, the Device Alias Configuration or Core Dump Configuration) from the host profile, and then remediate the host profile.

  • Applying host profile with static IP to a host results in compliance error
    If you extract a host profile from a host with a DHCP network configuration, and then edit the host profile to have a static IP address, a compliance error occurs with the following message when you apply it to another host:

    Number of IPv4 routes did not match.

    Workaround: Before extracting the host profile from the DHCP host, configure the host so that it has a static IP address.

  • When you hot-add a virtual network adapter that has network resources overcommitted, the virtual machine might be powered off
    On a vSphere Distributed Switch that has Network I/O Control enabled, a powered-on virtual machine is configured with a bandwidth reservation according to the reservation for virtual machine system traffic on the physical network adapter on the host. You hot-add a network adapter to the virtual machine and set a network bandwidth reservation that exceeds the bandwidth available on the physical network adapters on the host.

    When you hot-add the network adapter, the VMkernel starts a Fast Suspend and Resume (FSR) process. Because the virtual machine requests more network resources than available, the VMkernel exercises the failure path of the FSR process. A fault in this failure path causes the virtual machine to power off.

    Workaround: Do not configure bandwidth reservation when you add a network adapter to a powered on virtual machine.

VMware HA and Fault Tolerance Issues
  • Legacy Fault Tolerance (FT) not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms
    Legacy FT is not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Legacy Fault Tolerance.

    Workaround: None.

Guest Operating System Issues
  • Attempts to enable passthrough mode on NVMe PCIe SSD devices might fail after hot plug
    To enable passthrough mode on an SSD device from the vSphere Web Client, you select a host, click the Manage tab, click Settings, navigate to the Hardware section, click PCI Devices > Edit, select a device from a list of active devices that can be enabled for passthrough, and click OK. However, when you hot plug a new NVMe device to an ESXi 6.0 host that does not have a PCIe NVMe drive, the new NVMe PCIe SSD device cannot be enabled for passthrough mode and does not appear in the list of available passthrough devices.

    Workaround: Restart your host. Alternatively, run the following command on your ESXi host:

    1. Log in as a root user.

    2. Run the command
      /etc/init.d/hostd start

Supported Hardware Issues
  • When you run esxcli to get the disk location, the result is not correct for Avago controllers on HP servers
    When you run esxcli storage core device physical get against an Avago controller on an HP server, the result is not correct.

    For example, if you run:
    esxcli storage core device physical get -d naa.5000c5004d1a0e76
    The system returns:
    Physical Location: enclosure 0, slot 0

    The actual label of that slot on the physical server is 1.

    Workaround: Check the slot on your HP server carefully. Because the slot numbers on the HP server start at 1, increase the slot number that the command returns by 1 to get the correct result.

CIM and API Issues
  • The sfcb-vmware_raw provider might fail
    The sfcb-vmware_raw provider might fail because the default maximum memory allocated to the plugin resource group is not enough.

    Workaround: Add the UserVars.CIMOemPluginsRPMemMax advanced option to set the memory limit for sfcbd plugins by running the following command, and then restart sfcbd for the new value to take effect:

    esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default XXX --add-type int --add-min 175 --add-max 500

    Here, XXX is the memory limit that you want to allocate. The value must be within the minimum (175) and maximum (500) values.
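
    For example, to allocate a limit of 300 (an illustrative value within the allowed range) and restart sfcbd afterward:

    esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default 300 --add-type int --add-min 175 --add-max 500
    /etc/init.d/sfcbd-watchdog restart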