VMware ESXi 6.0 Update 1b Release Notes

Updated on: 7 JAN 2016

ESXi 6.0 Update 1b | 7 JAN 2016 | ISO Build 3380124

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 6.0
  • Internationalization
  • Compatibility
  • Installation and Upgrades for This Release
  • Product Support Notices
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

  • ESXi 6.0 Update 1b enables support for TLS versions 1.1 and 1.2 for most vSphere components without breaking previously supported compatibility or interoperability. The following vSphere components still support only TLS version 1.0:
    • vSphere Client
    • Virtual SAN Observer on vCenter Server Appliance (vCSA)
    • Syslog on vCSA
    • Auto Deploy on vCSA
    • Auto Deploy/iPXE

    ESXi 6.0 Update 1b now supports TLS versions 1.0, 1.1, and 1.2, with the exceptions listed above. See Knowledge Base article 2136185 for the list of supported TLS protocols.

  • Support for the Advanced Encryption Standard (AES) with 128/256-bit key length is added for RPC header authentication in the NFS 4.1 Client.
    Note: See the Security Issues entries in the Resolved Issues section for more information.

  • This release of ESXi 6.0 Update 1b addresses issues that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 6.0

Features and known issues of ESXi 6.0 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.0 are available on the VMware website.

Internationalization

VMware ESXi 6.0 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of VMware vSphere 6.0, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client is packaged with the vCenter Server. You can install the vSphere Client from the VMware vCenter autorun menu that is part of the modules ISO file.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 6.0. During the upgrade process, the device driver is installed on the ESXi 6.0 host. The device driver might still function on ESXi 6.0, but the device is not supported on ESXi 6.0. For a list of devices that are deprecated and no longer supported on ESXi 6.0, see KB 2087970.

Third-Party Switch Compatibility for ESXi

VMware now supports Cisco Nexus 1000V with vSphere 6.0. vSphere requires a minimum NX-OS release of 5.2(1)SV3(1.4). For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Nexus 1000V AVS mode is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.0. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.0, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

Installation and Upgrades for This Release

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

vSphere 6.0 Recommended Deployment Models

VMware recommends only two deployment models:

  • vCenter Server with embedded Platform Services Controller. This model is recommended if one or more standalone vCenter Server instances must be deployed in a data center. Replication between vCenter Server instances with embedded Platform Services Controllers is not recommended.

  • vCenter Server with external Platform Services Controller. This model is recommended only if multiple vCenter Server instances must be linked, or if you want to reduce the Platform Services Controller footprint in the data center. Replication between vCenter Server instances with external Platform Services Controllers is supported.

Read the vSphere Installation and Setup documentation for guidance on installing and configuring vCenter Server.

Read the Update sequence for vSphere 6.0 and its compatible VMware products for the proper sequence in which vSphere components should be updated.

Also, read KB 2108548 for guidance on installing and configuring vCenter Server.

vCenter Host OS Information

Read the Knowledge Base article KB 2091273.

Backup and Restore for vCenter Server and the vCenter Server Appliance Deployments that Use an External Platform Services Controller

Although statements in the vSphere Installation and Setup documentation restrict you from attempting to back up and restore vCenter Server and vCenter Server Appliance deployments that use an external Platform Services Controller, you can perform this task by following the steps in KB 2110294.

Migration from Embedded Platform Services Controller to External Platform Services Controller

vCenter Server with embedded Platform Services Controller cannot be migrated automatically to vCenter Server with external Platform Services Controller. Testing of this migration utility is not complete.

Before installing vCenter Server, determine your desired deployment option. If more than one vCenter Server instance is required for a replication setup, always deploy vCenter Server with an external Platform Services Controller.

Migrating Third-Party Solutions

For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 6.0 supports only processors available after June (third quarter) 2006. Compared with the processors supported by vSphere 5.x, vSphere 6.0 no longer supports the following processors:

  • AMD Opteron 12xx Series
  • AMD Opteron 22xx Series
  • AMD Opteron 82xx Series

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 6.0. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and the vSphere 6.0 installation process stops.

Upgrade Notes for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Open Source Components for VMware vSphere 6.0

The copyright statements and licenses applicable to the open source software components distributed in vSphere 6.0 are available at http://www.vmware.com. You need to log in to your My VMware account. Then, from the Downloads menu, select vSphere. On the Open Source tab, you can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vCenter Server database. Oracle 11g and 12c as an external database for vCenter Server Appliance have been deprecated in the vSphere 6.0 release. VMware continues to support Oracle 11g and 12c as an external database in vSphere 6.0. VMware will drop support for Oracle 11g and 12c as an external database for vCenter Server Appliance in a future major release.

  • vSphere Web Client. The Storage Reports selection from an object's Monitor tab is no longer available in the vSphere 6.0 Web Client.

  • vSphere Client. The Storage Views tab is no longer available in the vSphere 6.0 Client.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi600-201601001 contains the following individual bulletins:

ESXi600-201601401-BG: Updates ESXi 6.0 esx-base vib
ESXi600-201601402-BG: Updates ESXi 6.0 tools-light vib
ESXi600-201601403-BG: Updates ESXi 6.0 ehci-ehci-hcd, misc-drivers vib
ESXi600-201601404-BG: Updates ESXi 6.0 net-tg3 vib
ESXi600-201601405-BG: Updates ESXi 6.0 net-e1000e vib

Patch Release ESXi600-201601001 (Security-only build) contains the following individual bulletins:

ESXi600-201601101-SG: Updates ESXi 6.0 esx-base vib
ESXi600-201601102-SG: Updates ESXi 6.0 tools-light vib

Patch Release ESXi600-201601001 contains the following image profiles:

ESXi-6.0.0-20160104001-standard
ESXi-6.0.0-20160104001-no-tools

Patch Release ESXi600-201601001 (Security-only build) contains the following image profiles:

ESXi-6.0.0-20160101001s-standard
ESXi-6.0.0-20160101001s-no-tools

For information on patch and update classification, see KB 2014447.
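
If you apply this patch from the command line instead of through vSphere Update Manager, the image profiles listed above can be applied with esxcli. The following is a minimal sketch; the offline bundle path is a placeholder for wherever you downloaded the patch bundle, and the host should be placed in maintenance mode first:

  esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi600-201601001.zip
  esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-201601001.zip -p ESXi-6.0.0-20160104001-standard

After the update completes, reboot the host and confirm the new build number with esxcli system version get.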

Resolved Issues

The resolved issues are grouped as follows.

Licensing Issues
  • Attempts to create VFFS datastore fail
    vCenter Server prevents the creation of Virtual Flash File System (VFFS) datastore with an error message similar to the following:

    License not available to perform the operation. Feature 'vSphere Flash Read Cache' is not licensed with this edition

    The issue occurs due to an incorrect check of the vSphere Flash Read Cache (VFRC) permissions.

    This issue is resolved in this release.
Networking Issues
  • Virtual machines protected by VMware vShield App lose network connectivity on the ESXi 6.0 hosts
    Virtual machines protected by vShield App Firewall appliances lose network connectivity intermittently on the ESXi 6.0 hosts. You will see log messages similar to the following in the vShield App Firewall logs:

    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: tx hang
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: resetting
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: d0: intr type 3, mode 0, 3 vectors allocated
    YYYY-MM-DDTHH:MM:SS+00:00 vShield-FW-hostname.corp.local kernel: RSS indirection table :

    This issue is resolved in this release. See Knowledge base article 2128069 for further details.
  • Attempts to switch mode from emulation to UPT might fail in ESXi 6.0
    Attempts to switch mode from emulation to Universal Pass-through (UPT) might fail in ESXi 6.0. The vCenter Server displays the following message to indicate that Direct Path I/O is disabled:

    The virtual machine does not have full memory reservation which is required to activate DirectPath I/O

    The following message is logged in the vmkernel.log file:

    YYYY-MM-DDTHH:MM:SS.820Z cpu11:1000046564)Vmxnet3: VMKDevCanEnterPT:3193: port 0x3000007: memory reservation 0 smaller than guest memory size 262144

    This issue is resolved in this release.
  • VM support log for an ESXi host takes a long time to collect logs
    The vm-support command takes a long time to collect the logs because it collects unnecessary .dvsData logs from all the .dvsData folders on all the datastores that it can access.

    This issue is resolved in this release.
  • Hostd repeatedly stops responding with Signal 11 error
    After you configure four Network Interface Controllers (NICs) for the dvSwitch, hostd repeatedly stops responding with a Signal 11 error.

    This issue is resolved in this release.
Storage Issues
  • ESXi host takes a long time to boot and fails to load VMW_SATP_ALUA SATP module
    An ESXi host might take a long time to boot and fail to load the VMW_SATP_ALUA Storage Array Type Plug-In (SATP) module due to stale entries in the esx.conf file for LUNs that have gone into a Permanent Device Loss (PDL) condition.

    This issue is resolved in this release.
  • The esxtop utility reports incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI supported LUNs
    The esxtop utility reports incorrect statistics on VAAI supported LUNs for DAVG/cmd (average device latency per command) and KAVG/cmd (average ESXi VMkernel latency per command) due to an incorrect calculation.

    This issue is resolved in this release.
  • Expansion of eager zeroed VMDK causes the VM to be inaccessible
    In ESXi 6.0, VMDKs of eager zeroed type are expanded in the eager zeroed format, which takes a long time and might result in the VM being inaccessible.

    This issue is resolved in this release.
Virtual SAN Issues
  • Orphaned Virtual SAN object retained after an APD disk error is observed on the SSD
    An orphaned Virtual SAN object might be retained after an All Paths Down (APD) disk error is observed on the Solid State Disk (SSD).

    This issue is resolved in this release.
  • Disk group validation might fail due to invalid metadata of SSD/MD disk partition
    Attempts to remove a disk from a Virtual SAN disk group might result in a purple diagnostic screen as the disk group validation fails due to invalid metadata of the SSD/MD disk partition. Error messages similar to the following are logged in the vmkernel.log file.

    YYYY-MM-DDTHH:MM:SS.772Z cpu13:xxxxx)PLOG: PLOGRelogCleanup:445: RELOG complete uuid xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx lsn xxxxxxx numRelogged 0
    YYYY-MM-DDTHH:MM:SS.772Z cpu33:xxxxx opID=xxxxxxxx)World: 9728: PRDA 0xnnnnnnnnnnnn ss 0x0 ds 0x10b es 0x10b fs 0x0 gs 0x13b
    YYYY-MM-DDTHH:MM:SS.772Z cpu33:xxxxx opID=xxxxxxxx)World: 9730: TR 0xnnnn GDT 0xnnnnnnnnnnnn (0xnnnn) IDT 0xnnnnnnnnnnnn (0xfff)
    .
    .
    .

    This issue is resolved in this release.
  • Incorrect disk capacity value displayed as new space is added to thin component files during component I/O on Virsto Volumes
    During I/Os on a component, new space might be added to thin component files. For component files on Virsto volumes, the disk capacity is not recalculated, and the old value is published until a new component is created.

    This issue is resolved in this release.
  • VMs within a Virtual SAN cluster might underperform due to SSD log congestion
    Virtual machines within a Virtual SAN cluster might underperform due to a Solid State Disk (SSD) log congestion problem. The VMkernel system information (VSI) nodes for the SSD indicate that the consumed log space is very high and does not decrease.

    This issue is resolved in this release.
Security Issues
  • Support for the Advanced Encryption Standard (AES) with 128/256-bit key length is added for NFS 4.1 client RPC header authentication
    Support for the Advanced Encryption Standard (AES) with 128/256-bit key length is added for RPC header authentication in the NFS 4.1 Client.
    Note: DES_CBC_MD5 encryption is no longer supported for the NFS 4.1 client. If AES encryption is not enabled, or if the current array firmware version does not support AES encryption, you need to update your array firmware or enable AES encryption before applying this patch upgrade.
  • SSLv3 protocol is disabled in remote ESXi SSL syslog client
    The remote ESXi SSL syslog client uses the SSLv3 protocol that is considered unsecure. This issue is resolved in this release by disabling SSLv3 by default and enabling support for TLS versions 1.0, 1.1, and 1.2.
  • Update to the OpenSSL library
    The ESXi userworld OpenSSL library is updated to version openssl-1.0.1p.
Server Configuration Issues
  • ESXi host reboots unexpectedly followed by Uncorrectable Machine Check Exception error
    After you upgrade an ESXi host that has 1.5 TB of memory from ESXi 5.1 to 6.0 on an HP server with an AMD processor, the host might unexpectedly stop responding or reboot. Uncorrectable Machine Check Exceptions (UMCEs) similar to the following are written to the Integrated Management Log (IML) file.

    Critical","CPU","##/##/2014 15:41","##/##/2014 15:41","1","Uncorrectable Machine Check Exception (Board 0, Processor 3, APIC ID 0x00000060, Bank 0x00000004, Status 0xF6000000'00070F0F, Address 0x00000050'61EA3B28, Misc 0x00000000'00000000)",
    Mode of failure: Unexpectedly reboot. IML displays UMCE occurred.

    This issue is resolved in this release.
  • Unable to configure an ESXi host for Active Directory authentication
    Attempts to join an ESXi host with a large number of CPUs to an Active Directory domain might fail.

    This issue is resolved in this release. See Knowledge base article 2130395 for further details.
  • Interrupt received on invalid vector message displayed as sys-alert
    After you apply the ESXi patch release ESXi600-201510001, you might encounter sys-alert messages similar to the following due to an invalid vector, which might generate several events in vCenter Server.

    cpu48:88874)ALERT: IntrCookie: 3411: Interrupt received on invalid vector (cpu 48, vector 73); ignoring it.

    This issue is resolved in this release.
  • CBT functionality, QueryChangedDiskAreas API, might return incorrect sectors
    In ESXi 6.0, when you run virtual machine backups that utilize Changed Block Tracking (CBT), the CBT API call QueryChangedDiskAreas() might return incorrect changed sectors, which results in inconsistent incremental virtual machine backups. The issue occurs because CBT fails to track changed blocks on VMs that have I/O during snapshot consolidation.

    This issue is resolved in this release. See Knowledge base article 2136854 for further details.
Virtual Machine Management Issues
  • The esxcli vm process list command might display old VM name after you rename a powered on VM
    After you rename a powered on virtual machine, if you run the esxcli vm process list command to get the list of running VMs from the host, the list might display the old VM name.

    This issue is resolved in this release.
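
    If you need to cross-check the names of running VMs from the ESXi Shell, the esxcli output can be compared with the hostd inventory view. This is only a verification sketch using standard commands, not part of the fix:

      esxcli vm process list
      vim-cmd vmsvc/getallvms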
  • vCenter Server might stop responding when ESXi host loses connectivity to remote syslog server
    When an ESXi host loses connectivity to the remote syslog server, the GeneralHostWarningEvent and AlarmStatusChangedEvent events are logged indefinitely, producing a large number of alert messages. As a result, the vpx_event and vpx_event_arg tables fill up the vCenter Server database. The issue causes high vCenter Server latency and the vCenter Server to stop responding.

    This issue is resolved in this release.
  • Windows 10 VM VMX fails
    The Windows 10 VM vmx process fails with an error message similar to the following in the vmware.log file:

    NOT_REACHED bora/devices/ahci/ahci_user.c:1530

    This issue is resolved in this release.
  • VMX might fail when running some 3D applications
    VMX might fail when running some 3D applications. Error messages similar to the following are logged in the vmware.log file:

    xxxx-xx-xxTxx:xx:xx.xxxZ| svga| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (svga)
    xxxx-xx-xxTxx:xx:xx.xxxZ| svga| I120+ NOT_REACHED bora/mks/hostops/shim/shimOps.c:674

    This issue is resolved in this release.
vMotion and Storage vMotion Issues
  • Virtual machine stops responding during snapshot consolidation on ESXi 6.0
    When consolidating a virtual machine snapshot hosted on an ESXi 6.0 host, the VM stops responding and creates a vmx-zdump file in the VM's working directory. The vmkernel.log file, located at /var/log/vmkernel.log, displays messages similar to the following:

    cpu17:xxxxxxxx)SVM: xxxx: Error destroying device xxxxxxxx-xxxxxxxx-svmmirror (Busy)
    cpu2:xxxxx)FSS: xxxx: No FS driver claimed device 'xxxxxxxx-xxxxxxxx-svmmirror': No filesystem on the device
    cpu2:xxxxx)FSS: xxxx: No FS driver claimed device 'control': No filesystem on the device

    This issue is resolved in this release. See Knowledge base article 2135631 for further details.
  • Storage vMotion of a virtual machine with a name beginning with "core" fails
    Attempts to perform a Storage vMotion or cold migration of a VM with a name beginning with "core" might fail. The vSphere Web Client displays error messages similar to the following:

    Relocate virtual machine core01 Cannot complete the operation because the file or folder
    ds:///vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/core01/core01-xxxxxxxx.hlog already exists
    Checking destination files for conflicts Administrator vCenter_Server_name

    This issue is resolved in this release. See Knowledge base article 2130819 for further details.
VMware HA and Fault Tolerance Configuration Issues
  • Virtual machines fail to start after High Availability failover
    After an ESXi host failure, when HA attempts to start the affected VMs on other hosts, some of the VMs might stop responding while booting.

    This issue is resolved in this release.
Miscellaneous Issues
  • Universal Security Groups not displayed in ESXi Assign Permissions UI
    When you attempt to assign permissions on an ESXi host, the Select Users and Groups list displays only the Global Security Groups and does not display the Universal Security Groups.

    This issue is resolved in this release.
  • VMkernel log file is flooded with warnings in the VM page fault path and might cause the host to fail
    Attempts to power on VMs with a higher display resolution or a multiple-monitor setup might cause several warning messages similar to the following to be written to the vmkernel.log file, and might cause the host to fail due to the excessive logging load:

    XXXX-XX-XXTXX:XX:XX.XXXZ cpuXX:XXXXXXX)WARNING: VmMemPf: vm XXXXXXX: 654: COW copy failed: pgNum=0xXXXXX, mpn=0x3fffffffff
    XXXX-XX-XXTXX:XX:XX.XXXZ cpuXX:XXXXXXX)WARNING: VmMemPf: vm XXXXXXX: 654: COW copy failed: pgNum=0xXXXXX, mpn=0x3fffffffff

    This issue is resolved in this release.
VMware Tools Issues
  • Attempts to upgrade VMware Tools fail with a 21009 error code
    When you attempt to upgrade VMware Tools for a virtual machine running on VMware ESXi 6.0, the auto-upgrade fails with the error:

    vix error code = 21009

    The issue occurs if the following guest files exist on the virtual machine:

    Windows VM:

    C:\Windows\Temp\vmware-SYSTEM\VMwareToolsUpgrader.exe

    Red Hat Enterprise Linux VM:

    /tmp/vmware-root

    This issue is resolved in this release.
  • Virtual machines running SAP randomly fail
    Virtual machines that run SAP might randomly fail and generate a vmx.zdump file when too many VMware Tools statistics commands are executed inside the VM. An error message similar to the following is logged in the vmware.log file:

    CoreDump error line 2160, error Cannot allocate memory.

    This issue is resolved in this release. See Knowledge base article 2137310 for further details.

Known Issues

The known issues existing in ESXi 6.0 are grouped as follows:

New known issues documented in this release are highlighted as New Issue.

Installation Issues
  • New Issue DNS suffix might persist even after you change the default configuration in DCUI
    An ESXi host might automatically get configured with the default DNS and DNS suffix on first boot if it is deployed on a network served by a DHCP server. When you attempt to change the DNS suffix, the DCUI does not remove the existing DNS suffix but adds the new suffix as well.

    Workaround: When configuring the DNS hostname of the witness OVF, set the full FQDN in the DNS Hostname field so that the correct DNS suffix is appended. You can then remove unwanted DNS suffixes in the Custom DNS Suffix field.

  • The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after you install the latest VMware Tools package. The issue occurs if your glibc version is older than version 2.5, as in SLES 10 SP4.

    Workaround: Upgrade the Linux glibc to version 2.5 or above.
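
    To check the glibc version inside the Linux guest before installing VMware Tools, either of the following commands can be used, depending on the distribution (shown here only as a quick check):

      ldd --version
      rpm -q glibc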

Upgrade Issues

Review also the Installation Issues section of the release notes. Many installation issues can also impact your upgrade process.

  • SSLv3 remains enabled on Auto Deploy after upgrade from earlier release of ESXi 6.0 to ESXi 6.0 Update 1
    When you upgrade from an earlier release of ESXi 6.0 to ESXi 6.0 Update 1, the SSLv3 protocol remains enabled on Auto Deploy.

    Workaround: Perform the following steps to disable SSLv3 by using PowerCLI commands:

    1. Run the following command to connect to vCenter Server:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Connect-VIServer -Server <FQDN_hostname or IP Address of vCenter Server>

    2. Run the following command to check the current sslv3 status:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-DeployOption

    3. Run the following command to disable sslv3:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Set-DeployOption disable-sslv3 1

    4. Restart the Auto Deploy service to update the change.
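
    As an optional check, you can run Get-DeployOption again after the restart and confirm that disable-sslv3 is reported with the value 1:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-DeployOption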

  • Fibre Channel host bus adapter device number might change after ESXi upgrade from 5.5.x to 6.0

    During an ESXi upgrade from 5.5.x to 6.0, the Fibre Channel host bus adapter device number occasionally changes. In the output of the esxcli storage core adapter list command, the device number might appear as a different number after the upgrade.

    For example, the device numbers for a Fibre Channel host bus adapter might look similar to the following before ESXi upgrade:

    HBA Name
    ––––––––
    vmhba2
    vmhba3
    vmhba5
    vmhba6

    The device numbers for the Fibre Channel host bus adapter might look similar to the following after an ESXi upgrade to 6.0:

    HBA Name
    ––––––––
    vmhba64
    vmhba65
    vmhba5
    vmhba6

    The example illustrates the random change that might occur if you use the esxcli storage core adapter list command: the device alias numbers vmhba2 and vmhba3 change to vmhba64 and vmhba65, while the device numbers vmhba5 and vmhba6 do not change. However, if you use the esxcli hardware pci list command, the device numbers do not change after the upgrade.

    This problem is external to VMware and may not affect you. ESXi displays device alias names but it does not use them for any operations. You can use the host profile to reset the device alias name. Consult VMware product documentation and knowledge base articles.

    Workaround: None.
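
    To compare the two views described above, you can run both commands on the host before and after the upgrade and keep the output for reference. This is only a way to record the alias assignments; it does not prevent the renumbering, and the grep filter is just a convenience to shorten the PCI listing:

      esxcli storage core adapter list
      esxcli hardware pci list | grep -i vmhba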

  • Active Directory settings are not retained post-upgrade
    The Active Directory settings configured in the ESXi host before upgrade are not retained when the host is upgraded to ESXi 6.0.

    Workaround: Add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is 5.1 or later. Do not add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is ESXi 5.0.x.

  • After ESXi upgrade to 6.0, hosts that were previously added to the domain are no longer joined to the domain
    When you upgrade from vSphere 5.5 to vSphere 6.0 for the first time, the Active Directory configuration is not retained.

    Workaround: After upgrade, rejoin the hosts to the vCenter Server domain:

    1. Add the hosts to vCenter Server.

    2. Join the hosts to the domain (for example, example.com).

    3. Upgrade all the hosts to ESXi 6.0.

    4. Manually join one recently upgraded host to the domain.

    5. Extract the host profile and disable all other sub-profiles except Authentication.

    6. Apply the manually joined host profile to the other recently upgraded hosts.

  • Previously running VMware ESXi Dump Collector service resets to default Disabled setting after upgrade of vCenter Server for Windows
    The upgrade process installs VMware vSphere ESXi Dump Collector 6.0 as part of a group of optional services for vCenter Server. You must manually enable the VMware vSphere ESXi Dump Collector service to use it as part of vCenter Server 6.0 for Windows.

    Workaround: Read the VMware documentation or search the VMware Knowledge Base for information on how to enable and run optional services in vCenter Server 6.0 for Windows.

    Enable the VMware vSphere ESXi Dump Collector service in the operating system:

    1. In the Control Panel menu, select Administrative Tools and double-click on Services.

    2. Right click VMware vSphere ESXi Dump Collector and Edit Startup Type.

    3. Set the Start-up Type to Automatic.

    4. Right Click VMware vSphere ESXi Dump Collector and Start.

    The Service Start-up Type is set to automatic and the service is in a running state.

vCenter Single Sign-On and Certificate Management Issues
  • Cannot connect to VM console after SSL certificate upgrade of ESXi host
    A certificate validation error might result if you upgrade the SSL certificate that is used by an ESXi host, and you then attempt to connect to the VM console of any VM running when the certificate was replaced. This is because the old certificate is cached, and any new console connection is rejected due to the mismatch.
    The console connection might still succeed, for example, if the old certificate can be validated through other means, but is not guaranteed to succeed. Existing virtual machine console connections are not affected, but you might see the problem if the console was running during the certificate replacement, was stopped, and was restarted.

    Workaround: Place the host in maintenance mode or suspend or power off all VMs. Only running VMs are affected. As a best practice, perform all SSL certificate upgrades after placing the host in maintenance mode.

Networking Issues

  • Certain vSphere functionality does not support IPv6
    You can enable IPv6 for all nodes and components except for the following features:

    • IPv6 addresses for ESXi hosts and vCenter Server that are not mapped to fully qualified domain names (FQDNs) on the DNS server.
      Workaround: Use FQDNs or make sure the IPv6 addresses are mapped to FQDNs on the DNS servers for reverse name lookup.

    • Virtual volumes

    • PXE booting as a part of Auto Deploy and Host Profiles
      Workaround: PXE boot an ESXi host over IPv4 and configure the host for IPv6 by using Host Profiles.

    • Connection of ESXi hosts and the vCenter Server Appliance to Active Directory
      Workaround: Use Active Directory over LDAP as an identity source in vCenter Single Sign-On.

    • NFS 4.1 storage with Kerberos
      Workaround: Use NFS 4.1 with AUTH_SYS.

    • Authentication Proxy

    • Connection of the vSphere Management Assistant and vSphere Command-Line Interface to Active Directory.
      Workaround: Connect to Active Directory over LDAP.

    • Use of the vSphere Client to enable IPv6 on vSphere features
      Workaround: Use the vSphere Web Client to enable IPv6 for vSphere features.

  • Recursive panic might occur when using ESXi Dump Collector
    Recursive kernel panic might occur when the host is in a panic state while it displays the purple diagnostic screen and writes the core dump over the network to the ESXi Dump Collector. A VMkernel zdump file might not be available for troubleshooting on the ESXi Dump Collector in vCenter Server.

    In the case of a recursive kernel panic, the purple diagnostic screen on the host displays the following message:
    2014-09-06T01:59:13.972Z cpu6:38776)Starting network coredump from host_ip_address to esxi_dump_collector_ip_address.
    [7m2014-09-06T01:59:13.980Z cpu6:38776)WARNING: Net: 1677: Check what type of stack we are running on [0m
    Recursive panic on same CPU (cpu 6, world 38776, depth 1): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Secondary panic trap frame registers:
    RAX:0x0002000001230121 RCX:0x000043917bc1af80 RDX:0x00004180009d5fb8 RBX:0x000043917bc1aef0
    RSP:0x000043917bc1aee8 RBP:0x000043917bc1af70 RSI:0x0002000001230119 RDI:0x0002000001230121
    R8: 0x0000000000000038 R9: 0x0000000000000040 R10:0x0000000000010000 R11:0x0000000000000000
    R12:0x00004304f36b0260 R13:0x00004304f36add28 R14:0x000043917bc1af20 R15:0x000043917bc1afd0
    CS: 0x4010 SS: 0x0000 FS: 0x4018 GS: 0x4018 IP: 0x0000418000f0eeec RFG:0x0000000000010006
    2014-09-06T01:59:14.047Z cpu6:38776)Backtrace for current CPU #6, worldID=38776, rbp=0x43917bc1af70
    2014-09-06T01:59:14.056Z cpu6:38776)0x43917bc1aee8:[0x418000f0eeec]do_free_skb@com.vmware.driverAPI#9.2+0x4 stack: 0x0, 0x43a18b4a5880,
    2014-09-06T01:59:14.068Z cpu6:38776)Recursive panic on same CPU (cpu 6, world 38776): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Halt$Si0n5g# PbC8PU 7.

    Recursive kernel panic might occur when the VMkernel panics while heavy traffic is passing through the physical network adapter that is also configured to send the core dumps to the collector on vCenter Server.

    Workaround: Perform either of the following workarounds:

    • Dedicate a physical network adapter to core dump transmission only to reduce the impact from system and virtual machine traffic.

    • Disable the ESXi Dump Collector on the host by running the following ESXCLI console command:
      esxcli system coredump network set --enable false
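
      If you want to review the network core dump configuration before or after disabling it, the following read-only commands show the current settings and test connectivity to the configured ESXi Dump Collector (a verification sketch, not an additional required step):

      esxcli system coredump network get
      esxcli system coredump network check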

Storage Issues

NFS Version 4.1 Issues

  • Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share recovers from an all paths down (APD) state
    When the NFS 4.1 storage enters an APD state and then exits it after a grace period, powered on virtual machines that run on the NFS 4.1 datastore fail. The grace period depends on the array vendor.
    After the NFS 4.1 share recovers from APD, you see the following message on the virtual machine summary page in the vSphere Web Client:
    The lock protecting VM.vmdk has been lost, possibly due to underlying storage issues. If this virtual machine is configured to be highly available, ensure that the virtual machine is running on some other host before clicking OK.
    After you click OK, crash files are generated and the virtual machine powers off.

    Workaround: None.

  • NFS 4.1 client loses synchronization with server when trying to create new sessions
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server when trying to create new sessions. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that an NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED.

    Workaround: Perform the following sequence of steps.

    1. Attempt to unmount the affected file systems. If no files are open when you unmount, this operation succeeds and the NFS client module cleans up its internal state. You can then remount the file systems that were unmounted and resume normal operation.

    2. Take down the NICs connecting to the mounts' IP addresses and leave them down long enough for several server lease times to expire. Five minutes should be sufficient. You can then bring the NICs back up, and normal operation should resume. A command sketch for this step appears after these steps.

    3. If the preceding steps fail, reboot the ESXi host.
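
    For step 2 above, the physical uplinks can be taken down and brought back up from the ESXi Shell. This is a sketch only; vmnic0 is a placeholder for the uplink that actually carries the NFS traffic, and taking down the wrong uplink can disrupt other traffic:

      esxcli network nic down -n vmnic0
      # wait long enough for several server lease times to expire (about five minutes)
      esxcli network nic up -n vmnic0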

  • NFS 4.1 client loses synchronization with an NFS server and connection cannot be recovered even when session is reset
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server and the synchronized connection with the server cannot be recovered even if the session is reset. This problem is caused by an EMC VNX server issue. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that NFS41: NFS41ProcessSessionUp:2111: resetting session with mismatched clientID; probable server bug

    Workaround: To end the session, unmount all datastores and then remount them.
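
    A command-line sketch of the unmount and remount, if you prefer the ESXi Shell to the vSphere Web Client; the volume name, server address, and share path are placeholders, and the exact option names can be confirmed with esxcli storage nfs41 add --help on your build. Power off or migrate any virtual machines on the datastore first:

      esxcli storage nfs41 list
      esxcli storage nfs41 remove -v <volume-name>
      esxcli storage nfs41 add -H <server-address> -s <remote-share> -v <volume-name>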

  • ONTAP Kerberos volumes become inaccessible or experience VM I/O failures
    A NetApp server does not respond when it receives RPCSEC_GSS requests that arrive out of sequence. As a result, the corresponding I/O operation stalls unless it is terminated, and the guest OS can stall or encounter I/O errors. Additionally, according to RFC 2203, the client can have only as many outstanding requests as the seq_window size (32 in the case of ONTAP) of the RPCSEC_GSS context, and it must wait until the lowest of these outstanding requests is completed by the server. Therefore, the server never replies to the out-of-sequence RPCSEC_GSS request, and the client stops sending requests to the server after it reaches the maximum seq_window number of outstanding requests. This causes the volume to become inaccessible.

    Workaround: None. Check the latest Hardware Compatibility List (HCL) to find a supported ONTAP server that has resolved this problem.

  • You cannot create a larger than 1 TB virtual disk on NFS 4.1 datastore from EMC VNX
    NFS version 4.1 storage from EMC VNX with firmware version 7.x supports only 32-bit file formats. This prevents you from creating virtual machine files that are larger than 1 TB on the NFS 4.1 datastore.

    Workaround: Update the EMC VNX array to version 8.x.

  • NFS 4.1 datastores backed by EMC VNX storage become inaccessible during firmware upgrades
    When you upgrade EMC VNX storage to a new firmware version, NFS 4.1 datastores mounted on the ESXi host become inaccessible. This occurs because the VNX server changes its major device number after the firmware upgrade. The NFS 4.1 client on the host does not expect the major number to change after it has established connectivity with the server, which causes the datastores to become permanently inaccessible.

    Workaround: Unmount all NFS 4.1 datastores exported by the VNX server before upgrading the firmware.

  • When ESXi hosts use different security mechanisms to mount the same NFS 4.1 datastore, virtual machine failures might occur
    If different ESXi hosts mount the same NFS 4.1 datastore using different security mechanisms, AUTH_SYS and Kerberos, virtual machines placed on this datastore might experience problems and failure. For example, your attempts to migrate the virtual machines from host1 to host2 might fail with permission denied errors. You might also observe these errors when you attempt to access a host1 virtual machine from host2.

    Workaround: Make sure that all hosts that mount an NFS 4.1 volume use the same security type.

  • Attempts to copy read-only files to NFS 4.1 datastore with Kerberos fail
    The failure might occur when you attempt to copy data from a source file to a target file. The target file remains empty.

    Workaround: None.

  • When you create a datastore cluster, uniformity of NFS 4.1 security types is not guaranteed
    While creating a datastore cluster, vSphere does not verify and enforce the uniformity of NFS 4.1 security types. As a result, datastores that use different security types, AUTH_SYS and Kerberos, might be a part of the same cluster. If you migrate a virtual machine from a datastore with Kerberos to a datastore with AUTH_SYS, the security level for the virtual machine becomes lower.
    This issue applies to such functionalities as vMotion, Storage vMotion, DRS, and Storage DRS.

    Workaround: If Kerberos security is required for your virtual machines, make sure that all NFS 4.1 volumes that compose the same cluster use only the Kerberos security type. Do not include NFS 3 datastores, because NFS 3 supports only AUTH_SYS.

Virtual Volumes Issues

  • Failure to create virtual datastores due to incorrect certificate used by Virtual Volumes VASA provider
    Occasionally, a self-signed certificate used by the Virtual Volumes VASA provider might incorrectly define the KeyUsage extension as critical without setting the keyCertSign bit. In this case, the provider registration succeeds. However, you are not able to create a virtual datastore from storage containers reported by the VASA provider.

    Workaround: The self-signed certificate used by the VASA provider at the time of provider registration should not define the KeyUsage extension as critical without setting the keyCertSign bit.

General Storage Issues

  • vSphere Web Client incorrectly displays Storage Policy as attached when new VM is created from an existing disk
    When you use the vSphere Web Client to create a new VM from an existing disk and specify a storage policy when setting up the disk, the filter appears to be attached when you select the new VM --> VM Policies --> Edit VM Storage Policies. However, the filter is not actually attached. You can check the .vmdk file or run vmkfstools --iofilterslist <vmdk-file> to verify whether the filter is attached.

    Workaround: After you create the new VM, but before you power it on, add the filter to the .vmdk file by clicking VM Policies --> Edit VM Storage Policies.
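
    For example, to check whether any I/O filters are attached to the new VM's disk (the datastore path is a placeholder):

      vmkfstools --iofilterslist /vmfs/volumes/datastore1/newvm/newvm.vmdk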

  • Installing I/O filters on an IPv6 setup does not publish their capabilities to VPXD
    After a successful installation of an I/O filter through the VIM API, the installed filter is unable to publish its capabilities to VPXD. You cannot attach the filter profile to any disks because no capabilities are published to VMware vSphere Storage Policy Based Management (SPBM).

    Workaround: None.

  • NFS Lookup operation returns NFS STALE errors
    When you deploy a large number of VMs on the NFS datastore, the VM deployment fails with an error message similar to the following due to a race condition:

    Stale NFS file handle

    Workaround: Restart the Lookup operation. See Knowledge Base article 2130593 for details.

  • Attempts to create a VMFS datastore on Dell EqualLogic LUNs fail when QLogic iSCSI adapters are used
    You cannot create a VMFS datastore on a Dell EqualLogic storage device that is discovered through QLogic iSCSI adapters.
    When your attempts fail, the following error message appears on vCenter Server: Unable to create Filesystem, please see VMkernel log for more details: Connection timed out. The VMkernel log contains continuous iscsi session blocked and iscsi session unblocked messages. On the Dell EqualLogic storage array, monitoring logs show a protocol error in packet received from the initiator message for the QLogic initiator IQN names.

    This issue is observed when you use the following components:

    • Dell EqualLogic array firmware : V6.0.7

    • QLogic iSCSI adapter firmware versions : 3.00.01.75

    • Driver version : 5.01.03.2-7vmw-debug

    Workaround: Enable the iSCSI ImmediateData adapter parameter on QLogic iSCSI adapter. By default, the parameter is turned off. You cannot change this parameter from the vSphere Web Client or by using esxcli commands. To change this parameter, use the vendor provided software, such as QConvergeConsole CLI.

  • ESXi host with Emulex OneConnect HBA fails to boot
    When an ESXi host has the Emulex OneConnect HBA installed, the host might fail to boot. This failure occurs due to a problem with the Emulex firmware.

    Workaround: To correct this problem, contact Emulex to get the latest firmware for your HBA.

    If you continue to use the old firmware, follow these steps to avoid the boot failure:

    1. When ESXi is loading, press Shift+O before booting the ESXi kernel.

    2. Leave the existing boot option as is, and add a space followed by dmaMapperPolicy=false.

  • Flash Read Cache does not accelerate I/Os during APD
    When the flash disk configured as a virtual flash resource for Flash Read Cache is faulty or inaccessible, or the disk storage is unreachable from the host, the Flash Read Cache instances on that host are invalid and do not work to accelerate I/Os. As a result, the caches do not serve stale data after connectivity is re-established between the host and storage. The connectivity outage might be temporary, all paths down (APD) condition, or permanent, permanent device loss (PDL). This condition persists until the virtual machine is power-cycled.

    Workaround: The virtual machine can be power-cycled to restore I/O acceleration using Flash Read Cache.

  • All Paths Down (APD) or path-failovers might cause system failure
    In a shared SAS environment, APD or path-failover situations might cause system failure if the disks are claimed by the lsi_msgpt3 driver and they are experiencing heavy I/O activity.

    Workaround: None

  • Frequent use of SCSI abort commands can cause system failure
    With heavy I/O activity, frequent SCSI abort commands can cause a very slow response from the MegaRAID controller. If an unexpected interrupt occurs with resource references that were already released in a previous context, system failure might result.

    Workaround: None

  • iSCSI connections fail and datastores become inaccessible when IQN changes
    This problem might occur if you change the IQN of an iSCSI adapter while iSCSI sessions on the adapter are still active.

    Workaround: When you change the IQN of an iSCSI adapter, no session should be active on that adapter. Remove all iSCSI sessions and all targets on the adapter before changing the IQN.
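
    A command-line sketch of that sequence, with vmhba37 and the IQN value as placeholders; the exact option names can be confirmed with esxcli iscsi adapter set --help on your build:

      esxcli iscsi session list
      esxcli iscsi session remove -A vmhba37
      esxcli iscsi adapter set -A vmhba37 -n iqn.1998-01.com.example:new-name

    Static and dynamic discovery targets, if configured, can be removed under the esxcli iscsi adapter discovery namespace before the change.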

  • nvmecli online and offline operations might not always take effect
    When you perform the nvmecli device online -A vmhba* operation to bring a NVMe device online, the operation appears to be successful. However, the device might still remain in offline state.

    Workaround: Check the status of NVMe devices by running the nvmecli device list command.

Virtual SAN Issues
  • Adding a host to a Virtual SAN cluster triggers an installer error
    When you add an ESXi host to a cluster with HA and Virtual SAN health service enabled, you might encounter either one or both of the following errors due to a VIB installation race condition:

    • In the task view, the Configuring vSphere HA task might fail with an error message similar to the following:

      Cannot install the vCenter Server agent service. ‘Unknown installer error’

    • The Enable agent task might fail with an error message similar to the following:

      Cannot complete the operation, see event log for details status.

    Workaround:

    • To fix the HA configuration failure, reboot the host and reconfigure HA as shown here:

      Hosts and Cluster view -> click cluster name -> Manage tab -> vSphere HA

    • To fix the enable agent task failure, go to the cluster view and retry the enablement of the VSAN health service as shown here:

      Hosts and Cluster view -> click cluster name -> Manage tab -> Health under Virtual SAN category, and click Retry button on top

Server Configuration Issues
  • New Issue Unable to connect an ESXi 6.0 host with only TLS versions 1.1 and 1.2 enabled to vCenter Virtual Appliance 5.5
    Attempts to connect an ESXi 6.0 host that has only TLS versions 1.1 and 1.2 enabled to vCenter Virtual Appliance (VCVA) 5.5 fail because VCVA supports only the SSLv3 and TLS 1.0 protocols.

    Workaround: Enable the SSLv3 or TLS 1.0 protocol on the ESXi 6.0 host to connect to vCenter Virtual Appliance 5.5.

  • Remediation fails when applying a host profile from a stateful host to a host provisioned with Auto Deploy
    When applying a host profile from a statefully deployed host to a host provisioned with Auto Deploy (stateless host) with no local storage, the remediation attempt fails with one of the following error messages:

    • The vmhba device at PCI bus address sxxxxxxxx.xx is not present on your host. You must shut down and then insert a card into PCI slot yy. The type of card should exactly match the one in the reference host.

    • No valid coredump partition found.

    Workaround: Disable the plug-in that is causing the issue (for example, the Device Alias Configuration or Core Dump Configuration) from the host profile, and then remediate the host profile.

  • Applying host profile with static IP to a host results in compliance error
    If you extract a host profile from a host with a DHCP network configuration, and then edit the host profile to have a static IP address, a compliance error occurs with the following message when you apply it to another host:

    Number of IPv4 routes did not match.

    Workaround: Before extracting the host profile from the DHCP host, configure the host so that it has a static IP address.

  • When you hot-add a virtual network adapter that has network resources overcommitted, the virtual machine might be powered off
    On a vSphere Distributed Switch that has Network I/O Control enabled, a powered-on virtual machine is configured with a bandwidth reservation according to the reservation for virtual machine system traffic on the physical network adapter on the host. You hot-add a network adapter to the virtual machine with a network bandwidth reservation that exceeds the bandwidth available on the physical network adapters on the host.

    When you hot-add the network adapter, the VMkernel starts a Fast Suspend and Resume (FSR) process. Because the virtual machine requests more network resources than available, the VMkernel exercises the failure path of the FSR process. A fault in this failure path causes the virtual machine to power off.

    Workaround: Do not configure bandwidth reservation when you add a network adapter to a powered on virtual machine.

VMware HA and Fault Tolerance Issues
  • Legacy Fault Tolerance (FT) not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platform
    Legacy FT is not supported on the Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Legacy Fault Tolerance.

    Workaround: None.

Guest Operating System Issues
  • Attempts to enable passthrough mode on NVMe PCIe SSD devices might fail after hot plug
    To enable passthrough mode on an SSD device from the vSphere Web Client, you select a host, click the Manage tab, click Settings, navigate to the Hardware section, click PCI Devices > Edit, select a device from a list of active devices that can be enabled for passthrough, and click OK. However, when you hot plug a new NVMe device to an ESXi 6.0 host that does not have a PCIe NVMe drive, the new NVMe PCIe SSD device cannot be enabled for passthrough mode and does not appear in the list of available passthrough devices.

    Workaround: Restart your host. Alternatively, run the following command on your ESXi host:

    1. Log in as a root user.

    2. Run the command
      /etc/init.d/hostd start

Supported Hardware Issues
  • When you run esxcli to get the disk location, the result is not correct for Avago controllers on HP servers

    When you run esxcli storage core device physical get against an Avago controller on an HP server, the result is not correct.

    For example, if you run:
    esxcli storage core device physical get -d naa.5000c5004d1a0e76
    The system returns:
    Physical Location: enclosure 0, slot 0

    The actual label of that slot on the physical server is 1.

    Workaround: Check the slot on your HP server carefully. Because the slot numbers on the HP server start at 1, increase the slot number that the command returns by one to get the correct result.

CIM and API Issues
  • The sfcb-vmware_raw provider might fail
    The sfcb-vmware_raw provider might fail because the default maximum memory allocated to the plug-in resource group is not enough.

    Workaround: Add the UserVars.CIMOemPluginsRPMemMax advanced option to set the memory limit for sfcbd plug-ins by using the following command, and restart sfcbd for the new value to take effect:

    esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default XXX --add-type int --add-min 175 --add-max 500

    Here, XXX is the memory limit that you want to allocate. This value must be between the minimum (175) and maximum (500) values.
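
    After you set the value, sfcbd can be restarted from the ESXi Shell; this assumes the standard sfcbd watchdog init script that ships with ESXi 6.0:

      /etc/init.d/sfcbd-watchdog restart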
