VMware ESXi 5.5 Update 3a Release Notes

VMware ESXi™ 5.5 Update 3a | 6 OCT 2015 | Build 3116895

Last updated: 6 OCT 2015

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.5
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • Open Source Components for VMware vSphere 5.5 Update 3
  • Product Support Notices
  • Patches Contained in this Release
  • Resolved Issues
  • Known Issues

What's New

This release of VMware ESXi contains the following enhancements:

  • Log Rotation Enablement: Log rotation for vmx log files allows you to reduce log file sizes by specifying the maximum size of each log file and the number of previous log files to keep (a configuration sketch follows this list).

  • Certification of PVSCSI Adapter: The PVSCSI adapter is certified for use with MSCS core clustering and applications, including SQL Server and Exchange. This enables performance gains when moving from LSI Logic SAS to PVSCSI.

  • Support for Next Generation Processors: This release continues support for next-generation processors from Intel and AMD. For more information, see the VMware Compatibility Guide.

  • ESXi Authentication for Active Directory: ESXi now supports only AES256-CTS, AES128-CTS, and RC4-HMAC encryption for Kerberos communication between ESXi and Active Directory.

  • Resolved Issues: This release delivers a number of bug fixes that are documented in the Resolved Issues section.
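
The log rotation enhancement above is controlled through per-virtual-machine configuration parameters. The following is a minimal sketch using the commonly documented log.rotateSize and log.keepOld options in a virtual machine's .vmx file; the specific values are illustrative assumptions, not recommendations from this document:

    log.rotateSize = "1048576"
    log.keepOld = "10"

In this sketch, log.rotateSize caps each vmware.log file at roughly 1 MB before rotation, and log.keepOld keeps the ten most recent rotated log files.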

Earlier Releases of ESXi 5.5

Features and known issues of ESXi 5.5 are described in the release notes for each release. Release notes for earlier releases of ESXi 5.5 are available on the VMware vSphere documentation site.

Internationalization

VMware vSphere 5.5 Update 3a is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Compatibility and Installation

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged on the vCenter Server ISO. You can install one or both clients by using the VMware vCenter™ Installer wizard.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.5.3 adds support for ESXi 5.5 Update 3 and vCenter Server 5.5 Update 3 releases.
For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

ESXi and Virtual SAN Compatibility

Virtual SAN does not support clusters that are configured with ESXi hosts earlier than 5.5 Update 1. Before you enable Virtual SAN, make sure that all hosts in the Virtual SAN cluster are upgraded to ESXi 5.5 Update 1 or later. vCenter Server should also be upgraded to 5.5 Update 1 or later.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 5.5 Update 3, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 5.5 Update 3a, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 5.5 and later. During the upgrade process, the device driver is installed on the ESXi 5.5.x host. It might still function on ESXi 5.5.x, but the device is not supported on ESXi 5.5.x. For a list of devices that have been deprecated and are no longer supported on ESXi 5.5.x, see the VMware Knowledge Base article Deprecated devices and warnings during ESXi 5.5 upgrade process.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.5 Update 3a, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.5 Update 3. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 5.5 Update 3, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

vSphere Client Connections to Linked Mode Environments with vCenter Server 5.x

vCenter Server 5.5 can exist in Linked Mode only with other instances of vCenter Server 5.5.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the related configuration documentation in the vSphere documentation set.

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.1 and ESXi 5.5 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading your host with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.5.x supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.5.x. If your host hardware is not compatible, a purple screen appears with a message about incompatibility. You cannot install or upgrade to vSphere 5.5.x.

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Supported Upgrade Paths for Upgrade to ESXi 5.5 Update 3a:

Upgrade Deliverable: VMware-VMvisor-Installer-5.5.0.update03-3116895.x86_64.iso
Supported Upgrade Tools: VMware vCenter Update Manager, CD Upgrade, Scripted Upgrade
Supported Upgrade Paths:
  • ESX/ESXi 4.0 (includes ESX/ESXi 4.0 Update 1, Update 2, Update 3, and Update 4): Yes
  • ESX/ESXi 4.1 (includes ESX/ESXi 4.1 Update 1, Update 2, and Update 3): Yes
  • ESXi 5.0 (includes ESXi 5.0 Update 1, Update 2, and Update 3): Yes
  • ESXi 5.1 (includes ESXi 5.1 Update 1 and Update 2): Yes
  • ESXi 5.5 (includes ESXi 5.5 Update 1 and Update 2): Yes

Upgrade Deliverable: ESXi550-201510001.zip
Supported Upgrade Tools: VMware vCenter Update Manager, ESXCLI, VMware vSphere CLI
Supported Upgrade Paths:
  • ESX/ESXi 4.0: No
  • ESX/ESXi 4.1: No
  • ESXi 5.0: Yes*
  • ESXi 5.1: Yes*
  • ESXi 5.5: Yes

Upgrade Deliverable: Patch definitions downloaded from VMware portal (online)
Supported Upgrade Tool: VMware vCenter Update Manager with patch baseline
Supported Upgrade Paths:
  • ESX/ESXi 4.0: No
  • ESX/ESXi 4.1: No
  • ESXi 5.0: No
  • ESXi 5.1: No
  • ESXi 5.5: Yes


*Note: Upgrade from ESXi 5.0.x or ESXi 5.1.x to ESXi 5.5 Update 3a using ESXi550-201510001.zip is supported only with ESXCLI. You need to run the esxcli software profile update --depot=<depot_location> --profile=<profile_name> command to perform the upgrade. For more information, see the ESXi 5.5.x Upgrade Options topic in the vSphere Upgrade guide.
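
For example, assuming the offline bundle has been copied to a datastore path such as /vmfs/volumes/datastore1 (a hypothetical location), the upgrade to the standard image profile listed in the Patches Contained in this Release section might look like this:

    esxcli software profile update --depot=/vmfs/volumes/datastore1/ESXi550-201510001.zip --profile=ESXi-5.5.0-20151004001-standard

Reboot the host after the command completes so that the new image profile takes effect.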

Open Source Components for VMware vSphere 5.5 Update 3

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.5 Update 3 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vSphere Web Client. Because Linux platforms are no longer supported by Adobe Flash, vSphere Web Client is not supported on the Linux OS. Third party browsers that add support for Adobe Flash on the Linux desktop OS might continue to function.

  • VMware vCenter Server Appliance. In vSphere 5.5, the VMware vCenter Server Appliance meets high-governance compliance standards through the enforcement of the DISA Security Technical Implementation Guide (STIG). Before you deploy VMware vCenter Server Appliance, see the VMware Hardened Virtual Appliance Operations Guide for information about the new security deployment standards and to ensure successful operations.

  • vCenter Server database. vSphere 5.5 removes support for IBM DB2 as the vCenter Server database.

  • VMware Tools. Beginning with vSphere 5.5, all information about how to install and configure VMware Tools in vSphere is merged with the other vSphere documentation. For information about using VMware Tools in vSphere, see the vSphere documentation. The Installing and Configuring VMware Tools guide is not relevant to vSphere 5.5 and later.

  • VMware Tools. Beginning with vSphere 5.5, VMware Tools no longer provides ThinPrint features.

  • vSphere Data Protection. vSphere Data Protection 5.1 is not compatible with vSphere 5.5 because of a change in the way vSphere Web Client operates. vSphere Data Protection 5.1 users who upgrade to vSphere 5.5 must also update vSphere Data Protection to continue using vSphere Data Protection.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi550-Update03 contains the following individual bulletins:

ESXi550-201510401-BG: Updates ESXi 5.5 esx-base vib

Patch Release ESXi550-Update03 contains the following image profiles:

ESXi-5.5.0-20151004001-standard
ESXi-5.5.0-20151004001-no-tools

For information on patch and update classification, see KB 2014447.
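
As a quick post-patch check (a suggestion, not a procedure from this document), you can confirm the installed image profile and the esx-base VIB version from the ESXi Shell:

    esxcli software profile get
    esxcli software vib list | grep esx-base

A host updated with this release should report one of the image profiles listed above and an esx-base VIB that carries build 3116895.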

Resolved Issues

This section describes resolved issues in this release:

Backup Issues

  • Attempts to restore a virtual machine might fail with an error
    Attempts to restore a virtual machine on an ESXi host using vSphere Data Protection might fail and display an error message similar to the following:

    Unexpected exception received during reconfigure

    This issue is resolved in this release.

CIM and API Issues

  • Spew in syslog when system event log is full and indication subscriptions exist
    When the System Event Log (SEL) is full and indication subscriptions exist, messages similar to the following are rapidly written to syslog (log spew):

    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default
    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default
    sfcb-vmware_raw[xxxxxxxxxx]: Can't get Alert Indication Class. Use default

    This issue is resolved in this release.
  • CIM indications might fail when you use Auto Deploy to reboot the ESXi hosts
    If the sfcbd service stops running, the CIM indication settings in the host profile cannot be applied successfully.

    This issue is resolved in this release by ensuring that the CIM indications do not rely on the status of the sfcbd service while applying the host profile.
  • Status of some disks might be displayed as UNCONFIGURED GOOD instead of ONLINE
    Status of some disks on an ESXi 5.5 host might be displayed as UNCONFIGURED GOOD instead of ONLINE. This issue occurs for LSI controllers that use the LSI CIM provider.

    This issue is resolved in this release.
  • Load kernel module might fail through CIM interface
    The LoadModule command might fail when using the CIM interface client to load the kernel module. An error message similar to the following is displayed:

    Access denied by VMkernel access control policy.

    This issue is resolved in this release.
  • Monitoring an ESXi 5.5 host with Dell OpenManage might fail due to openwsmand error
    Monitoring an ESXi 5.5 host with Dell OpenManage might fail due to an openwsmand error. An error message similar to the following might be reported in the syslog.log file:

    Failed to map segment from shared object: No space left on device

    This issue is resolved in this release.
  • Querying hardware status on the vSphere Client might fail with an error
    Attempts to query the hardware status on the vSphere Client might fail. An error message similar to the following is displayed in the /var/log/syslog.log file in the ESXi host:

    TIMEOUT DOING SHARED SOCKET RECV RESULT (1138472) Timeout (or other socket error) waiting for response from provider Header Id (16040) Request to provider 111 in process 4 failed. Error:Timeout (or other socket error) waiting for response from provider Dropped response operation details -- nameSpace: root/cimv2, className: OMC_RawIpmiSensor, Type: 0

    This issue is resolved in this release.
  • The sfcbd service might stop responding with an error message
    The sfcbd service might stop responding and you might find the following error message in the syslog file:

    spSendReq/spSendMsg failed to send on 7 (-1)
    Error getting provider context from provider manager: 11

    This issue occurs when there is a contention for semaphore between the CIM server and the providers.

    This issue is resolved in this release.
  • False alarms appear in the Hardware Status tab of the vSphere Client
    After you upgrade Integrated Lights Out (iLO) firmware on HP DL980 G7, false alarms appear in the Hardware Status tab of the vSphere Client. Error messages similar to the following might be logged in the /var/log/syslog.log file:

    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    sfcb-vmware_raw[nnnnn]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007

    This issue is resolved in this release.
  • ESXi might send duplicate events to management software
    ESXi might send duplicate events to the management software when an Intelligent Platform Management Interface (IPMI) sensor event is triggered on the ESXi Host.

    This issue is resolved in this release.
  • Unable to monitor Hardware Status after removing CIM indication subscription
    If the CIM client sends two Delete Instance requests to the same CIM indication subscription, the sfcb-vmware_int process might stop responding due to memory contention. You might not be able to monitor the hardware status with vCenter Server and ESXi.

    This issue is resolved in this release.
  • CIM client might display an error due to multiple enumeration
    When you execute multiple enumerate queries on the VMware Ethernet port class using the CBEnumInstances method, servers running ESXi 5.5 might report an error message similar to the following:

    CIM error: enumInstances Class not found

    This issue occurs when the management software fails to retrieve information provided by the VMware_EthernetPort() class. When the issue occurs, a query on memstats might display the following error message:

    MemStatsTraverseGroups: VSI_GetInstanceListAlloc failure: Not found.

    This issue is resolved in this release.
  • Unable to monitor Hardware Status on an ESXi host
    An ESXi host might report an error in the Hardware Status tab due to the unresponsive hardware monitoring service (sfcbd). An error similar to the following is written to syslog.log:

    sfcb-hhrc[5149608]: spGetMsg receiving from 65 5149608-11 Resource temporarily unavailable
    sfcb-hhrc[5149608]: rcvMsg receiving from 65 5149608-11 Resource temporarily unavailable
    sfcb-hhrc[5149608]: Timeout or other socket error
    sfcb-LSIESG_SMIS13_HHR[6064161]: spGetMsg receiving from 51 6064161-11 Resource temporarily unavailable
    sfcb-LSIESG_SMIS13_HHR[6064161]: rcvMsg receiving from 51 6064161-11 Resource temporarily unavailable
    sfcb-LSIESG_SMIS13_HHR[6064161]: Timeout or other socket error
    sfcb-kmoduleprovider[6064189]: spGetMsg receiving from 57 6064189-11 Resource temporarily unavailable
    sfcb-kmoduleprovider[6064189]: rcvMsg receiving from 57 6064189-11 Resource temporarily unavailable
    sfcb-kmoduleprovider[6064189]: Timeout or other socket error

    The following debug-level syslog message indicates that invalid data (0x3c) is sent by IPMI when the expected value is 0x01:

    sfcb-vmware_raw[35704]: IpmiIfcRhFruInv: fru.header.version: 0x3c

    This issue occurs when the sfcb-vmware_raw provider receives invalid data from the Intelligent Platform Management Interface (IPMI) tool while reading the Field Replaceable Unit (FRU) inventory data.

    This issue is resolved in this release.

Miscellaneous Issues

  • Cloning CBT-enabled virtual machine templates from ESXi hosts might fail
    Attempts to clone CBT-enabled virtual machine templates simultaneously from two different ESXi 5.5 hosts might fail. An error message similar to the following is displayed:

    Failed to open VM_template.vmdk': Could not open/create change tracking file (2108).

    This issue is resolved in this release.
  • Unable to log in to ESXi host with Active Directory credentials
    Attempts to log in to an ESXi host might fail after the host successfully joins an Active Directory domain. This occurs when a user from one domain attempts to log in from another trusted domain that is not present at the ESXi client site. An error similar to the following is written to the syslog.log/netlogon.log file:

    netlogond[17229]: [LWNetDnsQueryWithBuffer() /build/mts/release/bora-1028347/likewise/esxi-esxi/src/linux/netlogon/utils/lwnet-dns.c:1185] DNS lookup for '_ldap._tcp.<domain details>' failed with errno 0, h_errno = 1

    This issue is resolved in this release.
  • Update to the cURL library
    cURL fails to resolve localhost when IPv6 is disabled. An error message similar to the following is displayed:

    error: enumInstances Failed initialization

    This issue is resolved in this release.

Networking Issues

  • ESXi hosts running virtual machines with the e1000 or e1000e vNIC driver might fail with a purple screen
    ESXi hosts running virtual machines that use the e1000 or e1000e vNIC driver might fail with a purple screen when you enable TCP Segmentation Offload (TSO). Error messages similar to the following might be written to the log files:

    cpu7:nnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 9:21:12:17.991
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000TxTSOSend@vmkernel#nover+0x65b stack: 0xnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000PollTxRing@vmkernel#nover+0x18ab stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]E1000DevAsyncTx@vmkernel#nover+0xa2 stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetWorldletPerVMCB@vmkernel#nover+0xae stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]WorldletProcessQueue@vmkernel#nover+0x488 stack: 0xnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]WorldletBHHandler@vmkernel#nover+0x60 stack: 0xnnnnnnnnnnnnnnn
    cpu7:nnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]BH_Check@vmkernel#nover+0x185 stack: 0xnnnnnnnnnnnn

    This issue is resolved in this release.
  • ESXi responds to unnecessary Internet Control Message Protocol request types
    The ESXi host might respond to unnecessary Internet Control Message Protocol (ICMP) request types.

    This issue is resolved in this release.
  • ESXi hostd might fail when performing storage device rescan operations
    When you perform storage device rescan operations, hostd might fail because multiple threads attempt to modify the same object. You might see error messages similar to the following in the vmkwarning.log file:

    cpu43:nnnnnnn)ALERT: hostd detected to be non-responsive
    cpu20:nnnnnnn)ALERT: hostd detected to be non-responsive

    This issue is resolved in this release.
  • Log spew is observed when an ESXi host is added to vCenter Server
    When you add an ESXi host to vCenter Server and create a VMkernel interface for vMotion, the following message is displayed in quick succession (log spew) in the hostd.log file:

    Failed to find vds Id for portset vSwitch0

    This issue is resolved in this release.
  • Microsoft Windows Deployment Services (WDS) might fail to PXE boot virtual machines that use the VMXNET3 network adapter
    Attempts to PXE boot virtual machines that use the VMXNET3 network adapter by using the Microsoft Windows Deployment Services (WDS) might fail with messages similar to the following:

    Windows failed to start. A recent hardware or software change might be the cause. To fix the problem:
    1. Insert your Windows installation disc and restart your computer.
    2. Choose your language setting, and then click Next.
    3. Click Repair your computer.
    If you do not have the disc, contact your system administrator or computer manufacturer for assistance.

    Status: 0xc0000001

    Info: The boot selection failed because a required device is inaccessible.

    This issue is resolved in this release.
  • Rx Ring #2 can now be configured to resolve out-of-memory and receive-side packet drop issues
    A Linux virtual machine with Large Receive Offload (LRO) enabled on a VMXNET3 device might experience packet drops on the receive side when Rx Ring #2 runs out of memory, because the size of Rx Ring #2 was previously not configurable. This release makes the size of Rx Ring #2 configurable.

    This issue is resolved in this release.
  • Purple diagnostic screen might be displayed when using DvFilter with a NetQueue supported uplink
    An ESXi server might experience a purple diagnostic screen when using DvFilter with a NetQueue supported uplink connected to a vSwitch or a vSphere Distributed Switch (VDS). The ESXi host might report a backtrace similar to the following:

    pcpu:22 world:4118 name:"idle22" (IS)
    pcpu:23 world:2592367 name:"vmm1:S10274-AAG" (V)
    @BlueScreen: Spin count exceeded (^P) - possible deadlock
    Code start: 0xnnnnnnnnnnnn VMK uptime: 57:09:18:15.770
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]Panic@vmkernel#nover+0xnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]SP_WaitLock@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetSchedFIFOInput@vmkernel#nover+0xnnn stack: 0x0
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]NetSchedInput@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]IOChain_Resume@vmkernel#nover+0xnnn stack: 0xnnnnnnnnnnnn
    0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]PortOutput@vmkernel#nover+0xnn stack: 0xnn

    This issue is resolved in this release.
  • ESXi host fails with purple diagnostic screen when the Netflow feature is deactivated
    An ESXi host might fail with a PF exception 14 purple diagnostic screen when the Netflow feature of vSphere Distributed Switch gets deactivated. The issue occurs due to a timer synchronization problem.

    This issue is resolved in this release.
  • Changing the network scheduler to SFQ during heavy I/O might result in an unrecoverable transmission failure
    When a heavy I/O load is in progress, the SFQ network scheduler might reset the physical NIC when switching network schedulers. This might cause an unrecoverable transmission failure in which no packets are passed to the driver.

    This issue is resolved in this release.
  • vmkping command with Jumbo Frames might fail
    The vmkping command with Jumbo Frames might fail after the MTU of one vmknic among several on the same switch is changed (see the example command after this list). An error message similar to the following is displayed:

    sendto() failed (Message too long)

    This issue is resolved in this release.
  • ESXi firewall might reject services that use ports 0-65535 as the service port
    The Virtual Serial Port Concentrator (vSPC) or NFS client service might not function on the ESXi platform. This happens when the ruleset that allows ports 0-65535 is ordered differently depending on the enabling sequence, which causes vSPC or NFS client packets to be dropped unexpectedly even when the allowed IP addresses are specified in the corresponding ruleset.

    This issue is resolved in this release.
  • IPv6 RA does not function as expected with 802.1q tagging on VMXNET3 adapters
    IPv6 Router Advertisements (RA) do not function as expected with 802.1q tagging on VMXNET3 adapters in a Linux virtual machine, because the IPv6 RA address intended for the VLAN interface is delivered to the base interface.

    This issue is resolved in this release.
  • ESXi host might lose network connectivity
    An ESXi host might lose network connectivity and experience stability issues when multiple error messages similar to the following are logged:

    WARNING: Heartbeat: 785: PCPU 63 didn't have a heartbeat for 7 seconds; *may* be locked up.

    This issue is resolved in this release.
  • Network connectivity lost when applying host profile during Auto Deploy
    When applying host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as management vmknic.

    This issue is resolved in this release.
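
As referenced in the vmkping item in this list, a minimal sketch of a Jumbo Frame connectivity test from the ESXi Shell uses the standard -d (do not fragment) and -s (payload size) options; the destination address is a placeholder:

    vmkping -d -s 8972 <destination_ip>

The 8972-byte payload corresponds to a 9000-byte MTU minus the 28 bytes of IP and ICMP headers.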

Security Issues

  • Update to the libxml2 library
    The ESXi userworld libxml2 library is updated to version 2.9.2.
  • Update to the ESXi userworld OpenSSL library
    The ESXi userworld OpenSSL library is updated to version 1.0.1m.
  • Update to the libPNG library
    The libPNG library is updated to libpng-1.6.16.

Server Configuration Issues

  • Serial over LAN Console Redirection might not function properly
    A PCIe serial port redirection card might not function properly when connected to an Industry Standard Architecture (ISA) Interrupt Request (IRQ) (0-15 decimal) on an Advanced Programmable Interrupt Controller (APIC), because its interrupts cannot be received by the CPU. To allow these and other PCI devices connected to ISA IRQs to function, the VMkernel now allows level-triggered interrupts on ISA IRQs.

    This issue is resolved in this release.
  • Esxtop might incorrectly display CPU utilization as 100%
    The PCPU UTIL/CORE UTIL counters in the esxtop utility incorrectly display CPU utilization as 100% if PcpuMigrateIdlePcpus is set to 0.

    This issue is resolved in this release.
  • Unknown(1) status reported when querying Fibre Channel Host Bus Adapters
    After you upgrade the ESXi host from ESXi 5.1 to 5.5 and import the latest MIB module, third-party monitoring software returns an "unknown(1)" status when querying Fibre Channel (FC) Host Bus Adapters (HBA).

    This issue is resolved in this release.
  • Host gateway is deleted and compliance failures might occur when an existing ESXi host profile is reapplied to a stateful ESXi host
    When an existing ESXi host profile is applied to a newly installed ESXi 5.5 host, the profile compliance status might show as noncompliant. This happens when the host profile was created from a host with a VXLAN interface configured; the compliance test against the previously created host profile might fail with an error message similar to the following:

    IP route configuration doesn't match the specification

    This issue is resolved in this release.
  • Purple diagnostic screen with Page Fault exception displayed in a nested ESXi environment
    In a nested ESXi environment, implementation of CpuSchedAfterSwitch() results in a race condition in the scheduler code and a purple diagnostic screen with Page Fault exception is displayed.

    This issue is resolved in this release.
  • iSCSI initiator name allowed when enabling software iSCSI using esxcli
    You can now specify an iSCSI initiator name to the esxcli iscsi software set command.
  • Virtual machine might not display a warning message when the CPU is not fully reserved
    When you create a virtual machine with sched.cpu.latencySensitivity set to high and power it on, the exclusive affinity for the vCPUs might not get enabled if the VM does not have a full CPU reservation.

    In earlier releases, the VM did not display a warning message when the CPU is not fully reserved. For more information, see Knowledge Base article 2087525.

    This issue is resolved in this release.
  • SNMPD might start automatically after ESXi host upgrade
    The SNMPD might start automatically after you upgrade the ESXi host to 5.5 Update 2.

    This issue is resolved in this release.
  • Host profiles become non-compliant with simple change to SNMP syscontact or syslocation
    Host Profiles become non-compliant with a simple change to SNMP syscontact or syslocation. The issue occurs as the SNMP host profile plugin applies only a single value to all hosts attached to the host profile. An error message similar to the following might be displayed:

    SNMP Agent Configuration differs

    This issue is resolved in this release by enabling per-host value settings for certain parameters such as syslocation, syscontact, v3targets, v3users, and engineid.
  • Attempts to create a FIFO and write data on it might result in a purple diagnostic screen
    When you create a FIFO and attempt to write data to /tmp/dpafifo, a purple diagnostic screen might be displayed under certain conditions.

    This issue is resolved in this release.
  • Attempts to reboot Windows 8 and Windows Server 2012 virtual machines on ESXi hosts might fail
    After you reboot, Windows 8 and Windows Server 2012 virtual machines might become unresponsive when the Microsoft Windows boot splash screen appears. For more information, see Knowledge Base article 2092807.

    This issue is resolved in this release.
  • Setting a CPU limit on a uni-processor virtual machine might decrease overall ESXi utilization
    When you set the CPU limit of a uni-processor virtual machine, the overall ESXi utilization might decrease due to a defect in the ESXi scheduler. This happens because the ESXi scheduler considers CPU-limited VMs as runnable (even when they are not running) while making CPU load estimations, which leads to incorrect load balancing decisions.

    For details, see Knowledge Base article 2096897.

    This issue is resolved in this release.

Supported Hardware Issues

  • Power usage and power cap value missing in esxtop command
    On Lenovo systems, the value of power usage and power cap is not available in the esxtop command.

    This issue is resolved in this release.

Storage Issues

  • During a High Availability failover or a host crash, the .vswp files of powered ON VMs on that host might be left behind on the storage
    During a High Availability failover or host crash, the .vswp files of powered ON virtual machines on that host might be left behind on the storage. When many such failovers or crashes occur, the storage capacity might become full.

    This issue is resolved in this release.
  • False PE change message might be displayed in the VMkernel log file when you rescan a VMFS datastore with multiple extents
    When you rescan a VMFS datastore with multiple extents, the following log message might be written in the VMkernel log even without any issues from storage connectivity:

    Number of PEs for volume changed from 3 to 1. A VMFS volume rescan may be needed to use this volume.

    This issue is resolved in this release.
  • During transient error conditions, I/O to a device might repeatedly fail and not failover to an alternate working path
    During transient error conditions such as BUS BUSY, QFULL, HOST ABORTS, and HOST RETRY, commands might be retried repeatedly on the current path without failing over to another path, even after a reasonable amount of time.

    This issue is resolved in this release. If such a transient error occurs and the path is still busy after a couple of retries, the path state is now changed to DEAD. As a result, a failover is triggered and an alternate working path to the device is used to send I/O.
  • Attempts to get the block map of offline storage might cause the hostd service to fail
    The hostd service might fail on an ESXi 5.x host when the acquireLeaseExt API is executed on a snapshot disk that has gone offline. The snapshot disk might be on an extent that has gone offline, and the API caller might be a third-party backup solution. An error message similar to the following is displayed in vmkernel.log:

    cpu4:4739)LVM: 11729: Some trailing extents missing (498, 696).

    This issue is resolved in this release.
  • ESXi 5.5 host might stop responding with a purple diagnostic screen during collection of vm-support log bundle
    When any inbox or third-party drivers do not have their SCSI transport-specific interfaces defined, the ESXi host might stop responding and display a purple diagnostic screen. The issue occurs during collection of vm-support log bundles or when you run I/O Device Management (IODM) Command-Line Interfaces (CLI) such as:

    • esxcli storage san sas list

    • esxcli storage san sas stats get


    This issue is resolved in this release.
  • Attempts to expand VMFS volumes beyond 16 TB might not succeed in certain scenarios
    An ESXi host might fail when you attempt to expand a VMFS5 datastore beyond 16 TB. Error messages similar to the following are written to the vmkernel.log file:

    cpu38:34276)LVM: 2907: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu38:34276)LVM: 2907: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)
    cpu47:34276)LVM: 11172: LVM device naa.600000e00d280000002800c000010000:1 successfully expanded (new size: 31314089590272)
    cpu47:34276)Vol3: 661: Unable to register file system ds02 for APD timeout notifications: Already exists
    cpu47:34276)LVM: 7877: Using all available space (15657303277568).
    cpu7:34276)LVM: 7785: Error adding space (0) on device naa.600000e00d280000002800c000010000:1 to volume 52f05483-52ea4568-ce0e-901b0e0cd0f0: No space left on device
    cpu7:34276)LVM: 5424: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol 52f05483-52ea4568-ce0e-901b0e0cd0f0: Limit exceeded
    cpu7:34276)LVM: 7133: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded
    cpu7:34276)LVM: 7805: LVMProbeDevice failed for device naa.600000e00d280000002800c000010000:1: Limit exceeded
    cpu32:38063)<3>ata1.00: bad CDB len=16, scsi_op=0x9e, max=12
    cpu30:38063)LVM: 5424: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol 52f05483-52ea4568-ce0e-901b0e0cd0f0: Limit exceeded
    cpu30:38063)LVM: 7133: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded

    This issue is resolved in this release.
  • ESXi host might fail with a purple diagnostic screen when multiple vSCSI filters are attached to a VM disk
    An ESXi 5.5 host might fail with a purple diagnostic screen similar to the following when multiple vSCSI filters are attached to a VM disk.

    cpu24:103492 opID=nnnnnnnn)@BlueScreen: #PF Exception 14 in world 103492:hostd-worker IP 0xnnnnnnnnnnnn addr 0x30
    PTEs:0xnnnnnnnnnn;0xnnnnnnnnnn;0x0;
    cpu24:103492 opID=nnnnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 21:06:32:38.296
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_GetFilterPrivateData@vmkernel#nover+0x1 stack: 0x4136c7d
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_IssueInternalCommand@vmkernel#nover+0xc3 stack: 0x410961
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FileSyncRead@<None>#<None>+0xb1 stack: 0x0
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_DigestRecompute@<None>#<None>+0x291 stack: 0x1391
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FilterDigestRecompute@<None>#<None>+0x36 stack: 0x20
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x322 stack: 0x411424b18120
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0xef stack: 0x41245111df10
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0x243 stack: 0x41245111df20
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0x275c3918
    cpu24:103492 opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry@vmkernel#nover+0x64 stack: 0x0

    This issue is resolved in this release.
  • ESXi host stops responding and loses connection to vCenter Server during storage hiccups on Non-ATS VMFS datastores
    An ESXi host might stop responding and the virtual machines become inaccessible. The ESXi host might also lose connection to vCenter Server due to a deadlock during storage hiccups on non-ATS VMFS datastores.

    This issue is resolved in this release.
  • ESXi host gets registered with an incorrect IQN on a target management software
    Unisphere Storage Management software registers the given initiator IQN when software iSCSI is first enabled. During stateless boot, the registered IQN does not change to the name defined in the host profile. You must manually remove the initiators from the array and add them again under the new IQN.

    This issue is resolved by adding a new parameter to the software iSCSI enable command so that Unisphere registers the initiator under the name defined in the host profile. The command line to set the IQN during software iSCSI enablement is:

    esxcli iscsi software set --enabled=true --name iqn.xyz
  • vSphere Replication sync might fail due to change in source datastore name
    If you rename a datastore on which replication source virtual machines are running, replication sync operations for these virtual machines fail with an error message similar to the following:

    VRM Server runtime error. Please check the documentation for any troubleshooting information.
    The detailed exception is: 'Invalid datastore format '<Datastore Name>'

    This issue is resolved in this release.
  • Attempts to unmount NFS Datastore might fail
    Attempts to unmount an NFS datastore might fail because NFS I/Os can become stuck due to connectivity issues during NFS LOCK LOST errors. An error message similar to the following is displayed:

    cpu23:xxxxx opID=xxxxxabf)WARNING: NFS: 1985: datastore1 has open files, cannot be unmounted

    This issue is resolved in this release.

Upgrade and Installation Issues

  • Error message observed on the boot screen when ESXi 5.5 host boots from vSphere Auto Deploy Stateless Caching
    An error message similar to the following, with tracebacks, is observed on the boot screen when an ESXi 5.5 host boots from Auto Deploy Stateless Caching. The error is caused by an unexpectedly short message of fewer than four characters in the syslog network.py script.

    IndexError: string index out of range

    This issue is resolved in this release.
  • Attempts to install or upgrade VMware Tools on a Solaris 10 Update 3 virtual machine might fail
    Attempts to install or upgrade VMware Tools on a Solaris 10 Update 3 virtual machine might fail with the following error message:

    Detected X version 6.9
    Could not read /usr/lib/vmware-tools/configurator/XOrg/7.0/vmwlegacy_drv.so Execution aborted.

    This issue occurs if the vmware-config-tools.pl script copies the vmwlegacy_drv.so file, which should not be used in Xorg 6.9.
  • Keyboard layout option for DCUI and host profile user interface might be incorrectly displayed as Czechoslovakian
    The keyboard layout option for the Direct Console User Interface (DCUI) and host profile user interface might incorrectly appear as Czechoslovakian. This option is displayed during ESXi installation and also in the DCUI after installation.

    This issue is resolved in this release by renaming the keyboard layout option to Czech.
  • Option to retain tools.conf file available by default
    When you upgrade VMware Tools in a 64-bit Windows guest operating system, the tools.conf file gets removed automatically. Beginning with the ESXi 5.5 Update 3 release, the tools.conf file is retained by default.
  • Guest Operating System might fail on reboot after install, upgrade, or uninstall of VMware Tools
    When you power off a virtual machine immediately after an install, upgrade, or uninstall of VMware Tools in a Linux environment (RHEL or CentOS 6), the guest OS might fail during the next reboot due to a corrupted RAMDISK image file. The guest OS reports an error similar to the following:

    RAMDISK: incomplete write (31522 != 32768)
    write error
    Kernel panic - not syncing : VFS: Unable to mount root fs on unknown-block(0,0)


    This release ensures that the initramfs file is completely created during an install, upgrade, or uninstall of VMware Tools.

    A guest OS with a corrupted RAMDISK image file can be rescued to a complete boot state. For more information, see Knowledge Base article 2086520.

    This issue is resolved in this release.
  • Applying host profile with stateless caching enabled on stateless ESXi host might take a long time to complete
    Applying a host profile to a stateless ESXi host with a large number of storage LUNs might take a long time to reboot when you enable stateless caching with esx as the first-disk argument. This happens when you manually apply the host profile or during the reboot of the host.

    This issue is resolved in this release.
  • VIB stage operation might cause VIB installation or configuration change to be lost after an ESXi host reboot
    When some VIBs are installed on the system, esxupdate constructs a new image in /altbootbank and updates the /altbootbank boot.cfg bootstate. When a live-installable VIB is installed, the system saves the configuration change to /altbootbank. The stage operation deletes the contents of /altbootbank unless you perform a remediate operation after the stage operation. The VIB installation might be lost if you reboot the host after a stage operation.

    This issue is resolved in this release.

Virtual SAN Issues

  • Virtual SAN cluster check might fail due to unexpected network partitioning in the cluster
    Virtual SAN cluster check might fail due to an unexpected network partitioning where the IGMP v3 query is not reported if the system is in V2 mode.

    This issue is resolved in this release.
  • Virtual SAN on high-latency disks might cause I/O backlogs and the cluster to become unresponsive
    Virtual SAN does not gracefully handle extremely high-latency disks that are about to die. Such a dying disk might cause I/O backlogs, and the Virtual SAN cluster nodes might become unresponsive in vCenter Server.

    This issue is resolved in this release with a new feature, Dying Disk Handling (DDH), which provides a latency monitoring framework in the kernel, a daemon to detect high-latency periods, and a mechanism to unmount individual disks and disk groups.
  • Improvement in the Virtual SAN resynchronization operation
    The Virtual SAN component resynchronization operation might stall or become very slow. This release introduces component-based congestion control to improve the resynchronization operation and Virtual SAN cluster stability.

vCenter Server and vSphere Web Client Issues

  • The Summary tab might display incorrect provisioned space values for virtual machines and NFS or NAS datastores on VAAI-enabled hosts
    When a virtual disk in Thick Provision Lazy Zeroed format is created on a VAAI-supported NAS on a VAAI-enabled ESXi host, the provisioned space for the corresponding virtual machine and datastore might be displayed incorrectly.

    This issue is resolved in this release.

Virtual Machine Management Issues

  • Attempts to add USB device through vSphere Client or the vSphere Web Client might fail
    Attempts to add USB devices through the vSphere Client and the vSphere Web Client might fail if the Intel USB 3.0 driver is used.

    This issue is resolved in this release.
  • Taking a quiesced snapshot of a virtual machine might leave the currentSnapshot field unset in the MOB
    After you create a quiesced snapshot and browse the Managed Object Browser (MOB) of the virtual machine, the value of the currentSnapshot field is observed to be unset. To view currentSnapshot, navigate to Content -> root folder -> datacenter -> vmFolder -> vmname -> snapshot -> currentSnapshot.

    This issue is resolved in this release.
  • Multiple opID tagging log messages are rapidly logged in the VMkernel log
    The helper world opID tagging generates a large number of log messages that are rapidly written to the VMkernel log, filling it up. Log messages similar to the following are written to the VMkernel log:

    cpu16:nnnnn)World: nnnnn: VC opID hostd-60f4 maps to vmkernel opID nnnnnnnn
    cpu16:nnnnn)World: nnnnn: VC opID HB-host-nnn@nnn-nnnnnnn-nn maps to vmkernel opID nnnnnnnn
    cpu8:nnnnn)World: nnnnn: VC opID SWI-nnnnnnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu22:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu14:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn
    cpu4:nnnnn)World: nnnnn: VC opID hostd-nnnn maps to vmkernel opID nnnnnnnn

    This issue is resolved in this release.
  • Support for USB 3.0
    Support for USB 3.0 has been added in this release, currently only for Apple Mac Pro.

High Availability and Fault Tolerance Issues

vMotion and Storage vMotion Issues

  • Unable to perform Fast Suspend and Resume or Storage vMotion on preallocated virtual machines
    When you perform Fast Suspend and Resume (FSR) or Storage vMotion on preallocated virtual machines, the operation might fail as the reservation validation fails during reservation transfer from the source to the destination virtual machine.

    This issue is resolved in this release.
  • Storage vMotion fails on a virtual machine
    Performing Storage vMotion on a virtual machine might fail if you have configured local host swap and set the value of checkpoint.cptConfigName in the VMX file. An error message similar to the following might be displayed:

    xxxx-xx-xxT00:xx:xx.808Z| vmx| I120: VMXVmdbVmVmxMigrateGetParam: type: 2 srcIp=<127.0.0.1> dstIp=<127.0.0.1> mid=xxxxxxxxxxxxx uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx priority=none checksumMemory=no maxDowntime=0 encrypted=0 resumeDuringPageIn=no latencyAware=no diskOpFile=
    <snip>
    xxxx-xx-xxT00:xx:xx.812Z| vmx| I120: VMXVmdb_SetMigrationHostLogState: hostlog state transits to failure for migrate 'to' mid xxxxxxxxxxxxxxxx


    This issue is resolved in this release.
  • Changed Block Tracking (CBT) is reset for virtual RDM disks during cold migration
    Cold migration between different datastores does not support CBT reset for the virtual Raw Device Mapping (RDM) disks.

    This issue is resolved in this release.

VMware Tools Issues

  • Attempts to upgrade VMware Tools on a Windows 2000 virtual machine might fail
    Attempts to upgrade VMware Tools on a Windows 2000 virtual machine might fail with an error message similar to the following written to the vmmsi.log file:

    Invoking remote custom action. DLL: C:\WINNT\Installer\MSI12.tmp, Entrypoint: VMRun
    VM_CacheMod. Return value 3.
    PROPERTY CHANGE: Deleting RESUME property. Its current value is '1'.
    INSTALL. Return value 3.

    This issue is resolved in this release.
  • Some of the drivers might not work as expected on Solaris 11 virtual machine
    On an ESXi 5.5 host, some of the drivers installed on Solaris 11 guest operating system might be from Solaris 10. As a result, the drivers might not work as expected.

    This issue is resolved in this release.
  • Attempts to configure VMware Tools with new kernel might truncate the driver list in add_drivers entry
    When you attempt to configure VMware Tools with a new kernel using the /usr/bin/vmware-config-tools.pl -k <kernel version> script after the kernel has been updated with Dracut, the driver list in the add_drivers entry of the /etc/dracut.conf.d/vmware-tools.conf file gets truncated. This issue occurs when VMware Tools drivers are upstreamed in the kernel.

    This issue is resolved in this release.
  • Unable to open telnet on Windows 8 or Windows Server 2012 guest operating system after installing VMware Tools
    After installing VMware Tools on a Windows 8 or Windows Server 2012 guest operating system, attempts to open telnet using the start telnet://xx.xx.xx.xx command fail with the following error message:

    Make sure the virtual machine's configuration allows the guest to open host applications

    This issue is resolved in this release.
  • Guest operating system event viewer displays warning messages after you install VMware Tools
    After you install VMware Tools, if you attempt an RDP connection to a Windows virtual machine, some of the plugins might display warning messages in the Windows event log. The warning messages indicate a failure to send remote procedure calls to the host.

    This issue is resolved in this release.
  • VMware Tools service might fail on a Linux virtual machine during shutdown
    On a Linux virtual machine, the VMware Tools service, vmtoolsd, might fail when you shut down the guest operating system.

    This issue is resolved in this release.
  • VMware Tools might fail to automatically upgrade during the first power-on operation of the virtual machine
    When a virtual machine is deployed or cloned with guest customization and the VMware Tools Upgrade Policy is set to allow the virtual machine to automatically upgrade VMware Tools at next power-on, VMware Tools might fail to automatically upgrade during the first power-on operation of the virtual machine.

    This issue is resolved in this release.
  • Quiescing operations might cause a Windows virtual machine to panic
    Attempts to perform a quiesced snapshot of a virtual machine running Microsoft Windows 2008 or later might fail, and the VM might panic with a blue screen and an error message similar to the following:

    A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this Stop error screen restart your computer. If this screen appears again, follow these steps:

    Disable or uninstall any anti-virus, disk defragmentation or backup utilities. Check your hard drive configuration, and check for any updated drivers. Run CHKDSK /F to check for hard drive corruption, and then restart your computer.


    For more information, see Knowledge Base article 2115997.

    This issue is resolved in this release.
  • Virtual machine might fail to respond after a snapshot operation on a Linux VM
    When you attempt to create a quiesced snapshot of a Linux virtual machine, the VM might fail after the snapshot operation and require a reboot. Error messages similar to the following are written to the vmware.log file:

    TZ| vmx| I120: SnapshotVMXTakeSnapshotComplete: done with snapshot 'smvi_UUID': 0
    TZ| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).
    TZ| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    TZ| vmx| I120: Vix: [18631 guestCommands.c:1926]: Error VIX_E_TOOLS_NOT_RUNNING in
    MAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest


    For further details, see Knowledge Base article 2116120.

    This issue is resolved in this release.
  • New Issue Attempts to perform snapshot consolidation might fail with the error: Unexpected signal: 11
    Snapshot consolidation or deletion might cause virtual machines running on VMware ESXi 5.5 Update 3 hosts to fail with the error: Unexpected signal: 11. A log message similar to the following is written to the vmware.log file:

    [YYYY-MM-DD] <time>Z| vcpu-0| I120: SNAPSHOT: SnapshotDiskTreeFind: Detected node change from 'scsiX:X' to ''.

    For further details, see Knowledge Base article 2133118.

    This issue is resolved in this release.

Known Issues

The known issues existing in ESXi 5.5 are grouped as follows:

New known issues documented in this release are highlighted as New Issue.

Installation and Upgrade Issues

  • New Issue The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after installing the latest VMware Tools package. The issue occurs if your glibc version is older than version 2.5, as in SLES 10 SP4.

    Workaround: Upgrade the Linux glibc to version 2.5 or later.

  • Attempts to get all image profiles might fail while running the Get-EsxImageProfile command in vSphere PowerCLI
    When you run the Get-EsxImageProfile command using vSphere PowerCLI to get all image profiles, an error similar to the following is displayed:

    PowerCLI C:\Windows\system32> Get-EsxImageProfile
    Get-EsxImageProfile : The parameter 'name' cannot be an empty string.
    Parameter name: name
    At line:1 char:20
    + Get-EsxImageProfile <<<<
    + CategoryInfo : NotSpecified: (:) [Get-EsxImageProfile], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException,VMware.ImageBuilder.Commands.GetProfiles


    Workaround: Run the Get-EsxImageProfile -name "ESXi-5.x*" command, which includes the -name option, to display all image profiles created during the PowerCLI session.

    For example, running the command Get-EsxImageProfile -name "ESXi-5.5.*" displays all 5.5 image profiles similar to the following:

    PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-EsxImageProfile -name "ESXi-5.5.*"

    Name Vendor Last Modified Acceptance Level
    ---- ------ ------------- ----------------
    ESXi-5.5.0-20140701001s-no-... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140302001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140604001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140401020s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20131201001s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
  • Simple Install fails on Windows Server 2012
    Simple Install fails on Windows Server 2012 if the operating system is configured to use a DHCP IP address.

    Workaround: Configure the Windows Server 2012 system to use a static IP address.

  • If you use preserve VMFS with Auto Deploy Stateless Caching or Auto Deploy Stateful Installs, no core dump partition is created
    When you use Auto Deploy for Stateless Caching or Stateful Install on a blank disk, an MSDOS partition table is created. However, no core dump partition is created.

    Workaround: When you enable the Stateless Caching or Stateful Install host profile option, select Overwrite VMFS, even when you install on a blank disk. When you do so, a 2.5GB coredump partition is created.

  • During scripted installation, ESXi is installed on an SSD even though the --ignoressd option is used with the installorupgrade command
    In ESXi 5.5, the --ignoressd option is not supported with the installorupgrade command. If you use the --ignoressd option with the installorupgrade command, the installer displays a warning that this is an invalid combination. The installer continues to install ESXi on the SSD instead of stopping the installation and displaying an error message.

    Workaround: To use the --ignoressd option in a scripted installation of ESXi, use the install command instead of the installorupgrade command.

  • Delay in Auto Deploy cache purging might apply a host profile that has been deleted
    After you delete a host profile, it is not immediately purged from the Auto Deploy cache. As long as the host profile is persisted in the cache, Auto Deploy continues to apply the host profile. Any rules that apply the profile fail only after the profile is purged from the cache.

    Workaround: You can determine whether any rules use deleted host profiles by using the Get-DeployRuleSet PowerCLI cmdlet. The cmdlet shows the string deleted in the rule's itemlist. You can then run the Remove-DeployRule cmdlet to remove the rule.
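
    For example, a minimal PowerCLI sketch (the rule name MyRule is hypothetical):

    PowerCLI> Get-DeployRuleSet
    PowerCLI> Remove-DeployRule -DeployRule (Get-DeployRule -Name "MyRule")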

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Select a different disk to use for stateless caching, or remove the ESX software from the disk. If you remove the ESX software, it becomes unavailable.

  • Installing or booting ESXi version 5.5.0 fails on servers from Oracle America (Sun) vendors
    When you perform a fresh ESXi version 5.5.0 installation or boot an existing ESXi version 5.5.0 installation on servers from Oracle America (Sun) vendors, the server console displays a blank screen during the installation process or when the existing ESXi 5.5.0 build boots. This happens because servers from Oracle America (Sun) vendors have a HEADLESS flag set in the ACPI FADT table, even though they are not headless platforms.

    Workaround: When you install or boot ESXi 5.5.0, pass the boot option ignoreHeadless="TRUE".
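
    For example, a minimal sketch of a one-time and a persistent setting (the boot-prompt key and the esxcfg-advcfg kernel-option syntax are assumptions based on related KB guidance; verify them for your build):

    # One-time: at the boot screen, press Shift+O and append to the boot options:
    ignoreHeadless="TRUE"

    # Persistent: from the ESXi Shell after the host is running:
    esxcfg-advcfg --set-kernel "TRUE" ignoreHeadless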

  • If you use ESXCLI commands to upgrade an ESXi host with less than 4GB physical RAM, the upgrade succeeds, but some ESXi operations fail upon reboot
    ESXi 5.5 requires a minimum of 4GB of physical RAM. The ESXCLI command-line interface does not perform a pre-upgrade check for the required 4GB of memory. You successfully upgrade a host with insufficient memory with ESXCLI, but when you boot the upgraded ESXi 5.5 host with less than 4GB RAM, some operations might fail.

    Workaround: None. Verify that the ESXi host has at least 4GB of physical RAM before you upgrade to version 5.5.

  • After upgrade from vCenter Server Appliance 5.0.x to 5.5, vCenter Server fails to start if an external vCenter Single Sign-On is used
    If the user chooses to use an external vCenter Single Sign-On instance while upgrading the vCenter Server Appliance from 5.0.x to 5.5, the vCenter Server fails to start after the upgrade. In the appliance management interface, the vCenter Single Sign-On is listed as not configured.

    Workaround: Perform the following steps:

    1. In a Web browser, open the vCenter Server Appliance management interface (https://appliance-address:5480).
    2. On the vCenter Server/Summary page, click the Stop Server button.
    3. On the vCenter Server/SSO page, complete the form with the appropriate settings, and click Save Settings.
    4. Return to the Summary page and click Start Server.

  • When you use ESXCLI to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the vMotion and Fault Tolerance Logging (FT Logging) settings of any VMKernel port group are lost after the upgrade
    If you use the command esxcli software profile update <options> to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the upgrade succeeds, but the vMotion and FT Logging settings of any VMkernel port group are lost. As a result, vMotion and FT Logging are restored to the default setting (disabled).

    Workaround: Perform an interactive or scripted upgrade, or use vSphere Update Manager to upgrade hosts. If you use the esxcli command, apply vMotion and FT Logging settings manually to the affected VMkernel port group after the upgrade.
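
    For example, a minimal sketch that re-tags a VMkernel interface from the ESXi Shell (the interface name vmk1 is hypothetical, and the tag names are assumed values of the esxcli tagging scheme):

    esxcli network ip interface tag add -i vmk1 -t VMotion
    esxcli network ip interface tag add -i vmk1 -t faultToleranceLogging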

  • When you upgrade vSphere 5.0.x or earlier to version 5.5, system resource allocation values that were set manually are reset to the default value
    In vSphere 5.0.x and earlier, you modify settings in the system resource allocation user interface as a temporary workaround. You cannot reset the value for these settings to the default without completely reinstalling ESXi. In vSphere 5.1 and later, the system behavior changes, so that preserving custom system resource allocation settings might result in values that are not safe to use. The upgrade resets all such values.

    Workaround: None.

  • IPv6 settings of virtual NIC vmk0 are not retained after upgrade from ESX 4.x to ESXi 5.5
    When you upgrade an ESX 4.x host with IPv6 enabled to ESXi 5.5 by using the --forcemigrate option, the IPv6 address of virtual NIC vmk0 is not retained after the upgrade.

    Workaround: None.

Networking Issues

  • Unable to use PCNet32 network adapter with NSX opaque network
    When the PCNet32 flexible network adapter is configured with NSX opaque network backing, the adapter disconnects while the VM is powering on.

    Workaround: None

  • Upgrading to ESXi 5.5 might change the IGMP configuration of TCP/IP stack for multicast group management
    The default IGMP version of the management interfaces is changed from IGMP V2 to IGMP V3 for ESXi 5.5 hosts for multicast group management. As a result, when you upgrade to ESXi 5.5, the management interface might revert to IGMP V2 from IGMP V3 if it receives an IGMP query of a previous version, and you might notice IGMP version mismatch error messages.

    Workaround: Edit the default IGMP version by modifying the TCP/IP IGMP rejoin interval in the Advanced Configuration option.
  • Static routes associated with vmknic interfaces and dynamic IP addresses might fail to appear after reboot
    After you reboot the host, static routes that are associated with VMkernel network interface (vmknic) and dynamic IP address might fail to appear.
    This issue occurs due to a race condition between DHCP client and restore routes command. The DHCP client might not finish acquiring an IP address for vmknics when the host attempts to restore custom routes during the reboot process. As a result, the gateway might not be set up and the routes are not restored.

    Workaround: Run the esxcfg-route -r command to restore the routes manually.
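
    For example, you can list the VMkernel routes before and after restoring them (a minimal sketch):

    esxcfg-route -l    # list the current routes
    esxcfg-route -r    # restore the persisted routes
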
  • An ESXi host stops responding after being added to vCenter Server by its IPv6 address
    When you add an ESXi host to vCenter Server by IPv6 link-local address of the form fe80::/64, within a short time the host name becomes dimmed and the host stops responding to vCenter Server.

    Workaround: Use a valid IPv6 address that is not a link-local address.

  • The vSphere Web Client lets you configure more virtual functions than are supported by the physical NIC and does not display an error message
    In the SR-IOV settings of a physical adapter, you can configure more virtual functions than are supported by the adapter. For example, you can configure 100 virtual functions on a NIC that supports only 23, and no error message appears. A message prompts you to reboot the host so that the SR-IOV settings are applied. After the host reboots, the NIC is configured with as many virtual functions as the adapter supports, or 23 in this example. The message that prompts you to reboot the host persists when it should not appear.

    Workaround: None

  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might not start
    When SR-IOV is enabled on an ESXi 5.1 or later host with Intel ixgbe NICs, if several virtual functions are enabled in the environment, some virtual machines might fail to start.
    The vmware.log file contains messages similar to the following:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine before starting it.

  • On an Emulex BladeEngine 3 physical network adapter, a virtual machine network adapter backed by a virtual function cannot reach a VMkernel adapter that uses the physical function as an uplink
    Traffic does not flow between a virtual function and its physical function. For example, on a switch backed by the physical function, a virtual machine that uses a virtual function on the same port cannot contact a VMkernel adapter on the same switch. This is a known issue of the Emulex BladeEngine 3 physical adapters. For information, contact Emulex.

    Workaround: Disable the native driver for Emulex BladeEngine 3 devices on the host. For more information, see VMware KB 2044993.
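
    For example, a hedged sketch from the ESXi Shell (the module name elxnet is an assumption for the Emulex BladeEngine 3 native driver; confirm the module name in VMware KB 2044993 before disabling it):

    esxcli system module set --enabled=false --module=elxnet
    # Reboot the host for the change to take effect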

  • The ESXi Dump Collector fails to send the ESXi core file to the remote server
    The ESXi Dump Collector fails to send the ESXi core file if the VMkernel adapter that handles the traffic of the dump collector is configured to a distributed port group that has a link aggregation group (LAG) set as the active uplink. An LACP port channel is configured on the physical switch.

    Workaround: Perform one of the following workarounds:

    • Use a vSphere Standard Switch to configure the VMkernel adapter that handles the traffic for the ESXi Dump Collector with the remote server.
    • Use standalone uplinks to handle the traffic for the distributed port group where the VMkernel adapter is configured.
  • If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on a host by using the vSphere Client, the change is not saved, even after a reboot
    If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on an ESXi 5.5 host by using the vSphere Client, the number of ports does not change even after you reboot the host.

    When a host that runs ESXi 5.5 is rebooted, it dynamically scales up or down the ports of virtual switches. The number of ports is based on the number of virtual machines that the host can run. You do not have to configure the number of switch ports on such hosts.

    Workaround: None in the vSphere Client.

Server Configuration Issues

  • New Issue NIC hardware might stop responding with a hardware error message
    The NIC hardware might occasionally stop responding under certain circumstances, with the following error message in the driver logs:

    Detected Hardware Unit Hang

    The issue is observed with some newer e1000e devices, such as 82579, i217, i218, and i219.

    Workaround: The NIC hardware resets itself after the issue occurs.

  • Menu navigation problem is experienced when the Direct Console User Interface is accessed from a serial console
    When the Direct Console User Interface (DCUI) is accessed from a serial console, the Up and Down arrow keys do not work when navigating the menu, and the user is forcibly logged out of the DCUI configuration screen.

    Workaround: Stop the DCUI process. The DCUI process will be restarted automatically.

  • Host profiles might incorrectly appear as compliant after ESXi hosts are upgraded to 5.5 Update 2 and the host configuration is changed
    If an ESXi host that is compliant with a host profile is upgraded to ESXi 5.5 Update 2, the host configuration is then changed, and you re-check the compliance of the host with the host profile, the profile is incorrectly reported as compliant.

    Workaround:
    • In the vSphere Client, navigate to the host profile that has the issue and run Update Profile From Reference Host.
    • In the vSphere Web Client, navigate to the host profile that has the issue, click Copy settings from host, select the host from which you want to copy the configuration settings, and click OK.
  • Host Profile remediation fails with vSphere Distributed Switch
    Remediation errors might occur when applying a Host Profile with a vSphere Distributed Switch and a virtual machine with Fault Tolerance is in a powered off state on a host that uses the distributed switch in that Host Profile.

    Workaround: Move the powered off virtual machines to another host in order for the Host Profile to succeed.

  • Host profile receives firewall settings compliance errors when you apply ESX 4.0 or ESX 4.1 profile to ESXi 5.5.x host
    If you extract a host profile from an ESX 4.0 or ESX 4.1 host and attempt to apply it to an ESXi 5.5.x host, the profile remediation succeeds, but the compliance check reports firewall settings errors that include the following:

    Ruleset LDAP not found
    Ruleset LDAPS not found
    Ruleset TSM not found
    Ruleset VCB not found
    Ruleset activeDirectorKerberos not found

    Workaround: No workaround is required. This is expected because the firewall settings for an ESX 4.0 or ESX 4.1 host are different from those for an ESXi 5.5.x host.

  • Changing BIOS device settings for an ESXi host might result in invalid device names
    Changing a BIOS device setting on an ESXi host might result in invalid device names if the change causes a shift in the <segment:bus:device:function> values assigned to devices. For example, enabling a previously-disabled integrated NIC might shift the <segment:bus:device:function> values assigned to other PCI devices, causing ESXi to change the names assigned to these NICs. Unlike previous versions of ESXi, ESXi 5.5 attempts to preserve devices names through <segment:bus:device:function> changes if the host BIOS provides specific device location information. Due to a bug in this feature, invalid names such as vmhba1 and vmnic32 are sometimes generated.

    Workaround: Rebooting the ESXi host once or twice might clear the invalid device names and restore the original names. Do not run an ESXi host with invalid device names in production.

Storage Issues

  • ESXi hosts with HBA drivers might stop responding when the VMFS heartbeats to the datastores timeout
    ESXi hosts with HBA drivers might stop responding when the VMFS heartbeats to the datastores timeout with messages similar to the following:

    2014-05-12T13:34:00.639Z cpu8:1416436)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L10" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x5 0x20 0x0.
    2014-05-12T13:34:05.637Z cpu0:33038)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L4" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    This issue occurs with the HBA driver when there is high disk I/O on the datastore connected to the ESXi host and multipathing is enabled at the target level instead of the HBA level.

    Workaround: Replace the HBA driver with the latest async HBA driver.
  • Attempts to perform live storage vMotion of virtual machines with RDM disks might fail
    Storage vMotion of virtual machines with RDM disks might fail, and the virtual machines might appear in the powered off state. Attempts to power on the virtual machine fail with the following error:

    Failed to lock the file

    Workaround: None.
  • Renamed tags appear as missing in the Edit VM Storage Policy wizard
    A virtual machine storage policy can include rules based on datastore tags. If you rename a tag, the storage policy that references this tag does not automatically update the tag and shows it as missing.

    Workaround: Remove the tag marked as missing from the virtual machine storage policy and then add the renamed tag. Reapply the storage policy to all out-of-date entities.

  • A virtual machine cannot be powered on when the Flash Read Cache block size is set to 16KB, 256KB, 512KB, or 1024KB
    A virtual machine configured with Flash Read Cache and a block size of 16KB, 256KB, 512KB, or 1024KB cannot be powered on. Flash Read Cache supports a minimum cache size of 4MB and maximum of 200GB, and a minimum block size of 4KB and maximum block size of 1MB. When you power on a virtual machine, the operation fails and the following messages appear:

    An error was received from the ESX host while powering on VM.

    Failed to start the virtual machine.

    Module DiskEarly power on failed.

    Failed to configure disk scsi0:0.

    The virtual machine cannot be powered on with an unconfigured disk. vFlash cache cannot be attached: msg.vflashcache.error.VFC_FAILURE

    Workaround: Configure virtual machine Flash Read Cache size and block size.

    1. Right-click the virtual machine and select Edit Settings.
    2. On the Virtual Hardware tab, expand Hard disk to view the disk options.
    3. Click Advanced next to the Virtual Flash Read Cache field.
    4. Increase the cache size reservation or decrease the block size.
    5. Click OK to save your changes.
  • A custom extension of a saved resource pool tree file cannot be loaded in the vSphere Web Client
    When you disable DRS in the vSphere Web Client, you are prompted to save the resource pool structure so that it can be reloaded in the future. The default extension of this file is .snapshot, but you can select a different extension for this file. If the file has a custom extension, it appears as disabled when you try to load it. This behavior is observed only on OS X.

    Workaround: Change the extension to .snapshot to load it in the vSphere Web Client on OS X.

  • DRS error message appears on the host summary page
    The following DRS error message appears on the host summary page:

    Unable to apply DRS resource settings on host. The operation is not allowed in the current state. This can significantly reduce the effectiveness of DRS.

    In some configurations a race condition might result in the creation of an error message in the log that is not meaningful or actionable. This error might occur if a virtual machine is unregistered at the same time that DRS resource settings are applied.

    Workaround: Ignore this error message.

  • Configuring virtual Flash Read Cache for VMDKs larger than 16TB results in an error
    Virtual Flash Read Cache does not support virtual machine disks larger than 16TB. Attempts to configure such disks will fail.

    Workaround: None

  • Virtual machines might power off when the cache size is reconfigured
    If you incorrectly reconfigure the virtual Flash Read Cache on a virtual machine, for example by assigning an invalid value, the virtual machine might power off.

    Workaround: Follow the recommended cache size guidelines in the vSphere Storage documentation.

  • Reconfiguring a virtual machine with virtual Flash Read Cache enabled might fail with the Operation timed out error
    Reconfiguration operations require a significant amount of I/O bandwidth. When you run a heavy load, such operations might time out before they finish. You might also see this behavior if the host has LUNs that are in an all paths down (APD) state.

    Workaround: Fix all host APD states and retry the operation with a smaller I/O load on the LUN and host.

  • DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purpose
    DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purposes.

    Workaround: DRS does not recommend these virtual machines for vMotion except for the following reasons:

    • To evacuate a host that the user has requested to enter maintenance or standby mode.
    • To fix DRS rule violations.
    • Host resource usage is in red state.
    • One or more hosts is overutilized and virtual machine demand is not being met.
      Note: You can optionally set DRS to ignore this reason.
  • Hosts are put in standby when the active memory of virtual machines is low but consumed memory is high
    ESXi 5.5 introduces a change in the default behavior of DPM designed to make the feature less aggressive, which can help prevent performance degradation for virtual machines when active memory is low but consumed memory is high. The DPM metric is X%*IdleConsumedMemory + active memory. The X% variable is adjustable and is set to 25% by default.

    Workaround: You can revert to the aggressive DPM behavior found in earlier releases of ESXi by setting PercentIdleMBInMemDemand=0 in the advanced options.

  • vMotion initiated by DRS might fail
    When DRS recommends vMotion for virtual machines with a virtual Flash Read Cache reservation, vMotion might fail because the memory (RAM) available on the target host is insufficient to manage the Flash Read Cache reservation of the virtual machines.

    Workaround: Follow the Flash Read Cache configuration recommendations documented in vSphere Storage.
    If vMotion fails, perform the following steps:

    1. Reconfigure the block sizes of the virtual machines on the target host and of the incoming virtual machines to reduce the overall VMkernel memory usage on the target host.
    2. Use vMotion to manually migrate the virtual machine to the target host to ensure the condition is resolved.
  • You are unable to view problems that occur during virtual flash configuration of individual SSD devices
    The configuration of virtual flash resources is a task that operates on a list of SSD devices. When the task finishes for all objects, the vSphere Web Client reports it as successful, and you might not be notified of problems with the configuration of individual SSD devices.

    Workaround: Perform one of the following tasks.

    • In the Recent Tasks panel, double-click the completed task.
      Any configuration failures appear in the Related events section of the Task Details dialog box.
    • Alternatively, follow these steps:
      1. Select the host in the inventory.
      2. Click the Monitor tab, and click Events.
  • Unable to obtain SMART information for Micron PCIe SSDs on the ESXi host
    Your attempts to use the esxcli storage core device smart get -d command to display statistics for the Micron PCIe SSD device fail. You get the following error message:
    Error getting Smart Parameters: CANNOT open device

    Workaround: None. In this release, the esxcli storage core device smart command does not support Micron PCIe SSDs.

  • ESXi does not apply the bandwidth limit that is configured for a SCSI virtual disk in the configuration file of a virtual machine
    You configure the bandwidth and throughput limits of a SCSI virtual disk by using a set of parameters in the virtual machine configuration file (.vmx). For example, the configuration file might contain the following limits for a scsi0:0 virtual disk:
    sched.scsi0:0.throughputCap = "80IOPS"
    sched.scsi0:0.bandwidthCap = "10MBps"
    sched.scsi0:0.shares = "normal"

    ESXi does not apply the sched.scsi0:0.bandwidthCap limit to the scsi0:0 virtual disk.

    Workaround: Revert to an earlier version of the disk I/O scheduler by using the vSphere Web Client or the esxcli system settings advanced set command.

    • In the vSphere Web Client, edit the Disk.SchedulerWithReservation parameter in the Advanced System Settings list for the host.
      1. Navigate to the host.
      2. On the Manage tab, select Settings and select Advanced System Settings.
      3. Locate the Disk.SchedulerWithReservation parameter, for example, by using the Filter or Find text boxes.
      4. Click Edit and set the parameter to 0.
      5. Click OK.
    • In the ESXi Shell on the host, run the following console command:
      esxcli system settings advanced set -o /Disk/SchedulerWithReservation -i=0
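
      To confirm the change, you can list the current value of the option (a minimal sketch):
      esxcli system settings advanced list -o /Disk/SchedulerWithReservation
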
  • A virtual machine configured with Flash Read Cache cannot be migrated off a host if there is an error in the cache
    A virtual machine with Flash Read Cache configured might have a migration error if the cache is in an error state and is unusable. This error causes migration of the virtual machine to fail.

    Workaround:

    1. Reconfigure the virtual machine and disable the cache.
    2. Perform the migration.
    3. Re-enable the cache after the virtual machine is migrated.

    Alternatively, power the virtual machine off and then on again to correct the cache error.

  • You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta
    You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta.

    Workaround: This occurs only when you upgrade from ESXi 5.5 Beta to ESXi 5.5. To avoid this problem, install ESXi 5.5 instead of upgrading. If you upgrade from ESXi 5.5 Beta, delete the VFFS volume before you upgrade.

  • Expected latency runtime improvements are not seen when virtual Flash Read Cache is enabled on virtual machines with older Windows and Linux guest operating systems
    Virtual Flash Read Cache provides optimal performance when the cache is sized to match the target working set, and when the guest file systems are aligned to at least a 4KB boundary. The Flash Read Cache filters out misaligned blocks to avoid caching partial blocks within the cache. This behavior is typically seen when virtual Flash Read Cache is configured for VMDKs of virtual machines with Windows XP and Linux distributions earlier than 2.6. In such cases, a low cache hit rate with a low cache occupancy is observed, which implies a waste of cache reservation for such VMDKs. This behavior is not seen with virtual machines running Windows 7, Windows 2008, and Linux 2.6 and later distributions, which align their file systems to a 4KB boundary to ensure optimal performance.

    Workaround: To improve the cache hit rate and ensure optimal use of the cache reservation for each VMDK, ensure that the guest operating system file system installed on the VMDK is aligned to at least a 4KB boundary.

Virtual SAN

  • New Issue Unmounted Virtual SAN disks and diskgroups displayed as mounted in the vSphere Client UI Operational Status field
    After the Virtual SAN disks or diskgroups are unmounted using the esxcli vsan storage diskgroup unmount CLI command or automatically by the Virtual SAN Device Monitor service when disks show persistently high latencies, the vSphere Client UI incorrectly displays the Operational Status field as Mounted.

    Workaround: Check the Health field, which shows a non-healthy value, instead of the Operational Status field.
  • ESXi host with multiple VSAN disk groups might not display the magnetic disk statistics when you run the vsan.disks_stats command
    An ESXi host with multiple VSAN disk groups might not display the magnetic disk (MD) statistics when you run the vsan.disks_stats Ruby vSphere Console (RVC) command. The host displays only the solid-state drive (SSD) information.

    Workaround: None
  • VM directories contain duplicate swap (.vswp) files
    This might occur if virtual machines running on Virtual SAN are not cleanly shutdown, and if you perform a fresh installation of ESXi and vCenter Server without erasing data from Virtual SAN disks. As a result, old swap files (.vswp) are found in the directories for virtual machines that are shut down uncleanly.

    Workaround: None

  • Attempts to add more than seven magnetic disks to a Virtual SAN disk group might fail with incorrect error message
    A Virtual SAN disk group supports a maximum of one SSD and seven magnetic disks (HDDs). Attempts to add an additional magnetic disk might fail with an incorrect error message similar to the following:

    The number of disks is not sufficient.

    Workaround: None
  • Re-scan failure experienced while adding a Virtual SAN disk
    When you add a Virtual SAN disk, re-scan fails due to probe failure for a non-Virtual SAN volume, which causes the operation to fail.

    Workaround: Ignore the error as all the disks are registered correctly.
  • A hard disk drive (HDD) that is removed after its associated solid state drive (SSD) is removed might still be listed as a storage disk claimed by Virtual SAN
    If an SSD and then its associated HDD is removed from a Virtual SAN datastore and you run the esxcli vsan storage list command, the removed HDD is still listed as a storage disk claimed by Virtual SAN. If the HDD is inserted back in a different host, the disk might appear to be part of two different hosts.

    Workaround: For example, if the SSD and HDD are removed from host ESXi x and inserted into host ESXi y, perform the following steps to prevent the HDD from appearing to be a part of both ESXi x and ESXi y:
    1. Insert the SSD and HDD removed from ESXi x into ESXi y.
    2. Decommission the SSD from ESXi x.
    3. Run the command esxcfg-rescan -A.
       The HDD and SSD will no longer be listed on ESXi x.
  • The Working with Virtual SAN section of the vSphere Storage documentation indicates that the maximum number of HDD disks per disk group is six. However, the maximum allowed number of HDDs is seven.
  • After a failure in a Virtual SAN cluster, vSphere HA might report multiple events, some misleading, before restarting a virtual machine
    The vSphere HA master agent makes multiple attempts to restart a virtual machine running on Virtual SAN after it appears to have failed. If the virtual machine cannot be immediately restarted, the master agent monitors the cluster state and makes another attempt when conditions indicate that a restart might be successful. For virtual machines running on Virtual SAN, the vSphere HA master has special application logic to detect when the accessibility of a virtual machine's objects might have changed, and attempts a restart whenever an accessibility change is likely. The master agent makes an attempt after each possible accessibility change; if it does not successfully power on the virtual machine, it gives up and waits for the next possible accessibility change.

    After each failed attempt, vSphere HA reports an event indicating that the failover was not successful, and after five failed attempts, reports that vSphere HA stopped trying to restart the virtual machine because the maximum number of failover attempts was reached. Even after reporting that the vSphere HA master agent has stopped trying, however, it does try the next time a possible accessibility change occurs.

    Workaround: None.

  • Powering off a Virtual SAN host causes the Storage Providers view in the vSphere Web Client to take longer than expected to refresh
    If you power off a Virtual SAN host, the Storage Providers view might appear empty. The Refresh button continues to spin even though no information is shown.

    Workaround: Wait at least 15 minutes for the Storage Providers view to be populated again. The view also refreshes after you power on the host.

  • Virtual SAN reports a failed task as completed
    Virtual SAN might report certain tasks as completed even though they failed internally.

    The following are conditions and corresponding reasons for errors:

    • Condition: Users attempt to create a new disk group or add a new disk to an already existing disk group when the Virtual SAN license has expired.
      Error stack: A general system error occurred: Cannot add disk: VSAN is not licensed on this host.
    • Condition: Users attempt to create a disk group with a number of disks higher than the supported number, or they try to add new disks to an already existing disk group so that the total number exceeds the supported number of disks per disk group.
      Error stack: A general system error occurred: Too many disks.
    • Condition: Users attempt to add a disk to the disk group that has errors.
      Error stack: A general system error occurred: Unable to create partition table.

    Workaround: After identifying the reason for a failure, correct the reason and perform the task again.

  • Virtual SAN datastores cannot store host local and system swap files
    Typically, you can place the system swap or host local swap file on a datastore. However, the Virtual SAN datastore does not support system swap and host local swap files. As a result, the UI option that allows you to select the Virtual SAN datastore as the file location for system swap or host local swap is not available.

    Workaround: In a Virtual SAN environment, use other supported options to place the system swap and host local swap files.

  • A Virtual SAN virtual machine in a vSphere HA cluster is reported as vSphere HA protected although it has been powered off
    This might happen when you power off a virtual machine whose home object resides on a Virtual SAN datastore, and the home object is not accessible. This problem is seen if an HA master agent election occurs after the object becomes inaccessible.

    Workaround:

    1. Make sure that the home object is accessible again by checking the compliance of the object with the specified storage policy.
    2. Power on the virtual machine then power it off again.

    The status should change to unprotected.

  • Virtual machine object remains in Out of Date status even after Reapply action is triggered and completed successfully
    If you edit an existing virtual machine profile due to new storage requirements, the associated virtual machine objects, home or disk, might go into Out of Date status. This occurs when your current environment cannot support reconfiguration of the virtual machine objects. Using the Reapply action does not change the status.

    Workaround: Add additional resources, hosts or disks, to the Virtual SAN cluster and invoke the Reapply action again.

  • Automatic disk claiming for Virtual SAN does not work as expected if you license Virtual SAN after enabling it
    If you enable Virtual SAN in automatic mode and then assign a license, Virtual SAN fails to claim disks.

    Workaround: Change the mode to Manual, and then switch back to Automatic. Virtual SAN will properly claim the disks.

  • vSphere High Availability (HA) fails to restart a virtual machine when Virtual SAN network is partitioned
    This occurs when Virtual SAN uses VMkernel adapters for internode communication that are on the same subnet as other VMkernel adapters in a cluster. Such a configuration could cause a network failure and disrupt Virtual SAN internode communication, while vSphere HA internode communication remains unaffected.

    In this situation, the HA master agent might detect the failure in a virtual machine, but is unable to restart it. For example, this could occur when the host on which the master agent is running does not have access to the virtual machine's objects.

    Workaround: Make sure that the VMkernel adapters used by Virtual SAN do not share a subnet with the VMkernel adapters used for other purposes.

  • VMs might become inaccessible due to high network latency
    In a Virtual SAN cluster setup, if the network latency is high, some VMs might become inaccessible on vCenter Server, and you cannot power on or access those VMs.

    Workaround: Run the RVC command vsan.check_state -e -r.
  • VM operations might timeout due to high network latency
    When storage controllers with low queue depths are used, high network latency might cause VM operations to time out.

    Workaround: Re-attempt the operations when the network load is lower.
  • VMs might get renamed to a truncated version of their vmx file path
    If the vmx file of a virtual machine is temporarily inaccessible, the VM is renamed to a truncated version of the vmx file path. For example, the virtual machine might be renamed to /vmfs/volumes/vsan:52f1686bdcb477cd-8e97188e35b99d2e/236d5552-ad93. The truncation might delete half the UUID of the VM home directory, making it difficult to map the renamed VM to the original VM from the VM name alone.

    Workaround: Run the vsan.fix_renamed_vms RVC command.

vCenter Server and vSphere Web Client

  • Unable to add ESXi host to Active Directory domain
    You might observe that the Active Directory domain name is not displayed in the Domain drop-down list under the Select Users and Groups option when you attempt to assign permissions. Also, the Authentication Services Settings option might not display any trusted domain controller even when the Active Directory has trusted domains.

    Workaround:
    1. Restart the netlogond, lwiod, and then lsassd daemons (see the sketch after these steps).
    2. Log in to the ESXi host by using the vSphere Client.
    3. On the Configuration tab, click Authentication Services Settings.
    4. Refresh to view the trusted domains.
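
    The following is a hedged sketch of step 1, assuming the daemons are managed by init scripts under /etc/init.d on the ESXi host:

    /etc/init.d/netlogond restart
    /etc/init.d/lwiod restart
    /etc/init.d/lsassd restart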

Virtual Machine Management Issues

  • Unable to perform cold migration and storage vMotion of a virtual machine if the VMDK file name begins with "core"
    Attempts to perform cold migration and storage vMotion of a virtual machine might fail if the VMDK file name begins with "core", with an error message similar to the following:

    A general system error occurred: Error naming or renaming a VM file.

    Error messages similar to the following might be displayed in the vpxd.log file:

    2014-01-01T11:08:33.150-08:00 [13512 info 'commonvpxLro' opID=8BA11741-0000095D-86-97] [VpxLRO] -- FINISH task-internal-2471 -- -- VmprovWorkflow --
    2014-01-01T11:08:33.150-08:00 [13512 info 'Default' opID=8BA11741-0000095D-86-97] [VpxLRO] -- ERROR task-internal-2471 -- -- VmprovWorkflow: vmodl.fault.SystemError:
    --> Result:
    --> (vmodl.fault.SystemError){
    --> dynamicType = ,
    --> faultCause = (vmodl.MethodFault) null,
    --> reason = "Error naming or renaming a VM file.",
    --> msg = "",
    --> }

    This issue occurs when the ESXi host incorrectly classifies VMDK files with a name beginning with "core" as a core file instead of the expected disk type.

    Workaround: Ensure that the VMDK file name of the virtual machine does not begin with "core". If needed, use the vmkfstools utility to rename the VMDK file so that the file name does not begin with the word "core".
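
    For example, a minimal sketch that renames a virtual disk with vmkfstools (the datastore path and file names are hypothetical):

    vmkfstools -E "/vmfs/volumes/datastore1/myvm/core01.vmdk" "/vmfs/volumes/datastore1/myvm/disk01.vmdk"
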
  • Virtual machines with Windows 7 Enterprise 64-bit guest operating systems in the French locale experience problems during clone operations
    If you have a cloned Windows 7 Enterprise 64-bit virtual machine that is running in the French locale, the virtual machine disconnects from the network and the customization specification is not applied. This issue appears when the virtual machine is running on an ESXi 5.1 host and you clone it to ESXi 5.5 and upgrade the VMware Tools version to the latest version available with the 5.5 host.

    Workaround: Upgrade the virtual machine compatibility to ESXi 5.5 and later before you upgrade to the latest available version of VMware Tools.

  • Attempts to increase the size of a virtual disk on a running virtual machine fail with an error
    If you increase the size of a virtual disk when the virtual machine is running, the operation might fail with the following error:

    This operation is not supported for this device type.

    The failure might occur if you are extending the disk to a size of 2TB or larger. The hot-extend operation supports increasing the disk size only to 2TB or less. SATA virtual disks do not support the hot-extend operation, regardless of their size.

    Workaround: Power off the virtual machine to extend the virtual disk to 2TB or larger.

VMware HA and Fault Tolerance Issues
  • New Issue Fault Tolerance (FT) is not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms
    Fault Tolerance is not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Fault Tolerance.

    Workaround: None

  • If you select an ESX/ESXi 4.0 or 4.1 host in a vSphere HA cluster to fail over a virtual machine, the virtual machine might not restart as expected
    When vSphere HA restarts a virtual machine on an ESX/ESXi 4.0 or 4.1 host that is different from the original host the virtual machine was running on, a query is issued that is not answered. The virtual machine is not powered on on the new host until you answer the query manually from the vSphere Client.

    Workaround: Answer the query from the vSphere Client. Alternatively, you can wait for a timeout (15 minutes by default), and vSphere HA attempts to restart the virtual machine on a different host. If the host is running ESX/ESXi 5.0 or later, the virtual machine is restarted.

  • If a vMotion operation without shared storage fails in a vSphere HA cluster, the destination virtual machine might be registered to an unexpected host
    A vMotion migration involving no shared storage might fail because the destination virtual machine does not receive a handshake message that coordinates the transfer of control between the two virtual machines. The vMotion protocol powers off both the source and destination virtual machines. If the source and destination hosts are in the same cluster and if vSphere HA has been enabled, the destination virtual machine might be registered by vSphere HA on another host than the one chosen as the target for the vMotion migration.

    Workaround: If you want to retain the destination virtual machine and you want it to be registered to a specific host, relocate the destination virtual machine to the destination host. This relocation is best done before powering on the virtual machine.

Supported Hardware Issues
  • Sensor values for Fan, Power Supply, Voltage, and Current sensors appear under the Other group of the vCenter Server Hardware Status Tab
    Some sensor values are listed in the Other group instead of the respective categorized group.

    Workaround: None.

  • I/O memory management unit (IOMMU) faults might appear when the debug direct memory access (DMA) mapper is enabled
    The debug mapper places devices in IOMMU domains to help catch device memory accesses to addresses that have not been explicitly mapped. On some HP systems with old firmware, IOMMU faults might appear.

    Workaround: Download firmware upgrades from the HP Web site and apply them.

    • Upgrade the firmware of the HP iLO2 controller.
      Version 2.07, released in August 2011, resolves the problem.
    • Upgrade the firmware of the HP Smart Array.
      For the HP Smart Array P410, version 5.14, released in January 2012, resolves the problem.

VMware Tools Issues

  • Unable to install VMware Tools on Linux guest operating systems by executing the vmware-install.pl -d command if VMware Tools is not installed earlier
    If VMware Tools is not installed on your Linux guest operating system, attempts to perform a fresh installation of VMware Tools by executing the vmware-install.pl -d command might fail.
    This issue occurs in the following guest operating systems:
    • RHEL 7 and later
    • CentOS 7 and later
    • Oracle Linux 7 and later
    • Fedora 19 and later
    • SLES 12 and later
    • SLED 12 and later
    • openSUSE 12.2 and later
    • Ubuntu 14.04 and later
    • Debian 7 and later

    Workaround: There is no workaround that makes the --default (-d) switch work. However, you can install VMware Tools without the --default switch.
    Select Yes when the installer prompts you with the option Do you still want to proceed with this legacy installer?

    Note: This release introduces a new --force-install (-f) switch to install VMware Tools.
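
    For example, a minimal sketch of a fresh installation with the new switch (the directory name is the usual extraction path and is assumed here):

    cd vmware-tools-distrib
    ./vmware-install.pl --force-install
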
  • File disappears after VMware Tools upgrade
    The deployPkg.dll file, which is present in C:\Program Files\VMware\VMware Tools\, is not found after upgrading VMware Tools. This is observed when VMware Tools is upgraded from version 5.1 Update 2 to 5.5 Update 1 or later, or from version 5.5 to 5.5 Update 1 or later.

    Workaround: None
  • User is forcefully logged out while installing or uninstalling VMware Tools by OSP
    While installing or uninstalling VMware Tools packages in RHEL (Red Hat Enterprise Linux) and CentOS virtual machines that were installed using operating system specific packages (OSP), the current user is forcefully logged out. This issue occurs in RHEL 6.5 64-bit, RHEL 6.5 32-bit, CentOS 6.5 64-bit, and CentOS 6.5 32-bit virtual machines.

    Workaround:
    • Use secure shell (SSH) to install or uninstall VMware Tools
      or
    • The user must log in again to install or uninstall the VMware Tools packages

Miscellaneous Issues

  • SRM test recovery operation might fail with an error
    Attempts to perform Site Recovery Manager (SRM) test recovery might fail with an error message similar to the following:
    'Error - A general system error occurred: VM not found'.
    When several test recovery operations are performed simultaneously, the probability of encountering the error messages increases.

    Workaround: None. However, this issue is not persistent and might not occur if you perform the SRM test recovery operation again.