VMware ESXi 5.5 Update 3b Release Notes

VMware ESXi™ 5.5 Update 3b | 8 DEC 2015 | 3248547

Last updated: 30 March 2016

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Earlier Releases of ESXi 5.5
  • Internationalization
  • Compatibility and Installation
  • Upgrades for This Release
  • Open Source Components for VMware vSphere 5.5 Update 3
  • Product Support Notices
  • Patches Contained in This Release
  • Resolved Issues
  • Known Issues

What's New

This release of VMware ESXi contains the following enhancements:

  • Updated: Support for the SSLv3 protocol is disabled by default.

    Note: In your vSphere environment, you need to update vCenter Server to vCenter Server 5.5 Update 3b before updating ESXi to ESXi 5.5 Update 3b. vCenter Server will not be able to manage ESXi 5.5 Update 3b hosts if you update ESXi before updating vCenter Server to version 5.5 Update 3b. For more information about the sequence in which vSphere environments need to be updated, see KB 2057795.

    VMware strongly recommends that you update ESXi hosts to ESXi 5.5 Update 3b while managing them from vCenter Server 5.5 Update 3b.

    VMware does not recommend re-enabling SSLv3 because of the POODLE vulnerability. If you must enable SSLv3, you need to enable the SSLv3 protocol for all components. For more information, see KB 2139396.

  • Resolved Issues: This release delivers a number of bug fixes that are documented in the Resolved Issues section.

Earlier Releases of ESXi 5.5

Features and known issues of ESXi 5.5 are described in the release notes for each release. For earlier releases of ESXi 5.5, see the release notes for the corresponding release.

Internationalization

VMware vSphere 5.5 Update 3b is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Compatibility and Installation

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Also check the VMware Product Interoperability Matrix for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Client and the vSphere Web Client are packaged on the vCenter Server ISO. You can install one or both clients by using the VMware vCenter™ Installer wizard.

For hosts running ESXi 4.0, 4.1, 5.0, 5.1, or 5.5, VMware strongly recommends updating them to ESXi 5.5 Update 3b while managing them from vCenter Server 5.5 Update 3b.

ESXi, vCenter Server, and VDDK Compatibility

Virtual Disk Development Kit (VDDK) 5.5.3 adds support for ESXi 5.5 Update 3 and vCenter Server 5.5 Update 3 releases.
For more information about VDDK, see http://www.vmware.com/support/developer/vddk/.

ESXi and Virtual SAN Compatibility

Virtual SAN does not support clusters that are configured with ESXi hosts earlier than 5.5 Update 1. Make sure that all hosts in the Virtual SAN cluster are upgraded to ESXi 5.5 Update 1 or later before enabling Virtual SAN. vCenter Server should also be upgraded to 5.5 Update 1 or later.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 5.5 Update 3, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 5.5 Update 3b, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 5.5 and later. During the upgrade process, the device driver is installed on the ESXi 5.5.x host. It might still function on ESXi 5.5.x, but the device is not supported on ESXi 5.5.x. For a list of devices that have been deprecated and are no longer supported on ESXi 5.5.x, see the VMware Knowledge Base article Deprecated devices and warnings during ESXi 5.5 upgrade process.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 5.5 Update 3b, use the ESXi 5.5 Update 3 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 5.5 Update 3. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 5.5 Update 3, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

vSphere Client Connections to Linked Mode Environments with vCenter Server 5.x

vCenter Server 5.5 can exist in Linked Mode only with other instances of vCenter Server 5.5.

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. In particular, read the license management, networking, and security information in the vSphere documentation.

Migrating Third-Party Solutions

You cannot directly migrate third-party solutions installed on an ESX or ESXi host as part of a host upgrade. Architectural changes between ESXi 5.1 and ESXi 5.5 result in the loss of third-party components and possible system instability. To accomplish such migrations, you can create a custom ISO file with Image Builder. For information about upgrading your host with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 5.5.x supports only CPUs with LAHF and SAHF CPU instruction sets. During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 5.5.x. If your host hardware is not compatible, a purple screen appears with a message about incompatibility. You cannot install or upgrade to vSphere 5.5.x.

Upgrades for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

In your vSphere environment, you need to update vCenter Server to vCenter Server 5.5 Update 3b before updating ESXi to ESXi 5.5 Update 3b.
vCenter Server will not be able to manage ESXi 5.5 Update 3b hosts if you update ESXi before updating vCenter Server to version 5.5 Update 3b.
For more information about the sequence in which vSphere environments need to be updated, see KB 2057795.
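
After you update a host, you can confirm from the ESXi Shell that it is running the ESXi 5.5 Update 3b build (3248547). The commands below are standard ESXi Shell commands; the exact output layout can vary slightly between builds:

    ~ # vmware -vl                    (prints the product version and update level)
    ~ # esxcli system version get     (the Build field should report release build 3248547)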

Supported Upgrade Paths for Upgrade to ESXi 5.5 Update 3b:

Source release families:
  • ESX/ESXi 4.0 (includes ESX/ESXi 4.0 Update 1, Update 2, Update 3, and Update 4)
  • ESX/ESXi 4.1 (includes ESX/ESXi 4.1 Update 1, Update 2, and Update 3)
  • ESXi 5.0 (includes ESXi 5.0 Update 1, Update 2, and Update 3)
  • ESXi 5.1 (includes ESXi 5.1 Update 1 and Update 2)
  • ESXi 5.5 (includes ESXi 5.5 Update 1, Update 2, and Update 3a)

Upgrade Deliverables and Supported Upgrade Tools | ESX/ESXi 4.0 | ESX/ESXi 4.1 | ESXi 5.0 | ESXi 5.1 | ESXi 5.5
VMware-VMvisor-Installer-201512001-3248547.x86_64.iso (VMware vSphere Update Manager, CD Upgrade, Scripted Upgrade) | Yes | Yes | Yes | Yes | Yes
ESXi550-201512001.zip (VMware vSphere Update Manager, ESXCLI, VMware vSphere CLI) | No | No | Yes* | Yes* | Yes
Using patch definitions downloaded from the VMware portal (online) (VMware vSphere Update Manager with patch baseline) | No | No | No | No | Yes

Open Source Components for VMware vSphere 5.5 Update 3

The copyright statements and licenses applicable to the open source software components distributed in vSphere 5.5 Update 3 are available at http://www.vmware.com/download/vsphere/open_source.html, on the Open Source tab. You can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vSphere Web Client. Because Adobe Flash no longer supports Linux platforms, the vSphere Web Client is not supported on Linux operating systems. Third-party browsers that add support for Adobe Flash on the Linux desktop OS might continue to function.

  • VMware vCenter Server Appliance. In vSphere 5.5, the VMware vCenter Server Appliance meets high-governance compliance standards through the enforcement of the DISA Security Technical Implementation Guides (STIGs). Before you deploy VMware vCenter Server Appliance, see the VMware Hardened Virtual Appliance Operations Guide for information about the new security deployment standards and to ensure successful operations.

  • vCenter Server database. vSphere 5.5 removes support for IBM DB2 as the vCenter Server database.

  • VMware Tools. Beginning with vSphere 5.5, all information about how to install and configure VMware Tools in vSphere is merged with the other vSphere documentation. For information about using VMware Tools in vSphere, see the vSphere documentation. Installing and Configuring VMware Tools is not relevant to vSphere 5.5 and later.

  • VMware Tools. Beginning with vSphere 5.5, VMware Tools do not provide ThinPrint features.
    Note: If you plan to upgrade your ESXi hosts to ESXi 5.5 Update 3b (with bundled VMware Tools 10.0.0) or later and are also using older versions of Horizon View Agent, be aware of incompatibilities. For resolution of the incompatibilities and general guidelines, see KB 2144438 and KB 2144518.

  • vSphere Data Protection. vSphere Data Protection 5.1 is not compatible with vSphere 5.5 because of a change in the way vSphere Web Client operates. vSphere Data Protection 5.1 users who upgrade to vSphere 5.5 must also update vSphere Data Protection to continue using vSphere Data Protection.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi550-Update03b contains the following individual bulletins:

ESXi550-201512401-BG: Updates ESXi 5.5 esx-base vib

ESXi550-201512402-BG: Updates ESXi 5.5 ehci-ehci-hcd vib

ESXi550-201512403-BG: Updates ESXi 5.5 tools-light vib

ESXi550-201512404-BG: Updates ESXi 5.5 lsi-msgpt vib

Patch Release ESXi550-Update03b Security-only contains the following individual bulletins:

ESXi550-201512101-SG: Updates ESXi 5.5 esx-base vib

ESXi550-201512102-SG: Updates ESXi 5.5 tools-light vib

Patch Release ESXi550-Update03b contains the following image profiles:

ESXi-5.5.0-20151204001-standard
ESXi-5.5.0-20151204001-no-tools

Patch Release ESXi550-Update03b Security-only contains the following image profiles:

ESXi-5.5.0-20151201001s-standard
ESXi-5.5.0-20151201001s-no-tools

For information on patch and update classification, see KB 2014447.
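
If you apply the offline bundle manually with ESXCLI instead of vSphere Update Manager, the image profile names listed above are the values to pass to esxcli software profile update. The following is a minimal sketch only; the datastore path is a hypothetical placeholder, and the host should be in maintenance mode with no running virtual machines:

    ~ # esxcli system maintenanceMode set --enable true
    ~ # esxcli software profile update -d /vmfs/volumes/datastore1/ESXi550-201512001.zip -p ESXi-5.5.0-20151204001-standard
    ~ # reboot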

Resolved Issues

This section describes resolved issues in this release:

Backup Issues

  • Virtual machine capability flag changeTrackingSupported might get reset unexpectedly when reloading the virtual machine
    Changed Block Tracking (CBT) might intermittently reset the changeTrackingSupported flag value from true to false unexpectedly when the hostd service reloads a virtual machine. As a result, CBT might no longer be reported as supported on the virtual machine, and CBT cannot be used during the virtual machine backup process. Incremental backups might fail with an error message similar to the following:
    unable to take a backup of virtual machine.

    This issue is resolved in this release.

CIM and API Issues

  • ServerView CIM Provider fails to monitor hardware status if Emulex CIM Provider exists on the same ESXi host
    When ServerView CIM Provider and Emulex CIM Provider co-exist on the same ESXi host, the Emulex CIM Provider (sfcb-emulex_ucn) might fail to respond resulting in failure to monitor hardware status.

    This issue is resolved in this release.

  • Hardware monitoring might fail when using the third-party ServerView RAID Manager software
    You might experience hardware monitoring failures when you use the third-party ServerView RAID Manager software. The sfcb-vmware_aux process stops responding due to a race condition.

    This issue is resolved in this release.

  • ESXi 5.x host might get disconnected from vCenter Server when inodes are exhausted
    An ESXi 5.x host might get disconnected from vCenter Server when inodes are exhausted by the small footprint CIM broker daemon (sfcbd) service. After the ESXi 5.x host enters this state, it cannot be reconnected to vCenter Server.

    A log similar to the following is reported in /var/log/hostd.log indicating that the ESXi 5.x host is out of space:
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device
    VmkCtl Locking (/etc/vmware/esx.conf) : Unable to create or open a LOCK file. Failed with reason: No space left on device

    A message similar to the following is written in /var/log/vmkernel.log indicating that the ESXi 5.x host is out of inodes:
    cpu4:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.
    cpu5:1969403)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process python because the visorfs inode table is full.
    cpu11:1968837)WARNING: VisorFSObj: 893: Cannot create file /etc/vmware/esx.conf.LOCK for process hostd because the visorfs inode table is full.

    This issue is resolved in this release.

  • False alarms might appear in the Hardware Status tab of the vSphere Client
    After upgrading Integrated Lights Out (iLO) firmware on HP DL980 G7, false alarms appear in the Hardware Status tab of the vSphere Client.
    Error messages similar to the following might be logged in the /var/log/syslog.log file:

    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff
    2014-10-17T08:50:58Z sfcb-vmware_raw[68712]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED
    2014-10-17T08:51:14Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3
    2014-10-17T08:51:15Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002
    2014-10-17T08:51:17Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003
    2014-10-17T08:51:19Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004
    2014-10-17T08:51:19Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005
    2014-10-17T08:51:20Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006
    2014-10-17T08:51:21Z sfcb-vmware_raw[68712]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007

    This issue is resolved in this release.

Networking Issues

  • Virtual machines using VMXNET3 virtual adapter might fail
    Virtual machines using VMXNET3 virtual adapter might fail when attempting to boot from iPXE (open source boot firmware).

    This issue is resolved in this release.

  • Setting the resource pool to a user-defined respool might fail
    Attempts to set the resource pool to a user-defined respool might fail because the QoS priority tag in user-defined network resource pools does not take effect.

    This issue is resolved in this release.


  • Network connectivity lost when applying a host profile during Auto Deploy
    When you apply a host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as the management vmknic.

    This issue is resolved in this release.

  • Fibre Channel over Ethernet (FCoE) link might go down when collecting an ESXi log bundle
    When you collect an ESXi log bundle, the lldpnetmap command enables LLDP; however, LLDP can only be set to Both mode, and LLDP packets are sent out by the ESXi host. The packets might cause the FCoE link to go down.

    This issue is resolved in this release.

Security Issues

  • Update to OpenSSL
    OpenSSL is updated to version 1.0.1p.

  • Update the Python package
    The Python third-party library is updated to version 2.7.9.

  • Support disabled for Diffie-Hellman cipher suites
    Support for Diffie-Hellman cipher suites is disabled by default for all services.

  • Support disabled for EXPORT_RSA ciphers
    Support for EXPORT_RSA ciphers is disabled by default.

  • SSL/TLS configuration is persistent
    SSL/TLS configuration for all services except hostd now persists across reboots for both stateless and stateful ESXi hosts when applied through host profiles.

Server Configuration Issues

  • Rebooting an ESXi host after it is upgraded to ESXi 5.x might fail, causing very high WAN traffic
    A message similar to the following is written to the syslog.log file at /var/log/ on the ESXi 5.x host if multiple instances of the lsassd, netlogond, or lwiod daemons are running at the same time:

    lsassd[<value>]: <value>:Terminating on fatal IPC exception

    This issue might occur during the ESXi host upgrade process. For more information, see KB 2051707.

    This issue is resolved in this release.

  • ESXi host server reboots unexpectedly followed by an Uncorrectable Machine Check Exception error
    After upgrading an ESXi host with 1.5 TB of memory from 5.1 to 6.0 on an HP server with an AMD processor, the host might unexpectedly stop responding or reboot. You will also see Uncorrectable Machine Check Exceptions (UMCEs) similar to the following in the Integrated Management Log file.

    Critical","CPU","##/##/2014 15:41","##/##/2014 15:41","1","Uncorrectable Machine Check Exception (Board 0, Processor 3, APIC ID 0x00000060, Bank 0x00000004, Status 0xF6000000'00070F0F, Address 0x00000050'61EA3B28, Misc 0x00000000'00000000)",
    Mode of failure: Unexpectedly reboot. IML displays UMCE occurred.


    This issue is resolved in this release.

  • Hostd service might stop responding due to unavailability of memory
    The hostd service might stop responding and fail with an error. This happens due to unavailability of memory in vmkctl, which causes hostd to take up more memory. An error message similar to the following might be displayed:
    Memory exceeds hard limit

    This issue is resolved in this release.

Storage Issues

  • Dell Force10 S5000 switches running 9.6 firmware might have difficulty establishing sessions with Dell Compellent S40 storage
    Attempts to establish a session between Dell Force10 S5000 switches running 9.6 firmware and a Dell Compellent S40 storage server might fail. This happens when the ESXi 5.x servers are connected to Fibre Channel over Ethernet (FCoE) storage and the initiator does not log back in to the fabric when connectivity is re-established. This might cause many valid or invalid sessions in environments with multiple servers connecting to storage over FCoE.

    This issue is resolved in this release.

  • The esxtop utility reports incorrect statistics for DAVG/cmd and KAVG/cmd on VAAI supported LUNs
    The esxtop utility reports incorrect statistics on VAAI supported LUNs for the average device latency per command (DAVG/cmd) and the average ESXi VMkernel latency per command (KAVG/cmd) due to an incorrect calculation.
    Note: This issue might also impact the ESXi stats in vCenter Server and vROps.

    This issue is resolved in this release.

  • vSphere might not detect all drives in the system even if they are displayed in the BIOS
    vSphere might not detect all 18 drives in the system because the lsi_msgpt3 driver is unable to detect one drive per HBA when there are multiple HBAs in the system.

    This issue is resolved in this release.

  • VMFS volume is locked
    A VMFS volume on an ESXi host might remain locked due to failed metadata operations. An error message similar to the following is observed in the vmkernel.log file:
    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.

    This issue is resolved in this release.

  • VDS vmknics disabled in host profiles are not ignored
    When you disable VDS vmknics in a host profile, they are not ignored during compliance checks or while applying the profile to a host. This can cause NSX preparation for stateless ESXi hosts to fail.

    This issue is resolved in this release.

Guest Operating System Issues

  • Microsoft Cluster Service Validation might fail on ESXi 5.5 Update 3b host
    Microsoft Cluster Service (MSCS) validation and failover operations might fail for MSCS 2012 Release 2.

    This issue is resolved in this release.

  • Incorrect warning may appear when booting guest OS X 10.9
    You might receive a spurious warning when you start a virtual machine running OS X 10.9. A message similar to the following might be displayed:
    Your Mac OS guest might run unreliably with more than one virtual core. It is recommended that you power off the virtual machine, and set its number of virtual cores to one before powering it on again. If you continue, your guest might panic and you might lose data.

    The warning was intended to appear only for older versions of OS X and does not apply to OS X 10.9. The OS X 10.9 version requires at least two processor cores in order to operate reliably. The spurious warning should be ignored.

    This issue is resolved in this release.

  • Virtual machines running SAP fail randomly
    Virtual machines that run SAP might fail randomly, producing a vmx.zdump file with an error message similar to the following, when too many VMware Tools stats commands are executed inside the VM.
    CoreDump error line 2160, error Cannot allocate memory.

    This issue is resolved in this release.

Virtual SAN Issues

  • Attempts to copy data to a virtual machine on the Virtual SAN datastore might fail
    Migrating data from a physical NAS to a virtual machine's file server on Virtual SAN might fail. An error message similar to the following might be displayed:
    File not found

    This issue is resolved in this release.

  • Adding a re-installed ESXi host back to a Virtual SAN cluster might cause disks and other virtual machine components to remain Absent or Unhealthy
    When an ESXi host is re-installed, its host UUID changes. When such an ESXi host is added back to the Virtual SAN cluster, the disks and other virtual machine components that belong to this host might continue to show up as Absent or Unhealthy.

    This issue is resolved in this release.

  • Attempts to consolidate disks in VMware Virtual SAN might fail
    Attempts to consolidate disks in VMware Virtual SAN might fail even when the vSanDatastore or the disk drive has sufficient space. An error message similar to the following is displayed:
    An error occurred while consolidating disks: msg.disklib.NOSPACE

    This issue is resolved in this release.

  • Diskgroup validation might fail due to invalid metadata of SSD/MD disk partition
    Attempts to remove a disk from a VMware Virtual SAN diskgroup might result in a purple diagnostic screen as the diskgroup validation fails due to invalid metadata of the SSD/MD disk partition.

    This issue is resolved in this release.

  • Orphaned LSOM object retained after an APD disk error is observed on the SSD
    An orphaned LSOM object might be retained after an All Paths Down (APD) disk error is observed on the Solid State Disk (SSD).

    This issue is resolved in this release.

  • Unable to extract the VSI nodes when the hostd service is not functional
    When the hostd service is not functional, you are unable to extract the VSI nodes, which makes it difficult to analyze issues in the field. To resolve this issue, vsanObserver has been added, which allows the observer to run without depending on the hostd service.

    This issue is resolved in this release.

  • Rebooting an ESXi host might fail with a purple diagnostic screen
    Attempts to reboot an ESXi host in maintenance mode might fail with a purple diagnostic screen. An error message similar to the following might be displayed in the vmkwarning.log file:
    2015-03-03T08:40:37.994Z cpu4:32783)WARNING: LSOM: LSOMEventNotify:4571: VSAN device 523eca86-a913-55d4-915e-f89bdc9fab46 is under permanent error.
    2015-03-03T08:40:37.994Z cpu1:32967)WARNING: LSOMCommon: IORETRYCompleteSplitIO:577: Throttled: max retries reached Maximum kernel-level retries exceeded
    2015-03-03T08:40:39.006Z cpu6:32795)WARNING: LSOMCommon: IORETRYParentIODoneCB:1043: Throttled: split status Maximum kernel-level retries exceeded
    2015-03-03T08:40:39.006Z cpu6:32795)WARNING: PLOG: PLOGElevWriteMDDone:255: MD UUID 523eca86-a913-55d4-915e-f89bdc9fab46 write failed Maximum kernel-level retries exceeded
    2015-03-03T08:41:44.217Z cpu1:34228)WARNING: LSOM: LSOMEventNotify:4571: VSAN device 52ed79c6-b64e-3f60-289f-5870e19a85f0 is under permanent error.

    This issue is resolved in this release.

vCenter Server and vSphere Web Client Issues

  • Attempts to connect an ESXi host to the vSphere Web Client might fail
    An ESXi host might stop responding and disconnect from vCenter Server. As a result, the host cannot connect to the vSphere Web Client directly. This happens due to insufficient memory allocation for the Likewise components.

    This issue is resolved in this release.

  • vCenter Server might stop responding when an ESXi host loses connectivity to the remote syslog server
    When an ESXi host loses connectivity to the remote syslog server, the GeneralHostWarningEvent and AlarmStatusChangedEvent events are logged indefinitely in too many alert messages, causing the vpx_event and vpx_event_arg tables to fill up the vCenter Server database. The issue causes extreme vCenter Server latency and vCenter Server to stop responding.

    This issue is resolved in this release.

  • Attempts to use VMware Paravirtual SCSI (PVSCSI) controller after vMotion might impact the performance of the virtual machine
    You might experience a drop in virtual machine performance after you perform vMotion or suspend/resume a virtual machine.

    This issue is resolved in this release.

Virtual Machine Management Issues

  • Guest operating system might fail to power on when a Samsung NVMe XS1715 SSD controller device is configured as a passthrough device
    A virtual machine might fail to power on, or the guest OS might stop responding, if an NVMe XS1715 SSD controller is attached in passthrough mode. An error message similar to the following is displayed:
    PCI passthrough device ID(0x-57e0) is invalid

    This issue is resolved in this release.

  • Attempts to resume a virtual machine from a suspended state or during vMotion/Storage vMotion might fail
    Attempts to resume a virtual machine from a suspended state, or to resume a virtual machine during vMotion/Storage vMotion, might fail. An error message similar to the following might be displayed:
    The virtual machine cannot be powered on
    You can also check the vmware.log file and search for an error with the extension msg.checkpoint.PASizeMismatch.

    This issue is resolved in this release.

  • ESXi 5.5 host might fail and virtual machine might stop responding
    The ESXi 5.5 host might fail with a purple diagnostic screen after a virtual machine stops responding, due to a race condition in the swap code when breaking large pages. An error message similar to the following might be displayed:
    #PF Exception 14 in world 32856:helper14 IP 0x41801faf6560 addr 0x410e86868680
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dc10:[0x41801faf6560]Alloc_Dealloc@vmkernel#nover+0x12c stack: 0x41238161dc60, 0x41801fa1
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dc80:[0x41801fc6874d]MemSched_WorldCleanup@vmkernel#nover+0x205 stack: 0x41238161dd00, 0x
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161df30:[0x41801fae317e]WorldCleanup@vmkernel#nover+0x1ce stack: 0x0, 0x412381627000, 0x4123
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dfd0:[0x41801fa6133a]helpFunc@vmkernel#nover+0x6b6 stack: 0x0, 0x0, 0x0, 0x0, 0x0
    2015-04-03T13:03:17.648Z cpu22:32856)0x41238161dff0:[0x41801fc56872]CpuSched_StartWorld@vmkernel#nover+0xfa stack: 0x0, 0x0, 0x0, 0x0,

    This issue is resolved in this release.

  • Host might stop responding with a purple diagnostic screen when migrating or checkpointing virtual machines
    When virtual machines are migrated or check-pointed along with VMM swapping, the host might stop responding due to excessive logging similar to the following:
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0
    WARNING: Swap: vm 53052: 5285: Swap mode not exclusive. isShared=1 isExclusive=0

    This issue is resolved in this release.

  • The esxcli virtual machine process list command might still display old virtual machine name if the virtual machine was renamed in a Powered On state
    After you rename a Powered On virtual machine, if you run the esxcli vm process list command to get the list of running virtual machines from the host, the list might display the old virtual machine name.

    This issue is resolved in this release.

  • Installing Linux with an e1000e network adapter on a virtual machine might fail
    Attempts to install Linux might fail when you add an e1000e network adapter to a virtual machine and power on the virtual machine.
    A log similar to the following is written to the vmkernel.log file:
    [ 1498.266938] Call Trace:
    [ 1498.266950] [] timecounter_init+0x1a/0x30
    [ 1498.266973] [] e1000e_config_hwtstamp+0x247/0x420 [e1000e]
    [ 1498.266994] [] e1000e_reset+0x285/0x620 [e1000e]
    [ 1498.267012] [] e1000_probe+0xbaa/0xee0 [e1000e]
    [ 1498.267021] [] local_pci_probe+0x45/0xa0
    [ 1498.267029] [] ? pci_match_device+0xc5/0xd0
    [ 1498.267036] [] pci_device_probe+0xf9/0x150
    [ 1498.267046] [] driver_probe_device+0x87/0x390
    [ 1498.267054] [] ? driver_probe_device+0x390/0x390
    [ 1498.267062] [] __device_attach+0x3b/0x40
    [ 1498.267070] [] bus_for_each_drv+0x6b/0xb0
    [ 1498.267077] [] device_attach+0x88/0xa0

    This issue is resolved in this release.

  • Limiting the IOPS value for a virtual machine disk might result in reduced IOPS
    Limiting the IOPS value for a virtual machine disk might result in IOPS that are lower than the configured Read/Write operations limit. This issue occurs when the size of a Read/Write operation (I/O) is equal to the ESX I/O scheduler's cost unit size. As a result, the I/O scheduler counts a single I/O as multiple I/Os, which leads to throttling of the I/Os.

    This issue is resolved in this release.

High Availability and Fault Tolerance Issues

  • Virtual machines fail to start after High Availability failover
    After an ESXi host failure, when HA attempts to start the affected VMs on other hosts, some of the VMs might stop responding while booting.

    This issue is resolved in this release.

vMotion and Storage vMotion Issues

  • When vmk10 or higher is enabled for vMotion, on reboot vmk1 might get enabled for vMotion
    Enabling vMotion on vmk10 or higher might cause vmk1 to have vMotion enabled on reboot of ESXi host. This issue can cause excessive traffic over vmk1 and result in network problems.

    This issue is resolved in this release.

VMware Tools Issues

  • VMware Tools auto-upgrade might fail
    VMware Tools auto-upgrade might fail for a virtual machine running on VMware ESXi 5.5 Update 3b. An error message similar to the following is displayed:
    vix error code = 21009

    Note: The issue occurs if the following guest files exist on the Virtual Machine:
    Microsoft Windows VM:
    C:\Windows\Temp\vmware-SYSTEM\VMwareToolsUpgrader.exe
    Red Hat Enterprise Linux VM:
    /tmp/vmware-root

    This issue is resolved in this release.

  • VMware Tools version 10.0.0 included
    This release includes the VMware Tools version 10.0.0. Refer to the VMware Tools 10.0.0 Release Notes to see the issues resolved in this release.

Known Issues

The known issues existing in ESXi 5.5 are grouped as follows:

New known issues documented in this release are highlighted as New Issue.

Installation and Upgrade Issues

  • New Issue Attempts to add an ESXi 5.5 Update 3b host to an earlier version of vCenter Server fail
    Attempts to add an ESXi 5.5 Update 3b host to vCenter Server 5.5 Update 3 or an earlier version fail with a host communication error in the user interface.

    Workaround: Update to vCenter Server 5.5 Update 3b first, and then update to ESXi 5.5 Update 3b. If you do not update to vCenter Server 5.5 Update 3b, you need to enable SSLv3 on ESXi for all services. Enabling SSLv3 on ESXi exposes the host to the POODLE vulnerability.

    For more information, see KB 2139396.

  • The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after installing the latest VMware Tools package. The issue occurs if your glibc version is older than version 2.5, as on SLES 10 SP4.

    Workaround: Upgrade the Linux glibc to version 2.5 or above.

  • Attempts to get all image profiles might fail while running the Get-EsxImageProfile command in vSphere PowerCLI
    When you run the Get-EsxImageProfile command using vSphere PowerCLI to get all image profiles, an error similar to the following is displayed:

    PowerCLI C:\Windows\system32> Get-EsxImageProfile
    Get-EsxImageProfile : The parameter 'name' cannot be an empty string.
    Parameter name: name
    At line:1 char:20
    + Get-EsxImageProfile <<<<
    + CategoryInfo : NotSpecified: (:) [Get-EsxImageProfile], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException,VMware.ImageBuilder.Commands.GetProfiles


    Workaround: Run the Get-EsxImageProfile -name "ESXi-5.x*" command, which includes the -name option, to display all image profiles created during the PowerCLI session.

    For example, running the command Get-EsxImageProfile -name "ESXi-5.5.*" displays all 5.5 image profiles similar to the following:

    PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-EsxImageProfile -name "ESXi-5.5.*"

    Name Vendor Last Modified Acceptance Level
    ---- ------ ------------- ----------------
    ESXi-5.5.0-20140701001s-no-... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140302001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140604001-no-t... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20140401020s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
    ESXi-5.5.0-20131201001s-sta... VMware, Inc. 8/23/2014 6:... PartnerSupported
  • Simple Install fails on Windows Server 2012
    Simple Install fails on Windows Server 2012 if the operating system is configured to use a DHCP IP address.

    Workaround: Configure the Windows 2012 Server to use a static IP address.

  • If you use preserve VMFS with Auto Deploy Stateless Caching or Auto Deploy Stateful Installs, no core dump partition is created
    When you use Auto Deploy for Stateless Caching or Stateful Install on a blank disk, an MSDOS partition table is created. However, no core dump partition is created.

    Workaround: When you enable the Stateless Caching or Stateful Install host profile option, select Overwrite VMFS, even when you install on a blank disk. When you do so, a 2.5GB coredump partition is created.

  • During scripted installation, ESXi is installed on an SSD even though the --ignoressd option is used with the installorupgrade command
    In ESXi 5.5, the --ignoressd option is not supported with the installorupgrade command. If you use the --ignoressd option with the installorupgrade command, the installer displays a warning that this is an invalid combination. The installer continues to install ESXi on the SSD instead of stopping the installation and displaying an error message.

    Workaround: To use the --ignoressd option in a scripted installation of ESXi, use the install command instead of the installorupgrade command.
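
    The distinction only matters in kickstart scripts. The following is a minimal scripted-install sketch that excludes SSDs from partitioning; the root password and NIC name are hypothetical placeholders, and --ignoressd is accepted by the install command but not by installorupgrade in ESXi 5.5:

    vmaccepteula
    rootpw MySecurePass1!
    # install on the first local disk, overwriting any existing VMFS, but never on an SSD
    install --firstdisk=local --overwritevmfs --ignoressd
    network --bootproto=dhcp --device=vmnic0
    reboot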

  • Delay in Auto Deploy cache purging might apply a host profile that has been deleted
    After you delete a host profile, it is not immediately purged from the Auto Deploy cache. As long as the host profile is persisted in the cache, Auto Deploy continues to apply it. Any rules that apply the profile fail only after the profile is purged from the cache.

    Workaround: You can determine whether any rules use deleted host profiles by using the Get-DeployRuleSet PowerCLI cmdlet. The cmdlet shows the string deleted in the rule's itemlist. You can then run the Remove-DeployRule cmdlet to remove the rule.

  • Applying host profile that is set up to use Auto Deploy with stateless caching fails if ESX is installed on the selected disk
    You use host profiles to set up Auto Deploy with stateless caching enabled. In the host profile, you select a disk on which a version of ESX (not ESXi) is installed. When you apply the host profile, an error that includes the following text appears.
    Expecting 2 bootbanks, found 0

    Workaround: Select a different disk to use for stateless caching, or remove the ESX software from the disk. If you remove the ESX software, it becomes unavailable.

  • Installing or booting ESXi version 5.5.0 fails on servers from Oracle America (Sun) vendors
    When you perform a fresh ESXi version 5.5.0 installation or boot an existing ESXi version 5.5.0 installation on servers from Oracle America (Sun) vendors, the server console displays a blank screen during the installation process or when the existing ESXi 5.5.0 build boots. This happens because servers from Oracle America (Sun) vendors have a HEADLESS flag set in the ACPI FADT table, even though they are not headless platforms.

    Workaround: When you install or boot ESXi 5.5.0, pass the boot option ignoreHeadless="TRUE".
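
    For example, at the boot prompt you can press Shift+O and append the option to the boot command line, and on an installed host you can persist it as a VMkernel boot setting from the ESXi Shell. This is a sketch of the commonly documented approach; verify it against your build before relying on it:

    ignoreHeadless="TRUE"                                                  (appended at the Shift+O boot prompt)
    ~ # esxcli system settings kernel set --setting=ignoreHeadless --value=TRUE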

  • If you use ESXCLI commands to upgrade an ESXi host with less than 4GB physical RAM, the upgrade succeeds, but some ESXi operations fail upon reboot
    ESXi 5.5 requires a minimum of 4GB of physical RAM. The ESXCLI command-line interface does not perform a pre-upgrade check for the required 4GB of memory. You successfully upgrade a host with insufficient memory with ESXCLI, but when you boot the upgraded ESXi 5.5 host with less than 4GB RAM, some operations might fail.

    Workaround: None. Verify that the ESXi host has more than 4GB of physical RAM before the upgrade to version 5.5.
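
    A quick pre-check from the ESXi Shell can confirm the installed physical memory before you start an ESXCLI upgrade; the reported Physical Memory value must be at least 4 GB (4294967296 bytes):

    ~ # esxcli hardware memory get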

  • After upgrade from vCenter Server Appliance 5.0.x to 5.5, vCenter Server fails to start if an external vCenter Single Sign-On is used
    If the user chooses to use an external vCenter Single Sign-On instance while upgrading the vCenter Server Appliance from 5.0.x to 5.5, the vCenter Server fails to start after the upgrade. In the appliance management interface, the vCenter Single Sign-On is listed as not configured.

    Workaround: Perform the following steps:

    1. In a Web browser, open the vCenter Server Appliance management interface (https://appliance-address:5480).
    2. On the vCenter Server/Summary page, click the Stop Server button.
    3. On the vCenter Server/SSO page, complete the form with the appropriate settings, and click Save Settings.
    4. Return to the Summary page and click Start Server.

  • When you use ESXCLI to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the vMotion and Fault Tolerance Logging (FT Logging) settings of any VMKernel port group are lost after the upgrade
    If you use the command esxcli software profile update <options> to upgrade an ESXi 4.x or 5.0.x host to version 5.1 or 5.5, the upgrade succeeds, but the vMotion and FT Logging settings of any VMkernel port group are lost. As a result, vMotion and FT Logging are restored to the default setting (disabled).

    Workaround: Perform an interactive or scripted upgrade, or use vSphere Update Manager to upgrade hosts. If you use the esxcli command, apply vMotion and FT Logging settings manually to the affected VMkernel port group after the upgrade.
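
    If you used esxcli for the upgrade, the lost tags can be re-applied from the ESXi Shell. The sketch below assumes vmk1 is the affected VMkernel port; the VMotion and faultToleranceLogging tag names are the ones commonly used with this command in ESXi 5.x, and you can list the current tags first to confirm:

    ~ # esxcli network ip interface tag get -i vmk1
    ~ # esxcli network ip interface tag add -i vmk1 -t VMotion
    ~ # esxcli network ip interface tag add -i vmk1 -t faultToleranceLogging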

  • When you upgrade vSphere 5.0.x or earlier to version 5.5, system resource allocation values that were set manually are reset to the default value
    In vSphere 5.0.x and earlier, you modify settings in the system resource allocation user interface as a temporary workaround. You cannot reset the value for these settings to the default without completely reinstalling ESXi. In vSphere 5.1 and later, the system behavior changes, so that preserving custom system resource allocation settings might result in values that are not safe to use. The upgrade resets all such values.

    Workaround: None.

  • IPv6 settings of virtual NIC vmk0 are not retained after upgrade from ESX 4.x to ESXi 5.5
    When you upgrade an ESX 4.x host with IPv6 enabled to ESXi 5.5 by using the --forcemigrate option, the IPv6 address of virtual NIC vmk0 is not retained after the upgrade.

    Workaround: None.

Networking Issues

  • Unable to use PCNet32 network adapter with NSX opaque network
    When the PCNet32 flexible network adapter is configured with NSX opaque network backing, the adapter disconnects while powering on the VM.

    Workaround: None

  • Upgrading to ESXi 5.5 might change the IGMP configuration of TCP/IP stack for multicast group management
    The default IGMP version of the management interfaces is changed from IGMP V2 to IGMP V3 for ESXi 5.5 hosts for multicast group management. As a result, when you upgrade to ESXi 5.5, the management interface might revert back to IGMP V2 from IGMP V3 if it receives an IGMP query of a previous version and you might notice IGMP version mismatch error messages.

    Workaround: Edit the default IGMP version by modifying the TCP/IP IGMP rejoin interval in the Advanced Configuration option.
  • Static routes associated with vmknic interfaces and dynamic IP addresses might fail to appear after reboot
    After you reboot the host, static routes that are associated with VMkernel network interface (vmknic) and dynamic IP address might fail to appear.
    This issue occurs due to a race condition between DHCP client and restore routes command. The DHCP client might not finish acquiring an IP address for vmknics when the host attempts to restore custom routes during the reboot process. As a result, the gateway might not be set up and the routes are not restored.

    Workaround: Run the esxcfg-route -r command to restore the routes manually.
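    For example, a minimal check-and-restore sequence from the ESXi Shell looks like the following; -l lists the current routing table and -r restores the configured custom routes:

    ~ # esxcfg-route -l
    ~ # esxcfg-route -r
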
  • An ESXi host stops responding after being added to vCenter Server by its IPv6 address
    When you add an ESXi host to vCenter Server by IPv6 link-local address of the form fe80::/64, within a short time the host name becomes dimmed and the host stops responding to vCenter Server.

    Workaround: Use a valid IPv6 address that is not a link-local address.

  • The vSphere Web Client lets you configure more virtual functions than are supported by the physical NIC and does not display an error message
    In the SR-IOV settings of a physical adapter, you can configure more virtual functions than are supported by the adapter. For example, you can configure 100 virtual functions on a NIC that supports only 23, and no error message appears. A message prompts you to reboot the host so that the SR-IOV settings are applied. After the host reboots, the NIC is configured with as many virtual functions as the adapter supports, or 23 in this example. The message that prompts you to reboot the host persists when it should not appear.

    Workaround: None

  • On an SR-IOV enabled ESXi host, virtual machines associated with virtual functions might not start
    When SR-IOV is enabled on an ESXi host 5.1 or later with Intel ixgbe NICs, if several virtual functions are enabled in the environment, some virtual machines might fail to start.
    The vmware.log file contains messages similar to the following:
    2013-02-28T07:06:31.863Z| vcpu-1| I120: Msg_Post: Error
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.log.error.unrecoverable] VMware ESX unrecoverable error: (vcpu-1)
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ PCIPassthruChangeIntrSettings: 0a:17.3 failed to register interrupt (error code 195887110)
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.haveLog] A log file is available in "/vmfs/volumes/5122262e-ab950f8e-cd4f-b8ac6f917d68/VMLibRoot/VMLib-RHEL6.2-64-HW7-default-3-2-1361954882/vmwar
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.withoutLog] You can request support.
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.requestSupport.vmSupport.vmx86]
    2013-02-28T07:06:31.863Z| vcpu-1| I120+ To collect data to submit to VMware technical support, run "vm-support".
    2013-02-28T07:06:31.863Z| vcpu-1| I120: [msg.panic.response] We will respond on the basis of your support entitlement.

    Workaround: Reduce the number of virtual functions associated with the affected virtual machine before starting it.

  • On an Emulex BladeEngine 3 physical network adapter, a virtual machine network adapter backed by a virtual function cannot reach a VMkernel adapter that uses the physical function as an uplink
    Traffic does not flow between a virtual function and its physical function. For example, on a switch backed by the physical function, a virtual machine that uses a virtual function on the same port cannot contact a VMkernel adapter on the same switch. This is a known issue of the Emulex BladeEngine 3 physical adapters. For information, contact Emulex.

    Workaround: Disable the native driver for Emulex BladeEngine 3 devices on the host. For more information, see VMware KB 2044993.
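
    As a sketch only: on ESXi 5.5, a native driver is typically disabled by turning off its module and rebooting. The module name below (elxnet) is an assumption; confirm the exact module name in KB 2044993 before applying it:

    # disable the native Emulex driver module (module name assumed; verify against KB 2044993)
    ~ # esxcli system module set --enabled=false --module=elxnet
    ~ # reboot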

  • The ESXi Dump Collector fails to send the ESXi core file to the remote server
    The ESXi Dump Collector fails to send the ESXi core file if the VMkernel adapter that handles the traffic of the dump collector is configured to a distributed port group that has a link aggregation group (LAG) set as the active uplink. An LACP port channel is configured on the physical switch.

    Workaround: Perform one of the following workarounds:

    • Use a vSphere Standard Switch to configure the VMkernel adapter that handles the traffic for the ESXi Dump Collector with the remote server.
    • Use standalone uplinks to handle the traffic for the distributed port group where the VMkernel adapter is configured.
  • If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on a host by using the vSphere Client, the change is not saved, even after a reboot
    If you change the number of ports that a vSphere Standard Switch or vSphere Distributed Switch has on an ESXi 5.5 host by using the vSphere Client, the number of ports does not change even after you reboot the host.

    When a host that runs ESXi 5.5 is rebooted, it dynamically scales up or down the ports of virtual switches. The number of ports is based on the number of virtual machines that the host can run. You do not have to configure the number of switch ports on such hosts.

    Workaround: None in the vSphere Client.

Server Configuration Issues

  • NIC hardware might stop responding with a hardware error message
    The NIC hardware might occasionally stop responding under certain circumstances, with the following error message in the driver logs:

    Detected Hardware Unit Hang

    The issue is observed with some newer e1000e devices such as 82579, i217, i218, and i219.

    Workaround: The NIC hardware resets itself after the issue occurs.

  • Menu navigation problems when the Direct Console User Interface (DCUI) is accessed from a serial console
    When the Direct Console User Interface is accessed from a serial console, the Up and Down arrow keys do not work while navigating the menu, and the user is forcefully logged out of the DCUI configuration screen.

    Workaround: Stop the DCUI process. The DCUI process will be restarted automatically.

  • Host profiles might incorrectly appear as compliant after ESXi hosts are upgraded to 5.5 Update 2 and the host configuration is changed
    If an ESXi host that is compliant with a host profile is updated to ESXi 5.5 Update 2, followed by some changes in the host configuration, and you re-check the compliance of the host with the host profile, the profile is incorrectly reported as compliant.

    Workaround:
    • In the vSphere Client, navigate to the host profile that has the issue and run Update Profile From Reference Host.
    • In the vSphere Web Client, navigate to the host profile that has the issue, click Copy settings from host, select the host from which you want to copy the configuration settings, and click OK.
  • Host Profile remediation fails with vSphere Distributed Switch
    Remediation errors might occur when applying a Host Profile with a vSphere Distributed Switch and a virtual machine with Fault Tolerance is in a powered off state on a host that uses the distributed switch in that Host Profile.

    Workaround: Move the powered off virtual machines to another host in order for the Host Profile to succeed.

  • Host profile receives firewall settings compliance errors when you apply ESX 4.0 or ESX 4.1 profile to ESXi 5.5.x host
    If you extract a host profile from an ESX 4.0 or ESX 4.1 host and attempt to apply it to an ESXi 5.5.x host, the profile remediation succeeds. The compliance check receives firewall settings errors that include the following:

    Ruleset LDAP not found
    Ruleset LDAPS not found
    Ruleset TSM not found
    Ruleset VCB not found
    Ruleset activeDirectorKerberos not found

    Workaround: No workaround is required. This is expected because the firewall settings for an ESX 4.0 or ESX 4.1 host are different from those for an ESXi 5.5.x host.

  • Changing BIOS device settings for an ESXi host might result in invalid device names
    Changing a BIOS device setting on an ESXi host might result in invalid device names if the change causes a shift in the <segment:bus:device:function> values assigned to devices. For example, enabling a previously-disabled integrated NIC might shift the <segment:bus:device:function> values assigned to other PCI devices, causing ESXi to change the names assigned to these NICs. Unlike previous versions of ESXi, ESXi 5.5 attempts to preserve devices names through <segment:bus:device:function> changes if the host BIOS provides specific device location information. Due to a bug in this feature, invalid names such as vmhba1 and vmnic32 are sometimes generated.

    Workaround: Rebooting the ESXi host once or twice might clear the invalid device names and restore the original names. Do not run an ESXi host with invalid device names in production.

Storage Issues

  • ESXi hosts with HBA drivers might stop responding when the VMFS heartbeats to the datastores timeout
    ESXi hosts with HBA drivers might stop responding when the VMFS heartbeats to the datastores timeout with messages similar to the following:

    mem>2014-05-12T13:34:00.639Z cpu8:1416436)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L10" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x5 0x20 0x0.2014-05-12T13:34:05.637Z cpu0:33038)VMW_SATP_ALUA: satp_alua_issueCommandOnPath:651: Path "vmhba2:C0:T1:L4" (UP) command 0xa3 failed with status Timeout. H:0x5 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

    This issue occurs with the HBA driver when there is high disk I/O on the datastore connected to the ESXi host and multipathing is enabled at the target level instead of the HBA level.

    Workaround: Replace the HBA driver with the latest async HBA driver.
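    As an illustrative sketch, an async driver supplied as an offline bundle can be installed with ESXCLI; the bundle path below is a hypothetical placeholder, and the host should be in maintenance mode and rebooted afterwards:

    ~ # esxcli software vib install -d /vmfs/volumes/datastore1/hba-driver-offline-bundle.zip
    ~ # reboot
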
  • Attempts to perform live Storage vMotion of virtual machines with RDM disks might fail
    Storage vMotion of virtual machines with RDM disks might fail, and the virtual machines might be seen in a powered-off state. Attempts to power on the virtual machine fail with the following error:

    Failed to lock the file

    Workaround: None.
  • Renamed tags appear as missing in the Edit VM Storage Policy wizard
    A virtual machine storage policy can include rules based on datastore tags. If you rename a tag, the storage policy that references this tag does not automatically update the tag and shows it as missing.

    Workaround: Remove the tag marked as missing from the virtual machine storage policy and then add the renamed tag. Reapply the storage policy to all out-of-date entities.

  • A virtual machine cannot be powered on when the Flash Read Cache block size is set to 16KB, 256KB, 512KB, or 1024KB
    A virtual machine configured with Flash Read Cache and a block size of 16KB, 256KB, 512KB, or 1024KB cannot be powered on. Flash Read Cache supports a minimum cache size of 4MB and maximum of 200GB, and a minimum block size of 4KB and maximum block size of 1MB. When you power on a virtual machine, the operation fails and the following messages appear:

    An error was received from the ESX host while powering on VM.

    Failed to start the virtual machine.

    Module DiskEarly power on failed.

    Failed to configure disk scsi0:0.

    The virtual machine cannot be powered on with an unconfigured disk. vFlash cache cannot be attached: msg.vflashcache.error.VFC_FAILURE

    Workaround: Configure virtual machine Flash Read Cache size and block size.

    1. Right-click the virtual machine and select Edit Settings.
    2. On the Virtual Hardware tab, expand Hard disk to view the disk options.
    3. Click Advanced next to the Virtual Flash Read Cache field.
    4. Increase the cache size reservation or decrease the block size.
    5. Click OK to save your changes.
  • A custom extension of a saved resource pool tree file cannot be loaded in the vSphere Web Client
    A DRS error message appears on host summary page.

    When you disable DRS in the vSphere Web Client, you are prompted to save the resource pool structure so that it can be reloaded in the future. The default extension of this file is .snapshot, but you can select a different extension for this file. If the file has a custom extension, it appears as disabled when you try to load it. This behavior is observed only on OS X.

    Workaround: Change the extension to .snapshot to load it in the vSphere Web Client on OS X.

  • DRS error message appears on the host summary page
    The following DRS error message appears on the host summary page:

    Unable to apply DRS resource settings on host. The operation is not allowed in the current state. This can significantly reduce the effectiveness of DRS.

    In some configurations a race condition might result in the creation of an error message in the log that is not meaningful or actionable. This error might occur if a virtual machine is unregistered at the same time that DRS resource settings are applied.

    Workaround: Ignore this error message.

  • Configuring virtual Flash Read Cache for VMDKs larger than 16TB results in an error
    Virtual Flash Read Cache does not support virtual machine disks larger than 16TB. Attempts to configure such disks will fail.

    Workaround: None

  • Virtual machines might power off when the cache size is reconfigured
    If you incorrectly reconfigure the virtual Flash Read Cache on a virtual machine, for example by assigning an invalid value, the virtual machine might power off.

    Workaround: Follow the recommended cache size guidelines in the vSphere Storage documentation.

  • Reconfiguring a virtual machine with virtual Flash Read Cache enabled might fail with the Operation timed out error
    Reconfiguration operations require a significant amount of I/O bandwidth. When you run a heavy load, such operations might time out before they finish. You might also see this behavior if the host has LUNs that are in an all paths down (APD) state.

    Workaround: Fix all host APD states and retry the operation with a smaller I/O load on the LUN and host.

  • DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purposes
    DRS does not vMotion virtual machines with virtual Flash Read Cache for load balancing purposes.

    Workaround: DRS does not recommend these virtual machines for vMotion except for the following reasons:

    • To evacuate a host that the user has requested to enter maintenance or standby mode.
    • To fix DRS rule violations.
    • The host resource usage is in a red state.
    • One or more hosts are over-utilized and virtual machine demand is not being met.
      Note: You can optionally set DRS to ignore this reason.
  • Hosts are put in standby when the active memory of virtual machines is low but consumed memory is high
    ESXi 5.5 introduces a change in the default behavior of DPM designed to make the feature less aggressive, which can help prevent performance degradation for virtual machines when active memory is low but consumed memory is high. The DPM metric is X%*IdleConsumedMemory + active memory. The X% variable is adjustable and is set to 25% by default.

    Workaround: You can revert to the aggressive DPM behavior found in earlier releases of ESXi by setting PercentIdleMBInMemDemand=0 in the advanced options.

  • vMotion initiated by DRS might fail
    When DRS recommends vMotion for virtual machines with a virtual Flash Read Cache reservation, vMotion might fail because the memory (RAM) available on the target host is insufficient to manage the Flash Read Cache reservation of the virtual machines.

    Workaround: Follow the Flash Read Cache configuration recommendations documented in vSphere Storage.
    If vMotion fails, perform the following steps:

    1. Reconfigure the block sizes of the virtual machines on the target host and the incoming virtual machines to reduce the overall VMkernel memory usage on the target host.
    2. Use vMotion to manually migrate the virtual machine to the target host to ensure the condition is resolved.
  • You are unable to view problems that occur during virtual flash configuration of individual SSD devices
    The configuration of virtual flash resources is a task that operates on a list of SSD devices. When the task finishes for all objects, the vSphere Web Client reports it as successful, and you might not be notified of problems with the configuration of individual SSD devices.

    Workaround: Perform one of the following tasks.

    • In the Recent Tasks panel, double-click the completed task.
      Any configuration failures appear in the Related events section of the Task Details dialog box.
    • Alternatively, follow these steps:
      1. Select the host in the inventory.
      2. Click the Monitor tab, and click Events.
  • Unable to obtain SMART information for Micron PCIe SSDs on the ESXi host
    Your attempts to use the esxcli storage core device smart get -d command to display statistics for the Micron PCIe SSD device fail. You get the following error message:
    Error getting Smart Parameters: CANNOT open device

    Workaround: None. In this release, the esxcli storage core device smart command does not support Micron PCIe SSDs.
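
    If you need to confirm whether a specific device is affected, you can list the device identifiers and then query one device explicitly from the ESXi Shell. A minimal sketch; the naa.* identifier is a placeholder:

    esxcli storage core device list
    esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx

    For Micron PCIe SSDs, the second command returns the error shown above; for supported devices, it returns the SMART parameters.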

  • ESXi does not apply the bandwidth limit that is configured for a SCSI virtual disk in the configuration file of a virtual machine
    You configure the bandwidth and throughput limits of a SCSI virtual disk by using a set of parameters in the virtual machine configuration file (.vmx). For example, the configuration file might contain the following limits for a scsi0:0 virtual disk:
    sched.scsi0:0.throughputCap = "80IOPS"
    sched.scsi0:0.bandwidthCap = "10MBps"
    sched.scsi0:0.shares = "normal"

    ESXi does not apply the sched.scsi0:0.bandwidthCap limit to the scsi0:0 virtual disk.

    Workaround: Revert to an earlier version of the disk I/O scheduler by using the vSphere Web Client or the esxcli system settings advanced set command.

    • In the vSphere Web Client, edit the Disk.SchedulerWithReservation parameter in the Advanced System Settings list for the host.
      1. Navigate to the host.
      2. On the Manage tab, select Settings and select Advanced System Settings.
      3. Locate the Disk.SchedulerWithReservation parameter, for example, by using the Filter or Find text boxes.
      4. Click Edit and set the parameter to 0.
      5. Click OK.
    • In the ESXi Shell on the host, run the following console command:
      esxcli system settings advanced set -o /Disk/SchedulerWithReservation -i=0
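
      To confirm the change, you can read the option back from the ESXi Shell; a minimal check (the output format might vary by build):

      esxcli system settings advanced list -o /Disk/SchedulerWithReservation
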
  • A virtual machine configured with Flash Read Cache cannot be migrated off a host if there is an error in the cache
    A virtual machine with Flash Read Cache configured might encounter a migration error if the cache is in an error state and is unusable, which causes the migration of the virtual machine to fail.

    Workaround:

    1. Reconfigure the virtual machine and disable the cache.
    2. Perform the migration.
    3. Re-enable the cache after the virtual machine is migrated.

    Alternatively, power off and then power on the virtual machine to correct the error with the cache.

  • You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta
    You cannot delete the VFFS volume after a host is upgraded from ESXi 5.5 Beta.

    Workaround: This occurs only when you upgrade from ESXi 5.5 Beta to ESXi 5.5. To avoid this problem, install ESXi 5.5 instead of upgrading. If you upgrade from ESXi 5.5 Beta, delete the VFFS volume before you upgrade.

  • Expected latency runtime improvements are not seen when virtual Flash Read Cache is enabled on virtual machines with older Windows and Linux guest operating systems
    Virtual Flash Read Cache provides optimal performance when the cache is sized to match the target working set, and when the guest file systems are aligned to at least a 4KB boundary. The Flash Read Cache filters out misaligned blocks to avoid caching partial blocks within the cache. This behavior is typically seen when virtual Flash Read Cache is configured for VMDKs of virtual machines with Windows XP and Linux distributions earlier than 2.6. In such cases, a low cache hit rate with a low cache occupancy is observed, which implies a waste of cache reservation for such VMDKs. This behavior is not seen with virtual machines running Windows 7, Windows 2008, and Linux 2.6 and later distributions, which align their file systems to a 4KB boundary to ensure optimal performance.

    Workaround: To improve the cache hit rate and the optimal use of the cache reservation for each VMDK, ensure that the guest operating system file system installed on the VMDK is aligned to at least a 4KB boundary.
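
    As a rough alignment check inside a Linux guest, you can confirm that each partition starts on a sector number that is a multiple of 8 (8 x 512-byte sectors = 4KB). A minimal sketch; the device name is a placeholder:

    fdisk -lu /dev/sda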

Virtual SAN

  • Unmounted Virtual SAN disks and diskgroups displayed as mounted in the vSphere Client UI Operational Status field
    After the Virtual SAN disks or diskgroups are unmounted using the esxcli vsan storage diskgroup unmount CLI command or automatically by the Virtual SAN Device Monitor service when disks show persistently high latencies, the vSphere Client UI incorrectly displays the Operational Status field as Mounted.

    Workaround: Check the Health field, which shows a non-healthy value, instead of the Operational Status field.
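
    As an alternative to the UI, you can inspect the state of the disks claimed by Virtual SAN from the ESXi Shell; a minimal sketch (the exact fields shown depend on the release):

    esxcli vsan storage list
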
  • ESXi host with multiple VSAN disk groups might not display the magnetic disk statistics when you run the vsan.disks_stats command
    An ESXi host with multiple VSAN disk groups might not display the magnetic disk (MD) statistics when you run the vsan.disks_stats Ruby vSphere Console (RVC) command. The host displays only the solid-state drive (SSD) information.

    Workaround: None
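
    For reference, the command is typically run against the cluster object in an RVC session; the path below is a placeholder:

    vsan.disks_stats /localhost/<datacenter>/computers/<cluster>
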
  • VM directories contain duplicate swap (.vswp) files
    This might occur if virtual machines running on Virtual SAN are not cleanly shut down, and if you perform a fresh installation of ESXi and vCenter Server without erasing data from the Virtual SAN disks. As a result, old swap files (.vswp) are found in the directories of virtual machines that were shut down uncleanly.

    Workaround: None

  • Attempts to add more than seven magnetic disks to a Virtual SAN disk group might fail with incorrect error message
    A Virtual SAN disk group supports a maximum of one SSD and seven magnetic disks (HDDs). Attempts to add an additional magnetic disk might fail with an incorrect error message similar to the following:

    The number of disks is not sufficient.

    Workaround: None
  • Re-scan failure experienced while adding a Virtual SAN disk
    When you add a Virtual SAN disk, the re-scan fails because of a probe failure for a non-Virtual SAN volume.

    Workaround: Ignore the error because all the disks are registered correctly.
  • A hard disk drive (HDD) that is removed after its associated solid state drive (SSD) is removed might still be listed as a storage disk claimed by Virtual SAN
    If an SSD and then its associated HDD is removed from a Virtual SAN datastore and you run the esxcli vsan storage list command, the removed HDD is still listed as a storage disk claimed by Virtual SAN. If the HDD is inserted back in a different host, the disk might appear to be part of two different hosts.

    Workaround: For example, if the SSD and HDD are removed from ESXi x and inserted into ESXi y, perform the following steps to prevent the HDD from appearing to be part of both ESXi x and ESXi y (a verification check follows these steps):
    1. Insert the SSD and HDD that were removed from ESXi x into ESXi y.
    2. Decommission the SSD from ESXi x.
    3. Run the command esxcfg-rescan -A.
       The HDD and SSD will no longer be listed on ESXi x.
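
    To verify that the stale entries are cleared on ESXi x, you can re-run the Virtual SAN disk inventory command from the ESXi Shell; a minimal check:

    esxcli vsan storage list
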
  • The Working with Virtual SAN section of the vSphere Storage documentation indicates that the maximum number of HDD disks per disk group is six. However, the maximum allowed number of HDDs is seven.
  • After a failure in a Virtual SAN cluster, vSphere HA might report multiple events, some misleading, before restarting a virtual machine
    The vSphere HA master agent makes multiple attempts to restart a virtual machine running on Virtual SAN after it has appeared to have failed. If the virtual machine cannot be immediately restarted, the master agent monitors the cluster state and makes another attempt when conditions indicate that a restart might be successful. For virtual machines running on Virtual SAN, the vSphere HA master has special application logic to detect when the accessibility of a virtual machine's objects might have changed, and attempts a restart whenever an accessibility change is likely. The master agent makes an attempt after each possible accessibility change; if it does not successfully power on the virtual machine, it gives up and waits for the next possible accessibility change.

    After each failed attempt, vSphere HA reports an event indicating that the failover was not successful, and after five failed attempts, it reports that vSphere HA stopped trying to restart the virtual machine because the maximum number of failover attempts was reached. Even after reporting that the vSphere HA master agent has stopped trying, however, it does try again the next time a possible accessibility change occurs.

    Workaround: None.

  • Powering off a Virtual SAN host causes the Storage Providers view in the vSphere Web Client to refresh longer than expected
    If you power off a Virtual SAN host, the Storage Providers view might appear empty. The Refresh button continues to spin even though no information is shown.

    Workaround: Wait at least 15 minutes for the Storage Providers view to be populated again. The view also refreshes after you power on the host.

  • Virtual SAN reports a failed task as completed
    Virtual SAN might report certain tasks as completed even though they failed internally.

    The following are conditions and corresponding reasons for errors:

    • Condition: Users attempt to create a new disk group or add a new disk to an already existing disk group when the Virtual SAN license has expired.
      Error stack: A general system error occurred: Cannot add disk: VSAN is not licensed on this host.
    • Condition: Users attempt to create a disk group with a number of disks higher than the supported number, or they try to add new disks to an already existing disk group so that the total number exceeds the supported number of disks per disk group.
      Error stack: A general system error occurred: Too many disks.
    • Condition: Users attempt to add a disk to the disk group that has errors.
      Error stack: A general system error occurred: Unable to create partition table.

    Workaround: After identifying the reason for a failure, correct the reason and perform the task again.

  • Virtual SAN datastores cannot store host local and system swap files
    Typically, you can place the system swap or host local swap file on a datastore. However, the Virtual SAN datastore does not support system swap and host local swap files. As a result, the UI option that allows you to select the Virtual SAN datastore as the file location for system swap or host local swap is not available.

    Workaround: In a Virtual SAN environment, use other supported options to place the system swap and host local swap files.

  • A Virtual SAN virtual machine in a vSphere HA cluster is reported as vSphere HA protected although it has been powered off
    This might happen when you power off a virtual machine whose home object resides on a Virtual SAN datastore and the home object is not accessible. This problem is seen if an HA master agent election occurs after the object becomes inaccessible.

    Workaround:

    1. Make sure that the home object is accessible again by checking the compliance of the object with the specified storage policy.
    2. Power on the virtual machine and then power it off again.

    The status should change to unprotected.

  • Virtual machine object remains in Out of Date status even after Reapply action is triggered and completed successfully
    If you edit an existing virtual machine profile due to new storage requirements, the associated virtual machine objects, home or disk, might go into Out of Date status. This occurs when your current environment cannot support reconfiguration of the virtual machine objects. Using the Reapply action does not change the status.

    Workaround: Add additional resources, hosts or disks, to the Virtual SAN cluster and invoke the Reapply action again.

  • Automatic disk claiming for Virtual SAN does not work as expected if you license Virtual SAN after enabling it
    If you enable Virtual SAN in automatic mode and then assign a license, Virtual SAN fails to claim disks.

    Workaround: Change the mode to Manual, and then switch back to Automatic. Virtual SAN will properly claim the disks.

  • vSphere High Availability (HA) fails to restart a virtual machine when Virtual SAN network is partitioned
    This occurs when Virtual SAN uses VMkernel adapters for internode communication that are on the same subnet as other VMkernel adapters in a cluster. Such a configuration could cause a network failure that disrupts Virtual SAN internode communication while vSphere HA internode communication remains unaffected.

    In this situation, the HA master agent might detect the failure in a virtual machine, but is unable to restart it. For example, this could occur when the host on which the master agent is running does not have access to the virtual machine's objects.

    Workaround: Make sure that the VMkernel adapters used by Virtual SAN do not share a subnet with the VMkernel adapters used for other purposes.

  • VMs might become inaccessible due to high network latency
    In a Virtual SAN cluster setup, if the network latency is high, some VMs might become inaccessible on vCenter Server and you will not be able to power on or access the VM.

    Workaround: Run the vsan.check_state -e -r RVC command.
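
    For reference, in an RVC session the command takes the cluster object as an argument; a sketch with a placeholder path:

    vsan.check_state -e -r /localhost/<datacenter>/computers/<cluster>
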
  • VM operations might timeout due to high network latency
    When storage controllers with low queue depths are used, high network latency might cause VM operations to time out.

    Workaround: Re-attempt the operations when the network load is lower.
  • VMs might get renamed to a truncated version of their vmx file path
    If the vmx file of a virtual machine is temporarily inaccessible, the VM gets renamed to a truncated version of the vmx file path. For example, the virtual machine might get renamed to /vmfs/volumes/vsan:52f1686bdcb477cd-8e97188e35b99d2e/236d5552-ad93. The truncation might delete half the UUID of the VM home directory, making it difficult to map the renamed VM to the original VM based on the VM name alone.

    Workaround: Run the vsan.fix_renamed_vms RVC command.
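
    A sketch of running the command in an RVC session, assuming it accepts the affected virtual machine objects as arguments; the path below is a placeholder:

    vsan.fix_renamed_vms /localhost/<datacenter>/vms/<renamed-vm>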

vCenter Server and vSphere Web Client

  • Unable to add ESXi host to Active Directory domain
    You might observe that the Active Directory domain name is not displayed in the Domain drop-down list under the Select Users and Groups option when you attempt to assign permissions. Also, the Authentication Services Settings option might not display any trusted domain controllers even when the Active Directory has trusted domains.

    Workaround:
    1. Restart the netlogond, lwiod, and then lsassd daemons (see the command sketch after these steps).
    2. Log in to the ESXi host by using the vSphere Client.
    3. On the Configuration tab, click Authentication Services Settings.
    4. Refresh to view the trusted domains.
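
    A minimal sketch of step 1 from the ESXi Shell or an SSH session, assuming the Likewise init scripts are present on the host:

    /etc/init.d/netlogond restart
    /etc/init.d/lwiod restart
    /etc/init.d/lsassd restart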

Virtual Machine Management Issues

  • Unable to perform cold migration and storage vMotion of a virtual machine if the VMDK file name begins with "core"
    Attempts to perform cold migration and storage vMotion of a virtual machine might fail if the VMDK file name begins with "core", with an error message similar to the following:

    A general system error occurred: Error naming or renaming a VM file.

    Error messages similar to the following might be displayed in the vpxd.log file:

    mem> 2014-01-01T11:08:33.150-08:00 [13512 info 'commonvpxLro' opID=8BA11741-0000095D-86-97] [VpxLRO] -- FINISH task-internal-2471 -- -- VmprovWorkflow --
    mem> 2014-01-01T11:08:33.150-08:00 [13512 info 'Default' opID=8BA11741-0000095D-86-97] [VpxLRO] -- ERROR task-internal-2471 -- -- VmprovWorkflow: vmodl.fault.SystemError:
    mem> --> Result:
    mem> --> (vmodl.fault.SystemError){
    mem> --> dynamicType = ,
    mem> --> faultCause = (vmodl.MethodFault) null,
    mem> --> reason = "Error naming or renaming a VM file.",
    mem> --> msg = "",
    mem> --> }

    This issue occurs when the ESXi host incorrectly classifies VMDK files with a name beginning with "core" as a core file instead of the expected disk type.

    Workaround: Ensure that the VMDK file name of the virtual machine does not begin with "core". If it does, use the vmkfstools utility to rename the VMDK file so that the file name does not begin with the word "core".
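
    A minimal sketch of the rename from the ESXi Shell, with placeholder datastore and file names; the vmkfstools -E option renames a virtual disk, and the virtual machine should be powered off first:

    vmkfstools -E /vmfs/volumes/<datastore>/<vm>/core01.vmdk /vmfs/volumes/<datastore>/<vm>/disk01.vmdk
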
  • Virtual machines with Windows 7 Enterprise 64-bit guest operating systems in the French locale experience problems during clone operations
    If you have a cloned Windows 7 Enterprise 64-bit virtual machine that is running in the French locale, the virtual machine disconnects from the network and the customization specification is not applied. This issue appears when the virtual machine is running on an ESXi 5.1 host and you clone it to ESXi 5.5 and upgrade the VMware Tools version to the latest version available with the 5.5 host.

    Workaround: Upgrade the virtual machine compatibility to ESXi 5.5 and later before you upgrade to the latest available version of VMware Tools.

  • Attempts to increase the size of a virtual disk on a running virtual machine fail with an error
    If you increase the size of a virtual disk when the virtual machine is running, the operation might fail with the following error:

    This operation is not supported for this device type.

    The failure might occur if you extend the disk to a size of 2TB or larger. The hot-extend operation supports increasing the disk size only up to 2TB. SATA virtual disks do not support the hot-extend operation, regardless of their size.

    Workaround: Power off the virtual machine to extend the virtual disk to 2TB or larger.

VMware HA and Fault Tolerance Issues
  • Fault Tolerance (FT) is not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms
    Fault Tolerance is not supported on the Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Fault Tolerance.

    Workaround: None

  • If you select an ESX/ESXi 4.0 or 4.1 host in a vSphere HA cluster to fail over a virtual machine, the virtual machine might not restart as expected
    When vSphere HA restarts a virtual machine on an ESX/ESXi 4.0 or 4.1 host that is different from the original host the virtual machine was running on, a query is issued that is not answered. The virtual machine is not powered on on the new host until you answer the query manually from the vSphere Client.

    Workaround: Answer the query from the vSphere Client. Alternatively, you can wait for a timeout (15 minutes by default), and vSphere HA attempts to restart the virtual machine on a different host. If the host is running ESX/ESXi 5.0 or later, the virtual machine is restarted.

  • If a vMotion operation without shared storage fails in a vSphere HA cluster, the destination virtual machine might be registered to an unexpected host
    A vMotion migration involving no shared storage might fail because the destination virtual machine does not receive a handshake message that coordinates the transfer of control between the two virtual machines. The vMotion protocol powers off both the source and destination virtual machines. If the source and destination hosts are in the same cluster and vSphere HA is enabled, the destination virtual machine might be registered by vSphere HA on a host other than the one chosen as the target for the vMotion migration.

    Workaround: If you want to retain the destination virtual machine and you want it to be registered to a specific host, relocate the destination virtual machine to the destination host. This relocation is best done before powering on the virtual machine.

Supported Hardware Issues
  • Sensor values for Fan, Power Supply, Voltage, and Current sensors appear under the Other group of the vCenter Server Hardware Status Tab
    Some sensor values are listed in the Other group instead of the respective categorized group.

    Workaround: None.

  • I/O memory management unit (IOMMU) faults might appear when the debug direct memory access (DMA) mapper is enabled
    The debug mapper places devices in IOMMU domains to help catch device memory accesses to addresses that have not been explicitly mapped. On some HP systems with old firmware, IOMMU faults might appear.

    Workaround: Download firmware upgrades from the HP Web site and apply them.

    • Upgrade the firmware of the HP iLO2 controller.
      Version 2.07, released in August 2011, resolves the problem.
    • Upgrade the firmware of the HP Smart Array.
      For the HP Smart Array P410, version 5.14, released in January 2012, resolves the problem.

VMware Tools Issues

  • User is forcefully logged out while installing or uninstalling VMware Tools by OSP
    While installing or uninstalling VMware Tools packages in RHEL (Red Hat Enterprise Linux) and CentOS virtual machines that were installed by using operating system specific packages (OSP), the current user is forcefully logged out. This issue occurs in RHEL 6.5 64-bit, RHEL 6.5 32-bit, CentOS 6.5 64-bit, and CentOS 6.5 32-bit virtual machines.

    Workaround:
    • Use secure shell (SSH) to install or uninstall VMware Tools
      or
    • The user must log in again to install or uninstall the VMware Tools packages

Miscellaneous Issues

  • New Issue ESXi does not get automatically added to vCenter Server inventory
    If you use a previous version of vCenter Server and vSphere Update Manager to update ESXi to 5.5 Update 3b, the ESXi host is not automatically added back to the vCenter Server inventory after the remediation task. The remediation process never completes, and the ESXi connection status in the vCenter Server inventory is shown as disconnected.

    Workaround: When ESXi reboots after the remediation process starts, enable SSLv3 on the ESXi host (it is disabled by default).
    This ensures that ESXi is added to the vCenter Server inventory automatically within a few minutes and that the remediation is reported as completed. For more information, refer to KB 2139396.

  • New Issue Connection failure between ESXi 5.5 Update 3b and View Composer versions earlier than 6.2
    You cannot connect View Composer versions earlier than 6.2 to ESXi 5.5 Update 3b in the default state.

    Workaround: You can enable SSLv3 on ESXi 5.5 Update 3b. For more information, refer to KB 2139396.

  • SRM test recovery operation might fail with an error
    Attempts to perform a Site Recovery Manager (SRM) test recovery might fail with an error message similar to the following:
    'Error - A general system error occurred: VM not found'.
    When several test recovery operations are performed simultaneously, the probability of encountering the error messages increases.

    Workaround: None. However, this is not a persistent issue and might not occur if you perform the SRM test recovery operation again.