VMware ESXi 6.0 Update 1a Release Notes

Updated on: 6 OCT 2015

ESXi 6.0 Update 1a | 6 OCT 2015 | ISO Build 3073146

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

This release of VMware ESXi contains the following enhancements:

  • I/O Filter: vSphere APIs for I/O Filtering (VAIO) provide a framework that allows third parties to create software components called I/O filters. The filters can be installed on ESXi hosts and can offer additional data services to virtual machines by processing I/O requests that move between the guest operating system of a virtual machine and virtual disks.

  • Exclusive affinity to additional system contexts associated with a low-latency VM: This release introduces a new VMX option sched.cpu.latencySensitivity.sysContexts to address issues on vSphere 6.0 where most system contexts are still worldlets. The Scheduler utilizes the sched.cpu.latencySensitivity.sysContexts option for each virtual machine to automatically identify a set of system contexts that might be involved in the latency-sensitive workloads. For each of these system contexts, exclusive affinity to one dedicated physical core is provided. The VMX option sched.cpu.latencySensitivity.sysContexts denotes how many exclusive cores a low-latency VM can get for the system contexts.
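
    A minimal sketch of the relevant .vmx entries, assuming a VM that already has latency sensitivity set to high with full CPU and memory reservations; the value 2 below is only a hypothetical number of exclusive cores for system contexts:

      sched.cpu.latencySensitivity = "high"
      sched.cpu.latencySensitivity.sysContexts = "2"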

  • ESXi Authentication for Active Directory: ESXi is modified to support only AES256-CTS/AES128-CTS/RC4-HMAC encryption for Kerberos communication between ESXi and Active Directory.

  • Support for SSLv3: Support for SSLv3 has been disabled by default. For further details, see Knowledge Base article 2121021.

  • Dying Disk Handling (DDH): The Dying Disk Handling feature provides a latency monitoring framework in the kernel, a daemon to detect high latency periods, and a mechanism to unmount individual disks and disk groups.

  • Stretched Clusters: Virtual SAN 6.0 Update 1 supports stretched clusters that span geographic locations to protect data from site failures or loss of network connection.

Earlier Releases of ESXi 6.0

Features and known issues of ESXi 6.0 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.0 are available on the VMware website.

Internationalization

VMware ESXi 6.0 is available in the following languages:

  • English
  • French
  • German
  • Japanese
  • Korean
  • Simplified Chinese
  • Traditional Chinese

Components of VMware vSphere 6.0, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client, do not accept non-ASCII input.

Compatibility

ESXi, vCenter Server, and vSphere Web Client Version Compatibility

The VMware Product Interoperability Matrix provides details about the compatibility of current and earlier versions of VMware vSphere components, including ESXi, VMware vCenter Server, the vSphere Web Client, and optional VMware products. Check the VMware Product Interoperability Matrix also for information about supported management and backup agents before you install ESXi or vCenter Server.

The vSphere Web Client is packaged with the vCenter Server. You can install the vSphere Client from the VMware vCenter autorun menu that is part of the modules ISO file.

Hardware Compatibility for ESXi

To view a list of processors, storage devices, SAN arrays, and I/O devices that are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Device Compatibility for ESXi

To determine which devices are compatible with ESXi 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Some devices are deprecated and no longer supported on ESXi 6.0. During the upgrade process, the device driver is installed on the ESXi 6.0 host. The device driver might still function on ESXi 6.0, but the device is not supported on ESXi 6.0. For a list of devices that are deprecated and no longer supported on ESXi 6.0, see KB 2087970.

Third-Party Switch Compatibility for ESXi

VMware now supports Cisco Nexus 1000V with vSphere 6.0. vSphere requires a minimum NX-OS release of 5.2(1)SV3(1.4). For more information about Cisco Nexus 1000V, see the Cisco Release Notes. As in previous vSphere releases, Cisco Nexus 1000V AVS mode is not supported.

Guest Operating System Compatibility for ESXi

To determine which guest operating systems are compatible with vSphere 6.0, use the ESXi 6.0 information in the VMware Compatibility Guide.

Virtual Machine Compatibility for ESXi

Virtual machines that are compatible with ESX 3.x and later (hardware version 4) are supported with ESXi 6.0. Virtual machines that are compatible with ESX 2.x and later (hardware version 3) are not supported. To use such virtual machines on ESXi 6.0, upgrade the virtual machine compatibility. See the vSphere Upgrade documentation.

Installation and Upgrades for This Release

Installation Notes for This Release

Read the vSphere Installation and Setup documentation for guidance about installing and configuring ESXi and vCenter Server.

Although the installations are straightforward, several subsequent configuration steps are essential. Read the following documentation:

vSphere 6.0 Recommended Deployment Models

VMware recommends only two deployment models:

  • vCenter Server with embedded Platform Services Controller. This model is recommended if one or more standalone vCenter Server instances are required in a data center. Replication between vCenter Server instances with embedded Platform Services Controllers is not recommended.

  • vCenter Server with external Platform Services Controller. This model is recommended only if multiple vCenter Server instances need to be linked or if you want to reduce the footprint of the Platform Services Controller in the data center. Replication between vCenter Server instances with external Platform Services Controllers is supported.

Read the vSphere Installation and Setup documentation for guidance on installing and configuring vCenter Server.

Also, read KB 2108548 for guidance on installing and configuring vCenter Server.

vCenter Host OS Information

Read the Knowledge Base article KB 2091273.

Backup and Restore for vCenter Server and the vCenter Server Appliance Deployments that Use an External Platform Services Controller

Although statements in the vSphere Installation and Setup documentation restrict you from attempting to back up and restore vCenter Server and vCenter Server Appliance deployments that use an external Platform Services Controller, you can perform this task by following the steps in KB 2110294.

Migration from Embedded Platform Services Controller to External Platform Services Controller

vCenter Server with embedded Platform Services Controller cannot be migrated automatically to vCenter Server with external Platform Services Controller. Testing of this migration utility is not complete.

Before installing vCenter Server, determine your desired deployment option. If more than one vCenter Server instance is required for a replication setup, always deploy vCenter Server with an external Platform Services Controller.

Migrating Third-Party Solutions

For information about upgrading with third-party customizations, see the vSphere Upgrade documentation. For information about using Image Builder to make a custom ISO, see the vSphere Installation and Setup documentation.

Upgrades and Installations Disallowed for Unsupported CPUs

vSphere 6.0 supports only processors available after June (third quarter) 2006. Compared with the processors supported by vSphere 5.x, vSphere 6.0 no longer supports the following processors:

  • AMD Opteron 12xx Series
  • AMD Opteron 22xx Series
  • AMD Opteron 82xx Series

During an installation or upgrade, the installer checks the compatibility of the host CPU with vSphere 6.0. If your host hardware is not compatible, a purple screen appears with an incompatibility information message, and the vSphere 6.0 installation process stops.

Upgrade Notes for This Release

For instructions about upgrading vCenter Server and ESX/ESXi hosts, see the vSphere Upgrade documentation.

Open Source Components for VMware vSphere 6.0

The copyright statements and licenses applicable to the open source software components distributed in vSphere 6.0 are available at http://www.vmware.com. You need to log in to your My VMware account. Then, from the Downloads menu, select vSphere. On the Open Source tab, you can also download the source files for any GPL, LGPL, or other similar licenses that require the source code or modifications to source code to be made available for the most recent available release of vSphere.

Product Support Notices

  • vCenter Server database. Oracle 11g and 12c as an external database for vCenter Server Appliance have been deprecated in the vSphere 6.0 release. VMware continues to support Oracle 11g and 12c as an external database in vSphere 6.0. VMware will drop support for Oracle 11g and 12c as an external database for vCenter Server Appliance in a future major release.

  • vSphere Web Client. The Storage Reports selection from an object's Monitor tab is no longer available in the vSphere 6.0 Web Client.

  • vSphere Client. The Storage Views tab is no longer available in the vSphere 6.0 Client.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the VMware Download Patches page for more information about the individual bulletins.

Patch Release ESXi600-Update01 contains the following individual bulletins:

ESXi600-201510401-BG: Updates ESXi 6.0 esx-base vib

Patch Release ESXi600-Update01 contains the following image profiles:

ESXi-6.0.0-20151004001-standard
ESXi-6.0.0-20151004001-no-tools

For information on patch and update classification, see KB 2014447.
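
A quick way to confirm what is installed on a host after patching is to query the image profile and the esx-base VIB from the ESXi Shell (a sketch; the exact profile name and version strings on your host may differ):

  esxcli software profile get
  esxcli software vib list | grep esx-base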

Resolved Issues

The resolved issues are grouped as follows.

CIM and API Issues
  • ServerView CIM Provider fails to monitor hardware status if Emulex CIM Provider is installed on the same ESXi host
    When the ServerView CIM Provider and the Emulex CIM Provider are installed on the same ESXi host, the Emulex CIM Provider (sfcb-emulex_ucn) might fail to respond, resulting in a failure to monitor hardware status.

    This issue is resolved in this release.
Guest Operating System Issues
  • An EFI-booted Linux guest might fail to respond to keyboard and mouse input
    A Linux guest OS booted on EFI firmware might fail to respond to the keyboard and mouse input if any motion of the mouse occurs during the short window of EFI boot time.

    This issue is resolved in this release.

Upgrade and Installation Issues
  • Applying a host profile with stateless caching enabled on a stateless ESXi host might take a long time to complete
    Applying a host profile to a stateless ESXi host with a large number of storage LUNs might take a long time to reboot when you enable stateless caching with esx as the first disk argument. This happens when you apply the host profile manually or during a reboot of the host.

    This issue is resolved in this release.

  • Unable to install or upgrade to VMware Tools version 9.10.0 on the Dutch version of Windows Server 2008 R2
    Attempts to install or upgrade to VMware Tools version 9.10.0, available in ESXi 6.0, might fail on the Dutch version of Windows Server 2008 R2. An error message similar to the following is displayed:

    VMware Tools Setup Wizard ended prematurely

    This issue is resolved in this release.

  • VIB stage operation might cause VIB installation or configuration change to be lost after an ESXi host reboot
    When some VIBs are installed on the system, esxupdate constructs a new image in /altbootbank and changes the /altbootbank boot.cfg bootstate to be updated. When a live installable VIB is installed, the system saves the configuration change to /altbootbank. The stage operation deletes the contents of /altbootbank unless you perform a remediate operation after the stage operation. The VIB installation might be lost if you reboot the host after a stage operation.

    This issue is resolved in this release.

Networking Issues
  • Attempts to add a vmnic to an ESXi Host on VDS fail with Unsupported address family error
    After you upgrade from ESXi 5.5 to 6.0, attempts to add a vmnic to a VMware ESXi host connected to a vSphere Distributed Switch (VDS) might fail. The issue occurs when ipfix is enabled and IPv6 is disabled.

    In the /var/log/vmkernel.log file on the affected ESXi host, you see entries similar to:

    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixActivate:xxx: Activation failed for 'DvsPortset-1': Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: Ipfix: IpfixDVPortParamWrite:xxx: Configuration failed for switch DvsPortset-1 port xxxxxxxx : Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetDVS: xxxx: failed to init client for data com.vmware.etherswitch.port.ipfix on port xxx
    cpu10:xxxxx opID=xxxxxxxx)WARNING: NetPort: xxxx: failed to enable port 0x4000002: Unsupported address family
    cpu10:xxxxx opID=xxxxxxxx)NetPort: xxxx: disabled port 0x4000002
    cpu10:xxxxx opID=xxxxxxxx)Uplink: xxxx: vmnic2: Failed to enable the uplink port 0x4000002: Unsupported address family

    This issue is resolved in this release.

  • Virtual machines using VMXNET3 virtual adapter might fail when attempting to boot from iPXE
    Virtual machines using VMXNET3 virtual adapter might fail when attempting to boot from iPXE (open source boot firmware).

    This issue is resolved in this release.

  • Failover not initiated if an uplink is disconnected or shut down when using load balancing based on physical NIC load
    When using load balancing based on physical NIC load on VDS 6.0, if one of the uplinks is disconnected or shut down, failover is not initiated.

    This issue is resolved in this release.

  • When vmk10 or higher is enabled for vMotion, on reboot vmk1 might get enabled for vMotion
    Enabling vMotion on vmk10 or higher might cause vmk1 to have vMotion enabled on reboot of the ESXi host. This issue can cause excessive traffic over vmk1 and result in network issues.

    This issue is resolved in this release.

  • Virtual machine Network performance data metrics not available for VM configured with VMXNET3 connected to a standard vSwitch
    You are unable to view the real-time network performance graph of a virtual machine configured with a VMXNET3 adapter in the VMware vSphere Client 6.0 because the option is not available in the Switch to drop-down list.

    This issue is resolved in this release.

  • Network connectivity lost when applying host profile during Auto Deploy
    When applying host profile during Auto Deploy, you might lose network connectivity because the VXLAN Tunnel Endpoint (VTEP) NIC gets tagged as management vmknic.

    This issue is resolved in this release.

  • Nested ESXi might lose connectivity and e1000e virtual NIC might get reset
    Nested ESXi might intermittently lose connectivity and the e1000e virtual NIC might get reset. An All Paths Down (APD) condition to NFS volumes might also be observed. An error message similar to the following is written to the vmkernel.log file:

    packets completion seems stuck, issuing reset

    This issue is resolved in this release.

  • New Network connectivity issues after upgrade from ESXi 5.x to ESXi 6.0
    After you upgrade from ESXi 5.x to ESXi 6.0, you might encounter the following issues:

    • The ESXi 6.0 host might randomly lose network connectivity

    • The ESXi 6.0 host becomes non-responsive and unmanageable until reboot

    • After reboot, the issue is temporarily resolved for a period of time, but occurs again after a random interval

    • Transmit timeouts are often logged by the NETDEV WATCHDOG service in the ESXi host. You may see entries similar to the following in the /var/log/vmkernel.log file:

      cpu0:33245)WARNING: LinNet: netdev_watchdog:3678: NETDEV WATCHDOG: vmnic0: transmit timed out
      cpu0:33245)WARNING: at vmkdrivers/src_92/vmklinux_92/vmware/linux_net.c:3707/netdev_watchdog() (inside vmklinux)

    • The issue can impact multiple network adapter types across multiple hardware vendors. The exact logging that occurs during a transmit timeout may vary from card to card.

    This issue is resolved in this release.

Storage Issues
  • ESXi host might fail with a purple diagnostic screen when multiple vSCSI filters are attached to a VM disk
    An ESXi 6.0 host might fail with a purple diagnostic screen when multiple vSCSI filters are attached to a VM disk. The purple diagnostic screen or backtrace contains entries similar to the following:

    cpu24:nnnnnn opID=nnnnnnnn)@BlueScreen: #PF Exception 14 in world 103492:hostd-worker IP 0x41802c2c094d addr 0x30
    PTEs:0xnnnnnnnn;0xnnnnnnnnnn;0x0;
    cpu24:nnnnnn opID=nnnnnnnn)Code start: 0xnnnnnnnnnnnn VMK uptime: 21:06:32:38.296
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_GetFilterPrivateData@vmkernel#nover+0x1 stack: 0xnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSCSIFilter_IssueInternalCommand@vmkernel#nover+0xc3 stack: 0xnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FileSyncRead@<None>#<None>+0xb1 stack: 0x0
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_DigestRecompute@<None>#<None>+0xnnn stack: 0xnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]CBRC_FilterDigestRecompute@<None>#<None>+0x36 stack: 0x20
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]VSI_SetInfo@vmkernel#nover+0x322 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]UWVMKSyscallUnpackVSI_Set@<None>#<None>+0xef stack: 0x41245111df10
    YYYY-MM-DD TIME cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@<None>#<None>+0x243 stack: 0xnnnnnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]User_UWVMKSyscallHandler@vmkernel#nover+0x1d stack: 0xnnnnnnnn
    cpu24:nnnnnn opID=nnnnnnnn)0xnnnnnnnnnnnn:[0xnnnnnnnnnnnn]gate_entry@vmkernel#nover+0x64 stack: 0x0

    This issue is resolved in this release.

  • ESXi host stops responding and loses connection to vCenter Server during storage hiccups on Non-ATS VMFS datastores
    An ESXi host might stop responding and the virtual machines become inaccessible. Also, the ESXi host might lose connection to vCenter Server due to a deadlock during storage hiccups on Non-ATS VMFS datastores.

    This issue is resolved in this release.

  • Not-shared Storage and Used Storage not reflecting the expected values for virtual machines
    When multiple virtual machines share storage space, the vSphere Client summary page might display incorrect values for the following:

    • Not-shared Storage in the VM Summary Page

    • Provisioned Space in the datastore Summary page

    • Used Space in the VM tab of the host


    This issue is resolved in this release.

  • vSphere might not detect all drives in the system even if they are displayed in the BIOS
    vSphere might not detect all 18 drives in the system because the lsi_msgpt3 driver is unable to detect a single drive per HBA when there are multiple HBAs in the system.

    This issue is resolved in this release.

  • Serial Attached SCSI (SAS) drives greater than 2 TB not detected by lsi-mr3 driver
    The lsi_mr3 driver does not detect SAS drives greater than 2 TB. Error messages similar to the following are logged:

    cpu35:33396)WARNING: ScsiDeviceIO: 8469: Plugin entry point isSSD() failed on device naa.5000c50057b932a7 from plugin NMP: Failure
    cpu35:33396)ScsiDevice: 3025: Failing registration of device 'naa.5000c50057b932a7': failed to get device I/O error attributes.
    cpu35:33396)ScsiEvents: 545: Event Subsystem: Device Events, Destroyed!
    cpu35:33396)WARNING: NMP: nmp_RegisterDevice:673: Registration of NMP device with primary uid 'naa.5000c50057b932a7' failed. I/O error

    The lsi_mr3 driver is updated in this release to resolve this issue.

  • VMFS volume is locked
    A VMFS volume on an ESXi host might remain locked due to failed metadata operations. An error message similar to the following is observed in the vmkernel.log file:

    WARNING: LVM: 12976: The volume on the device naa.50002ac002ba0956:1 locked, possibly because some remote host encountered an error during a volume operation and could not recover.

    This issue is resolved in this release.

  • Attempts to modify storage policies for virtual machine created from linked clone might fail
    Attempts to modify the storage policies of a powered-on virtual machine created from a linked clone might fail in vCenter Server with an error message similar to the following:

    The scheduling parameter change failed.

    This issue is resolved in this release.

  • LUNs attached to ESXi 6.0 hosts might remain in APD Timeout state after paths have recovered
    When an All Paths Down (APD) event occurs, LUNs connected to ESXi might remain inaccessible after paths to the LUNs recover. You see the following events in sequence in the /var/log/vmkernel.log:

    1. Device enters APD.
    2. Device exits APD.
    3. Heartbeat recovery and file system operations on the device fail with not found errors.
    4. The APD timeout expires despite the fact that the device exited APD previously.

    This issue is resolved in this release.

  • Unnecessary rescan triggered by Virtual SAN might cause the ESXi host and virtual machines to stop responding
    Unnecessary periodic device and file system rescan triggered by Virtual SAN might cause the ESXi host and virtual machines within the environment to randomly stop responding.

    This issue is resolved in this release.

  • Storage performance might be slow on virtual machines running on VSA-provisioned NFS storage
    Slow NFS storage performance is observed on virtual machines running on VSA-provisioned NFS storage. This is due to delayed acknowledgements sent from the ESXi host for NFS read responses.

    This issue is resolved in this release by disabling delayed acks for LRO TCP packets.

  • Cloning VMs across different storage containers incorrectly sets the source VMId as the cloned VM's initial VVOL VMId
    When you clone a virtual machine across different storage containers, the VMId of the source Virtual Volume (VVOL) is taken as the initial value for the cloned VVOL VMId.

    This issue is resolved in this release.

  • WRITE SAME command is disabled on local drives
    The WRITE SAME command is disabled for local drives in ESXi 6.0 Update 1 because some disk drives might not completely implement the WRITE SAME functionality, resulting in erroneous behavior. You can use the esxcli storage core device vaai status command to enable or disable VAAI on local drives. See Knowledge Base article 2131056 for details.
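
    For example, to check the current VAAI status, including WRITE SAME support, for a local device from the ESXi Shell (a sketch; the device identifier below is hypothetical):

     esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx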

Backup Issues
  • Snapshot quiescing fails on Linux VMs
    When you perform a quiesced snapshot of a Linux virtual machine, the VM might fail after the snapshot operation. The following error messages are logged in the vmware.log file:

    <YYYY-MM-DD>T<TIME>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: done with snapshot 'smvi_UUID': 0
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (40).
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
    <YYYY-MM-DD>T<TIME>Z| vmx| I120: Vix: [18631 guestCommands.c:1926]: Error VIX_E_TOOLS_NOT_RUNNING in MAutomationTranslateGuestRpcError(): VMware Tools are not running in the guest

    This issue is resolved in this release.

Security Issues
  • Update to the Python package
    The Python third-party library is updated to version 2.7.9.
  • Update to the libPNG library
    The libPNG library is updated to libpng-1.6.16.
  • Update to the OpenSSL library
    The ESXi userworld OpenSSL library is updated to version openssl-1.0.1m.
  • Update to the libxml2 library
    The ESXi userworld libxml2 library is updated to version 2.9.2.

  • Update to the VMware Tools libPNG and libxml2 libraries
    The VMware Tools libPNG and libxml2 libraries are updated to versions 1.2.52 and 2.9.2 respectively.
  • Update to the VMware Tools OpenSSL library
    The VMware Tools OpenSSL library is updated to version openssl-0.9.8zg.
Server Configuration Issues
  • You can set the debug log and enable the trace log for the host profile engine
    A new Host Profile plugin is now available to collect the DEBUG log and enable the trace log of the ESXi host profile engine when the host is booted through Active Directory.

  • Attempts to restart a large number of virtual machines might fail when using an NVIDIA GRID vGPU device
    When you reset a large number of virtual machines at the same time with an NVIDIA GRID vGPU device, some of the virtual machines might fail to reboot. A reboot error similar to the following might be displayed:

    VMIOP: no graphics device is available for vGPU grid_k100.

    This issue is resolved in this release.

  • Setting a CPU limit for a VM impacts other VMs
    When you set the CPU limit of a uni-processor virtual machine, the overall ESXi utilization might decrease due to a defect in the ESXi scheduler. This happens when the ESXi scheduler considers CPU-limited VMs as runnable (when they are not running) while making CPU-load estimations, leading to incorrect load-balancing decisions. For more details, see Knowledge Base article 2096897.

    This issue is resolved in this release.

  • Attempts to create a FIFO and write data to it might result in a purple diagnostic screen
    When you create a FIFO and attempt to write data to /tmp/dpafifo, a purple diagnostic screen might be displayed under certain conditions.

    This issue is resolved in this release.
  • Advanced error reporting disabled when using passthrough devices
    While the PCI information on a device is collected for passthrough, the error reporting for that device is disabled.

    This issue is resolved in this release by providing the VMkernel boot option pcipDisablePciErrReporting, which enables PCI passthrough devices to report errors. By default, the option is set to TRUE, which means error reporting is disabled.
  • Virtual machine might not display a warning message when the CPU is not fully reserved
    When you create a virtual machine with sched.cpu.latencySensitivity set to high and power it on, the exclusive affinity for the vCPUs might not get enabled if the VM does not have a full CPU reservation.

    In earlier releases, the VM did not display a warning message when the CPU was not fully reserved.

    This issue is resolved in this release.
  • Command esxcli system snmp set -p updated
    The SNMP agent can be configured to listen on a custom port by using the command esxcli system snmp set -p <port>. In ESXi 6.0 Update 1, a set of TCP/UDP ports from 32768 to 40959 is reserved for third-party use, and the SNMP agent is no longer allowed to listen on a port in this range.

    After you upgrade to ESXi 6.0 Update 1, the SNMP agent does not start and displays a range check error if a custom port in this range was previously configured.
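
    A sketch of how you might review the SNMP configuration and move the agent off a port in the reserved range after the upgrade; port 161 is only an example value:

     esxcli system snmp get
     esxcli system snmp set -p 161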
  • Host profiles become non-compliant with simple change to SNMP syscontact or syslocation
    Host Profiles become non-compliant with a simple change to SNMP syscontact or syslocation. The issue occurs as the SNMP host profile plugin applies only a single value to all hosts attached to the host profile. An error message similar to the following might be displayed:

    SNMP Agent Configuration differs

    This issue is resolved in this release by enabling per-host value settings for certain parameters such as syslocation, syscontact, v3targets, v3users, and engineid.
Virtual Machine Management Issues
  • Performance counters for vSphere Flash Read Cache might not be available on a virtual machine
    The vFlash cache metric counters such as FlashCacheIOPs, FlashCacheLatency, FlashCacheThroughput might not be available when CBT is enabled on a virtual disk. Error messages similar to the following might be logged in the stats.log file:

    xxxx-xx-xxTxx:xx:xx.200Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] CollectVmVdiskStats : Failed to get VFlash Cache stats for vscsi id scsi0:0 for vm 3
    xxxx-xx-xxTxx:xx:xx.189Z [xxxxxxxx error 'Statssvc.vim.PerformanceManager'] GetVirtualDiskVFCStats: Failed to get VFlash Cache stat values for vmdk scsi0:0. Exception VFlash Cache filename not found!

    This issue is resolved in this release.

  • Unable to launch VMs with a high display resolution and a multiple-monitor setup
    Attempts to launch virtual machines with a high display resolution and a multiple-monitor setup from VDI using PCoIP solutions might fail. The VMs fail on launch and go into a powered-off state. In the /var/log/vmkwarning.log, you see entries similar to:

    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 12276: vmm0:VDI-STD-005:vmk: vcpu-0:p2m update buffer full
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 652: COW copy failed: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: VmMemPf: vm xxxxxx: 2626: PhysPageFault failed Failure: pgNum=0x3d108, mpn=0x3fffffffff
    cpu3:xxxxxx)WARNING: UserMem: 10592: PF failed to handle a fault on mmInfo at va 0x60ee6000: Failure. Terminating...
    cpu3:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu7:xxxxxx)WARNING: World: vm xxxxxx: 3973: VMMWorld group leader = 255903, members = 1
    cpu0:xxxxx)WARNING: World: vm xxxxxx: 9604: Panic'd VMM world being reaped, but no core dumped.


    This issue is resolved in this release.

VMware HA and Fault Tolerance Configuration Issues
  • Migrating a secondary virtual machine might fail under heavy workload
    Attempts to migrate a secondary VM enabled with fault tolerance might fail and the VM might become unresponsive under heavy workload.

    This issue is resolved in this release.
Miscellaneous Issues
  • Excessive logging of VmkAccess messages in the vmkernel log
    On HP systems with ESXi 6.0, you might see excessive logging of VmkAccess messages in vmkernel.log for the following system commands that are executed during runtime:

    • esxcfg-scsidevs
    • localcli storage core path list
    • localcli storage core device list


    Excessive log messages similar to the following are logged in the VmkAccess logs:

    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu7:36122)VmkAccess: 637: localcli: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)
    cpu7:36122)VmkAccess: 922: VMkernel syscall invalid (1025)
    cpu0:36129)VmkAccess: 637: esxcfg-scsidevs: access denied:: dom:appDom(2), obj:forkExecSys(88), mode:syscall_allow(2)

    This issue is resolved in this release.
  • New Python package import fails
    The Python package import might fail on the newer Python version 2.7.9. The issue occurs because the newer version of Python is unable to locate the module pkg_resources.py, causing the import pkg_resources statement to fail.

    This issue is resolved in this release.
VMware Tools Issues
  • Update to the FUSE library
    The FUSE library is updated to libfuse 2.9.4.

Known Issues

The known issues existing in ESXi 6.0 are grouped as follows:

New known issues documented in this release are highlighted as New Issue.

Installation Issues
  • New Issue The VMware Tools service user processes might not run on Linux OS after installing the latest VMware Tools package
    On Linux OS, you might encounter VMware Tools upgrade or installation issues, or the VMware Tools service (vmtoolsd) user processes might not run after installing the latest VMware Tools package. The issue occurs if your glibc version is older than version 2.5, as on SLES10sp4.

    Workaround: Upgrade the Linux glibc to version 2.5 or above.

Upgrade Issues

Review also the Installation Issues section of the release notes. Many installation issues can also impact your upgrade process.

  • New Issue SSLv3 remains enabled on Auto Deploy after upgrade from earlier release of ESXi 6.0 to ESXi 6.0 Update 1
    When you upgrade from an earlier release of ESXi 6.0 to ESXi 6.0 Update 1, the SSLv3 protocol remains enabled on Auto Deploy.

    Workaround: Perform the following steps to disable SSLv3 using PowerCLI commands:

    1. Run the following command to connect to vCenter Server:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Connect-VIServer -Server <FQDN_hostname or IP Address of vCenter Server>

    2. Run the following command to check the current sslv3 status:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Get-DeployOption

    3. Run the following command to disable sslv3:

      PowerCLI C:\Program Files (x86)\VMware\Infrastructure\vSphere PowerCLI> Set-DeployOption disable-sslv3 1

    4. Restart the Auto Deploy service to update the change.

  • Fibre Channel host bus adapter device number might change after ESXi upgrade from 5.5.x to 6.0

    During ESXi upgrade from 5.5.x to 6.0, the Fibre Channel host bus adapter device number changes occasionally. The device number might change to another number if you use the esxcli storage core adapter list command.

    For example, the device numbers for a Fibre Channel host bus adapter might look similar to the following before ESXi upgrade:

    HBA Name
    ––––––––
    vmhba2
    vmhba3
    vmhba5
    vmhba6

    The device numbers for the Fibre Channel host bus adapter might look similar to the following after an ESXi upgrade to 6.0:

    HBA Name
    ––––––––
    vmhba64
    vmhba65
    vmhba5
    vmhba6

    The example illustrates the random change that might occur if you use the esxcli storage core adapter list command: the device alias numbers vmhba2 and vmhba3 change to vmhba64 and vmhba65, while device numbers vmhba5 and vmhba6 are not changed. However, if you use the esxcli hardware pci list command, the device numbers do not change after the upgrade.
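
    To compare the two views, you can run both commands from the ESXi Shell; this is only an illustration of where each listing comes from:

     esxcli storage core adapter list   # HBAs listed by vmhba alias (aliases can be renumbered)
     esxcli hardware pci list           # devices listed by PCI address (unchanged by the upgrade)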

    This problem is external to VMware and may not affect you. ESXi displays device alias names but it does not use them for any operations. You can use the host profile to reset the device alias name. Consult VMware product documentation and knowledge base articles.

    Workaround: None.

  • Active Directory settings are not retained post-upgrade
    The Active Directory settings configured in the ESXi host before upgrade are not retained when the host is upgraded to ESXi 6.0.

    Workaround: Add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is 5.1 or later. Do not add the host to the Active Directory Domain after upgrade if the pre-upgrade ESXi version is ESXi 5.0.x.

  • After ESXi upgrade to 6.0, hosts that were previously added to the domain are no longer joined to the domain
    When you upgrade from vSphere 5.5 to vSphere 6.0 for the first time, the Active Directory configuration is not retained.

    Workaround: After upgrade, rejoin the hosts to the vCenter Server domain:

    1. Add the hosts to vCenter Server.

    2. Join the hosts to domain (for example, example.com)

    3. Upgrade all the hosts to ESXi 6.0.

    4. Manually join one recently upgraded host to domain.

    5. Extract the host profile and disable all other profiles except Authentication.

    6. Apply the manually joined host profile to the other recently upgraded hosts.

  • Previously running VMware ESXi Dump Collector service resets to default Disabled setting after upgrade of vCenter Server for Windows
    The upgrade process installs VMware vSphere ESXi Dump Collector 6.0 as part of a group of optional services for vCenter Server. You must manually enable the VMware vSphere ESXi Dump Collector service to use it as part of vCenter Server 6.0 for Windows.

    Workaround: Read the VMware documentation or search the VMware Knowledge Base for information on how to enable and run optional services in vCenter Server 6.0 for Windows.

    Enable the VMware vSphere ESXi Dump Collector service in the operating system:

    1. In the Control Panel menu, select Administrative Tools and double-click on Services.

    2. Right-click VMware vSphere ESXi Dump Collector and select Edit Startup Type.

    3. Set the Start-up Type to Automatic.

    4. Right-click VMware vSphere ESXi Dump Collector and select Start.

    The Service Start-up Type is set to automatic and the service is in a running state.

vCenter Single Sign-On and Certificate Management Issues
  • Cannot connect to VM console after SSL certificate upgrade of ESXi host
    A certificate validation error might result if you upgrade the SSL certificate that is used by an ESXi host, and you then attempt to connect to the VM console of any VM running when the certificate was replaced. This is because the old certificate is cached, and any new console connection is rejected due to the mismatch.
    The console connection might still succeed, for example, if the old certificate can be validated through other means, but is not guaranteed to succeed. Existing virtual machine console connections are not affected, but you might see the problem if the console was running during the certificate replacement, was stopped, and was restarted.

    Workaround: Place the host in maintenance mode or suspend or power off all VMs. Only running VMs are affected. As a best practice, perform all SSL certificate upgrades after placing the host in maintenance mode.

Networking Issues

  • Certain vSphere functionality does not support IPv6
    You can enable IPv6 for all nodes and components except for the following features:

    • IPv6 addresses for ESXi hosts and vCenter Server that are not mapped to fully qualified domain names (FQDNs) on the DNS server.
      Workaround: Use FQDNs or make sure the IPv6 addresses are mapped to FQDNs on the DNS servers for reverse name lookup.

    • Virtual volumes

    • PXE booting as a part of Auto Deploy and Host Profiles
      Workaround: PXE boot an ESXi host over IPv4 and configure the host for IPv6 by using Host Profiles.

    • Connection of ESXi hosts and the vCenter Server Appliance to Active Directory
      Workaround: Use Active Directory over LDAP as an identity source in vCenter Single Sign-On.

    • NFS 4.1 storage with Kerberos
      Workaround: Use NFS 4.1 with AUTH_SYS.

    • Authentication Proxy

    • Connection of the vSphere Management Assistant and vSphere Command-Line Interface to Active Directory.
      Workaround: Connect to Active Directory over LDAP.

    • Use of the vSphere Client to enable IPv6 on vSphere features
      Workaround: Use the vSphere Web Client to enable IPv6 for vSphere features.

  • Recursive panic might occur when using ESXi Dump Collector
    A recursive kernel panic might occur when the host is in a panic state while it displays the purple diagnostic screen and writes the core dump over the network to the ESXi Dump Collector. A VMkernel zdump file might not be available for troubleshooting on the ESXi Dump Collector in vCenter Server.

    In the case of a recursive kernel panic, the purple diagnostic screen on the host displays the following message:
    2014-09-06T01:59:13.972Z cpu6:38776)Starting network coredump from host_ip_address to esxi_dump_collector_ip_address.
    [7m2014-09-06T01:59:13.980Z cpu6:38776)WARNING: Net: 1677: Check what type of stack we are running on [0m
    Recursive panic on same CPU (cpu 6, world 38776, depth 1): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Secondary panic trap frame registers:
    RAX:0x0002000001230121 RCX:0x000043917bc1af80 RDX:0x00004180009d5fb8 RBX:0x000043917bc1aef0
    RSP:0x000043917bc1aee8 RBP:0x000043917bc1af70 RSI:0x0002000001230119 RDI:0x0002000001230121
    R8: 0x0000000000000038 R9: 0x0000000000000040 R10:0x0000000000010000 R11:0x0000000000000000
    R12:0x00004304f36b0260 R13:0x00004304f36add28 R14:0x000043917bc1af20 R15:0x000043917bc1afd0
    CS: 0x4010 SS: 0x0000 FS: 0x4018 GS: 0x4018 IP: 0x0000418000f0eeec RFG:0x0000000000010006
    2014-09-06T01:59:14.047Z cpu6:38776)Backtrace for current CPU #6, worldID=38776, rbp=0x43917bc1af70
    2014-09-06T01:59:14.056Z cpu6:38776)0x43917bc1aee8:[0x418000f0eeec]do_free_skb@com.vmware.driverAPI#9.2+0x4 stack: 0x0, 0x43a18b4a5880,
    2014-09-06T01:59:14.068Z cpu6:38776)Recursive panic on same CPU (cpu 6, world 38776): ip=0x418000876a27 randomOff=0x800000:
    #GP Exception 13 in world 38776:vsish @ 0x418000f0eeec
    Halt$Si0n5g# PbC8PU 7.

    Recursive kernel panic might occur when the VMkernel panics while heavy traffic is passing through the physical network adapter that is also configured to send the core dumps to the collector on vCenter Server.

    Workaround: Perform either of the following workarounds:

    • Dedicate a physical network adapter to core dump transmission only to reduce the impact from system and virtual machine traffic.

    • Disable the ESXi Dump Collector on the host by running the following ESXCLI console command:
      esxcli system coredump network set --enable false

Storage Issues

NFS Version 4.1 Issues

  • Virtual machines on an NFS 4.1 datastore fail after the NFS 4.1 share recovers from an all paths down (APD) state
    When the NFS 4.1 storage enters an APD state and then exits it after a grace period, powered on virtual machines that run on the NFS 4.1 datastore fail. The grace period depends on the array vendor.
    After the NFS 4.1 share recovers from APD, you see the following message on the virtual machine summary page in the vSphere Web Client:
    The lock protecting VM.vmdk has been lost, possibly due to underlying storage issues. If this virtual machine is configured to be highly available, ensure that the virtual machine is running on some other host before clicking OK.
    After you click OK, crash files are generated and the virtual machine powers off.

    Workaround: None.

  • NFS 4.1 client loses synchronization with server when trying to create new sessions
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server when trying to create new sessions. When this occurs, the vmkernel.log file contains a throttled series of warning messages noting that an NFS41 CREATE_SESSION request failed with NFS4ERR_SEQ_MISORDERED.

    Workaround: Perform the following sequence of steps.

    1. Attempt to unmount the affected file systems. If no files are open when you unmount, this operation succeeds and the NFS client module cleans up its internal state. You can then remount the file systems that were unmounted and resume normal operation, as shown in the sketch after these steps.

    2. Take down the NICs connecting to the mounts' IP addresses and leave them down long enough for several server lease times to expire. Five minutes should be sufficient. You can then bring the NICs back up. Normal operation should resume.

    3. If the preceding steps fail, reboot the ESXi host.
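
    A sketch of the unmount and remount sequence from the ESXi Shell, assuming the affected volume is named nfs41-datastore1 (a hypothetical label); verify the exact options with esxcli storage nfs41 --help before use:

     esxcli storage nfs41 list
     esxcli storage nfs41 remove -v nfs41-datastore1
     esxcli storage nfs41 add -H nfs-server.example.com -s /export/share -v nfs41-datastore1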

  • NFS 4.1 client loses synchronization with an NFS server and connection cannot be recovered even when session is reset
    After a period of interrupted connectivity with the server, the NFS 4.1 client might lose synchronization with the server, and the synchronized connection with the server cannot be recovered even if the session is reset. This problem is caused by an EMC VNX server issue. When this occurs, the vmkernel.log file contains a throttled series of warning messages similar to the following: NFS41: NFS41ProcessSessionUp:2111: resetting session with mismatched clientID; probable server bug

    Workaround: To end the session, unmount all datastores and then remount them.

  • ONTAP Kerberos volumes become inaccessible or experience VM I/O failures
    A NetApp server does not respond when it receives RPCSEC_GSS requests that arrive out of sequence. As a result, the corresponding I/O operation stalls unless it is terminated and the guest OS can stall or encounter I/O errors. Additionally, according to RFC 2203, the client can only have a number of outstanding requests equal to seq_window (32 in case of ONTAP) according to RPCSEC_GSS context and it must wait until the lowest of these outstanding requests is completed by the server. Therefore, the server never replies to the out-of-sequence RPCSEC_GSS request, and the client stops sending requests to the server after it reaches the maximum seq_window number of outstanding requests. This causes the volume to become inaccessible.

    Workaround: None. Check the latest Hardware Compatibility List (HCL) to find a supported ONTAP server that has resolved this problem.

  • You cannot create a virtual disk larger than 1 TB on an NFS 4.1 datastore from EMC VNX
    NFS version 4.1 storage from EMC VNX with firmware version 7.x supports only 32-bit file formats. This prevents you from creating virtual machine files that are larger than 1 TB on the NFS 4.1 datastore.

    Workaround: Update the EMC VNX array to version 8.x.

  • NFS 4.1 datastores backed by EMC VNX storage become inaccessible during firmware upgrades
    When you upgrade EMC VNX storage to new firmware, NFS 4.1 datastores mounted on the ESXi host become inaccessible. This occurs because the VNX server changes its major device number after the firmware upgrade. The NFS 4.1 client on the host does not expect the major number to change after it has established connectivity with the server, which causes the datastores to become permanently inaccessible.

    Workaround: Unmount all NFS 4.1 datastores exported by the VNX server before upgrading the firmware.

  • When ESXi hosts use different security mechanisms to mount the same NFS 4.1 datastore, virtual machine failures might occur
    If different ESXi hosts mount the same NFS 4.1 datastore using different security mechanisms, AUTH_SYS and Kerberos, virtual machines placed on this datastore might experience problems and failure. For example, your attempts to migrate the virtual machines from host1 to host2 might fail with permission denied errors. You might also observe these errors when you attempt to access a host1 virtual machine from host2.

    Workaround: Make sure that all hosts that mount an NFS 4.1 volume use the same security type.

  • Attempts to copy read-only files to NFS 4.1 datastore with Kerberos fail
    The failure might occur when you attempt to copy data from a source file to a target file. The target file remains empty.

    Workaround: None.

  • When you create a datastore cluster, uniformity of NFS 4.1 security types is not guaranteed
    While creating a datastore cluster, vSphere does not verify and enforce the uniformity of NFS 4.1 security types. As a result, datastores that use different security types, AUTH_SYS and Kerberos, might be a part of the same cluster. If you migrate a virtual machine from a datastore with Kerberos to a datastore with AUTH_SYS, the security level for the virtual machine becomes lower.
    This issue applies to such functionalities as vMotion, Storage vMotion, DRS, and Storage DRS.

    Workaround: If Kerberos security is required for your virtual machines, make sure that all NFS 4.1 volumes that compose the same cluster use only the Kerberos security type. Do not include NFS 3 datastores, because NFS 3 supports only AUTH_SYS.

Virtual Volumes Issues

  • Failure to create virtual datastores due to incorrect certificate used by Virtual Volumes VASA provider
    Occasionally, a self-signed certificate used by the Virtual Volumes VASA provider might incorrectly define the KeyUsage extension as critical without setting the keyCertSign bit. In this case, the provider registration succeeds. However, you are not able to create a virtual datastore from storage containers reported by the VASA provider.

    Workaround: The self-signed certificate used by the VASA provider at the time of provider registration should not define the KeyUsage extension as critical without setting the keyCertSign bit.

General Storage Issues

  • New Issue vSphere Web Client incorrectly displays Storage Policy as attached when new VM is created from an existing disk
    When you use the vSphere Web Client to create a new VM from an existing disk and specify a storage policy when setting up the disk, the filter appears to be attached when you select the new VM --> click VM policies --> Edit VM storage policies; however, the filter is not actually attached. You can check the .vmdk file or run vmkfstools --iofilterslist <vmdk-file> to verify whether the filter is attached (see the example after the workaround).

    Workaround: After you create the new VM, but before you power it on, add the filter to the vmdk by clicking on VM policies --> Edit VM storage policies.
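
    For example, to verify from the ESXi Shell whether the filter is attached (the datastore and VM paths below are hypothetical):

     vmkfstools --iofilterslist /vmfs/volumes/datastore1/newvm/newvm.vmdk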

  • New Issue Installing I/O Filters on IPv6 setup does not publish its capabilities to VPXD
    After a successful installation of an I/O filter through the VIM API, the installed filter is not able to publish its capabilities to VPXD. You are unable to attach the filter profile to any disks because no capabilities are published to VMware vSphere Storage Policy Based Management (SPBM).

    Workaround: None.

  • New Issue NFS Lookup operation returns NFS STALE errors
    When you deploy a large number of VMs in an NFS datastore, the VM deployment fails with an error message similar to the following due to a race condition:

    Stale NFS file handle

    Workaround: Restart the Lookup operation. See Knowledge Base article 2130593 for details.

  • Attempts to create a VMFS datastore on Dell EqualLogic LUNs fail when QLogic iSCSI adapters are used
    You cannot create a VMFS datastore on a Dell EqualLogic storage device that is discovered through QLogic iSCSI adapters.
    When your attempts fail, the following error message appears on vCenter Server: Unable to create Filesystem, please see VMkernel log for more details: Connection timed out. The VMkernel log contains continuous iscsi session blocked and iscsi session unblocked messages. On the Dell EqualLogic storage array, monitoring logs show a protocol error in packet received from the initiator message for the QLogic initiator IQN names.

    This issue is observed when you use the following components:

    • Dell EqualLogic array firmware : V6.0.7

    • QLogic iSCSI adapter firmware versions : 3.00.01.75

    • Driver version : 5.01.03.2-7vmw-debug

    Workaround: Enable the iSCSI ImmediateData adapter parameter on QLogic iSCSI adapter. By default, the parameter is turned off. You cannot change this parameter from the vSphere Web Client or by using esxcli commands. To change this parameter, use the vendor provided software, such as QConvergeConsole CLI.

  • ESXi host with Emulex OneConnect HBA fails to boot
    When an ESXi host has the Emulex OneConnect HBA installed, the host might fail to boot. This failure occurs due to a problem with the Emulex firmware.

    Workaround: To correct this problem, contact Emulex to get the latest firmware for your HBA.

    If you continue to use the old firmware, follow these steps to avoid the boot failure:

    1. When ESXi is loading, press Shift+O before booting the ESXi kernel.

    2. Leave the existing boot option as is, and add a space followed by dmaMapperPolicy=false.

  • Flash Read Cache does not accelerate I/Os during APD
    When the flash disk configured as a virtual flash resource for Flash Read Cache is faulty or inaccessible, or the disk storage is unreachable from the host, the Flash Read Cache instances on that host are invalid and do not accelerate I/Os. As a result, the caches do not serve stale data after connectivity is re-established between the host and storage. The connectivity outage might be temporary, an all paths down (APD) condition, or permanent, a permanent device loss (PDL). This condition persists until the virtual machine is power-cycled.

    Workaround: The virtual machine can be power-cycled to restore I/O acceleration using Flash Read Cache.

  • All Paths Down (APD) or path-failovers might cause system failure
    In a shared SAS environment, APD or path-failover situations might cause system failure if the disks are claimed by the lsi_msgpt3 driver and they are experiencing heavy I/O activity.

    Workaround: None

  • Frequent use of SCSI abort commands can cause system failure
    With heavy I/O activity, frequent SCSI abort commands can cause a very slow response from the MegaRAID controller. If an unexpected interrupt occurs with resource references that were already released in a previous context, system failure might result.

    Workaround: None

  • iSCSI connections fail and datastores become inaccessible when IQN changes
    This problem might occur if you change the IQN of an iSCSI adapter while iSCSI sessions on the adapter are still active.

    Workaround: When you change the IQN of an iSCSI adapter, no session should be active on that adapter. Remove all iSCSI sessions and all targets on the adapter before changing the IQN.
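
    A sketch of that sequence from the ESXi Shell, assuming the adapter is vmhba64 and using an illustrative IQN; verify the exact options with esxcli iscsi --help before use, and also remove the configured targets on the adapter as noted above:

     esxcli iscsi session list -A vmhba64
     esxcli iscsi session remove -A vmhba64
     esxcli iscsi adapter set -A vmhba64 -n iqn.1998-01.com.vmware:new-host-name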

  • nvmecli online and offline operations might not always take effect
    When you perform the nvmecli device online -A vmhba* operation to bring an NVMe device online, the operation appears to be successful. However, the device might still remain in the offline state.

    Workaround: Check the status of NVMe devices by running the nvmecli device list command.

Virtual SAN Issues
  • New Issue Adding a host to a Virtual SAN cluster triggers an installer error
    When you add an ESXi host to a cluster with HA and the Virtual SAN health service enabled, you might encounter one or both of the following errors due to a VIB installation race condition:

    • In the task view, the Configuring vSphere HA task might fail with an error message similar to the following:

      Cannot install the vCenter Server agent service. ‘Unknown installer error’

    • The Enable agent task might fail with an error message similar to the following:

      Cannot complete the operation, see event log for details status.

    Workaround:

    • To fix the HA configuration failure, reboot the host and reconfigure HA as shown here:

      Hosts and Cluster view -> click cluster name -> Manage tab -> vSphere HA

    • To fix the enable agent task failure, go to the cluster view and retry the enablement of the VSAN health service as shown here:

      Hosts and Cluster view -> click cluster name -> Manage tab -> Health under Virtual SAN category, and click Retry button on top

Server Configuration Issues
  • Remediation fails when applying a host profile from a stateful host to a host provisioned with Auto Deploy
    When applying a host profile from a statefully deployed host to a host provisioned with Auto Deploy (stateless host) with no local storage, the remediation attempt fails with one of the following error messages:

    • The vmhba device at PCI bus address sxxxxxxxx.xx is not present on your host. You must shut down and then insert a card into PCI slot yy. The type of card should exactly match the one in the reference host.

    • No valid coredump partition found.

    Workaround: Disable the plug-in that is causing the issue (for example, the Device Alias Configuration or Core Dump Configuration) from the host profile, and then remediate the host profile.

  • Applying host profile with static IP to a host results in compliance error
    If you extract a host profile from a host with a DHCP network configuration, and then edit the host profile to have a static IP address, a compliance error occurs with the following message when you apply it to another host:

    Number of IPv4 routes did not match.

    Workaround: Before extracting the host profile from the DHCP host, configure the host so that it has a static IP address.

  • When you hot-add a virtual network adapter that has network resources overcommitted, the virtual machine might be powered off
    On a vSphere Distributed Switch that has Network I/O Control enabled, a powered-on virtual machine is configured with a bandwidth reservation according to the reservation for virtual machine system traffic on the physical network adapter on the host. You hot-add a network adapter to the virtual machine, setting a network bandwidth reservation that exceeds the bandwidth available on the physical network adapters on the host.

    When you hot-add the network adapter, the VMkernel starts a Fast Suspend and Resume (FSR) process. Because the virtual machine requests more network resources than available, the VMkernel exercises the failure path of the FSR process. A fault in this failure path causes the virtual machine to power off.

    Workaround: Do not configure bandwidth reservation when you add a network adapter to a powered on virtual machine.

VMware HA and Fault Tolerance Issues
  • New Issue Legacy Fault Tolerance (FT) not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms
    Legacy FT is not supported on Intel Skylake-DT/S, Broadwell-EP, Broadwell-DT, and Broadwell-DE platforms. Attempts to power on a virtual machine fail after you enable single-processor Legacy Fault Tolerance.

    Workaround: None.

Guest Operating System Issues
  • Attempts to enable passthrough mode on NVMe PCIe SSD devices might fail after hot plug
    To enable passthrough mode on an SSD device from the vSphere Web Client, you select a host, click the Manage tab, click Settings, navigate to the Hardware section, click PCI Devices > Edit, select a device from a list of active devices that can be enabled for passthrough, and click OK. However, when you hot plug a new NVMe device to an ESXi 6.0 host that does not have a PCIe NVMe drive, the new NVMe PCIe SSD device cannot be enabled for passthrough mode and does not appear in the list of available passthrough devices.

    Workaround: Restart your host. Alternatively, you can run the following command on your ESXi host:

    1. Log in as a root user.

    2. Run the command
      /etc/init.d/hostd start

Supported Hardware Issues
  • When you run esxcli to get the disk location, the result is not correct for Avago controllers on HP servers

    When you run esxcli storage core device physical get against an Avago controller on an HP server, the result is not correct.

    For example, if you run:
    esxcli storage core device physical get -d naa.5000c5004d1a0e76
    The system returns:
    Physical Location: enclosure 0, slot 0

    The actual label of that slot on the physical server is 1.

    Workaround: Check the slot on your HP server carefully. Because the slot numbers on the HP server start at 1, you have to increase the slot number that the command returns for the correct result.

CIM and API Issues
  • New Issue The sfcb-vmware_raw provider might fail
    The sfcb-vmware_raw provider might fail because the maximum default plugin resource group memory allocated is not enough.

    Workaround: Add the UserVars.CIMOemPluginsRPMemMax advanced configuration option for memory limits of sfcbd plugins by using the following command, and restart sfcbd for the new value to take effect:

    esxcfg-advcfg -A CIMOemPluginsRPMemMax --add-desc 'Maximum Memory for plugins RP' --add-default XXX --add-type int --add-min 175 --add-max 500

    where XXX is the memory limit you want to allocate. This value should be within the minimum (175) and maximum (500) values.
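
    A common way to restart the CIM broker afterwards (a sketch, assuming shell access to the host) is:

     /etc/init.d/sfcbd-watchdog restart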

Miscellaneous Issues
  • New Issue Virtual SAN observer supports SSLv3 encryption
    SSLv3 is disabled by default in ESXi 6.0 Update 1; however, the Virtual SAN observer still supports SSLv3.

    Workaround: None.

VMware Tools Issues
  • New Issue Compilation of the vmxnet module within open-vm-tools 9.10 fails with kernel version 3.3.0 or later
    When you compile and install open-vm-tools 9.10, you might encounter multiple errors because vmxnet.c fails to compile with kernel version 3.3.0 or later. This issue is resolved in open-vm-tools 9.10.2, which you can install with any kernel version.

    Workaround: To install open-vm-tools 9.10, exclude the vmxnet module when running ./configure, or use the --without-kernel-modules option.
