VMware Cloud Foundation 2.1.2 Release Notes

VMware Cloud Foundation 2.1.2 | 06 APR 2017 | Build 5022920

Cloud Foundation 2.1.2 is a minor patch release, so these release notes are abbreviated. The content of the Cloud Foundation 2.1.1 release notes also applies to version 2.1.2.

What's in the Release Notes

The release notes cover the following topics:

  • What's New
  • Installation and Upgrades Information
  • Known Issues

What's New

The VMware Cloud Foundation 2.1.2 release includes the following:

  • Upgrade to VMware Cloud Foundation 2.1.2.
  • Upgrade to VMware NSX for vSphere 6.2.6. See the NSX for vSphere 6.2.6 Release Notes.
  • Upgrade to VMware vSphere (ESXi) 6.0 Update 3. See the VMware ESXi 6.0 Update 3 Release Notes.
  • Upgrade from VMware vSphere (ESXi) 6.0 Update 3 to ESXi600-201703401-SG security patch. See Knowledge Base Article KB2149570.
  • The VMware vRealize Operations 6.2.1 Apache Struts vulnerability security patch is now available and supported for manual application. Because there are no upgrade dependencies, the patch can be applied at any time. See Knowledge Base Article KB2149591.

Cloud Foundation deploys the VMware SDDC software stack. For information about what is new in those products, as well as their known issues and resolved issues, see the release notes for those software versions. You can find their release notes on their documentation landing pages at pubs.vmware.com.

Installation and Upgrades Information

You can install Cloud Foundation 2.1.2 by upgrading from your existing 2.1.1 deployment.

For instructions on upgrading to Cloud Foundation 2.1.2, see Lifecycle Management in the Administering VMware Cloud Foundation guide.

Supported Upgrade Paths

To upgrade to Cloud Foundation 2.1.2, you must first be running version 2.1.1. The following upgrade paths are supported and must be applied sequentially:

  • 2.0 to 2.1 to 2.1.1 to 2.1.2
  • 2.1 to 2.1.1 to 2.1.2
  • 2.1.1 to 2.1.2

LCM Upgrade Bundles

The Cloud Foundation 2.1.2 software BOM is identical to the Cloud Foundation 2.1.1 software BOM, except for the upgrade bundles for vSphere (ESXi) 6.0 Update 3, NSX for vSphere 6.2.6, and the ESXi600-201703401-SG security patch. These upgrade bundles are hosted on the VMware Depot site and are available via the Lifecycle Management feature in SDDC Manager. See Lifecycle Management in the Administering VMware Cloud Foundation guide.

Software Component                                          Date         Build Number
VMware vSphere (ESXi) 6.0 Update 3                          24 FEB 2017  5050593
VMware NSX for vSphere 6.2.6                                02 FEB 2017  4977495
ESXi600-201703401-SG security patch for ESXi 6.0 Update 3   28 MAR 2017  5224934

Prerequisites for Upgrading ESXi and NSX

Complete the following prerequisites before upgrading ESXi and NSX on the Management or Workload Domain.

  • Verify that no other domain operations are running. See Monitoring Capabilities in the Cloud Foundation Environment in the Administering VMware Cloud Foundation guide.
  • For ESXi:
    1. Verify that no ESXi host is outside of the domain cluster in vCenter.
    2. Verify that all ESXi hosts within the cluster are in a Healthy state. If a host is not healthy (and is therefore in maintenance mode), the upgrade fails.
    3. Back up the ESXi configuration by running the backup command ./sos --backup from the SDDC Manager VM.

      The backup is stored under /var/tmp.
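
      For example, a minimal backup session from the SDDC Manager VM might look like the following (the directory containing the sos tool is a placeholder; use the location in your deployment):

        # From the directory containing the sos tool on the SDDC Manager VM,
        # back up the ESXi configuration.
        ./sos --backup

        # Confirm that the backup archive was written under /var/tmp.
        ls -l /var/tmp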

  • For NSX:
    1. Back up the NSX configuration. See Back Up NSX Data.
    2. If you are upgrading a workload domain, disable the anti-affinity rule within the vCenter cluster.
      1. Log in to the vCenter Server of the domain.
      2. In the left navigation pane, right-click the cluster and click Edit Settings.
      3. In the left navigation pane, click Rules.
      4. Deselect the NSX-Controller anti-affinity rule.
      5. Click OK.

Back Up NSX Data

For the NSX upgrade to succeed, valid backup files must be present and available.

For the NSX backup files to be accessible by Cloud Foundation, you must ensure the backups are created with the correct setting values.

  1. Using SSH, log in to the SDDC Manager VM and discover the correct values for the IP/hostname, username, and password settings (see the example after the settings table below):

    /home/vrack/bin/vrmcli.sh --lookup-password

  2. Follow the backup procedure described in Back Up NSX Manager Data, using the discovered setting values.
    1. Use the discovered username and password settings to log in to the NSX Manager Virtual Appliance.
    2. Complete the settings shown in the Back Up NSX Manager Data procedures as described in the following table.

      Setting             Value                         Notes
      IP/Hostname         <ip-address>                  Enter the discovered IP address of the "EVO-Rack_LCM_Backup_Repository-<uuid>" VM that resides in the Management domain vCenter.
      Transfer Protocol   SFTP                          Select this value from the drop-down list.
      Port                22                            Enter this port number.
      Username            <username>                    Enter the discovered value.
      Password            <password>                    Enter the discovered value.
      Backup Directory    /backup                       Enter this value.
      Filename Prefix     nsx_<type>_<domain-number>    Enter a name that is specific to the backup. For example, for NSX management domain backups, specify nsx_mgmt_dmn01, nsx_mgmt_dmn02, and so on as necessary. Similarly, for VDI or VI domain backups, specify nsx_vdi_dmn01, nsx_vdi_dmn02, nsx_vi_dmn01, nsx_vi_dmn02, and so on.
      Passphrase          nsxmgr_backup                 Enter this passphrase.
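
The following example shows how you might discover the setting values and then verify the SFTP target before running the backup. In this sketch, 10.0.0.50 and backupuser are placeholder values; substitute the values that the lookup command returns.

    # On the SDDC Manager VM, discover the IP/hostname, username, and
    # password for the backup repository.
    /home/vrack/bin/vrmcli.sh --lookup-password

    # Verify that the backup repository accepts SFTP connections on port 22
    # and that the /backup directory exists. Enter the discovered password
    # when prompted.
    sftp -P 22 backupuser@10.0.0.50
    # sftp> ls /backup
    # sftp> quit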

LCM Cannot Patch Hosts Not in Domain

For LCM to patch a host, you must bring the host into the domain by expanding the domain in the SDDC Manager.

Upgrading ESXi Hosts Added in Domain Expansion

When an upgraded domain is expanded, the additional host runs an earlier ESXi version, forming a mixed-mode cluster. As a best practice, put the host in maintenance mode in vCenter and then immediately perform an LCM upgrade. This procedure rejoins the host to the cluster, thereby preventing any Virtual SAN cluster incompatibility issues.

Known Issues

This section lists only the known issues relevant to version 2.1.2. For complete information on known issues in Cloud Foundation, see the Cloud Foundation 2.1.1 release notes.

Life Cycle Management (LCM) Known Issues

  • Bundle versions in interface do not match product version
    The Bundle details page within SDDC Manager shows bundle version numbers such as 2.1.2, 2.1.3, and 2.1.4. These values reflect internal versioning, not the Cloud Foundation product release version.

    Workaround: Ignore bundle versions on the Bundle details page.

  • NSX upgrade fails with RuntimeException
    During an NSX upgrade, the update process fails after the NSX Manager upgrade, skipping the update of Controllers and Edge. The upgrade log shows the following message:
    Upgrade element resourceType: NSX_CONTROLLER resourceId: 3b4e23c4-7177-4444-9901-9fe7a02a30ae:controller-cluster status changed to SKIPPED

    Workaround: Go to the LCM Inventory page (SDDC Manager > Lifecycle Management > Inventory) and check if the domain state has a status of Failed. If so, click Resolve and re-apply the same update.

  • LCM update logs saved in two folders
    LCM update logs are being saved in two similarly named folders:

    • /home/vrack/lcm/upgrades
    • /home/vrack/lcm/upgrade

    Workaround: Review logs in both folders.
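
    For example, you can scan both locations from the SDDC Manager VM (a minimal sketch; log file names and layout vary by update):

      # List the newest update logs in both folders.
      ls -lt /home/vrack/lcm/upgrades /home/vrack/lcm/upgrade

      # Search both folders for errors, showing the most recent matches.
      grep -ri "error" /home/vrack/lcm/upgrades /home/vrack/lcm/upgrade | tail -n 50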

  • ESXi and vCenter update on a host might fail in the task of exiting maintenance mode
    Sometimes during an ESXi and vCenter update, a host might fail to exit maintenance mode, which results in a failed update status. During an update, the system puts a host into maintenance mode to perform the update on that host, and then tells the host to exit maintenance mode after its update is completed. At that point, a separate issue on the host might prevent it from exiting maintenance mode.

    Workaround: Attempt to take the host out of maintenance mode through the vSphere Web Client.

    • Locate the host in vSphere and right-click it.
    • Select Maintenance Mode > Exit Maintenance Mode.

      This action will list any issues preventing the host from exiting maintenance mode.

    • Address the issues until you can successfully bring the host out of maintenance mode.
    • Return to the SDDC Manager client and retry the update.
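
    If the Web Client does not reveal what is blocking the host, you can also check and exit maintenance mode from the host's shell. A minimal sketch, assuming SSH is enabled on the ESXi host:

      # Check whether the host is still in maintenance mode.
      esxcli system maintenanceMode get

      # Request that the host exit maintenance mode.
      esxcli system maintenanceMode set --enable false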
  • Lifecycle Management page shows all available update bundles independent of the Cloud Foundation release in your environment
    The Lifecycle Management Repository page displays all available updates, regardless of your specific release.

    Workaround: None. Proceed to download the bundles as indicated. Lifecycle Management evaluates and determines the necessary update bundles after they have been downloaded and will apply only the bundles appropriate for your product versions.

  • LCM Inventory page shows a failed domain, but no failed components
    The LCM Inventory page shows a failed domain, but does not show any failed components.

    Workaround: Log in to vCenter for the domain and check that all hosts in the domain have the lcm-bundle-repo available. Add the lcm-bundle-repo if necessary.
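
    For example, assuming the lcm-bundle-repo is presented to the hosts as an NFS datastore (verify this assumption for your deployment), you can check for it from each host's shell:

      # On each ESXi host in the domain, list NFS datastores and look for
      # lcm-bundle-repo.
      esxcli storage nfs list

      # Confirm that the datastore is mounted and browsable.
      ls /vmfs/volumes/lcm-bundle-repo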

  • NSX hostprep resolve required after a vCenter or ESXi update
    After a vCenter or ESXi update, the Installation Status of one or more hosts in an NSX host prep cluster that is associated with the vCenter's domain may show up as "Not Ready."

    Workaround: You can manually restore the cluster to "Ready" status.

    • Navigate to the Host Preparation tab (Networking & Security > Installation).
    • Click Resolve.

    You may need to repeat this procedure several times.

  • After using the LCM capability to update the vCenter Server software, one or more hosts in an NSX host prep cluster might have 'Not Ready' status, which results in the NSX audit failing and prevents future updates from being scheduled
    Due to an underlying issue with ESX Agent Manager (EAM), after updating the vCenter Server software, the installation status of one or more hosts in the NSX host prep cluster associated with the updated vCenter Server cluster might be 'Not Ready'. To examine the NSX host prep status of the hosts, log in to the vCenter Server with the vSphere Web Client, navigate to Networking & Security > Installation > Host Preparation, and check the Installation Status column for each host.

    Workaround: On the Host Preparation tab in the vSphere Web Client, use the Resolve action in the Installation Status column's menu to manually resolve the cluster. The displayed status will change as the operation proceeds on each host in the cluster. If any host continues to show 'Not Ready' status, use the Resolve action again. You might need to perform this operation a few times.

  • On the Lifecycle Management Update screen, when you expand the section for a failed VMware Software upgrade to see the status of the underlying tasks, the task at which the process failed has a green check mark icon
    The VMware Software upgrade process involves performing a number of tasks. When one task fails, the screen shows that the overall VMware Software upgrade process failed. Due to this issue, when you expand the section in the user interface to view the list of tasks, the task at which the process failed has a green check mark icon next to it instead of the red failure icon.

    Workaround: None. You can tell which task caused the failure of the overall process because all of the tasks in the list after the failed task have gray icons, indicating the process failed before reaching those subsequent tasks.

  • ESXi upgrade fails with the message "Cluster not in good condition"
    Virtual SAN clustering was not enabled on the host, resulting in the upgrade failure. (A command-line check follows the workaround steps below.)

    Workaround:

    1. Open the vCenter Web Client.
    2. Navigate to Hosts and Clusters > ClusterName > Monitor > Issues.
    3. Fix the Virtual SAN issue.
    4. Reschedule the update.
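
    To confirm from the host itself whether Virtual SAN clustering is enabled, you can query the vSAN cluster state over SSH (output fields vary by ESXi build):

      # On the affected ESXi host, show vSAN cluster membership.
      # "Enabled: true" indicates that Virtual SAN clustering is active.
      esxcli vsan cluster get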