Introduction

vSphere 7.0 with Kubernetes is going to be released soon! The announcement just went live today during the “Modern Apps Virtual Event”. A couple of weeks ago I followed multiple sessions regarding the new VMware and vSphere product releases and updates and witnessed all their beauty. Today the NDA has lifted and I get to tell everybody what has changed!

A small side note to begin with: the blog below does not contain everything that has changed, because that would make the blogpost way too long, and it’s already immensely long (prepare for a long read). The links below will contain the release notes once those pages go live.

Please use the Table of Contents below to browse through the blogpost. This will make navigating a lot easier for you. This entire blogpost consists of three pages! You can switch pages at the very bottom of this post.

Introduction on VMware Tanzu

Let’s start off by looking back at the VMware vision; it’s clear that it’s an ever-changing vision. To recap, this vision is “Deploying any application on any device and on any cloud”. To date this means that VMware provides their customers with the tools to build a consistent infrastructure across as many clouds as customers want, and provides them with a consistent, easy way of doing operations across those clouds. This also meant that VMware embraced the fact that customers could deploy workloads across AWS, Google Cloud and even Azure.

VMware Vision VMworld 2019

So, when we look at the “any application” part of the vision, some additional explanation is needed nowadays. This is because during the last couple of years the definition of an “application” has changed quite a lot. More and more applications are no longer made up of just a couple of virtual machines. Applications are transforming to the next level, into what people like to call “modern applications”. These applications don’t consist of just a couple of VMs, but also of containers with the help of Kubernetes clusters, vGPUs, serverless functions, public cloud (native) services and more. Modern applications can now be looked at as distributed systems instead of a singular system or application, with the individual parts almost always maintained by independent teams. The slide below shows this quite well.

Definition of an application has changed

You could say that modern applications are a composition of services across clouds. These “modern applications”, however, bring a completely different set of challenges when compared to traditional applications. Traditional applications (a couple of virtual machines) can be deployed with ease within their own respective cloud silo, such as a public, on-premises or edge cloud. But like I said above, modern applications consist of more than just a couple of virtual machines. How do you integrate, connect and operate these compositions of services across clouds? How do VI-Admins provide their developers the tools that they need to quickly develop these modern applications? How do VI-Admins secure these services? Well, currently this is not easy. You can’t just slap a Kubernetes or dedicated container-based platform against these problems and assume all will be OK.

And because VMware thinks the number of modern applications, or containerized systems, will keep expanding and will surpass the number of traditional applications by 2022, they saw the need to launch a new product portfolio called VMware Tanzu. With VMware Tanzu, VMware will natively integrate container management and deployment services, with the help of Kubernetes, right into vSphere. What makes it even better is that the Kubernetes capabilities will be accessible through a familiar set of tools and scripts (the Kubernetes declarative syntax), so that developers can use these natively integrated capabilities. The idea behind this is that it will make it easy for developers and IT-admins to manage and deploy modern applications that span multiple technology stacks.

All of this is particularly helpful since building a Kubernetes cluster is relatively easy, but when you need to manage a Kubernetes cluster, update it in place, upgrade it or even add nodes, things get messy and difficult. With Tanzu, VMware is trying to simplify the difficult parts and provide a simple alternative. You can even manage all of your Kubernetes clusters, whether they run on vSphere, VMware PKS (which is being rebranded), public or private clouds or even DIY deployments, from a single point of control with VMware Tanzu Mission Control. VMware Tanzu Mission Control will allow you to lay the groundwork for ease of management. Configure and deploy network, ingress or backup policies with just a few clicks on all of your Kubernetes clusters across different clouds! The VMware Tanzu portfolio will consist of the following products:

  • VMware Tanzu Kubernetes Grid, which will be integrated within VMware Cloud Foundation.
  • VMware Tanzu Mission Control, which will be consumable through a VMware SaaS offering at first.
  • VMware Tanzu Application Catalog.

More on the VMware Tanzu portfolio and services can be found HERE, HERE and HERE.

Announced enhancements/features and new product versions

Not only was the availability of the VMware Tanzu portfolio announced today, but there were also loads of new features in several VMware and vSphere products that I get to talk about! Please use the Table of Contents at the top of this blogpost to navigate through this large blogpost.

VMware Cloud Foundation (VCF) 4 with Kubernetes

Release 3.9 of VMware Cloud Foundation (VCF) already brought us a ton of new enhancements and additions. It gave us the ability to manage multiple SDDC instances with Multi-Instance Management, support for Fibre Channel as principal storage, more API calls and updated vSphere components! Yet there is always more to bring.

This release marks the arrival of version 4.0, which includes vSphere with Kubernetes! But let us first revisit the VCF architecture and the reason it exists, for those that are not familiar with VCF. VMware Cloud Foundation delivers a unified software-defined datacenter (SDDC) platform, based upon the VMware Validated Designs (VVDs), which brings together compute virtualization (VMware ESXi), network virtualization (VMware NSX-V/T), storage virtualization (VMware vSAN) and cloud management with monitoring (VMware vRealize Suite) into one single platform that can be deployed either on-premises or run as a service in a public cloud like VMware Cloud on AWS. The entire stack can be managed, with lifecycle management, from the SDDC Manager software to provide consistent and simple operations. The deployment and configuration of these platforms is also completely automated to reduce mistakes and increase operator efficiency. The end result is a simple, automated hybrid cloud SDDC platform that is based on the well-known VMware vSphere products and processes.

VMware Cloud Foundation Architecture

Ok, now that that’s out of the way, let’s go back to VCF 4.0! This release further enhances the entire VMware SDDC stack by also integrating container management into vSphere to provide flexibility, agility and security for the modern applications I talked about earlier. It’s now possible to use this stack to automate the provisioning and maintenance of Kubernetes clusters with the Tanzu Kubernetes Grid Service. This addition will enable customers to accelerate their adoption of cloud-native technologies within their enterprises.

VMware Cloud Foundation 4.0 with Kubernetes Architecture

Together with Tanzu Mission Control, enterprises can bring together multiple Kubernetes clusters even if they are running across multiple environments, whether on-premises, public or private cloud. Next to this, you can deploy and manage policies across all of those managed Kubernetes clusters right from one management console. All of the aforementioned tooling can easily be used through and with Kubernetes APIs, so that developers receive the agility they’ve wanted for some time now!

Tanzu Mission Control example

Ok great, so what else is new? Well with the new release VCF 4.0 also supports the following new products (BOM):

  • SDDC Manager 4.0 which includes a built-in VMware Imaging Appliance (VIA)
  • vSphere 7.0 with vCenter and ESXi 7.0
  • vSAN 7.0
  • NSX-T
  • Workspace One Access 3.3.2
  • The vRealize 8.1 suite including vRA, vROps, vRLI and vRSLCM
  • vRealize Network Insight 5.2

Next to this, the VCF Management Domain footprint has been reduced by embedding the PSCs (makes sense since external PSCs are now deprecated), embedding the NSX Managers and Controllers and deploying NSX Edge Clusters at day 0/x. The Workspace ONE Access cluster now has native HA support and some additional Admin and Operator roles have been defined for the API and UI to further increase the flexibility of the platform. I’ve also heard that NSX-T will be the only choice you can make starting from this release. This will obviously push NSX-T to more and more customers, which is needed since NSX-V will be end-of-support on 16/01/2022!

More on VCF 4.0 can be found HERE.

VMware vCenter Server 7.0

With the new release of the vSphere 7.0 suite, vCenter also got updated to version 7.0. The previous version (6.7 U3) brought some minor tweaks and improvements such as DDNS support, PNID change support, AMD EPYC Gen2 CPU support and more, but version 7.0 is a different story! VMware vCenter Server 7.0 brings a new slate of functionalities specifically targeted at vSphere with Kubernetes and finally implements features that VI-admins have wanted for years.

vCenter Server General updates

As always VMware increased the vCenter Server maximums in vSphere 7. Check the table below for a direct comparison:

                                                  vSphere 6.7    vSphere 7
vCenter Server (standalone)
  Hosts per vCenter                               2.000          2.500
  Powered-on VMs per vCenter Server               25.000         30.000
Linked Mode vCenter Servers (15 per SSO domain)
  Hosts                                           5.000          15.000
  Powered-on VMs                                  50.000         150.000
vCenter Server latency
  VC -> VC                                        100ms          150ms
  VC -> ESXi host                                 150ms          150ms
  vSphere Client -> VC                            100ms          100ms
vCenter Server 7.0 Maximums

As you can see, not much has actually changed besides the number of powered-on VMs and hosts a vCenter SSO Domain can hold. But I thought it was worth a mention.

  • Another small update is that vSphere Portgroups are now secure “out-of-the-box”. What this means is that the security settings are all disabled by default. Previously this was not the case. If you ask me this should’ve been done years ago, but it’s great that VMware fixed this in this release.
  • VM Hardware version 17:
    • Watchdog timer devices. A watchdog timer helps reset the VM if the guest OS is no longer responding. This is especially important for clustered database or filesystem applications.
    • Precision Time Protocol (PTP) support. With PTP you can achieve sub-millisecond clock accuracy. To use PTP a service on the ESXi host has to be enabled and a Precision Clock device has to be added to the virtual machine.
  • Support for vSGX/Secure Enclaves on Intel CPUs. Intel Software Guard Extensions (SGX) allows applications to work with the hardware to create a secure enclave that cannot be viewed by the guest OS or hypervisor. This is rather new, but it can be used by organizations that want to have pieces of code stored in an encrypted portion of memory.
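If you want to check which of these new virtual devices a VM actually carries once it runs hardware version 17, a quick pyVmomi sketch like the one below can help. The hostname, credentials and VM name are placeholders, and you will obviously want proper certificate handling outside of a lab:

```python
# Rough pyVmomi sketch: print a VM's hardware version and its virtual devices,
# e.g. to verify whether a watchdog timer or precision clock device is present.
# "vcenter.lab.local" and "app01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use valid certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")   # placeholder VM name

print("Hardware version:", vm.config.version)          # e.g. "vmx-17"
for dev in vm.config.hardware.device:
    print(type(dev).__name__, "-", dev.deviceInfo.label)

Disconnect(si)
```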

External Platform Services Controllers Deprecated

Previously already announced by VMware, the External Platform Services Controller (PSC) is now really deprecated. You can no longer deploy a vCenter with an External PSC during the deployment.

VMware vCenter Migration options

Starting with the release of vSphere 7 you can automatically execute the following two operations, which used to be manual processes in previous versions:

  • Converging an External PSC configuration to an Embedded PSC configuration.
  • Converting a Windows VMware vCenter server to the VMware vCenter Appliance.

This is great! No longer do we need to use the (sometimes) complicated CLI on the vCenter Appliance to do these jobs. Just load up the vCenter Server Appliance installer and merge/migrate!

vCenter Server Profiles

With the release of vSphere 7, VMware introduces a new feature called “vCenter Server Profiles”. With this new feature you can create a consistent configuration profile that can be used across multiple vCenter Servers. This configuration profile can be:

  • Exported in a .json format from, and imported into, a vCenter Server through REST API calls:
    • “List” configuration
    • “Export” configuration
    • “Validate” configuration
    • “Import” configuration
  • Maintained with version control between vCenter Servers.
  • Used for an easy revert to a known good vCenter Server configuration.

These vCenter Profiles can be consumed with DCLI, PowerCLI, Ansible, Puppet, Chef and more automation tools. You can propagate the profiles across a maximum of 100 vCenter Servers at this time.

There is also a validation mechanism in place to ensure that the configuration that you would like to import is valid. If this validation returns an invalid status you will see the error which is causing it, so that it can be fixed. I can imagine this functionality being particularly useful for organizations that want to maintain a consistent configuration state across all of their vCenter Servers.
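To give an idea of how these profile operations could be consumed programmatically, here is a minimal Python sketch using the requests library. The session and infraprofile endpoint paths are assumptions on my part based on the List/Export/Validate/Import operations listed above, so verify them against the API Explorer on your own vCenter Server before using anything like this:

```python
# Hedged sketch: list the available vCenter Server Profile configurations and
# export one to a .json file. Endpoint paths are assumptions; verify them in
# the vCenter API Explorer for your build.
import json
import requests

VCENTER = "vcenter.lab.local"          # placeholder
USER = "administrator@vsphere.local"   # placeholder
PASSWORD = "VMware1!"                  # placeholder

s = requests.Session()
s.verify = False                       # lab only; use valid certificates in production

# Create an API session token
token = s.post(f"https://{VCENTER}/api/session", auth=(USER, PASSWORD)).json()
s.headers.update({"vmware-api-session-id": token})

# "List" the available configuration profiles
profiles = s.get(f"https://{VCENTER}/api/appliance/infraprofile/configs").json()
print(profiles)

# "Export" the configuration so it can later be validated/imported elsewhere
export = s.post(f"https://{VCENTER}/api/appliance/infraprofile/configs?action=export",
                json={"profiles": ["ApplianceManagement"]})   # profile name is an example
with open("vcenter-profile.json", "w") as f:
    json.dump(export.json(), f, indent=2)
```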

vCenter Server Profiles

Improved vCenter Server Certificate Management

A couple of releases ago VMware added the possibility to view and manage your vCenter Server certificates right from within the VMware vSphere UI. As we all know this looks something like this:

Certificate Management in vCenter Server 6.7 U3

If you, like me, only ever change the vCenter Server machine-ssl certificate to put an internally or externally approved certificate on it, the solution certs won’t get touched. This is something VMware recognized and changed. The vsphere-webclient, vpxd and vpxd-extensions certs are no longer visible or manageable from the vSphere UI. This makes sense since these are back-end services anyway. The UI has been simplified down to the bare essentials and now looks like this:

Certificate Management in vCenter Server 7.0

You can easily replace any certificate from the UI now, but also programmatically with APIs! A great addition to the vSphere UI if you ask me!

vCenter Server Multi-Homing support

With this new release Multi-Homed vCenter Servers are finally supported. As we all know, and as William Lam pointed out in his blog, this was never actually supported by VMware, even though you can easily add a new NIC through the VCSA VAMI nowadays. Well, I have good news for you! Starting with this release, it is officially supported.

The only remark you should know about is that NIC1 is reserved for vCenter HA (VCHA) and that there is a limit of 4 NICs per vCenter Server. So everybody that used a second (or third/fourth) NIC in a previous version of vSphere for a dedicated backup network or external access network can now breathe and relax, because it’s supported!

vCenter Server 7.0 Multi-Homing support

vCenter Server Content Library

The vCenter Server Content Library also received some love in this update. You can now use advanced versioning on templates, check templates out/in and revert templates to previous versions. Some UI elements have also changed to better reflect the versioning possibilities. You can now also edit the advanced configuration for a Content Library to increase transfer efficiency and change settings regarding the Auto-Sync frequency.

vCenter Server Update planner

Another big change within vCenter is that there is now something called the vCenter Server Update Planner. This is a completely new feature built right into the vSphere Client. This new feature will show organizations two things without ever leaving the vSphere Client:

  • Pre-Update Checks
    • With this check you can select your target vCenter Server version from a list of available updates and receive a Pre-Update Check report that you can use to plan the upgrade.
  • Interoperability Matrix
    • Before doing upgrades on a VMware vSphere platform it is extremely useful to check the compatibility with the other vSphere products that are being used on the platform. We always had to go to the online Interoperability Matrix for this. Starting with vCenter Server 7.0 we don’t have to anymore. There is now a built-in interoperability matrix right inside the vSphere Client that automatically detects installed vSphere products and shows their compatible versions, all with a link to the Release Notes.
vCenter Server Update Planner – Interoperability Matrix

This new feature will save VI-Admins precious time by instantly displaying the versions that we can upgrade to within our own vSphere environments. The last great thing about this is that you can also do “What-If” upgrades, which check what will happen and what needs to be verified once you decide to upgrade the environment. This is a great way to provide easy pre-upgrade test results to your colleagues.

vSphere Lifecycle Manager (vLCM)

vSphere Update Manager (VUM) received quite the change in this release. VUM is getting replaced by vSphere Lifecycle Manager (vLCM). This new tool aims to give VI-Admins a new way of upgrading their vSphere environments. You can now finally patch, update or upgrade ESXi servers at scale, with RESTful APIs to automate lifecycle management, and use a desired state image while you’re at it.

These desired state images are cluster wide and are called Cluster Images. The thought behind these Cluster Images is that you only have to maintain one single image for the entire cluster. Previously, in VUM, you could potentially use more than one Baseline/Image inside a cluster, which in turn might introduce inconsistencies that you don’t need or want. These Cluster Images consist of the following parts:

  • ESXi Base Image
    • The installation software required to install the ESXi hypervisor.
  • Vendor Add-Ons
    • Vendor specific driver Add-Ons.
  • Firmware and Drivers Add-Ons
    • Host firmware and drivers.
  • Components
    • Separate .VIB based features.

Yes, you are not dreaming! You can now use vLCM to apply host firmware updates and ESXi patches in one maintenance window. At the moment this only works in conjunction with Dell OpenManage and HPE OneView though. You don’t need two maintenance windows, or different tools, to update both the firmware and the ESXi hypervisor; it all works from within vSphere. Another cool thing that vLCM introduces is that it will check the VMware Compatibility Guide (VCG) and Hardware Compatibility List (HCL) by using the new built-in recommendations engine during the Remediation Pre-Check phase. This will remove the risk of unsupported drivers or firmware within your environment. #lovethisfeature

vLCM also provides better insight during remediations by displaying a detailed log in the vSphere UI. This helps VI-Admins better understand the progress vLCM has made. Because of this detailed status report, you can now choose to “Skip Remaining Hosts” on that same page if you feel the need for it. vLCM can also detect compliance drift and act upon it by remediating hosts that have drifted from the desired state cluster image, but to me that’s not actually that different from the VUM compliance checks and remediations.

Another cool thing about vLCM is that you can export and import the desired state Cluster Image to other collectively managed clusters, or to other vCenter Servers. You can export the images in the following three formats:

  • JSON
    • Download the Cluster Image as a json file. This only contains metadata about the image but no software packages.
  • ISO
    • You can download an installable ISO image based upon the Cluster Image.
  • ZIP (Offline Bundle)
    • Download a ZIP Offline Bundle that contains all components and software packages. You can upload this into an Update Manager’s depot or use this for ROBO sites where you don’t want to transfer images from remote vCenter Server Appliances (VCSA) to local sites because of network throughput constraints.
vLCM export cluster image
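To make the JSON export a bit more tangible, here is a rough sketch in Python of the kind of metadata a desired-state Cluster Image describes, based on the building blocks listed earlier. The field names are my own shorthand, not the exact schema vLCM exports, so treat this purely as an illustration:

```python
# Illustrative only: a rough approximation of what a desired-state Cluster Image
# describes. Field names are shorthand, not the exact vLCM JSON schema.
import json

cluster_image = {
    "base_image": {"version": "7.0.0"},                  # the ESXi base image
    "vendor_addon": {"name": "ExampleVendor-Addon",      # hypothetical vendor add-on
                     "version": "1.0"},
    "firmware_and_drivers": {"hardware_support_manager": "Example HSM"},  # e.g. Dell OpenManage or HPE OneView, as mentioned above
    "components": {"example-driver-component": "2.0"},   # separate VIB-based features
}

# The exported JSON only carries this kind of metadata, no software packages,
# which is why the ZIP offline bundle exists for ROBO/low-bandwidth scenarios.
print(json.dumps(cluster_image, indent=2))
```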

As far as I know, vLCM is not enabled by default. You can keep using VUM on a cluster by leaving the cluster setting called “Manage image setup and updates on all hosts collectively” disabled. If you wish to transition to the new collectively managed cluster image model, you just have to enable this setting on existing clusters. Just edit the cluster and check the tickbox.

Enable vLCM on a cluster

The last thing I want to say on vLCM is that VMware also made sure it works with Auto Deploy. You can create an Auto Deploy rule that applies to all hosts or to a pattern, select the Cluster Image that you created earlier and you’re done! Auto Deploy automatically creates a new Image Profile based on the Cluster Image that fresh ESXi hosts can use. Don’t forget to activate the rule though, you won’t be able to use it if you don’t.

vCenter Server Namespaces

When you use the vSphere with Kubernetes capabilities, you will receive a new grouping construct within vSphere. The need for this new grouping construct, next to vApps and Resource Pools, comes from the fact that modern applications consist of more than virtual machines. The new construct is based upon the Kubernetes Namespace model. A Namespace is a collection of resource objects. It’s basically a supersized vApp/Resource Pool combination which can hold more than virtual machines, such as serverless functions, Kubernetes environments, containers, disks etc. On this Namespace you can set QoS services, limits, encryption, security, availability and access control policies.

This new construct also simplifies the vSphere inventory a lot. Instead of having thousands of virtual machines in the vSphere inventory you get a couple of Namespaces that hold all of the services for a modern application. A great example of this simplification is that VI-admins can use vMotion on a Namespace and with that single action potentially move hundreds of virtual machines in one click. What this looks like in the UI can be seen in the screenshot below:

vCenter Server Namespace construct
vCenter Server Namespaces
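From the developer’s point of view, a vSphere Namespace simply shows up as a Kubernetes namespace on the Supervisor cluster. As a minimal sketch, assuming you have already authenticated (for example with kubectl vsphere login) and have a valid kubeconfig context, listing the namespaces you have access to could look like this:

```python
# Minimal sketch using the official kubernetes Python client; assumes an existing
# kubeconfig context for the Supervisor cluster (e.g. created via "kubectl vsphere login").
from kubernetes import client, config

config.load_kube_config()    # picks up the current kubeconfig context
v1 = client.CoreV1Api()

for ns in v1.list_namespace().items:
    # vSphere Namespaces appear here as regular Kubernetes namespaces
    print(ns.metadata.name)
```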

Future vMotion (vMotion 2.0)

vMotion received a large update in this release! I’ve actually talked about these improvements before, last November. Like I said back then, VMware has been working on a complete overhaul of vMotion for a while now. It hasn’t actually changed all that much since it was released back in the day. So, it’s about time some major issues were fixed! vMotion has been improved in the following three main areas:

  • vMotion memory page tracing performance enhancements
    • During a vMotion, dirty memory pages (changed memory pages) need to be tracked so that a sync can occur during the final switchover to the other ESXi host. Tracing these dirty memory pages is done by installing traces on all vCPUs. This in turn causes a short hiccup/performance drop (microseconds) during the pre-copy vMotion phase. The new enhanced vMotion doesn’t install a trace on all vCPUs, but only on one vCPU, which means the other vCPUs are still free to perform and run workloads.
  • The time a vCPU spends in trace install time is significantly reduced.
  • One vCPU is in charge of the memory page tracing.
vMotion Old + vMotion 2.0
  • Compacted memory bitmap transfers
    • A memory bitmap is a map of the pages that have changed (dirty) during the vMotion process. This bitmap can get quite large when you are talking about “monster” virtual machines. The memory bitmap for a virtual machine with 1GB RAM is 32KB, so back in the day this wasn’t hard to transfer to another host. But when you are talking about virtual machines that have around 24TB of RAM, the memory bitmap file is already 768MB large, which takes about 2 seconds to transfer. This also gives the virtual machine a long stun time, in which no operations can be executed (see the quick arithmetic sketch after this list).
    • This has been enhanced by compacting the memory bitmap file. Apparently, the memory bitmap file is largely made up of blanks, because most memory has already been copied over during the pre-copy phase of the vMotion process and no longer needs to be tracked. When you compact the blanks out of the memory bitmap file, the file size gets reduced significantly.
    • Because the memory bitmap gets compacted, the bitmap of a virtual machine with 24TB of memory (a 768MB bitmap) no longer takes 2 seconds to transfer, but only 175 milliseconds.
Compacted memory bitmap transfer
  • Fast Suspend Resume addition
    • This is a technique that VMware uses for hot-adding devices and for Storage vMotion. It’s different from a regular vMotion because the active memory doesn’t have to be moved to another host.
    • This technique actually creates a shadow virtual machine, adds the new resources or does the storage vMotion, quiesces the virtual machine, copies over the memory metadata and resumes the virtual machine.
    • Transferring over this memory metadata file is currently done by using one vCPU.
    • Because of this the stun time during one of the operations mentioned above is rather large when you are editing virtual machines with a large amount of RAM.
    • VMware enhanced this by using all the vCPU’s the virtual machine has. This significantly reduces the stun time.
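To put the memory bitmap numbers from the list above into perspective, here is a quick back-of-the-envelope check, assuming 4KB memory pages and one bitmap bit per page:

```python
# Back-of-the-envelope check of the memory bitmap sizes mentioned above,
# assuming 4 KB memory pages and one bitmap bit per page.
PAGE_SIZE = 4 * 1024                       # 4 KB page

def bitmap_size_bytes(ram_bytes: int) -> int:
    pages = ram_bytes // PAGE_SIZE         # one bit per page
    return pages // 8                      # eight bits per byte

GB = 1024 ** 3
TB = 1024 ** 4

print(bitmap_size_bytes(1 * GB) // 1024, "KB")          # 32 KB for a 1 GB VM
print(bitmap_size_bytes(24 * TB) // (1024 ** 2), "MB")  # 768 MB for a 24 TB "monster" VM
```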

And last but not least, there are also new additions to the selectable EVC modes in a cluster. You can now enable EVC for Intel Cascade Lake and AMD Zen 2 (EPYC Rome) CPUs.

These are welcome changes to the vMotion technique, since virtual machines are getting bigger and bigger (Monster VMs) and applications are becoming more sensitive to latency and performance drops nowadays. A pre-copy stun, a performance decrease during vMotion and a post-copy stun can really leave an impact on the continuity of applications, but with these enhancements they should no longer have a large impact. If you want to read more on vMotion (the old way), just click here!

Distributed Resource Scheduler 2.0 (DRS 2.0)

DRS was released back in 2006. Since then virtual machines have grown and applications have changed with cloud-native services and containers, but DRS did not change that much. There were a couple of enhancements and changes in vSphere 6.7 though. The current version of DRS is a cluster-centric service. This means that it wants to ensure that the load on the cluster is balanced so that hosts are not experiencing contention if they don’t have to. If DRS sees a cluster imbalance, it will calculate whether vMotioning a virtual machine to another host will fix this imbalance. If it does, the vMotion will automatically be launched.

The new version of DRS, DRS 2.0, is a complete revamp of the first version. DRS 2.0 is now a virtual machine (workload) centric service instead of a cluster-centric service. To understand the changes made to DRS, I will explain the three main categories of these changes:

  • DRS cost-benefit model:
    • Like I said above, DRS 2.0 focuses on a virtual machine “happiness” score instead of a cluster imbalance. This “VM Happiness” score ranges from 0% (not happy) to 100% (happy).
    • A lower bucket score does not necessarily mean the VM is not running properly. It’s a number which displays the execution efficiency of a VM.
    • VMs are placed into VM DRS Score “buckets”. These buckets are 0-20%, 20-40%, 40-60%, 60-80% and 80-100%.
    • “VM Happiness” score is a new metric introduced with this release.
    • The “VM Happiness” score is calculated from over a dozen metrics. The core metrics that weigh most on this score are the host CPU cache behavior, VM CPU Ready Time, swapped memory, migration cost and VM burst capacity.
    • You can find the “VM Happiness” score right from within the vSphere Client UI on a per-VM basis.
    • DRS 2.0 checks the “VM Happiness” score and decides to vMotion a VM to another ESXi host if it can improve it (see the toy sketch after this list).
DRS 2.0 VM Happiness buckets
  • Support for new resources and devices
    • DRS 2.0 can now do proper distributed load scheduling based on network load balancing. The old DRS version never actually took network load as a metric to base load-balancing decisions on; it would prefer a CPU or memory metric over the network metric.
    • DRS 2.0 is now hardware aware. This means that if you are using vGPUs on some virtual machines, DRS 2.0 will only vMotion those virtual machines to other hosts that can provide the vGPUs.
    • Initial placement support for VMs with vGPUs and PCIe devices configured in passthrough.
    • Virtual machines with a fluctuating stable/unstable workload profile also receive some love in the new DRS version. Unstable/stable workloads are now a part of the cost metric for DRS. This effectively means that virtual machines don’t get pushed around all the time depending on their workload.
    • DRS 2.0 will also check how long the new VM Happiness score would stay stable. This means it will calculate the benefit of a move and how long this benefit would last. This will ensure that unnecessary vMotions occur less than before.
  • Faster and scalable
    • In regard to scalability, DRS 2.0 dropped a core mechanism it used to rely on: the cluster-wide standard deviation model. Because DRS is now virtual machine centric it doesn’t need to take a cluster-wide snapshot to calculate what to do.
    • This means DRS now runs every minute instead of every 5 minutes.
    • DRS also doesn’t use Active Memory anymore. It now uses Granted Memory. This changed because the world has changed and businesses don’t really overcommit on their memory anymore.
    • DRS Scalable Shares provide relative resource entitlement to ensure that VMs in a Resource Pool set to High shares really get more resource prioritization than lower-share Resource Pools. This setting is not enabled by default, but it is on vSphere with Kubernetes. In the previous DRS version it could happen that VMs in a Resource Pool with shares set to Normal got the same resource entitlement as those in a High-share Resource Pool. This is now fixed.
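To make the cost-benefit idea a bit more concrete, here is a toy sketch of the bucket ranges and the “only move if the benefit outweighs the cost” logic described above. This is purely illustrative; the real DRS 2.0 algorithm weighs far more metrics than this:

```python
# Toy illustration of the DRS 2.0 ideas described above: bucketing a VM DRS score
# and only recommending a vMotion when the expected gain outweighs the migration cost.
# This is NOT the real algorithm, which weighs many more metrics (CPU ready time,
# swapped memory, host CPU cache behavior, burst capacity, ...).

def score_bucket(score: float) -> str:
    """Map a 0-100% VM DRS score onto the buckets shown in the vSphere UI."""
    for upper, label in [(20, "0-20%"), (40, "20-40%"), (60, "40-60%"), (80, "60-80%")]:
        if score <= upper:
            return label
    return "80-100%"

def should_migrate(current_score: float, predicted_score: float, migration_cost: float) -> bool:
    """Recommend a move only if the predicted improvement outweighs the cost of the vMotion."""
    return (predicted_score - current_score) > migration_cost

print(score_bucket(35))                            # -> "20-40%"
print(should_migrate(55, 85, migration_cost=10))   # -> True: a 30-point gain beats the cost
```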

vSphere Identity Federation

Ever wondered when you would be able to use Identity Federation to access the vSphere Client? Well, you don’t have to wait any longer. vCenter Server 7.0 has Identity Federation capabilities. You are now able to add enterprise identity providers (IdPs) to handle the authentication. This removes the need for cloud providers to provide customers with credentials and manage their passwords.

Initially this will only work with Microsoft Active Directory Federation Services (ADFS); later on it will work with more providers. This works like you would expect it to: the vSphere Client redirects you to the external IdP, where you enter your credentials, and once authenticated you get logged into the vSphere Client. A small overview can be seen below.

vSphere Identity Federation Overview

vSphere Trust Authority (vTA)

The vSphere Trust Authority (vTA) creates a hardware root of trust using a separate ESXi host cluster. It is responsible for ensuring that ESXi hosts are running trusted software and for releasing encryption keys only to trusted ESXi hosts. The vTA is here to create a Trusted Infrastructure.

The vTA is also a separate hardware cluster that runs the attestation services. This vTA cluster can be configured with the Principle of Least Privilege. This ensures that only a select number of VI-Admins have access to this cluster.

The vTA cluster checks whether a workload ESXi host passes attestation before passing it encryption keys from the KMS server. Once the workload ESXi host passes attestation, it is marked as trusted. If it doesn’t pass attestation, the ESXi host gets marked as untrusted. A valid attestation report from the Attestation Service can be a requirement before the ESXi host receives any encryption keys from the Trusted Key Provider.

If you have a secured/trusted workload running in your environment, let’s say an encrypted virtual machine, the vTA and its trusted/untrusted ESXi hosts ensure that the secured workload is only allowed to run on, and move between, trusted ESXi hosts. You will not be able to move a secured workload from a trusted to an untrusted ESXi host, since the vTA will not provide that untrusted ESXi host with the encryption keys it needs.

A key difference with previous versions is that the vCenter Server no longer needs a connection to the KMS server, like it did in vCenter 6.5/6.7. This also means that the vCenter Server no longer distributes the encryption keys to the ESXi hosts. You can connect the KMS server to the vTA environment directly. This can have some implications for your KMS licensing though, so you should check that before using the vTA services. TPM 2.0 is a requirement for vTA, but most recent hardware has TPM 2.0 implemented so that shouldn’t be an issue.

vTA Overview

This concludes the changes made to VMware Cloud Foundation 4.0 and VMware vCenter Server 7.0 that were noteworthy to report on. Because there is a lot more to come, I am going to split the blogpost into several pages. Please continue to the second page to find out more on vSphere 7.0 and its new enhancements across the complete product portfolio.

More on this release can be found on multiple blogposts by VMware HERE.

** Please note that the announced products haven’t been released yet and that features or certain enhancements could potentially be removed or changed before release.


Bryan van Eeden

Bryan is an ambitious and seasoned IT professional with almost a decade of experience in designing, building and operating complex (virtual) IT environments. In his current role he tackles complex customer issues and design questions on a daily basis. Bryan holds several certifications such as VCIX-DCV, VCAP-DCA, VCAP-DCD, V(T)SP and vSAN and vCloud Specialist badges.
