Today is the third day of VMworld Europe 2019! Yesterday was full of technical deep dives that I found very interesting. In the evening it also got quite late at the “legendary” Veeam party, but we were back at it this morning at 9 AM for the last General Session.

General Session

Ray O’Farrell opened on stage and explained how we can manage complex environments with VMware-based services, with Tanzu Mission Control and Project Pacific as the key products. We have heard plenty about these two over the last couple of days, but in case you forgot: Tanzu Mission Control is a new VMware tool that you can use to manage Kubernetes workloads across multiple clouds. Project Pacific extends this by natively integrating containers within vSphere, making containers first-class vSphere citizens just like virtual machines. As a result, containers get the benefits that virtual machines already have.

But how do you secure these workloads? This is where NSX Intelligence comes into play. With NSX Intelligence you can continuously gather telemetry data from the network and applications (with NSX Guest Introspection, that is), which is visualized in a lightweight appliance. This visualization lets teams deliver a more granular and dynamic security posture, simplify compliance analysis, and streamline security operations.

Once you have your environments running on multiple clouds with Tanzu Mission Control, you want insight into, and recommendations on, your cloud spend. CloudHealth can help with this. As I said in yesterday’s blog, CloudHealth can proactively monitor your environments across clouds and recommend changes that drive down cloud spend.

When your environments are running and you need to provide access or applications to your organization, you might want to give users a virtual desktop. That is why they showed us how easily you can deploy new VMware Horizon VDIs on multiple pods across the globe from the same interface, even on Microsoft Azure. The Horizon environment will also support the new Windows 10 Enterprise multi-session templates from Azure! That is great, right? Securing the virtual desktops will be done with Carbon Black and Workspace ONE in the future. Combining the two, you get a new product called Workspace Security, with which you can deliver intrinsic workspace security to virtual desktops.

Next up was the new VMware CTO. He explained how the recently acquired software package called Bitfusion works. Bitfusion makes it possible to leverage GPUs inside virtual machines without needing GPUs on the physical ESXi hosts in the cluster; virtual machines connect to the GPUs remotely over the network. I love this idea, because you can create a GPU cluster and remotely provide GPU resources to virtual machines that live in, for example, a compute cluster. This eliminates the need to have GPUs within the compute cluster and helps drive down GPU-based cloud spend.

After this, Bear Grylls came on stage and gave us a very inspirational talk on how you should take on key aspects of life, such as failures and fears, and turn them into strength and power. The key takeaway for me was: never give up when it seems tough! Always give that little bit extra and make something out of nothing.

Great guest speaker VMware, well done!

Build Your VMware Private Cloud on NetApp (HBI5806BES)

I picked this session because I recently had a discussion with somebody about using a virtualized NetApp ONTAP Select together with VMware vSAN. But it turned out that this session was about NetApp’s own HCI solution, which builds you a VMware private cloud on-premises. That means it’s also based on NetApp servers; who knew NetApp also sold servers, right?

The NetApp VMware Private Cloud environment really just consists of VMware vSphere, NSX, the vRealize Suite, and NetApp Element OS with the NetApp Deployment Engine. With the help of vRealize Lifecycle Management and Python scripts you can deploy the entire environment, and automate the deployment of NSX, in a short time period.

The system consists of a minimum of two chassis, and you can scale out as you go, which makes it useful if you want to start with a small environment. A chassis is a 2U enclosure holding storage nodes and compute nodes. As said, you can start with two chassis containing a minimum of four storage nodes and two compute nodes. From there you can scale independently (and indefinitely) by adding chassis and storage/compute nodes, without the need to buy a fixed system and always add compute AND storage to the environment together.

A nice thing about this solution is that even existing external compute environments can make use of the storage available in this HCI system. This is possible because the underlying storage system is actually just a SolidFire storage box.

There is a vCenter integration plugin that lets you manage the system right from the vCenter Server. In addition, the solution also installs a vROps plugin with which you can monitor and troubleshoot the entire system. The system can guarantee performance to workloads by using storage policies, and because Storage Profiles are used with VVols, this can even be done on a per-VMDK basis.

Optimizing vSAN for Performance (HCI1757BE)

Since we are using vSAN in one of our environments, I figured this session would be a good way to catch a quick roundup of vSAN enhancements and performance considerations.

The session started off by showing the audience what happens to an I/O when vSAN data services such as encryption or deduplication are enabled. The key takeaway is that these services add a bit of extra latency to your I/O, because more work needs to be done than when they are disabled. Which of course makes sense.

When looking at performance issues in a vSphere vSAN 6.7 environment, you should at least check the sizing, firmware and HCL compatibility, and cluster/disk imbalances. Don’t use completely different disks in a disk group; that will probably not give you consistent performance.

In vSphere vSAN 6.7 a new mechanism called Adaptive Resync was introduced. This feature throttles resync I/O during congestion: it guarantees resync I/O at least 20% of the available bandwidth while resync traffic is active, and when there is no resync I/O, VM I/O can use 100% of the available bandwidth.
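As a rough mental model of that behavior (this is an illustrative sketch, not VMware's actual scheduler; only the 20%/100% figures come from the session, the function name and demand-based logic are my own assumptions):

```python
def allocate(total, vm_demand, resync_demand):
    """Toy model of vSAN 6.7 Adaptive Resync bandwidth sharing.

    total: available backend bandwidth (e.g. IOPS)
    vm_demand / resync_demand: what each traffic class is asking for.
    Returns the bandwidth granted to each class.
    """
    if vm_demand + resync_demand <= total:
        # No congestion: both traffic classes get everything they ask for,
        # so VM I/O can use up to 100% when there is no resync traffic.
        return {"vm": vm_demand, "resync": resync_demand}
    # Congestion: resync I/O is guaranteed at least 20% of the bandwidth,
    # and may use more if VM demand leaves headroom.
    resync = min(resync_demand, max(int(total * 0.2), total - vm_demand))
    return {"vm": total - resync, "resync": resync}

# Heavy VM load plus a resync: resync keeps its guaranteed 20% slice.
print(allocate(100_000, vm_demand=200_000, resync_demand=50_000))
# -> {'vm': 80000, 'resync': 20000}
```

The point of the model is the asymmetry: VM I/O is only throttled down to 80% when resync traffic actually needs its guaranteed slice; otherwise neither class is capped.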

In vSAN 6.7 U3 several improvements have been made to increase the performance, below is a short summary:

  • 1.4x – 1.7x improvement for large sequential writes (256 KB).
  • Delivers consistent, predictable performance for 4K workloads by reducing jitter.
  • Provides a 1.3x – 1.6x improvement for 4K/8K/16K random workloads such as database applications.
  • Delivers a 15-20% improvement for large sequential reads (256 KB) because of an optimized disk-balancing method.
  • Delivers a 1.4x improvement in resync I/O throughput.

Best practices to think about when you are optimizing a vSAN environment:

  • Disk groups: use at least two disk groups per host!
  • Keep your software up to date. Each software release brings new performance enhancements.
  • Ensure that your vSAN environment has a stable network with multiple 10GbE or 25GbE interfaces.
  • Also ensure that all the NIC drivers are updated on a regular basis.
  • Using a larger MTU size can also increase performance.
  • Have a look at NIOC when you use the uplinks for more than just vSAN traffic.
  • Clock speed is more important than core count with vSAN.
  • RAM speed has no significant impact on vSAN performance, unless a host has less than 32GB of RAM. This is because vSAN itself can use 10GB-30GB of RAM, depending on the number of active disk groups.
  • Enable Turbo Boost, C1E states, and other power-related performance settings in the server BIOS.
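On the MTU point: if you enable jumbo frames (MTU 9000) on the vSAN vmkernel network, you can verify end-to-end support with a non-fragmenting ping using the largest payload that still fits, i.e. the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A small sketch; the vmkernel interface `vmk2` and the peer IP are placeholders for your own environment:

```shell
# Largest ICMP payload that fits in a 9000-byte MTU without fragmenting:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972 bytes.
PAYLOAD=$((9000 - 20 - 8))
echo "payload size: ${PAYLOAD}"

# On an ESXi host, test the vSAN vmkernel path with "don't fragment" set
# (vmk2 and 192.168.50.12 are placeholders for your setup):
echo "run on ESXi: vmkping -d -s ${PAYLOAD} -I vmk2 192.168.50.12"
```

If this large ping fails while a default-size ping works, a switch or NIC somewhere in the path is not passing jumbo frames, which can quietly hurt vSAN performance.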

As you can see, the newest release, vSAN 6.7 U3, is a big deal with regard to performance enhancements. I recommend you update your environment if you haven’t yet.

Ending

Next up we visited the Solution Exchange to find new and exciting ideas. To be honest, this year it wasn’t that exciting… I did note down some companies and solutions to have a look at in the future.

After this we went back to the hotel to drop our bags and went to VMfest, the VMworld closing party. This year the band Stereophonics provided the entertainment! Awesome ending to this day!

Stay tuned for the last VMworld 2019 blog tomorrow!
