Session 1: Cloudhealth – Deep Dive
Since CloudHealth was acquired by VMware in Q3 2018, a number of things have changed. We wanted to brush up our knowledge of this product so that we can decide if this is something we should have a look at for our own customers.
For those who don’t know, CloudHealth is VMware’s multi-cloud management, monitoring, security and cost-insight tool. It can connect to many of the mega-clouds, such as Amazon AWS, Microsoft Azure and Google Cloud, as well as your on-premise datacenter, and provide you or your tenants with insight into the performance of your virtual workloads, monitoring of those workloads, and cloud usage insights with regard to costs.
The tool provides a partner model, a full multi-tenant environment and of course a centralized management portal. Depending on the license or model that you subscribe to, you get reporting, rightsizing, RBAC and proactive suggestions to clean up unused resources inside one of the connected clouds. There is even an option to create custom pricing models for each customer. This brings a lot of flexibility and insight.
CloudHealth also has a history and archiving function inside the portal. In this section you can select a specific moment in time and see who used which resources and what costs they incurred. You can drill down to a very granular level, such as AWS accounts, specific assets, tags or regions, to see how much cost is associated with an object. Once you have filtered out the costs of, for example, a single user, you can drill down even further and see the specific line items that are being used and how much each of them costs.
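Under the hood, this kind of drill-down is essentially a grouped roll-up over cost line items. Here is a minimal sketch of the idea in Python; the line items, field names and amounts are all made up for illustration and don't reflect CloudHealth's actual data model:

```python
from collections import defaultdict

# Made-up cost line items mimicking the drill-down described above:
# each record carries an account, owner, region and its monthly cost.
line_items = [
    {"account": "prod", "owner": "alice", "region": "eu-west-1", "item": "m5.large",  "cost": 62.0},
    {"account": "prod", "owner": "alice", "region": "eu-west-1", "item": "gp2 100GB", "cost": 11.0},
    {"account": "dev",  "owner": "bob",   "region": "us-east-1", "item": "t3.micro",  "cost": 7.5},
]

def costs_by(field, items):
    """Roll up total cost per unique value of the given field."""
    totals = defaultdict(float)
    for it in items:
        totals[it[field]] += it["cost"]
    return dict(totals)

# First level of the drill-down: total cost per owner.
print(costs_by("owner", line_items))   # {'alice': 73.0, 'bob': 7.5}

# Drill down further: only alice's individual line items.
alice_items = [it for it in line_items if it["owner"] == "alice"]
print([it["item"] for it in alice_items])  # ['m5.large', 'gp2 100GB']
```

The same roll-up function works for any of the dimensions mentioned above (account, tag, region), which is what makes the filtering in the portal feel so flexible.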
From within the same environment you can also see the unused resources that you have provisioned and how much they are costing you each month. Generally speaking, businesses have deployed more than they are using, which makes this insight very valuable. You could compare this functionality to the vROps recommendations, which also alert you when you have unused or over-provisioned virtual workloads. This also works on so-called reservations: a fixed amount of resources that you have reserved for a fixed time period, which makes them cheaper in most cases. CloudHealth can even execute the recommended savings for you and automatically start resizing resources, so that you don’t have to do this manually.
Security is also deeply embedded inside the software. As I’ve said before, this tool also provides proactive security alerts. It basically scans all your workloads across clouds and proactively provides you with recommendations to enhance the security of your environments. Think of scanning ports, checking whether root accounts are being used, or checking for publicly accessible services such as SSH or RDP.
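The last check mentioned above boils down to scanning firewall rules and flagging the risky ones. A hypothetical sketch of such a check follows; the rule shape and field names are invented for illustration, not CloudHealth's actual API:

```python
# Flag firewall rules that expose management ports (SSH 22, RDP 3389)
# to the whole internet. Rule shape is a made-up simplification.
RISKY_PORTS = {22, 3389}

def find_exposed_rules(rules):
    """Return the rules that open a risky port to 0.0.0.0/0."""
    return [r for r in rules if r["port"] in RISKY_PORTS and r["cidr"] == "0.0.0.0/0"]

rules = [
    {"port": 22,   "cidr": "0.0.0.0/0"},   # SSH open to everyone -> flagged
    {"port": 443,  "cidr": "0.0.0.0/0"},   # HTTPS, intentionally public
    {"port": 3389, "cidr": "10.0.0.0/8"},  # RDP restricted to an internal range
]
print(find_exposed_rules(rules))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

A real scanner obviously checks far more than two ports, but the value is the same: the tool surfaces exactly the rules that need attention instead of making you audit every environment by hand.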
Migration assessment within CloudHealth is a tool to compare what your on-premise datacenter environment would cost if it were migrated as-is to one of the mega-clouds. It maps the existing infrastructure and its utilization and displays what the costs would be in one of the mega-clouds. This makes creating business cases very easy, and from my perspective it is very valuable for shutting down or starting discussions with customers that want to either move to AWS or Azure, or renew their on-premise datacenter. The recommendations can be shown at a granular level, so you can easily see which instance types, storage types and network types CloudHealth mapped to your workloads.
Unfortunately, vCloud Director is not yet supported as an endpoint within CloudHealth.
If you want to try out CloudHealth, a 14-day free trial is available for those who are interested.
Session 2: Horizon View 7 Enterprise
This session turned out to be a lab, which I hadn’t spotted in the schedule, so I couldn’t take it since I didn’t have my laptop with me.
But there was something that piqued my interest, which is that there is a new “connector” called the Horizon Cloud Connector, with which you can integrate an on-premise Horizon View Connection Server with your Horizon Cloud environment on Amazon AWS. The documentation mentions that once you’ve done this, licensing is easy because the on-premise environment is automatically provided with a license. The other benefit, as far as I’ve understood it, is that you can manage your local on-premise Horizon View environment from the VMware Horizon Cloud administrator console, which could give you a central management pane for several on-premise Horizon View environments. Something to keep in mind is that the connector currently doesn’t support IPv6.
Session 3: Lunchbreak session on VMware on IBM Cloud
During our lunch we were invited to attend a session from IBM in which they explained their VMware on IBM Cloud services. The session was just too crowded to take any notes, but we remembered the following, which could be useful.
IBM made sure that rolling out a new VMware-based environment is done in a matter of minutes to hours. They mentioned that they can roll out a complete SDDC environment with HCX (to connect to your own on-premise environment) within a couple of days at most. While deploying you have a couple of choices, such as vCenter on IBM Cloud, vSphere on IBM Cloud and vCenter with NSX-T on IBM Cloud. The differences between these options are basically the level of control that you either gain or lose during deployment of the environment. After deployment you are in full control of everything in the stack, including the vCenter environment, as if it were your own on-premise vCenter Server. This makes it different from VMware Cloud on AWS.
They also told us that you can use IBM’s vSphere licenses to license the entire environment, that you can use your own vSphere licenses, or that you can even mix and match licenses on the VMware on IBM Cloud environments. The last option is interesting, because you can bring in your VMware vSphere Enterprise Plus license, which is CPU-socket based (if you’ve bought it perpetual), and license vSAN or NSX separately per VM, per month and per actual usage.
Since IBM Cloud has environments all over the world, solutions such as stretched vSAN can easily be implemented in a region that has several different datacenters. VMware vMotion can also be used to move virtual machines between regions globally, if the latency permits it of course. All of the region/datacenter interconnects are owned by IBM, and at least the ones in Europe offer multiple 10 Gbit connections.
Session 4: Container/K8s/PKS 101 – Run Kubernetes Everywhere
Containers seem to be everywhere nowadays, but is it just dev/test workloads that are being run this way, or is container technology ready for use in a production environment?
It’s VMware’s observation that running containers at scale in a production environment presents some challenges. There are several solutions out there by different vendors that address those challenges, but all with their own approaches. At least for orchestration there now seems to be a standard in the form of Kubernetes, which runs about 80% of the container workloads worldwide. So it’s not strange to see VMware using Kubernetes at the core of solutions such as vSphere Integrated Containers, VMware Essential/Enterprise/Cloud PKS and the Container Service Extension for vCloud Director.
VMware also has its own container registry, Harbor, which provides a private image registry. It can perform scheduled security scans to indicate, for instance, whether an image contains outdated binaries. These are only parts of the puzzle when it comes to the challenges of running containers, but we can expect VMware to develop its portfolio further, for example with the recent acquisition of Heptio.
Session 5: VMware’s Hybrid Cloud vision and Strategy
This session was all about VMware’s vision around the current cloud trends in the market. The first thing they mentioned is something I’ve mentioned over and over in blogs and in our company: not every workload is going to be migrated to the mega-cloud! There is going to be a mix of three kinds of workloads: on-premise workloads, cloud workloads and edge workloads. This means that the Hybrid Cloud model is now the trend in the market.
VMware has been abstracting away levels of complexity since version one of ESX. They started with the introduction of a hypervisor, so that you no longer had to run just one single workload on a physical host. The next step introduced VMware vSphere, which included the vCenter Server as a single pane of management for all your physical hosts running virtual workloads. After that came the abstraction of storage with vSAN and of the network with NSX, to reduce the level of complexity in those areas. This laid the foundation of the current trend, the Software Defined Datacenter.

The next step that VMware is indicating, and is already executing, is the Hybrid Cloud strategy that I’ve talked about earlier. The new environments get another layer of abstraction: the local datacenter is abstracted to a situation where it doesn’t matter where your workloads run, because everything is accessible through code or API-driven automation (look at CAS, which I mentioned in yesterday’s blog). The Hybrid Cloud strategy includes the virtual cloud network with NSX and VeloCloud, with which you can connect several datacenters to each other: on-premise datacenters, Amazon AWS workloads, VCPP workloads and so on. And as a bonus, all of these services, no matter where you buy your resources, use the new Clarity UI framework to provide the same look and feel across clouds. In the end this vision delivers a consistent infrastructure with consistent operations.
Something that I already knew, but which is interesting to mention, is that VMware, like basically every business, is shifting from long development cycles to short ones. The older development cycles at VMware were around 16-18 months, which means that fixes, new versions and patches got delivered later than you would like. So they shifted their entire company culture to accommodate a shorter development cycle, where it only takes a couple of months before new functionality gets introduced.
They do this through the VMware Cloud on AWS service. They deploy multiple new versions of vSphere into this service every couple of months. That way they can find bugs, security issues or faults before the software even gets released in a big patch that we can install ourselves. This should mean that new functionality gets to on-premise environments quicker, and also more stable, since it has been tested extensively beforehand.
A cool thing that you may be able to do in the future in the VMware Cloud environments (which are managed by VMware) is defining certain periods in which the platform doesn’t install new patches. It’s basically the same as the Windows Update service, where you can delay the installation of updates. This delay can’t go on indefinitely, however, because the platform is delivered as a service, which needs to keep working to uphold the defined SLAs.
Session 6: NSX-T Architecture and Benefits Deep Dive
There have been a lot of sessions regarding NSX-T this week, and a few have already been mentioned and discussed in the last couple of blogs. But new information is gathered each time! NSX-T is divided into three components:
- NSX-T Data plane:
- A module inside the hypervisor kernel. Hypervisor Transport Nodes handle switching, routing and distributed firewalling; these of course run inside the Compute Cluster. Edge Transport Nodes come in a VM or bare-metal form factor (installed through an ISO, which gives performance of up to 40 Gbit throughput) and run the L3-L7 services. These run in an Edge Transport Node cluster.
- NSX-T Management plane:
- Multiple clustered NSX-T Managers (max 3), as I mentioned in yesterday’s blog post. These run in the Management Cluster.
- NSX-T Control plane:
- Collapsed with the NSX-T Management plane, at least with regard to deployment.
- Contains a table in which virtual elements are stored, so that NSX-T can determine the physical location of each element.
NSX-T has support for containers, bare-metal servers, ESXi hosts and KVM hosts; all of these can natively use the NSX functionality. The underlay for this platform can come from any vendor and can have any topology. The only requirements are IP connectivity and an MTU bumped up to 1700. NSX-T uses the GENEVE encapsulation standard for switching.
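The raised MTU requirement follows from the overhead that GENEVE encapsulation adds on top of every inner frame. A rough back-of-the-envelope budget, using the standard header sizes (the GENEVE option length is variable, so the zero used here is just an assumption for the minimum case):

```python
# Overhead budget for a GENEVE-encapsulated frame (sizes in bytes).
INNER_PAYLOAD  = 1500  # the VM's own MTU
INNER_ETHERNET = 14    # inner L2 header carried inside the tunnel
GENEVE_BASE    = 8     # fixed GENEVE header
GENEVE_OPTIONS = 0     # variable: 0 up to 252, in 4-byte multiples
OUTER_UDP      = 8
OUTER_IPV4     = 20

required_mtu = (INNER_PAYLOAD + INNER_ETHERNET + GENEVE_BASE
                + GENEVE_OPTIONS + OUTER_UDP + OUTER_IPV4)
print(required_mtu)  # 1550 with no GENEVE options
```

With zero options you already need a 1550-byte underlay MTU, and GENEVE options push that higher, which is why bumping the underlay to 1700 gives comfortable headroom.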
Distributed routing in NSX-T is very nice. Each hypervisor gets a distributed router component that can route traffic between virtual machines and subnets right on the ESXi host, instead of going back to the T0 gateway. This should dramatically reduce network utilization. This also applies to a multi-router topology with T0 and T1 routers: each of these routers has a distributed routing component inside the ESXi hypervisor.
When you want services like NAT on a network, a service routing component is additionally placed in the Edge environment. This means the traffic will be routed to the service routing component on the edge node before it can use the NAT service.
NSX-T can be used through the so-called “Network Infrastructure as Code” principle. You can also check the API documentation right from the NSX Manager, instead of going through the documentation on vmware.com yourself. As mentioned in earlier blog posts, Terraform also has an NSX-T provider to help with this.
In the latest NSX-T release, version 2.4, the consumption of NSX-T has changed: instead of an imperative API you can now use a declarative one. This helps a lot, because you can specify a desired outcome instead of specifying the order in which configuration steps need to take place when you want to deploy something automatically. You can now just declare the desired topology in a JSON file, feed it to NSX-T, and NSX will execute it so that you end up with the topology you declared. All of this with just one API call!
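To give an impression of what such a declarative intent document could look like, here is a minimal sketch. The segment name and subnet are invented for illustration; the hierarchical shape (an `Infra` root with `ChildSegment` children) and the `/policy/api/v1/infra` endpoint follow VMware's Policy API documentation, but treat the exact fields as an assumption rather than a copy-paste recipe:

```python
import json

# Desired-state document: "I want a segment called web-tier with this
# gateway" - NSX-T works out the steps to get there. Names are made up.
desired_state = {
    "resource_type": "Infra",
    "children": [
        {
            "resource_type": "ChildSegment",
            "Segment": {
                "resource_type": "Segment",
                "id": "web-tier",
                "subnets": [{"gateway_address": "10.10.10.1/24"}],
            },
        }
    ],
}

payload = json.dumps(desired_state, indent=2)
print(payload)

# In practice this is sent in a single call, along the lines of:
#   requests.patch("https://<nsx-manager>/policy/api/v1/infra",
#                  json=desired_state, auth=(user, password))
```

The nice part is that the same document is idempotent: feeding it to NSX-T again simply reconciles the environment toward the declared topology instead of re-running an ordered list of steps.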
We ended the day with a nice drink with fellow colleagues in the field at a rooftop bar! Tomorrow will already be the last day. Come back tomorrow for the last blog post!