Today is the last day of VMworld 2019 Europe. Yesterday was all about the keynote, with Bear Grylls giving us a very inspirational talk on how to face fears and difficult situations in life. Next, we went to some technical sessions, which I blogged about yesterday. Today is about catching the last sessions we want to see and packing our stuff to go back home!
“If this then that” for vSphere – The Power of Event-Driven Automation
William Lam gave us this session on how to think, or rethink, about automation, specifically event-driven automation. Event-driven automation is different from normal, command-based automation. Command-based automation is where you tell a system what to do, for example through CREATE, UPDATE, or DELETE actions. With event-driven automation you tell the system to execute commands based upon triggered events, such as CREATED, UPDATED, or DELETED. This means that the trigger is a fact: it already happened that, for example, the virtual machine got deleted. So you can trust the information you use as a trigger for your automation, which is not true for command-based automation.
Below are a couple of examples and use cases for which you can use event driven automation:
- Attach a Tag when the VM is powered on.
- Apply VM hardening settings during VM provisioning.
- Allocate an IP address from IPAM during VM requests.
- Update the CMDB when a VM or host is provisioned or updated.
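The pattern behind all these use cases is the same: subscribe a handler to an event type and let it run whenever the event fires. A minimal sketch in Python (the event type names mirror real vSphere event IDs, but the dispatcher, handlers and field names are purely illustrative, not VEBA's actual code):

```python
# Hypothetical event-driven dispatcher: handlers are keyed by the
# vCenter event type they subscribe to.

def attach_tag(event):
    # In a real handler you would call the vSphere tagging API here.
    return f"tagged VM {event['vm']}"

def update_cmdb(event):
    # In a real handler you would push a record to the CMDB's API here.
    return f"CMDB updated for {event['vm']}"

# Subscription table: event type -> list of handlers to run.
HANDLERS = {
    "VmPoweredOnEvent": [attach_tag],
    "VmCreatedEvent": [update_cmdb],
    "VmRemovedEvent": [update_cmdb],
}

def dispatch(event):
    """Run every handler subscribed to this event's type."""
    return [handler(event) for handler in HANDLERS.get(event["type"], [])]
```

For example, `dispatch({"type": "VmPoweredOnEvent", "vm": "web01"})` runs the tagging handler, while an event type nobody subscribed to simply does nothing.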
William also showed us a new appliance that he has been working on with his colleague for the last couple of months, called the vCenter Event Broker Appliance (VEBA). VEBA is an open source appliance. With this tool you can “teach” vCenter new tricks and use event-driven automation without changing any code on the vCenter Server Appliance.
VEBA subscribes to the vCenter events. The appliance itself is built on Kubernetes (fitting, since VMware is going all out on Kubernetes). It also uses Contour, which provides high-performance ingress connectivity into the Kubernetes cluster, and above that a framework called OpenFaaS. On top of that sits the vCenter Connector, which receives the vCenter events and passes them to the OpenFaaS engine.
William showed a demo of the things I mentioned above. He also showed us that he could send a Slack message when a VM is powered on. The possibilities are endless: you can send yourself a text message when an event occurs, or even use a pager!
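An OpenFaaS function like the one in the Slack demo boils down to a small handler that turns the incoming event payload into a message. A hedged sketch, assuming a simplified JSON payload (the field names here are my own illustration, not VEBA's exact event schema):

```python
import json

def handle(req: str) -> str:
    """OpenFaaS-style handler: build a Slack message body from a
    vCenter power-on event. In a real function you would POST the
    result to a Slack incoming-webhook URL; here we just return it."""
    event = json.loads(req)
    text = (f"VM '{event['vm']}' was powered on "
            f"by {event['user']} on host {event['host']}")
    return json.dumps({"text": text})
```

Feeding it a payload like `{"vm": "web01", "user": "administrator", "host": "esx01"}` yields a ready-to-send Slack message body.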
You can also look at event-driven automation from another perspective. Looking at it from the customer side, a customer could push code to GitHub which in turn deploys a virtual machine on AWS. This uses a new piece of software called GitHub Actions, and executes code all based upon the event that something got pushed to GitHub.
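A workflow like that could look roughly as follows. This is a hypothetical sketch, not the code from the demo: the workflow name and the deployment script path are placeholders.

```yaml
# Hypothetical GitHub Actions workflow: every push triggers a job
# that runs a deployment script to create a VM on AWS.
name: deploy-vm-on-push
on: [push]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Deploy VM to AWS
        run: ./scripts/deploy-vm.sh   # placeholder deployment script
```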
William and Michael released this fling live during the session! Download it and try it out now: https://flings.vmware.com/vcenter-event-broker-appliance
NSX-T for small and medium businesses
NSX-T is the next generation after NSX-V and is decoupled from the vSphere environment. It can run on any physical fabric and on any topology; it only needs IP connectivity from one tunnel endpoint to another. Since NSX-V is going end of life within, give or take, a year, everybody is going to have to implement NSX-T at some point. So I figured: let’s go to this session.
The session reiterated the services NSX-T can deliver, such as distributed routing, firewalling, load balancing, VPN, connectivity to the physical hardware layer, and more. Small (fewer than 10 hosts) and medium businesses (fewer than 100 hosts) have more challenges in scaling NSX-T environments inside the datacenter. For small and medium businesses there are a couple of priorities when deciding on the NSX-T design. Below is a quick summary:
- Physical footprint:
  - Floor space comes at a premium price for smaller businesses.
  - Dedicated appliances take more power, generate more heat, and use more space.
  - The hosting location is often remote from support personnel.
  - A single pane of glass is important to have.
- Maximizing the functionality of the boxes:
  - Licensing is based on sockets, so a higher density generates less spend.
  - Overhead can’t be hidden by scale.
  - Since a license includes all functionality, you need to maximize its usage to get a better ROI.
Next up, the NSX-T components and a couple of quick designs were discussed. In a large enterprise NSX-T datacenter design (more than 100 hosts) you can probably create the recommended separate Edge, Management and Compute clusters. But since we don’t have that scale, the design needs tweaking to support smaller environments.
With a medium-sized datacenter environment, you could converge the Edge and Management clusters into one, which saves you a cluster. In this converged cluster you should not use the N-VDS for the vSphere and NSX services; you will use it for the compute clusters though.
If we go even smaller, and we don’t have the resources to build two separate clusters, you can even converge all three clusters into one: Management, Edge and Compute will all be in the same cluster. This is only possible on ESXi. It also means that the Edge will always have to be in VM form factor, and each host will have the N-VDS configured. Take into consideration that on a host with 2 pNICs, all traffic will go through those two pNICs, including the VMkernel traffic. With 4 pNICs you can still separate management and workload traffic at the physical NIC level.
Another thing is NSX-T Manager sizing. You should never deploy a small-sized NSX-T Manager for production; that sizing is only for PoC purposes. The same goes for the “small” sizing of the Edge VM. Latency between the three NSX-T Managers (when clustered) should be less than 10 ms. Regarding the Edge nodes, make sure the Edge node has the best network connectivity in the environment. This is the point most traffic will go through, so you want that!
So when designing the NSX-T environment, you should really consider the design constraints to make sure your design fits. How many VMs are you going to place in the datacenter? How many VMs per host? How are you licensed? And take into consideration the failures you should be able to tolerate: how many hosts or racks can you afford to lose?
Don’t over-engineer the NSX-T design though; NSX-T is extremely scalable, which means you can scale out the compute layer, scale out the Edges, and more.
HCX for Cloud Providers Architecture Deep Dive
Like I said earlier this week: one of the most important things customers worry about nowadays is that they don’t know how to migrate their IT estate to another cloud environment. You have to think about compatibility across clouds, networking across clouds, application dependencies, or even the simple fact that migrating to another cloud means downtime for the business.
HCX is probably the tool you want to take a look at. It enables customers to manage, migrate and control application workloads across clouds. HCX can do this with very simple configuration and with zero downtime, which is obviously its best feature. HCX can be connected to traditional vCenter Server and vCloud Director endpoints; you can even connect legacy vCenter Servers as far back as version 5.0.
With HCX you have the choice of two editions: Advanced and Enterprise. With HCX Advanced you can move any workload to any endpoint as long as it is a VMware environment. You can also use bulk migration, retain IP addresses, and extend workloads.
With HCX Enterprise you can bulk-migrate virtual machines to another environment with zero downtime, using a variant of vMotion called replication-assisted vMotion. You can also couple your SRM environment with HCX Enterprise to further strengthen the disaster recovery procedure. Lastly, with HCX Enterprise you can use bulk migration for non-VMware workloads! That’s really cool!
When you want to install HCX to move a certain workload, you need a couple of components:
- The HCX Manager, which connects to the vCenter or vCD instance.
- The HCX Interconnect, over which the network flows run and which instantiates the L2 tunnel.
- The WAN Optimization appliance, which ensures a stable, steady flow optimized for WAN usage.
- The Network Extension appliance, which glues together the source and target VDS, N1K or NSX environments so that a connection between the two is possible.
In the end this service mesh of appliances extends the network from the source to the destination, even the ESXi management and vMotion networks, so that live migrations through vMotion are possible. The nice thing about HCX is that in total it only needs 3 ports open to the outside world! Another great thing is that it can tolerate 500 ms of latency between the source and destination sites. Funny enough, that’s more than Long Distance vMotion can tolerate, which maxes out at 150 ms.
Zero-downtime migrations with HCX can currently only be done sequentially, because only one vMotion can occur at any given time. But in the near future a new release will use replication-assisted vMotion so that you can do bulk migrations in parallel with zero downtime.
Testing HCX has been on my list for ages now. I hope I get to do this soon!
This has been a great VMworld with lots and lots of new ideas to test and build. I hope you enjoyed the blog posts the last couple of days and I can’t wait for VMworld next year!