Introduction

In the previous post here we discussed the installation of the first NSX-T Manager and its prerequisites. In this post we will deploy the second and third NSX-T Manager appliances and connect them together in a cluster. We will also configure the Virtual IP (VIP) so that the NSX-T Manager cluster provides High Availability and fault tolerance.

Install the second and third NSX-T Managers

As mentioned before, the next step is deploying the second and third NSX-T Managers. There are two scenarios at this point, depending on what your environment looks like.

  1. You can deploy the second and third NSX-T Manager in the cluster through the NSX-T Manager UI. This is only possible if the vCenter Server in which you deployed the first node is configured as a Compute Manager in NSX-T Manager, as shown in the image below:
NSX-T Manager Compute Manager requirement to deploy NSX-T Managers from the UI
  2. If you don’t have this, or don’t want it, for example in a Management Cluster/Environment that does not have or need NSX-T functionality, you will have to deploy the nodes from the .ova again and join them to the NSX-T Manager cluster manually.
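For the second scenario, the .ova can also be deployed unattended with VMware's ovftool instead of the vSphere UI. The sketch below assembles such a command; the OVF property names (nsx_ip_0, nsx_hostname, and so on) and the vi:// target path are assumptions based on the NSX-T unified appliance OVA and should be verified against your OVA version (for example with `ovftool --hideEula nsx-unified-appliance.ova`) before use.

```python
# Hedged sketch: build an ovftool command line for an additional NSX-T Manager
# node. All property names, credentials and the vi:// locator are placeholder
# assumptions for illustration, not verified against a specific OVA version.

def build_ovftool_command(hostname, ip, netmask, gateway, datastore, network, passwords):
    """Assemble the ovftool argument list for an NSX-T Manager node."""
    return [
        "ovftool",
        "--acceptAllEulas",
        "--allowExtraConfig",
        "--datastore=" + datastore,
        "--network=" + network,
        "--deploymentOption=medium",       # same Size as the first node
        "--prop:nsx_role=NSX Manager",     # same Role as the first node
        "--prop:nsx_hostname=" + hostname,
        "--prop:nsx_ip_0=" + ip,
        "--prop:nsx_netmask_0=" + netmask,
        "--prop:nsx_gateway_0=" + gateway,
        "--prop:nsx_passwd_0=" + passwords["root"],
        "--prop:nsx_cli_passwd_0=" + passwords["admin"],
        "nsx-unified-appliance.ova",
        "vi://administrator@vsphere.local@vcenter.lab.local/Datacenter/host/Cluster",
    ]

cmd = build_ovftool_command(
    "nsxt-mgr02", "192.168.178.52", "255.255.255.0", "192.168.178.1",
    "Datastore01", "Management",
    {"root": "VMware1!VMware1!", "admin": "VMware1!VMware1!"},
)
print(" ".join(cmd))
```

Building the command programmatically like this makes it easy to stamp out the second and third node with only the hostname and IP changed.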

Deploy the NSX-T Manager through the NSX-T Manager UI

To deploy the last two NSX-T Manager nodes through the NSX-T Manager UI, follow these steps:

  1. Log in through the FQDN defined in the .ova deployment.
  2. Go to System -> Appliances -> Add NSX Appliance
NSX-T Manager add node through UI
NSX-T Manager add node through UI step 2
  3. In the next step you can pick the Cluster, Resource Pool, Datastore and Management Network you want to use for the deployment.
NSX-T Manager add node through UI step 3
  4. In the last screen you can enter the desired SSH configuration and the System Root, Admin CLI and Audit CLI usernames and passwords. Press “Install Appliance” to deploy the appliance.
NSX-T Manager add node through UI step 4
  5. Once this is done, repeat the procedure to deploy the third NSX-T Manager appliance.
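The same "Add NSX Appliance" flow is also exposed through the NSX-T REST API via POST /api/v1/cluster/nodes/deployments. The sketch below only constructs the request body; the field names follow the ClusterNodeVMDeploymentRequest schema as I understand it, and the vCenter object IDs (vc_id, compute_id, storage_id, network ID) are placeholders you would look up in your own environment.

```python
import json

# Hedged sketch of a ClusterNodeVMDeploymentRequest body for the NSX-T API
# endpoint POST /api/v1/cluster/nodes/deployments. IDs and passwords are
# illustrative placeholders, not values from a real environment.
payload = {
    "deployment_requests": [{
        "roles": ["CONTROLLER", "MANAGER"],
        "form_factor": "MEDIUM",                 # match the first node's size
        "user_settings": {
            "cli_password": "VMware1!VMware1!",
            "root_password": "VMware1!VMware1!",
        },
        "deployment_config": {
            "placement_type": "VsphereClusterNodeVMDeploymentConfig",
            "vc_id": "<compute-manager-id>",     # the registered Compute Manager
            "compute_id": "domain-c8",           # vSphere cluster MoRef
            "storage_id": "datastore-12",        # datastore MoRef
            "management_network_id": "network-16",
            "hostname": "nsxt-mgr02.lab.local",
            "management_port_subnets": [{
                "ip_addresses": ["192.168.178.52"],
                "prefix_length": 24,
            }],
            "default_gateway_addresses": ["192.168.178.1"],
        },
    }],
}

# In a live environment you would POST this JSON (with authentication) to
#   https://<first-manager-fqdn>/api/v1/cluster/nodes/deployments
body = json.dumps(payload, indent=2)
print(body)
```

This is the API equivalent of steps 3 and 4 above, which is useful when automating the cluster build.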

Once this is done you should have three NSX-T Manager appliances, as shown below:

3 Node NSX-T Manager cluster is ready

Deploy the NSX-T Manager through the .ova and CLI

To deploy the last two NSX-T Manager nodes through the CLI, follow these steps:

  1. Follow the same deployment as in my previous blogpost for the first NSX-T Manager. Make sure you enter the required information correctly and pick the same Size and Role.
  2. Log in through the FQDN defined in the .ova deployment, go to System -> Appliances and copy the “Cluster ID”.
NSX-T Manager find Cluster ID
  3. Click on “View Details” for the first node and also copy the “Cert Thumbprint”.
NSX-T Manager find Cluster certificate thumbprint
  4. With this information you can construct the following command:
join <NSX-T Manager IP> cluster-id <cluster_id> username admin password <password> thumbprint <cert_thumbprint>
Data on this node will be lost. Are you sure? (yes/no): yes
Join operation successful. Services are being restarted. Cluster may take some time to stabilize.
NSX-T Manager CLI join cluster command.
  5. Once you’ve constructed the command, run it on the two other NSX-T nodes you have deployed: log in through SSH and enter it. This joins each NSX-T Manager node to the cluster and syncs their services. This will take a while, but once it is done you can use the following commands to check the cluster directly from the CLI:
nsxt-mgr02> get cluster config
Thu Aug 13 2022 UTC 13:24:56.691
Cluster Id: f9b80e89-a3ac-4e6f-9430-6b304cba61ef
Cluster Configuration Version: 1
Number of nodes in the cluster: 2

nsxt-mgr02> get cluster status
Thu Aug 13 2022 UTC 13:24:56.691
Cluster Id: f9b80e89-a3ac-4e6f-9430-6b304cba61ef
Overall Status: STABLE

Group Type: DATASTORE
Group Status: STABLE

Members:
    UUID                                       FQDN                                       IP               STATUS
    773b1942-7eee-b646-003b-6750183d9b2c       nsxt-mgr01                                 192.168.178.51   UP
    8b061942-2840-ec1d-6acc-6f781b586d66       nsxt-mgr02                                 192.168.178.52   UP

In the first output you can see that we now have two NSX-T Managers connected in the cluster. In the second command’s output the status should display UP for all the Group Types and each member. Once this is done you should have a fully working two-node NSX-T Manager cluster. Repeat the steps for the third node. In the next section we will discuss how to add High Availability and fault tolerance through the use of the Virtual IP (VIP).
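The same health information shown by `get cluster status` is also available over the REST API at GET /api/v1/cluster/status, which makes it easy to script a health check. The sketch below works on a simplified response shape modeled on the CLI output above; it is an assumption for illustration, not a verbatim API payload.

```python
# Minimal sketch: programmatically checking NSX-T Manager cluster health.
# The dict below is a simplified stand-in for the GET /api/v1/cluster/status
# response, modeled on the `get cluster status` CLI output in this post.

sample_status = {
    "overall_status": "STABLE",
    "groups": [
        {"group_type": "DATASTORE", "group_status": "STABLE",
         "members": [
             {"fqdn": "nsxt-mgr01", "ip": "192.168.178.51", "status": "UP"},
             {"fqdn": "nsxt-mgr02", "ip": "192.168.178.52", "status": "UP"},
         ]},
    ],
}

def unhealthy(status):
    """Return a list of human-readable problems; empty when all is well."""
    problems = []
    if status["overall_status"] != "STABLE":
        problems.append("overall status is " + status["overall_status"])
    for group in status.get("groups", []):
        if group["group_status"] != "STABLE":
            problems.append(group["group_type"] + " group is " + group["group_status"])
        for member in group["members"]:
            if member["status"] != "UP":
                problems.append(member["fqdn"] + " is " + member["status"])
    return problems

print(unhealthy(sample_status))  # → [] for the healthy 2-node cluster above
```

A check like this is handy in a post-deployment validation script: an empty list means the cluster matches the stable state shown in the screenshots.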

Virtual IP (VIP) configuration

The reason we are going to configure a Virtual IP (VIP) is that it provides fault tolerance and High Availability to the NSX-T Manager cluster. How this works is that one ‘leader’ node in the cluster ‘owns’ the VIP and services the incoming HTTP and API requests. If the leader becomes unavailable, NSX-T elects a new one. Be aware that a failover does take a little time, during which NSX-T Manager access may be briefly unavailable. Data plane traffic for the NSX-T Fabric is not impacted by this. Please also note that the VIP does not load balance traffic across the cluster nodes; that can be achieved by placing an external load balancer in front of the environment.

To configure the Virtual IP (VIP), follow these steps:

  1. Log in through the FQDN defined in the .ova deployment.
  2. Verify that the system is Stable and Healthy.
Healthy NSX-T Manager cluster
  3. Go to System -> Appliances -> select “Set Virtual IP” in the Virtual IP field. Don’t forget to follow the warning: “When assigning Virtual IP, all the NSX Manager VMs in the cluster must be configured in the same subnet.”
NSX-T Manager cluster set Virtual IP (VIP)
  4. Enter the VIP and press Save.
NSX-T Manager Cluster VIP configuration
NSX-T Manager Cluster VIP configuration finished
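The “same subnet” warning above can be checked up front: since the VIP simply moves between the nodes’ management interfaces, it must sit in the same network as every manager node. A small sketch using Python’s standard `ipaddress` module (addresses and prefix length are examples matching this post’s lab):

```python
import ipaddress

# Sketch of the VIP "same subnet" rule: the VIP and all NSX-T Manager node
# addresses must fall inside one management subnet, because the VIP fails
# over between the nodes' management interfaces.

def vip_in_same_subnet(vip, node_ips, prefix_length):
    """True when the VIP and every node IP fall inside the same network."""
    network = ipaddress.ip_network(f"{node_ips[0]}/{prefix_length}", strict=False)
    return all(ipaddress.ip_address(ip) in network for ip in [vip] + node_ips)

nodes = ["192.168.178.51", "192.168.178.52", "192.168.178.53"]
print(vip_in_same_subnet("192.168.178.50", nodes, 24))  # True
print(vip_in_same_subnet("192.168.179.50", nodes, 24))  # False: wrong subnet
```

Running a check like this before pressing Save avoids a VIP that can never be reached after a failover.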

Now that all of these steps are finished we have a fully working NSX-T Manager cluster environment, complete with High Availability and fault tolerance through the use of a Virtual IP (VIP). You are ready to continue to the next step, which will be configuring backups and some basic settings on the NSX-T Manager. More on this in the next installment in this series.

Categories: NSX

Bryan van Eeden

Bryan is an ambitious and seasoned IT professional with almost a decade of experience in designing, building and operating complex (virtual) IT environments. In his current role he tackles complex customer issues and design questions on a daily basis. Bryan holds several certifications such as VCIX-DCV, VCAP-DCA, VCAP-DCD, V(T)SP and vSAN and vCloud Specialist badges.
