Introduction
In the previous post we discussed the installation of the first NSX-T Manager and its prerequisites. In this post we will deploy the second and third NSX-T Manager appliances and connect them together in a cluster. We will also configure a Virtual IP (VIP) so that the NSX-T Manager nodes provide High Availability and fault tolerance.
Install the second and third NSX-T Managers
As mentioned before, the next step is deploying the second and third NSX-T Managers. There are two scenarios at this point, depending on what your environment looks like.
- You can deploy the second and third NSX-T Manager in the cluster through the NSX-T Manager UI. This is only possible if the vCenter Server in which you deployed the first node is configured as a Compute Manager in NSX-T Manager (a quick API check for this follows this list).
- If you don't have this, or don't want it (for example in a Management Cluster/Environment that does not have or need NSX-T functionality), you will have to deploy the nodes from the .ova again and join them to the NSX-T Manager cluster manually.
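If you want to check the Compute Manager registration without clicking through the UI, the REST API exposes it directly. Below is a minimal sketch with curl, assuming the default admin account and nsxt-mgr01 as the FQDN of the first manager (adjust both to your environment); an empty results list means you are in the second scenario:

# List the Compute Managers registered in NSX-T; curl prompts for the admin
# password, and -k skips validation of the self-signed manager certificate
curl -k -u admin https://nsxt-mgr01/api/v1/fabric/compute-managers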
Deploy the NSX-T Manager through the NSX-T Manager UI
To deploy the last two NSX-T Manager nodes through the NSX-T Manager UI, follow these steps:
- To do this, log in through the FQDN defined in the .ova deployment.
- Go to System -> Appliances -> Add NSX Appliance.
- In the next step you can pick the Cluster, Resource Pool, Datastore and Management Network you want to use for the deployment.
- In the last screen you can enter the desired SSH configuration and the System Root, Admin CLI and Audit CLI usernames and passwords. Press "Install Appliance" to have the appliance deployed.
- Once this is done, repeat the same steps to deploy the third NSX-T Manager appliance.
Once this is done you should have three NSX-T Manager appliances in the cluster.
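Apart from the UI, you can confirm that all three nodes joined correctly through the REST API. A minimal sketch, again assuming the admin account and nsxt-mgr01 as the FQDN; any of the manager nodes can answer this call:

# Overall and per-group cluster status; the overall status should report
# STABLE once all three nodes are up and synchronized
curl -k -u admin https://nsxt-mgr01/api/v1/cluster/status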
Deploy the NSX-T Manager through the .ova and CLI
To deploy the last two NSX-T Manager nodes through the CLI, follow these steps:
- Follow the same deployment as in my previous blogpost for the first NSX-T Manager. Make sure you enter the required information correctly and pick the same Size and Role as the first node.
- Log in through the FQDN defined in the .ova deployment, go to System -> Appliances, and copy the "Cluster ID".
- Click "View Details" for the first node and also copy the "Cert Thumbprint". (Both values can also be read from the CLI; see the sketch at the end of this section.)
- With these values you can construct the following command:
join <NSX-T Manager IP> cluster-id <cluster_id> username admin password <password> thumbprint <cert_thumbprint>
Data on this node will be lost. Are you sure? (yes/no): yes
Join operation successful. Services are being restarted. Cluster may take some time to stabilize.
- Once you've constructed the command, run it on the other NSX-T nodes you have deployed in the environment: log in through SSH and enter it. This joins the NSX-T Manager node to the cluster and syncs their services. This takes a while, but once it is done you can use the following commands to check the cluster directly from the CLI:
nsxt-mgr02> get cluster config
Thu Aug 13 2022 UTC 13:24:56.691
Cluster Id: f9b80e89-a3ac-4e6f-9430-6b304cba61ef
Cluster Configuration Version: 1
Number of nodes in the cluster: 2

nsxt-mgr02> get cluster status
Thu Aug 13 2022 UTC 13:24:56.691
Cluster Id: f9b80e89-a3ac-4e6f-9430-6b304cba61ef
Overall Status: STABLE

Group Type: DATASTORE
Group Status: STABLE
Members:
    UUID                                   FQDN         IP               STATUS
    773b1942-7eee-b646-003b-6750183d9b2c   nsxt-mgr01   192.168.178.51   UP
    8b061942-2840-ec1d-6acc-6f781b586d66   nsxt-mgr02   192.168.178.52   UP
In the output of the first command you can see that we now have two NSX-T Managers connected in the cluster. In the second command the status should display UP for all Group Types and each member. Once this is done you have a fully working two-node NSX-T Manager cluster; repeat the steps for the third node. In the next topic we will discuss how to provide High Availability and Fault Tolerance through the use of a Virtual IP (VIP).
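As a side note, both values needed for the join command can also be read from the CLI of the first node instead of copying them from the UI, which is convenient when scripting this. A short sketch, using the node name from the example above:

nsxt-mgr01> get cluster config
nsxt-mgr01> get certificate api thumbprint

The "Cluster Id" line from the first command is the <cluster_id>, and the value returned by the second command is the <cert_thumbprint> for the join command.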
Virtual IP (VIP) configuration
The reason we are going to configure a Virtual IP (VIP) is that it provides fault tolerance and High Availability for the NSX-T Manager cluster. It works as follows: one 'leader' node in the cluster 'owns' the VIP and services the incoming HTTP and API requests. If the leader becomes unavailable, NSX-T elects a new one. Be aware that a failover does take a little time, during which NSX-T Manager access may be briefly unavailable. Data plane traffic in the NSX-T fabric is not impacted by this. Please also note that the VIP does not load balance traffic across the cluster nodes; that can however be achieved by placing an external load balancer in front of this environment.
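If you are curious which node is currently acting as leader for a given service group, the CLI can show this. On recent NSX-T versions the verbose cluster status output includes a leaders section per group; a sketch, run from any manager node:

nsxt-mgr02> get cluster status verbose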
To configure the Virtual IP (VIP), follow these steps:
- To do this, log in through the FQDN defined in the .ova deployment.
- Verify that the system is Stable and Healthy.
- Go to System -> Appliances -> select "Set Virtual IP" in the Virtual IP field. Keep in mind the warning: "When assigning Virtual IP, all the NSX Manager VMs in the cluster must be configured in the same subnet."
- Enter the VIP and press Save.
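Setting the VIP can also be scripted through the REST API, the counterpart of the UI steps above. A minimal sketch, assuming 192.168.178.50 as a placeholder VIP in the same subnet as the manager nodes from the earlier output, as the warning requires:

# Assign the cluster Virtual IP through the API
curl -k -u admin -X POST "https://nsxt-mgr01/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=192.168.178.50"

# Afterwards the UI and API should answer on the VIP as well
curl -k -u admin https://192.168.178.50/api/v1/cluster/status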
Now that all of these steps are finished, we have a fully working NSX-T Manager cluster, complete with High Availability and Fault Tolerance through the use of a Virtual IP (VIP). You are ready to continue to the next step, which is configuring backups and some basic settings on the NSX-T Manager. More on this in the next installment in this series.