Introduction

In this blogpost I want to explain how to use an open-source load balancer called HAProxy in conjunction with VMware Cloud Director (VCD) version 10.3.x and below. As many of you might know, VCD is mostly deployed in a clustered manner, which means that at least three cells have to be deployed to make this configuration work: one primary cell and two secondary cells. Three cells are mandatory because with two you can end up in a split-brain scenario; with three cells there is always a majority that determines what should happen. Your architecture can then look like the following figure, which I borrowed from VMware:

VMware Cloud Director clustered environment (3 cells)

You have the ability to add more cells, so-called application cells, to the cluster. These cells provide additional compute power for the front-end, but don’t run the PostgreSQL database themselves; they connect to the primary cell for the database contents. That would look something like this:

VMware Cloud Director clustered environment (5 cells)

In this blogpost I will not go any deeper into the VCD architecture and cell configuration; instead I want to explain what you can do in regard to load balancing these cells. All of the cells can be accessed through a Virtual IP (VIP) provided by a load balancer. Looking at the VCD network requirements, there are a couple of things we have to account for.

  • We need two ports (on the same or on different (V)IPs) to load balance the environment. The first is the HTTP(S) service endpoint on port TCP/443 and the second is the Console Proxy service endpoint on port TCP/8443.
  • The HTTP(S) service endpoint supports an SSL-terminating configuration. This means that the load balancer offloads/decrypts the SSL traffic itself instead of the back-end, in our case the VCD appliances. This saves processing time and load on the back-end environment.
  • The Console Proxy service endpoint does not support SSL termination (in VCD 10.4 it does; more on that at the end). All connections to the console service should go directly to the console proxy IP address.
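In HAProxy terms these requirements translate into an SSL-terminating HTTP (Layer 7) frontend for TCP/443 and a plain TCP (Layer 4) passthrough frontend for TCP/8443. As a bare skeleton (the full configuration follows below):

frontend https-frontend
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/ssl/ssl.pem
    mode http

frontend console-frontend
    bind XXX.XXX.XXX.XXX:8443
    mode tcp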

HAProxy configuration

Now, the reason I am writing this blog is that we didn’t want to use NSX-V (outdated) or NSX-T for the load balancing of this environment, simply because we don’t have NSX-T on this specific environment and did not want to deploy an entire NSX-T environment just to load balance a couple of cells. Most of the guides on the web, such as this one, talk about using NSX for this very reason. So without further delay, let’s talk HAProxy configurations.

Now, I won’t go into how to set up an entire HAProxy installation; if you want, I could do that in another blogpost. For now I want to discuss the HAProxy configuration that worked for us. Please note that every environment is different, so unfortunately I cannot guarantee success when using this configuration, but it is fairly default and usable for everybody. Accounting for the specific networking requirements mentioned earlier and in the official documentation here, we can use the following basic HAProxy configuration.

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log         XXX.XXX.XXX.XXX:514 local0
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    ca-base     /etc/ssl/certs
    maxconn     4000
    user        xxxxxxxxx
    group       xxxxxxxxx
    daemon
    tune.bufsize 65536
    tune.http.maxhdr 256
    ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
    ssl-default-server-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          2m
    timeout server          2m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend http-frontend
    bind *:80
    redirect scheme https if !{ ssl_fc }

frontend https-frontend
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/ssl/ssl.pem
    mode http
    use_backend production_vcloud if { ssl_fc_sni vcloud.fqdn.tld }

frontend console-frontend
    bind XXX.XXX.XXX.XXX:8443
    mode tcp
    option tcplog
    use_backend production_console_vcloud

#---------------------------------------------------------------------
# backends for the VCD cells
#---------------------------------------------------------------------
backend production_vcloud
    mode http
    balance source
    option forwardfor
    option httpchk GET /api/server_status
    http-check expect string Service\ is\ up.
    server web01 vcd-cell01:443 check ssl verify none
    server web02 vcd-cell02:443 check ssl verify none
    server web03 vcd-cell03:443 check ssl verify none

backend production_console_vcloud
    mode tcp
    balance source
    server web01 vcd-cell01:8443 check port 8443
    server web02 vcd-cell02:8443 check port 8443
    server web03 vcd-cell03:8443 check port 8443

I will explain each section below. An HAProxy configuration has three main parts, starting with the global and defaults sections in /etc/haproxy/haproxy.cfg. In these sections we configure the default timeouts, certificate location, log destination and default load-balancing mode (HTTP or TCP mode, a.k.a. L7 or L4 load balancing). The following two parameters are items we had to adjust in our environment because we send larger requests than the defaults allow:

tune.bufsize 65536
tune.http.maxhdr 256
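For reference, both parameters raise HAProxy defaults (tune.bufsize defaults to 16384 bytes and tune.http.maxhdr to 101 headers per request). If you see requests with large headers being rejected, these are the knobs to look at, and they belong in the global section:

global
    # buffer size per request/response; default 16384 bytes
    tune.bufsize 65536
    # maximum number of headers in a request; default 101
    tune.http.maxhdr 256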

In the following section you define your front-end HAProxy configuration:

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend http-frontend
    bind *:80
    redirect scheme https if !{ ssl_fc }

frontend https-frontend
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/ssl/ssl.pem
    mode http
    use_backend production_vcloud if { ssl_fc_sni vcloud.fqdn.tld }

frontend console-frontend
    bind XXX.XXX.XXX.XXX:8443
    mode tcp
    option tcplog
    use_backend production_console_vcloud

If we go through this step by step: the frontend “http-frontend” does nothing more than redirect all plain HTTP traffic to the next front-end, called “https-frontend”, to force HTTPS. The “https-frontend” does the following:

  • It binds the HAProxy IP address to port TCP/443 and uses the defined SSL certificate.
  • It load balances based on Layer 7 filtering and processing.
  • It tells HAProxy to forward the requests to the back-end “production_vcloud” on port TCP/443.

The frontend “console-frontend” does little more than the following:

  • It binds the HAProxy IP address to port TCP/8443.
  • It load balances based on Layer 4 mode.
  • It tells HAProxy to forward the requests to the back-end “production_console_vcloud” on port TCP/8443.
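As an aside, a simple frontend/backend pair like the console one can also be combined into a single “listen” section; a minimal equivalent sketch, using the same placeholder names:

listen console-vcloud
    bind XXX.XXX.XXX.XXX:8443
    mode tcp
    option tcplog
    balance source
    server web01 vcd-cell01:8443 check port 8443
    server web02 vcd-cell02:8443 check port 8443
    server web03 vcd-cell03:8443 check port 8443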

Now, the next section is the back-end configuration for HAProxy. This section tells HAProxy which servers it can load balance to.

#---------------------------------------------------------------------
# backends for the VCD cells
#---------------------------------------------------------------------
backend production_vcloud
    mode http
    balance source
    option forwardfor
    option httpchk GET /api/server_status
    http-check expect string Service\ is\ up.
    server web01 vcd-cell01:443 check ssl verify none
    server web02 vcd-cell02:443 check ssl verify none
    server web03 vcd-cell03:443 check ssl verify none

backend production_console_vcloud
    mode tcp
    balance source
    server web01 vcd-cell01:8443 check port 8443
    server web02 vcd-cell02:8443 check port 8443
    server web03 vcd-cell03:8443 check port 8443

Let’s have a look at the configuration and go through it. The backend called “production_vcloud” is used by the front-end “https-frontend”. This backend does the following:

  • It load balances based on Layer 7 filtering and processing.
  • It balances to each server based on the source IP address hashed and divided by the total weight of the running servers.
  • It enables insertion for the X-Forwarded-For header.
  • It does an HTTP check on the VCD appliance health status page on /api/server_status and expects the string “Service is up.”.
  • It checks whether the configured appliances are alive over an SSL health check (with certificate verification disabled).
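On newer HAProxy releases (2.2 and later) the same health check can also be written with the split http-check directives; a hedged equivalent of the check lines above:

backend production_vcloud
    mode http
    balance source
    option forwardfor
    option httpchk
    # the send/expect pair replaces the GET argument to httpchk
    http-check send meth GET uri /api/server_status
    http-check expect string Service\ is\ up.
    server web01 vcd-cell01:443 check ssl verify none
    server web02 vcd-cell02:443 check ssl verify none
    server web03 vcd-cell03:443 check ssl verify none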

The back-end called “production_console_vcloud” does this a bit differently, but it is not that different from its front-end counterpart called “console-frontend”.

  • It load balances based on Layer 4 mode.
  • It balances to each server based on the source IP address hashed and divided by the total weight of the running servers.
  • It checks if the configured appliances are alive based on a simple port 8443 check.
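One caveat with “balance source”: with the default map-based hashing, losing a server remaps most clients to different cells, which drops active console sessions. HAProxy’s consistent hashing limits that reshuffling; a sketch (hash-type is a standard HAProxy directive, the rest mirrors the configuration above):

backend production_console_vcloud
    mode tcp
    balance source
    # with consistent hashing only clients of a failed server are remapped
    hash-type consistent
    server web01 vcd-cell01:8443 check port 8443
    server web02 vcd-cell02:8443 check port 8443
    server web03 vcd-cell03:8443 check port 8443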

Not much else is needed to load balance your VCD environment. Obviously, if you have more than three appliances you will have to add them to the configuration. There are also many possible variations on this configuration. You can, for example, combine the front-end and back-end into one configuration section, called a “listen” section. I have also dressed the configuration down a bit, because we have several websites configured on the same port on the same load balancer. We use ACLs and the “ssl_fc_sni” HAProxy configuration parameter, which extracts the Server Name Indication (SNI) field from the TLS handshake and determines which front-end and back-end the packet was destined for. If anybody is interested in that part, let me know and I will include it.
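To give an impression of that SNI-based routing, here is a minimal sketch; the second hostname and backend (portal.fqdn.tld and production_portal) are made-up placeholders:

frontend https-frontend
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/ssl/ssl.pem
    mode http
    # route on the SNI hostname presented during the TLS handshake
    acl host_vcloud ssl_fc_sni -i vcloud.fqdn.tld
    acl host_portal ssl_fc_sni -i portal.fqdn.tld
    use_backend production_vcloud if host_vcloud
    use_backend production_portal if host_portal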

Now, again, like I said, there is much more to this than I can discuss in a single blogpost, and there are many more configuration examples, but this one definitely works on VCD 10.3.x and below. In VCD 10.4 there will be a big change, since VCD 10.4 changes the way console load balancing works. I will post more on this in a later blogpost.
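As a small preview of that change: from VCD 10.4 onwards the console proxy shares the IP address and port of the HTTP(S)/REST API endpoint, so the separate TCP/8443 frontend and backend can disappear entirely. A hedged sketch of what the frontend could reduce to:

frontend https-frontend
    bind XXX.XXX.XXX.XXX:443 ssl crt /etc/ssl/ssl.pem
    mode http
    # console traffic now arrives on TCP/443 as well; no 8443 frontend needed
    default_backend production_vcloud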


Bryan van Eeden

Bryan is an ambitious and seasoned IT professional with almost a decade of experience in designing, building and operating complex (virtual) IT environments. In his current role he tackles customer issues and complex design questions on a daily basis. Bryan holds several certifications such as VCIX-DCV, VCAP-DCA, VCAP-DCD, V(T)SP, and vSAN and vCloud Specialist badges.

5 Comments

Essam · March 20, 2023 at 9:20 am

Hi
If I use VMware Cloud Director 10.4 or VMware Cloud Director 10.4.1 what configuration change that I need in the HAProxy configuration.

    Bryan van Eeden · March 30, 2023 at 7:24 pm

    If using this example, none. This also works on VCD 10.4.1.1.

Vt · August 24, 2023 at 11:14 am

Thank you very much. Very useful post.

NPO · December 8, 2023 at 6:51 am

Starting with VMware Cloud Director 10.4, the console proxy uses the same IP address and port as the REST API. The console proxy and REST API use a single certificate. Because of the unified access point, customizing the VMware Cloud Director public console proxy address is no longer necessary.

Also, VCD 10.4.2 has two network interfaces, of which eth1 is used for the database.

    Bryan van Eeden · March 14, 2024 at 8:20 am

    Correct. That is why I mention in the beginning of the post this is for VCD 10.3 and before. Starting from 10.4 it uses a consolidated access point on the same port. The two network interfaces have always been there, the first being the regular network interface and the second being an interface for database replication.
