As any system administrator learns over the years, keeping a production system highly available is crucial. Managing and maintaining a load balancer yourself can be a difficult task. DigitalOcean offers a Load Balancer product for just $10/month that simplifies that work.
What are the features of DigitalOcean Load Balancers? Many options are available that affect how the load balancer works and performs.
- Redundant Load Balancers, configured with automatic failover
- Add resources by name or tag
- Supported protocols: HTTP(S), HTTP/2, TCP
- Let's Encrypt SSL certificates (if DigitalOcean is your DNS provider)
- PROXY protocol support
- Sticky Sessions via Cookies
- Configurable health checks for backend Droplets
- Algorithm: Round Robin or Least Connections
- SSL redirection to force all HTTP to HTTPS
- Backend Keepalive for performance
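The two balancing algorithms behave quite differently: Round Robin cycles through the backends in a fixed order, while Least Connections picks whichever backend currently has the fewest open connections. A minimal sketch of Round Robin, with hypothetical backend names, looks like this:

```shell
#!/bin/sh
# Minimal sketch of round-robin selection: requests cycle through
# three hypothetical backends in a fixed order.
i=0
while [ $i -lt 6 ]; do
  case $((i % 3)) in
    0) backend=droplet-1 ;;
    1) backend=droplet-2 ;;
    2) backend=droplet-3 ;;
  esac
  echo "request $i -> $backend"
  i=$((i + 1))
done
```

Request 3 lands on droplet-1 again, and so on, regardless of how busy each backend is; Least Connections would instead consult the live connection counts before choosing.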
Limits for DigitalOcean Load Balancers
There are a number of limitations to DigitalOcean Load Balancers that you should be aware of.
- Incoming connections support only TLS 1.2, but connections to Droplets support TLS 1.1 and TLS 1.2
- No support for IPv6
- SSL passthrough does not support headers such as X-Forwarded-For; these only work for HTTP, or for HTTPS with a certificate (SSL termination)
- Sticky sessions are not visible behind the Load Balancer; the cookies are set and removed at the edge and are not forwarded to the backends
- Keepalive connections have a timeout of 60 seconds
- Load balancers support 10,000 simultaneous connections distributed across all resources (i.e., 5,000 each to two Droplets)
- Health checks are sent as HTTP/1.0
- Floating IP addresses cannot be assigned to Load Balancers
- Port 50055 is reserved for the Load Balancer
- Let’s Encrypt is only supported when DigitalOcean is used as DNS
- Let’s Encrypt on Load Balancers does not support wildcard certificates
Creating a Load Balancer
After choosing to create a new load balancer, you need to select the region in which it will be created, co-located with the Droplets it will balance. Load balancers do not work across datacenter regions, so all Droplets must be in the same region.
Next, we need to define the resources to add to the load balancer. The best way to do this is through tags, as every newly tagged resource is automatically added to the load balancer. Since there is a limit of ten Droplets that can be added individually, a tag is an easy way around that limit, as it does not restrict the number of Droplets that can be added.
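If you manage Droplets with doctl, the tagging step can be scripted as well. The tag name `web` and the Droplet IDs below are placeholders for this example; check `doctl compute droplet tag --help` for the exact flags in your doctl version.

```shell
# Create a tag and apply it to existing Droplets (IDs are placeholders).
doctl compute tag create web
doctl compute droplet tag 12345678 --tag-name web
doctl compute droplet tag 87654321 --tag-name web

# Any Droplet tagged "web" later is picked up automatically by a
# load balancer that targets the tag, with no further configuration.
```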
After adding resources, create the rules needed to forward traffic. In this example, we use only a standard web server and non-SSL traffic, so all we need is a simple rule forwarding to port 80.
There are advanced settings you can configure, though they can also be changed later if needed. These cover the algorithm, sticky sessions, health checks, SSL redirection, PROXY protocol support, and whether Backend Keepalive is enabled.
Finally, select a name for the load balancer and click Create Load Balancer.
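The same load balancer can also be created from the command line with doctl. The name, region, and tag below are assumptions for this example; adjust them to your own setup.

```shell
# Create a load balancer in nyc3 that forwards HTTP traffic on port 80
# to all Droplets carrying the "web" tag (name/region/tag are examples).
doctl compute load-balancer create \
  --name web-lb \
  --region nyc3 \
  --tag-name web \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80
```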
Once the Load Balancer has been created, you can navigate to it to see the assigned resources and their status. If you have applied firewalls to your Droplets, make sure the correct incoming ports are open so the health checks can work.
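For example, if a Droplet runs ufw, the port the health check targets (port 80 in this example) must be open, or the backend will be marked down:

```shell
# Allow incoming HTTP so both real traffic and the load balancer's
# health checks can reach the web server (adjust the port to your rule).
sudo ufw allow 80/tcp
sudo ufw status
```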
To see this in action, first navigate to each individual Droplet's IP. In this case, we have simply installed Nginx and created an index.html file in /var/www/html with identifying text for each server.
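On an Ubuntu Droplet, that setup might look like the following; the identifying text can be anything that distinguishes the servers (here we assume the hostname):

```shell
# Install Nginx and drop an identifying page into the default web root.
sudo apt-get update
sudo apt-get install -y nginx
echo "This is $(hostname)" | sudo tee /var/www/html/index.html
```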
As you can see, each server displays the text we expect. Now let's test what happens when we go to the IP address of the load balancer itself. After several reloads, you can see that both pages come up at the same IP address as connections are routed between the associated Droplets.
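Instead of reloading in a browser, you can watch the alternation with curl; replace the placeholder address with your load balancer's IP:

```shell
# With Round Robin, successive requests should alternate between
# the identifying pages of the two backends.
LB_IP=203.0.113.10   # placeholder: your load balancer's IP
for i in 1 2 3 4; do
  curl -s "http://$LB_IP/"
done
```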
Backend Connection Health
Health checks run continuously on the configured schedule. As soon as a backend Droplet is determined to be down, the load balancer stops directing connections to it.
As you can see in the screenshot below, after lc-test-02 was shut down, the load balancer stopped directing connections there. When you refresh the page, you only get responses from test server 1.
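You can reproduce this without shutting the whole Droplet down by stopping the web server on one backend and watching the health checks take it out of rotation (the load balancer IP below is a placeholder):

```shell
# On lc-test-02: stop Nginx so its health checks start failing.
sudo systemctl stop nginx

# From anywhere: after the check interval elapses, only the healthy
# backend should answer requests to the load balancer.
LB_IP=203.0.113.10   # placeholder: your load balancer's IP
for i in 1 2 3 4; do
  curl -s "http://$LB_IP/"
done
```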
As you can see, DigitalOcean Load Balancers are an incredibly useful and inexpensive way to load balance connections across a number of Droplets. With the addition of HTTP/2 support, SSL passthrough and termination, and Let's Encrypt support, DigitalOcean Load Balancers make it easy to add high availability and load balancing to many applications.