Getting Started with HAProxy Server in the Cloud


HAProxy is one of the most popular and powerful open-source load balancers. With its robust capabilities, HAProxy makes an ideal API load balancer, letting you distribute traffic efficiently and keep services available, scalable, and secure. This guide walks you through everything you need to get started with HAProxy on AWS and Azure, covering key configuration tips, best practices, and security considerations.

What is HAProxy and Why Use it as an API Load Balancer?

HAProxy (High Availability Proxy) is a popular open-source tool designed for load balancing HTTP, HTTPS, and TCP requests. Renowned for its stability, speed, and scalability, HAProxy is widely used to distribute API requests across multiple servers efficiently. Its flexible configuration makes it an excellent choice for high-traffic environments where uptime and performance are critical, especially for API handling. Additionally, HAProxy is available on many Unix/Linux distributions, making it easy to access and deploy.

Key Benefits of Using HAProxy as an API Load Balancer:

  • High Availability: HAProxy offers active and passive health checks, ensuring only healthy servers receive requests.
  • Efficient Load Distribution: It effectively balances traffic to avoid overloading a single server, enabling optimal performance.
  • Customizable Configuration: HAProxy supports advanced configurations, allowing you to fine-tune for specific needs.
  • Built-in Security Features: HAProxy includes features like rate limiting, IP filtering, and SSL termination for enhanced security.
  • Open-Source and Cost-Effective: HAProxy is free and highly customizable, making it a cost-effective solution for companies of all sizes.

Setting Up HAProxy on AWS and Azure Marketplaces

To deploy HAProxy on AWS or Azure, launch a pre-configured HAProxy server instance from the marketplace of your chosen cloud, then configure it according to your application’s requirements. Both platforms offer our ready-to-use HAProxy images, which streamline deployment, saving time and reducing complexity.


Sample HAProxy Configuration for a Web App

In this section, we’ll set up HAProxy as a reverse proxy that provides secure HTTPS access to users while forwarding requests to backend servers over HTTP. This configuration is helpful when backend servers do not support HTTPS or if you want to centralize SSL/TLS termination at the proxy level.

The sample configuration below terminates TLS at the HAProxy frontend and forwards requests to the backend servers over plain HTTP, securing the client-facing leg of the connection.

# Global settings
global
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    tune.ssl.default-dh-param 2048

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

# Frontend configuration - listens for HTTPS requests
frontend https_frontend
    bind *:443 ssl crt /etc/haproxy/certs/mycert.pem
    mode http
    option http-server-close
    option forwardfor
    default_backend web_backends

# Backend configuration - communicates with backend servers over HTTP
backend web_backends
    mode http
    balance roundrobin
    option httpchk
    server web1 192.168.1.101:80 check
    server web2 192.168.1.102:80 check

Explanation of the Configuration

  • Frontend Section (https_frontend): This part of the configuration listens for incoming HTTPS traffic on port 443. The bind *:443 ssl crt /etc/haproxy/certs/mycert.pem directive binds the frontend to port 443 and enables SSL/TLS encryption with the specified certificate file (mycert.pem). Incoming HTTPS requests are decrypted at HAProxy, so traffic between clients and the proxy is encrypted in transit.
  • Backend Section (web_backends): This backend communicates with the web servers over plain HTTP. Each server line specifies a backend server (here, web1 and web2) listening on port 80. With HAProxy as the intermediary, users connect securely over HTTPS even though the backend servers speak only HTTP.

How This Configuration Secures Unsecured Backend Servers

By using HTTPS in the frontend and HTTP in the backend, we create a secure connection between users and HAProxy while allowing the backend servers to remain on HTTP. This setup is beneficial when backend servers don’t natively support HTTPS, or when SSL/TLS termination is easier to manage centrally at the proxy. Keep in mind that traffic between HAProxy and the backends remains unencrypted, so the backend servers should sit on a trusted private network (for example, the same VPC or subnet as the proxy). With that in place, user data is protected from eavesdropping on the public leg of the connection, even though the backends operate over HTTP.

Advanced Use Cases of HAProxy

To fully leverage HAProxy’s capabilities, here are some advanced tips and tricks that can help you optimize performance, improve security, and gain more control over traffic in your environment.

1. Backend Pools and Health Checks

Set up backend server pools in the configuration file to define which servers handle the requests. Ensure each backend server has health checks configured to check availability:

backend api_servers
    mode http
    balance roundrobin
    server api1 10.0.1.1:80 check
    server api2 10.0.1.2:80 check
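By default, option httpchk sends a basic HTTP request to each server; checks become more meaningful when they probe a dedicated endpoint and assert the response status. A variant of the pool above illustrates this — the /health path is an example, so substitute whatever endpoint your application actually exposes:

```
backend api_servers
    mode http
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    server api1 10.0.1.1:80 check
    server api2 10.0.1.2:80 check
```

With http-check expect status 200, a server that responds but returns an error status is also marked down, not just one that fails to respond.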

2. Session Persistence (Sticky Sessions)

If your API requires that users’ sessions remain on the same server, use session persistence. Sticky sessions, or session persistence, ensure that once a user connects to a specific server, all their subsequent requests are routed to that same server. This approach is particularly useful for applications that store session-specific data (like authentication tokens, shopping carts, or personalization) locally on each server rather than in a shared store.

backend api_servers
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server api1 10.0.1.1:80 check cookie api1
    server api2 10.0.1.2:80 check cookie api2

3. Rate Limiting and Timeout Settings

Rate limiting and timeout settings are essential for controlling traffic flow and managing resource usage in HAProxy. Rate limiting restricts the number of incoming requests over a specified period, preventing traffic spikes from overwhelming backend servers and mitigating risks from excessive or malicious requests. This can be particularly valuable for APIs and web applications that need to control per-user or per-IP request volumes, ensuring fair access for legitimate users.

frontend api_gateway
    mode http
    bind *:80
    stick-table type ip size 200k expire 5m store http_req_rate(10s)
    http-request track-sc0 src
    acl too_many_requests sc_http_req_rate(0) gt 100
    http-request deny if too_many_requests
    timeout client 30s
    timeout server 30s
    timeout connect 5s
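Conceptually, HAProxy’s request-rate tracking keeps a per-client counter over a sliding time window, and sc_http_req_rate reads that counter for the current client. A simplified Python model of this behaviour (a sketch for intuition, not HAProxy’s actual implementation) might look like this:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Per-client sliding-window counter, loosely modelling an
    HAProxy stick-table storing http_req_rate."""

    def __init__(self, limit=100, window=10.0):
        self.limit = limit      # max requests allowed per window
        self.window = window    # window length in seconds
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # analogous to http-request deny
        q.append(now)
        return True

# Small limit so the cutoff is visible: 3 requests per 10 seconds
limiter = RateLimiter(limit=3, window=10.0)
results = [limiter.allow("10.0.0.1", now=t) for t in range(5)]
print(results)  # [True, True, True, False, False]
```

Once a client’s window empties out, requests are allowed again — matching how a stick-table entry’s rate decays over time.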

4. Using Weighting for Service Upgrades

HAProxy’s weighting feature enables precise control over traffic distribution among backend servers, which is especially useful during rolling upgrades. When introducing new servers or deploying updated versions, you may want to send only a fraction of the traffic to these instances initially. By adjusting server weights, you can gradually increase the traffic directed to upgraded servers, testing stability and performance before fully transitioning. This approach helps minimize risks during deployment, ensuring that any issues on the new servers impact only a small portion of users.

backend api_servers
    balance roundrobin
    server api1 10.0.1.1:80 weight 80 check
    server api2 10.0.1.2:80 weight 20 check

In this example, api1 will receive 80% of the traffic, and api2 will receive 20%. Adjust the weights as needed to balance traffic according to your upgrade plan.
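The effect of weights on a rotation can be sketched in Python. This is a deliberately naive model (HAProxy actually uses a smooth weighted round-robin that interleaves servers more evenly), but the proportions come out the same:

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Expand (name, weight) pairs into a repeating schedule so each
    server appears in proportion to its weight."""
    pool = [name for name, weight in servers for _ in range(weight)]
    return cycle(pool)

# Weights from the example config, reduced from 80/20 to the ratio 4:1
rotation = weighted_round_robin([("api1", 4), ("api2", 1)])
schedule = [next(rotation) for _ in range(10)]
print(schedule.count("api1"), schedule.count("api2"))  # 8 2
```

Over any full cycle, api1 receives four requests for every one sent to api2 — the 80/20 split described above.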

Securing HAProxy Server

Securing HAProxy is essential to safeguard your API endpoints and prevent unauthorized access. Here are key security practices to implement:

1. Enable SSL/TLS

Configure HAProxy to handle SSL termination, which decrypts incoming HTTPS requests before passing them to the backend servers. Install certificates by pointing the bind line at a PEM file that contains the certificate, any intermediate chain, and the private key:

frontend https_front
    bind *:443 ssl crt /etc/ssl/certs/your_certificate.pem
    default_backend api_servers
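A common companion to SSL termination is redirecting plain-HTTP clients to HTTPS, so no traffic accidentally stays unencrypted (the frontend name here is an example):

```
frontend http_front
    bind *:80
    http-request redirect scheme https unless { ssl_fc }
```

The ssl_fc fetch is true only when the client connection itself is TLS, so this rule redirects every plain-HTTP request.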

2. IP Whitelisting

To limit access to only trusted IP addresses, set up access control lists (ACLs) to allow specific IP ranges:

frontend api_gateway
    acl trusted_network src 192.168.0.0/16
    http-request deny unless trusted_network

3. Enable Rate Limiting

Protect against DDoS and brute force attacks by setting rate limits:

frontend api_gateway
    stick-table type ip size 200k expire 5m store http_req_rate(10s)
    http-request track-sc0 src
    acl too_many_requests sc_http_req_rate(0) gt 100
    http-request deny if too_many_requests
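By default, http-request deny returns a 403 response. For rate limiting, it is often clearer to signal 429 Too Many Requests instead, which HAProxy supports via the deny_status keyword:

```
    http-request deny deny_status 429 if too_many_requests
```

A 429 tells well-behaved clients to back off and retry later, rather than suggesting they are forbidden outright.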

Conclusion

Setting up and configuring HAProxy on AWS and Azure Marketplaces provides a flexible, secure, and highly available load balancing solution for APIs. By following the tips above, you can leverage HAProxy’s advanced features to optimize API performance, handle traffic surges, manage rolling updates, and secure your endpoints. This guide should help you get started with HAProxy and understand some of the best practices for using it effectively.

Make sure to test configurations thoroughly and monitor HAProxy’s performance over time to ensure it continues to meet your application’s needs. With the right configuration, HAProxy can be a powerful ally in delivering reliable, scalable API services.
