Load balancer (Networking)


Load balancing across multiple application instances is a commonly used technique for optimizing resource utilization, maximizing throughput, reducing latency, and ensuring fault‑tolerant configurations.


Load balancing in Nginx is configured using the upstream directive.[1]

A network load balancer can serve different protocols, such as TCP, UDP, HTTP or HTTPS.
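
Nginx can also balance raw TCP and UDP traffic through its stream module. A minimal sketch for UDP load balancing, with placeholder DNS backend addresses:

stream {
    upstream dns_servers {
        # Placeholder addresses for two DNS backends
        server 192.0.2.10:53;
        server 192.0.2.11:53;
    }

    server {
        # Accept UDP on port 53 and forward to the upstream group
        listen 53 udp;
        proxy_pass dns_servers;
    }
}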


Configuration options

Nginx configuration example

upstream backend {
    # No load balancing method specified, so Round Robin is used.
    # Other methods: least_conn, ip_hash, least_time header, random two least_time=last_byte
    server backend1.example.com slow_start=30s;
    server backend2.example.com max_conns=3;
    server backend3.example.com weight=5;
    server backend4.example.com;
    # server backend5.example.com:443;  # for an HTTPS backend; additional configuration is required
    server 192.0.0.1 backup;
    # queue 100 timeout=70;             # optional when the max_conns directive is used
}
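
The upstream group above does nothing by itself; it is used once a proxy_pass directive points at it. A minimal sketch of a virtual server, placed in the same http context as the upstream block, that forwards HTTP traffic to the group (the listen port is an assumption):

server {
    listen 80;

    location / {
        # Forward incoming requests to the "backend" upstream group
        proxy_pass http://backend;
    }
}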


HTTPS termination

HTTPS termination is supported by at least Nginx, Amazon ELB[2] and OpenStack.[3]
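
With Nginx, HTTPS termination means the load balancer accepts TLS connections from clients and forwards plain HTTP to the upstream servers. A minimal sketch, assuming the backend upstream group from the example above and hypothetical certificate paths:

server {
    listen 443 ssl;
    server_name www.example.com;

    # Hypothetical certificate and key locations
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # TLS is terminated here; traffic to the backends is plain HTTP
        proxy_pass http://backend;
    }
}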

Activities

Related terms

See also

  • https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/#overview
  • https://aws.amazon.com/elasticloadbalancing/faqs/
  • https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html#deploy-a-load-balancer-with-backend-re-encryption