Some configuration options on your load balancer can cause problems or inefficient behavior in your deployment.
Load balancer closing silent connections
Many load balancers have a default configuration that closes TCP connections after a
few seconds of silence. This is appropriate for a load balancer handling connections
to a web server. However, streaming traffic is different: there can be long intervals
of silence on the connection.
Do not configure load balancers or firewalls to close TCP connections that are not
actively carrying traffic. The server maintains a connection for every live session so
that data can be pushed to the client at any time.
If a network device terminates a TCP connection autonomously, the server might
interpret this as a close initiated by the client and close the session. If this
happens, any reconnection attempts made by the client fail, because the session they
try to recover no longer exists.
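One common mitigation on the client side is TCP keepalive, which generates periodic probe traffic so that intermediate devices do not see the connection as silent. A minimal sketch follows; the interval values are illustrative assumptions, and the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific, so the sketch guards them with a feature check.

```python
import socket

def make_keepalive_socket() -> socket.socket:
    """Create a TCP socket with keepalive probes enabled, so that a
    silent connection still shows periodic traffic to load balancers
    and firewalls along the path."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-only tuning (illustrative values): start probing after 30 s
    # of idleness, probe every 10 s, give up after 3 unanswered probes.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)
    return sock
```

Keepalive probes supplement, but do not replace, raising the idle timeout on the load balancer itself: the probe interval must be shorter than the device's idle timeout to have any effect.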
Load balancer pooling connections
Many load balancers include a connection pooling feature, where connections between
the load balancer and the server are kept alive and reused for other clients. In
fact, multiple clients can be multiplexed through a single server-side connection.
Normally, a client is associated with a single TCP/HTTP connection for the lifetime
of that connection. If the server closes a client session, the corresponding
connection is also closed.
The server makes no distinction between a single client connection and a multiplexed
connection, so when one client sharing a multiplexed connection closes, the
connection between the load balancer and the server is closed, and subsequently all
of the client-side connections multiplexed through that server-side connection are
closed as well.
For this reason, do not configure load balancers to pool connections when working
with streaming connections.
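The cascading failure described above can be illustrated with a small sketch. The data model here is hypothetical, not the server's actual implementation; it only shows why one client's close propagates to every client sharing the pooled connection.

```python
class Multiplexer:
    """Hypothetical model of one server-side connection shared by
    several client-side connections through a pooling load balancer."""

    def __init__(self) -> None:
        self.clients: set[str] = set()
        self.open = True

    def add_client(self, client_id: str) -> None:
        self.clients.add(client_id)

    def close_client(self, client_id: str) -> list[str]:
        """One client closes. The server cannot tell this apart from the
        whole connection closing, so the shared connection is torn down
        and every multiplexed client is dropped with it."""
        self.open = False
        dropped = sorted(self.clients)
        self.clients.clear()
        return dropped

mux = Multiplexer()
for c in ("alice", "bob", "carol"):
    mux.add_client(c)

# "alice" disconnects; all three clients lose their connections.
print(mux.close_client("alice"))  # → ['alice', 'bob', 'carol']
```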
Reuse TCP connections
If your load balancer is configured to create a new TCP connection between the load
balancer and the server for each request from a specific client, this can be
expensive. Creating a new TCP connection per request increases the time each request
takes to be processed and increases the amount of traffic between the load balancer
and the server.
To avoid this, ensure that your load balancer is configured to reuse a TCP connection
for requests from the same client.
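The same principle is visible in plain HTTP/1.1 keep-alive: one TCP connection (and one handshake) serves many requests instead of each request paying for its own connection. The sketch below demonstrates this with only the standard library against a throwaway local server; the server and handler are illustrative, not part of any real deployment.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length lets the client reuse the connection.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch_n(n: int) -> list:
    """Issue n requests over a single reused TCP connection."""
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    bodies = []
    for _ in range(n):
        conn.request("GET", "/")
        bodies.append(conn.getresponse().read())
    conn.close()
    server.shutdown()
    return bodies

print(fetch_n(3))  # → [b'ok', b'ok', b'ok']
```

All three requests here travel over one connection; a load balancer that opened a fresh connection per request would instead pay three handshakes for the same work.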
Sticky-by-IP routing
We recommend that you use the sticky-by-IP routing strategy when your clients connect
using streaming protocols. This ensures that client connections are always routed to
the server where their sessions are located.
However, the drawback of this approach is that multiple users behind the same proxy
or access point can share one IP address, and all requests from clients with that IP
address are routed to the same server. Load balancing still occurs, but some servers
might be disproportionately loaded.
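Sticky-by-IP routing typically amounts to hashing the client address onto a backend list. A minimal sketch, with a hypothetical backend list, shows both the stickiness and the skew: one address always maps to one backend, so everyone behind a shared NAT lands on the same server.

```python
import hashlib

# Hypothetical backend pool for illustration.
BACKENDS = ["server-a", "server-b", "server-c"]

def route(client_ip: str) -> str:
    """Pick a backend by hashing the client IP. A stable hash (not
    Python's per-process randomized hash()) keeps routing consistent
    across load balancer processes and restarts."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(BACKENDS)
    return BACKENDS[index]

# The same address always routes to the same backend...
assert route("203.0.113.7") == route("203.0.113.7")
# ...so many clients sharing that address all load one backend.
```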
TCP retransmission timeout
If you use failover, the TCP retransmission timeout on your load balancer's host can
cause long waits for clients whose connections fail over from one server to another.
When a server becomes unavailable, the load balancer can hold open existing client
connections to this server. These connections can continue to receive and buffer data
from the client for the duration of the timeout before being closed. This data is
discarded when the connection closes.
You can avoid this problem by changing the TCP retransmission timeout on the host of
your load balancer, or by configuring the load balancer to shut down connections to
servers it knows are unavailable.
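The length of the wait comes from TCP's exponential backoff: the retransmission timeout (RTO) roughly doubles after each failed retry, up to a cap, until the stack gives up. The sketch below is a simplified model, not the exact kernel algorithm, and the default values are illustrative assumptions; it shows why a connection to a dead server can linger for many minutes.

```python
def total_retransmission_wait(initial_rto: float = 1.0,
                              max_rto: float = 120.0,
                              retries: int = 15) -> float:
    """Sum the waits for `retries` retransmissions under exponential
    backoff capped at `max_rto`. All parameters are illustrative
    assumptions, not any specific OS's defaults."""
    total, rto = 0.0, initial_rto
    for _ in range(retries):
        total += rto
        rto = min(rto * 2, max_rto)  # double the RTO, up to the cap
    return total

# With these illustrative values the connection lingers for roughly
# eighteen minutes before the stack declares it dead.
print(total_retransmission_wait())  # → 1087.0
```

Lowering the retry count or the maximum RTO on the load balancer's host shortens this window, at the cost of giving up sooner on genuinely slow paths.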