A maximum queue depth can be configured for client queues so that clients are closed
if their message backlog becomes too large.
The maximum queue depth must be chosen carefully as a large size might lead to excessive
memory usage and vulnerability to Denial of Service attacks, whilst a small size can
lead to slow clients being disconnected too frequently.
Client queues themselves consume little memory because messages are enqueued
by reference, following a zero-copy approach, but there are consequences to setting
the depth too small or too large.
If the queue depth is set too small, the server closes a client as soon as that
client's backlog fills its queue.
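The close-on-overflow behaviour can be sketched as follows. This is a hypothetical illustration, not the server's actual implementation; the class and method names (`ClientQueue`, `enqueue`, `close`) are invented for this example.

```python
from collections import deque


class ClientQueue:
    """Hypothetical sketch of a bounded per-client message queue.

    When a client's backlog exceeds max_depth, the client is treated
    as too slow and is disconnected rather than being allowed to
    consume unbounded server memory.
    """

    def __init__(self, max_depth):
        self.max_depth = max_depth
        self.messages = deque()
        self.closed = False

    def enqueue(self, message):
        """Queue a message for the client; close the client on overflow."""
        if self.closed:
            return False
        if len(self.messages) >= self.max_depth:
            # Backlog too large: disconnect the slow client.
            self.close()
            return False
        self.messages.append(message)
        return True

    def close(self):
        """Disconnect the client and release its queued messages."""
        self.closed = True
        self.messages.clear()
```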
When choosing a queue depth, take into account the average message size and publication
rate. Messages held in a client queue are not garbage collected and may be promoted
to older generations, which increases their impact on GC pressure. If messages can
build up in client queues, consider the maximum resulting delay in the context of your
application. For example:
Assume the average message size is 100 bytes and the application publishes an
average of 100 messages per second. If the client queue has a maximum depth of 1000
messages, messages are allowed to build up for a slow client for up to 10 seconds,
and during that time the server holds up to 100,000 bytes of queued messages for
that client.
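The sizing arithmetic above can be written out as a short calculation. The variable names are illustrative only; the figures are taken directly from the example.

```python
# Sizing calculation for the worked example above.
avg_message_bytes = 100   # average message size in bytes
publish_rate = 100        # messages published per second
max_depth = 1000          # configured maximum queue depth (messages)

# Worst-case delay tolerated for a slow client before it is disconnected:
# a full queue drains at the publication rate.
max_delay_seconds = max_depth / publish_rate       # 10.0 seconds

# Worst-case backlog held in memory for one slow client.
max_backlog_bytes = max_depth * avg_message_bytes  # 100,000 bytes
```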
Note: It is natural for queues to build up a little during spikes in
publication rate or momentary bandwidth limits, but the tolerance for such delays is
expressed by the client queue depth and must be considered in that context.