LavinMQ Prefetch

In LavinMQ, messages are pushed from the broker to the consumers. The LavinMQ default prefetch setting gives clients an unlimited buffer, meaning that LavinMQ, by default, sends as many messages as it can to any consumer that appears ready to accept them. It is, therefore, possible to have more than one message “in-flight” on a channel at any given moment.

Prefetching in LavinMQ limits the number of messages sent from the broker at one time, to keep the number of unacked (not handled) messages in flight as low as possible.

What is the LavinMQ prefetch?

There are two prefetch options available in LavinMQ: channel prefetch and consumer prefetch. Consumer prefetch defines the maximum number of unacknowledged deliveries permitted per consumer; it caps how many messages a consumer can have outstanding before the broker waits for acknowledgments.

What benefits does LavinMQ prefetch have?

An unlimited buffer of messages sent from the broker to the consumer could lead to a window of many unacknowledged messages. If, for some reason, the consumer suffers from a failure, the queue of unacknowledged messages sent to a consumer can grow rapidly.

Prefetching in LavinMQ allows you to limit the number of unacked messages.

When are LavinMQ prefetches useful?

LavinMQ prefetch should be used when you want to keep all consumers of a queue maximally busy. Prefetch only applies when you are consuming messages from a queue (not fetching them with basic.get) and where explicit acknowledgements are required.

Optimizing the prefetch count requires you to consider the number of consumers and messages the broker handles. A larger prefetch count generally improves the rate of message delivery while a smaller value maintains the evenness of message consumption.

Messages sent from the broker to the consumer are cached by the LavinMQ client library (in the consumer) until processed. All pre-fetched messages are invisible to other consumers and are listed as unacked messages in the LavinMQ management interface.

There are two prefetch options available: channel prefetch count and consumer prefetch count.

Channel Prefetch

The channel prefetch count defines the maximum number of unacknowledged deliveries that are permitted on a channel. Setting a limit on this buffer caps the number of received messages before the broker waits for an acknowledgment. Because a single channel may consume from multiple queues, the queues must coordinate to ensure that the channel does not exceed the limit. This coordination can be slow, so channel prefetch is not the recommended approach. However, if the use case is “there should only be X messages in flight, regardless of how many consumers there are”, channel prefetch can be an option.

Consumer Prefetch

The recommended and most efficient method is to specify the prefetch count on each consumer. The consumer itself is then responsible for its own prefetch window, which avoids the coordination overhead that channel prefetch requires.

Set the prefetch count

LavinMQ uses AMQP version 0.9.1 by default. The protocol includes the basic.qos method (Quality of Service) for setting the prefetch count. The basic.qos value determines how AMQP consumers read messages from queues.

Consider the following Python (Pika) example:

import pika

connection = pika.BlockingConnection()
channel = connection.channel()
# Allow at most 10 unacknowledged messages per consumer
channel.basic_qos(prefetch_count=10, global_qos=False)

The basic_qos function takes a global flag (Pika names the argument global_qos, since global is a reserved word in Python). Setting the value to False applies the count to each new consumer. Setting the value to True applies a channel prefetch count to all consumers. The flag is set to False by default in most client libraries.
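
For comparison, here is a minimal sketch of the channel prefetch variant, assuming the same local broker and Pika setup as above; the value 20 is only an illustrative number, and global_qos=True is what makes the limit apply to the channel as a whole rather than to each consumer:

import pika

connection = pika.BlockingConnection()
channel = connection.channel()
# Channel prefetch: at most 20 unacknowledged messages in total on this
# channel, shared by every consumer the channel has
channel.basic_qos(prefetch_count=20, global_qos=True)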

Consumer prefetch adds only a negligible amount of overhead, even though the broker must keep track of how many messages it has sent to each consumer instead of to each channel.

Optimum consumer prefetch count

Large prefetch count

A large prefetch count generally improves the rate of message delivery. The broker does not need to wait for acknowledgments as often, and communication between the broker and consumers decreases.

A prefetch count that is too large, however, may pull many messages off the queue and deliver all of them to a single consumer, leaving the other consumers idle.

Small prefetch count

Small prefetch counts are ideal for distributing messages across larger systems. Smaller values maintain the evenness of message consumption. A value of one helps ensure equal message distribution.

A prefetch count that is set too small may hurt performance, since LavinMQ may end up in a state where the broker is mostly waiting for acknowledgments before being allowed to send more messages.

Set the correct prefetch value

  • One or a few consumers with a short processing time: It is recommended to prefetch many messages at once to keep your client as busy as possible. If the processing time and network behavior remain roughly constant, simply take the total round-trip time and divide it by the processing time on the client for each message to get an estimated prefetch value (a worked example follows this list).
  • Many consumers and short processing time: It is recommended to set a lower prefetch value if you have many consumers. A value that is too low will keep the consumers idling a lot since they need to wait for messages to arrive. A value that is too high may keep one consumer busy while other consumers are being kept in an idling state.
  • Many consumers and/or long processing time: It is recommended to set the prefetch count to one (1) so that messages are evenly distributed among all your workers.
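
As a back-of-the-envelope illustration of the first rule above, the following sketch estimates a prefetch value from a measured round-trip time and per-message processing time; the numbers are made up for the example:

# Hypothetical measurements for one consumer (example values only)
round_trip_ms = 50     # round trip between client and broker
processing_ms = 4      # time the client needs to process one message

# Rule of thumb: round-trip time divided by processing time, plus one,
# so the consumer always has a message ready to work on
estimated_prefetch = round_trip_ms // processing_ms + 1
print(estimated_prefetch)  # -> 13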

Please note that if your client auto-acks messages, the prefetch value will have no effect.
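
To tie the pieces together, here is a sketch of a consumer that combines a prefetch of one with explicit acknowledgments; the queue name task_queue and the callback body are assumptions made for this example, not part of LavinMQ itself:

import pika

connection = pika.BlockingConnection()
channel = connection.channel()
channel.queue_declare(queue='task_queue')

# Fair dispatch: the broker will not send this consumer a new message
# until the previous one has been acknowledged
channel.basic_qos(prefetch_count=1)

def callback(ch, method, properties, body):
    print("processing", body)
    # Explicit acknowledgment; with auto_ack=True the prefetch limit
    # would have no effect
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='task_queue', on_message_callback=callback, auto_ack=False)
channel.start_consuming()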

Conclusions

  • It is always recommended to set a prefetch value when you are consuming messages from a queue, and where explicit acknowledgements are used.
  • Prefetch value does not have an impact if you are using the Basic.get request.
  • Use consumer prefetch where possible, instead of channel prefetch.
  • Be aware of your setup and set the prefetch count based on the number of consumers and the processing time to get the most desirable result. As a rule of thumb, set the prefetch count to the round-trip time divided by the processing time per message, plus one, but always keep in mind that message size and other factors can affect the outcome.
  • Avoid the usual mistake of having an unlimited prefetch, where one client receives all messages and runs out of memory and crashes, causing all the messages to be re-delivered.

Ready to take the next steps? Here are some things you should keep in mind:

Managed LavinMQ instance on CloudAMQP

LavinMQ has been built with performance and ease of use in mind - we've benchmarked a throughput of about 1,000,000 messages/sec. You can try LavinMQ without any installation hassle by creating a free instance on CloudAMQP. Signing up is a breeze.

Help and feedback

We welcome your feedback and are eager to address any questions you may have about this piece or using LavinMQ. Join our Slack channel to connect with us directly. You can also find LavinMQ on GitHub.