Choosing your prefetch
The rule of thumb for a single consumer is to set the prefetch value to the total round-trip time divided by the processing time per message on the client.
The default prefetch is unlimited, which is a problem for high availability: a consumer that crashes with an unlimited prefetch forces redelivery of every message it had buffered. It also hurts performance, since all unacknowledged messages are kept in RAM on the broker.
With multiple consumers and/or slow consumers, you probably want a lower prefetch (1 is often recommended) so that messages are spread across consumers rather than buffered at one, which would leave the others idle.
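The rule of thumb above is plain arithmetic. A minimal sketch (the function name and the clamp to a minimum of 1 are my own additions):

```python
def choose_prefetch(round_trip_ms: float, processing_ms: float) -> int:
    """Rule of thumb: prefetch ~= total round-trip time / per-message processing time."""
    # Clamp to at least 1 so the consumer always has one message in flight.
    return max(1, round(round_trip_ms / processing_ms))

# Fast consumer: 125 ms round trip, 5 ms per message -> prefetch of 25.
fast = choose_prefetch(125, 5)

# Slow consumer: processing dominates, so a prefetch of 1 is enough.
slow = choose_prefetch(125, 200)
```

The value would then be applied per consumer, e.g. with pika's `channel.basic_qos(prefetch_count=...)`.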
Queues
Keep queues short if possible to free up RAM, by setting a max-length, setting a TTL, or enabling lazy queues. Note that setting a max-length discards messages from the head of the queue (the oldest first). Keep this in mind when creating debug queues so that they don’t cause performance degradation.
Delete queues you aren’t using, for example with a queue TTL. You could also use the auto-delete flag, which deletes the queue when the last consumer cancels or when its channel is closed, but this can lead to lost messages.
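All of these limits are set as optional arguments when the queue is declared. A sketch of the argument table (the keys are standard RabbitMQ x-arguments; the values are arbitrary example figures):

```python
# Optional arguments passed at queue-declare time; keys are standard
# RabbitMQ x-arguments, values here are example figures only.
queue_args = {
    "x-max-length": 10_000,    # drop messages from the head beyond 10k
    "x-message-ttl": 60_000,   # discard messages older than 60 seconds
    "x-expires": 1_800_000,    # delete the queue itself after 30 min unused
    "x-queue-mode": "lazy",    # page messages to disk instead of keeping them in RAM
}

# With a client such as pika this would be passed as, e.g.:
# channel.queue_declare(queue="events", durable=True, arguments=queue_args)
```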
“Queue performance is limited to one CPU core.” The consistent hash exchange plugin and the RabbitMQ sharding plugin help you load-balance or partition queues, respectively.
One queue can handle up to 50k messages/s.
Connections
Connections and channels should be long-lived, though channels can be opened and closed more frequently than connections.
Messages
Persistent messages must be written to disk, which prevents data loss but hurts performance. Make queues durable as well so they survive broker restarts.
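Durability and persistence are two independent settings. A sketch of both as plain data (queue name is an arbitrary example; with pika the message setting would be `pika.BasicProperties(delivery_mode=2)`):

```python
# Two settings are needed for messages to survive a broker restart:
# 1) a durable queue, so the queue definition itself persists;
declare_kwargs = {"queue": "orders", "durable": True}

# 2) persistent messages, so each message body is written to disk.
#    delivery_mode 1 = transient, 2 = persistent.
message_properties = {"delivery_mode": 2}
```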
Cluster setup
For high availability, having more than one RabbitMQ node is desirable in case one node goes down.
Exchange type
Direct exchanges are the fastest.
Caution with TTL
“TTL and dead lettering can generate performance effects that you have not foreseen.”
Dead Lettering
Be aware that throwing an error and rejecting/nack’ing a message will cause it to be requeued unless you reject with requeue set to false, in which case it is discarded or, if a dead-letter exchange is configured, dead-lettered.
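Dead lettering is configured on the source queue via optional arguments. A sketch (the argument keys are standard RabbitMQ ones; exchange, routing key, and queue names are example placeholders):

```python
# Messages rejected with requeue=False (or expired, or dropped by a
# max-length limit) can be routed to a dead-letter exchange instead
# of being discarded. Keys are standard RabbitMQ arguments.
dlx_args = {
    "x-dead-letter-exchange": "dlx",        # where dead messages are republished
    "x-dead-letter-routing-key": "failed",  # optional routing-key override
}

# Declared on the *source* queue, e.g. with pika:
# channel.queue_declare(queue="work", arguments=dlx_args)
# In the consumer, reject without requeueing to dead-letter the message:
# channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
```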
Versioning
Update your RabbitMQ/Erlang versions.
References
[RabbitMQ Best Practice](https://www.cloudamqp.com/blog/2017-12-29-part1-rabbitmq-best-practice.html)