Reliable message delivery: How LavinMQ keeps your data safe
Here’s how to make sure LavinMQ never loses a message.
In LavinMQ, we’ve built this guarantee directly into the core I/O engine using a disk-first philosophy and the full durability guarantees of the AMQP protocol. In this post, we’ll dive into how to bridge the producer, restart, and consumer gaps to build a resilient message pipeline.
The I/O philosophy: disk-first
Unlike brokers that try to keep everything in RAM and only flush to disk when they have to, LavinMQ uses a segment-based message store. We write data to disk segments aggressively because, in Crystal, we can do that with extremely low overhead.
However, the protocol still dictates the safety rules. To build a truly durable pipeline in LavinMQ, you need to close three specific gaps.
1. The producer gap: Use confirms, not just publishes
A standard basic_publish is an optimistic operation. You’re throwing a message over the fence and hoping it lands. If the network hiccups or the broker’s disk is full, the message just disappears.
The fix is publisher confirms, which turn the publish into a two-way handshake: the broker tells you when the message is actually safe.
channel.confirm_delivery()

Because LavinMQ is optimized for sequential disk writes, these confirms are incredibly fast. You get the safety of a “receipt” without the performance penalty usually associated with synchronous persistence.
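To make the handshake concrete, here is a minimal sketch of a publish-and-retry loop built on confirms. `FlakyChannel` is a hypothetical stand-in for a real confirm-mode channel (with pika, `channel.confirm_delivery()` switches the channel into this mode); it is not LavinMQ or client-library code.

```python
# Sketch: retrying until the broker confirms the publish.
# FlakyChannel is an illustrative stub: publish() returns True (ack)
# or False (nack), standing in for a real confirm-mode channel.

class FlakyChannel:
    """Pretend broker channel that nacks the first attempt."""
    def __init__(self):
        self.attempts = 0
        self.stored = []

    def publish(self, body: str) -> bool:
        self.attempts += 1
        if self.attempts == 1:
            return False          # simulated nack (e.g. disk full)
        self.stored.append(body)  # "broker" has persisted the message
        return True               # ack: safe on disk

def publish_with_confirm(channel, body, max_retries=3):
    """Retry until the broker confirms the message, or give up."""
    for _ in range(max_retries):
        if channel.publish(body):
            return True
    return False

channel = FlakyChannel()
assert publish_with_confirm(channel, "order-42")
assert channel.stored == ["order-42"]  # confirmed only after retry
```

The retry loop is the point: a nacked publish is not an error to log and forget, it is a signal to try again.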
2. The restart gap: Why “durable” matters
In LavinMQ, we write almost everything to disk to keep memory usage low. But there’s a catch: if you don’t declare your queue as durable, LavinMQ assumes you don’t care about that data after a reboot.
channel.queue_declare(queue="orders", durable=True)

When a queue is durable, its state and all its messages are recovered from the disk segments the moment the LavinMQ process starts back up. It’s the difference between a temporary buffer and a permanent piece of infrastructure.
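The behavior across a restart can be sketched with a toy in-memory model. This is illustrative only, not LavinMQ’s actual storage code: durable queues are written to a pretend disk and recovered on boot, transient ones vanish.

```python
# Sketch: what "durable" changes across a restart (toy model).

class ToyBroker:
    def __init__(self, disk=None):
        self.disk = disk if disk is not None else {}  # survives restarts
        self.queues = dict(self.disk)                 # recover durable queues

    def queue_declare(self, name, durable=False):
        self.queues.setdefault(name, [])
        if durable:
            self.disk[name] = self.queues[name]       # back the queue by "disk"

    def publish(self, queue, body):
        self.queues[queue].append(body)

    def restart(self):
        """Simulate a process restart: only disk state comes back."""
        return ToyBroker(disk=self.disk)

broker = ToyBroker()
broker.queue_declare("orders", durable=True)
broker.queue_declare("scratch", durable=False)
broker.publish("orders", "order-1")
broker.publish("scratch", "temp")

rebooted = broker.restart()
assert rebooted.queues.get("orders") == ["order-1"]  # recovered from disk
assert "scratch" not in rebooted.queues              # transient queue is gone
```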
3. The consumer gap: The power of basic_ack
The most dangerous moment for a message is right after it’s delivered. If you use “auto-ack,” LavinMQ deletes the message from the segment the millisecond it’s sent to the worker. If that worker crashes ten seconds later, the task is gone.
The solution is manual acknowledgments.
When a consumer picks up a task, LavinMQ marks it as “unacknowledged.” We keep that message safe in the background. Only when your code sends the basic_ack signal do we finally mark that data for deletion. If the consumer’s TCP connection drops before that signal, LavinMQ immediately requeues the message for the next available worker.
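The ack lifecycle described above can be modeled in a few lines. Again, this is a toy model of the semantics, not LavinMQ internals: a delivered message sits in an unacked set until `basic_ack`, and a dropped connection puts it back at the front of the queue.

```python
# Sketch: unacked messages are requeued when a consumer dies (toy model).

class ToyQueue:
    def __init__(self):
        self.ready = []     # messages waiting for a consumer
        self.unacked = {}   # delivery_tag -> message, in flight
        self.next_tag = 0

    def deliver(self):
        self.next_tag += 1
        msg = self.ready.pop(0)
        self.unacked[self.next_tag] = msg
        return self.next_tag, msg

    def basic_ack(self, tag):
        del self.unacked[tag]          # only now is the message deletable

    def connection_lost(self):
        """Consumer crashed: everything unacked goes back to ready."""
        self.ready = list(self.unacked.values()) + self.ready
        self.unacked.clear()

q = ToyQueue()
q.ready.append("task-1")

tag, msg = q.deliver()        # worker picks up task-1
q.connection_lost()           # worker dies before acking
assert q.ready == ["task-1"]  # message is safe and redeliverable

tag, msg = q.deliver()
q.basic_ack(tag)              # second worker finishes the job
assert q.ready == [] and q.unacked == {}
```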
Handling poison messages
What if the message is the problem? If a task crashes every worker that tries to process it, you end up in a crash-loop.
LavinMQ handles this with Dead Letter Exchanges. Instead of letting a failing message bounce around forever, you can set a policy that routes it to a quarantine queue after a certain number of retries or a timeout.
# Route failures to a dead-letter exchange named "dlx"
arguments={"x-dead-letter-exchange": "dlx"}

This gives you an inspectable backlog of failed work that you can debug and replay later, without stopping the rest of your pipeline.
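The retry-then-quarantine flow can be sketched as a plain loop. This is a simplified toy model of the policy, not broker code: the per-message retry bookkeeping is reduced to an integer counter, and the dead-letter exchange to a list.

```python
# Sketch: a retry counter that dead-letters poison messages (toy model).

MAX_RETRIES = 3
work_queue = [("bad-task", 0)]   # (body, delivery attempts so far)
failed_tasks = []                # the quarantine destination

def process(body):
    raise RuntimeError("poison message")  # this task always crashes

while work_queue:
    body, attempts = work_queue.pop(0)
    try:
        process(body)
    except RuntimeError:
        if attempts + 1 >= MAX_RETRIES:
            failed_tasks.append(body)                # quarantine for debugging
        else:
            work_queue.append((body, attempts + 1))  # requeue and retry

assert failed_tasks == ["bad-task"]  # the pipeline keeps moving
```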
For the strongest possible guarantee, LavinMQ also supports transactions. Publish one or more messages and commit or roll back as a unit, just like a database transaction.
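Transaction semantics boil down to an all-or-nothing publish buffer, sketched below as a toy model. The method names here (`tx_publish`, `tx_commit`, `tx_rollback`) are illustrative stand-ins for the AMQP tx operations, not a real client API.

```python
# Sketch: AMQP-style transactions as a staged publish buffer (toy model).

class TxChannel:
    def __init__(self):
        self.queue = []      # what the broker has actually accepted
        self.staged = []     # publishes inside the open transaction

    def tx_publish(self, body):
        self.staged.append(body)

    def tx_commit(self):
        self.queue.extend(self.staged)   # all staged messages land at once
        self.staged.clear()

    def tx_rollback(self):
        self.staged.clear()              # none of them land

ch = TxChannel()
ch.tx_publish("a")
ch.tx_publish("b")
ch.tx_rollback()
assert ch.queue == []          # rolled back: nothing was delivered

ch.tx_publish("c")
ch.tx_publish("d")
ch.tx_commit()
assert ch.queue == ["c", "d"]  # committed as a unit
```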
Flow control: prefetch as a safety valve
Finally, there is prefetch (QoS), which limits how many unacknowledged messages a single worker can hold.

channel.basic_qos(prefetch_count=10)

If you don’t set this, LavinMQ will try to push the entire queue to your worker as fast as the network allows. If that worker dies, 10,000 messages might need to be requeued at once. Prefetch keeps the batches small and the recovery time near-zero.
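The delivery cap works like a sliding window over unacked messages. The helper below is an illustrative model of that rule, not broker code:

```python
# Sketch: how prefetch caps a worker's in-flight backlog (toy model).

def deliverable(ready, unacked, prefetch_count):
    """How many more messages the broker may push to this consumer now."""
    return min(len(ready), max(0, prefetch_count - unacked))

# 10,000 messages queued, prefetch_count=10, nothing acked yet:
assert deliverable(ready=range(10_000), unacked=0, prefetch_count=10) == 10

# Worker holds 10 unacked messages: delivery pauses entirely.
assert deliverable(ready=range(9_990), unacked=10, prefetch_count=10) == 0

# Each ack frees exactly one delivery slot.
assert deliverable(ready=range(9_990), unacked=9, prefetch_count=10) == 1
```

Whatever happens to the worker, at most `prefetch_count` messages ever need to be requeued.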
Conclusions
By combining publisher confirms, durable queues, and manual acks, you aren’t just “queueing.” You’re building a system with the same integrity as a database, but with the sub-millisecond latency of a specialized message broker.
In LavinMQ, we didn’t build a safety net on top of the broker. We built the broker to be the safety net.
Lovisa Johansson