Part 1: An overview of LavinMQ streams
Long before streams were introduced, LavinMQ only supported one queue type - for the purpose of this guide, let’s call it the traditional queue.
From event-driven architectures to real-time analytics, modern teams need to store and replay data, not just move it. LavinMQ handles these continuous streams without the heavy infrastructure usually required for high-throughput logging.
By combining an append-only storage model with a lightweight core, LavinMQ enables high-performance streaming and message replaying with minimal resource overhead.
It’s a simple, efficient foundation for data that needs to move - and stay.
LavinMQ keeps your application responsive by handling background tasks asynchronously: work is queued, processed in the background, and your app is notified when it completes.
Imagine you need to track user activity in your application: clicks, page views, searches, and other events. The application needs to stream these events to a message broker in real time.
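As a sketch of that scenario, the snippet below publishes user-activity events to a stream. It assumes the pika AMQP 0-9-1 client and a LavinMQ server on localhost; the queue name `user_activity` and the event fields are illustrative, not part of any LavinMQ API.

```python
import json
import time


def make_event(kind, payload):
    """Build a user-activity event; the field names here are illustrative."""
    return {"type": kind, "payload": payload, "ts": time.time()}


def publish_events(events, queue="user_activity", host="localhost"):
    # pika is one AMQP 0-9-1 client that works with LavinMQ; it is
    # imported lazily so the pure helpers above work without it.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    # A stream is declared as a durable queue with x-queue-type=stream.
    ch.queue_declare(queue=queue, durable=True,
                     arguments={"x-queue-type": "stream"})
    for ev in events:
        ch.basic_publish(exchange="", routing_key=queue,
                         body=json.dumps(ev).encode())
    conn.close()
```

Declaring the stream in the publisher is idempotent: if it already exists with the same arguments, the declaration is a no-op.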
Client applications can interact with a Stream using an AMQP client library, just like with traditional queues in LavinMQ.
In LavinMQ Streams, every message has a unique offset representing its position in the stream - much like an index in an array.
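To replay from a given position, a consumer passes the `x-stream-offset` argument when subscribing. The sketch below again assumes pika and a local LavinMQ server; note that stream consumers must set a prefetch limit.

```python
def offset_args(offset):
    # x-stream-offset selects the starting position: "first", "last",
    # "next", or an absolute numeric offset (see LavinMQ's stream docs).
    return {"x-stream-offset": offset}


def replay_from(offset, queue="user_activity", host="localhost"):
    import pika  # assumed AMQP 0-9-1 client; others work similarly

    conn = pika.BlockingConnection(pika.ConnectionParameters(host))
    ch = conn.channel()
    ch.basic_qos(prefetch_count=100)  # streams require a consumer prefetch

    def on_message(channel, method, properties, body):
        print(method.delivery_tag, body)
        channel.basic_ack(method.delivery_tag)

    ch.basic_consume(queue=queue,
                     on_message_callback=on_message,
                     arguments=offset_args(offset))
    ch.start_consuming()
```

Calling `replay_from(0)` would re-read the stream from the beginning, while `replay_from("next")` only delivers messages published after the consumer subscribes.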
Data retention settings let you control when a Stream trims old messages - either once the stream exceeds a configured size, or once messages exceed a configured age.
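Retention can be set per stream via queue arguments at declaration time. The helper below builds such an arguments map using the `x-max-length-bytes` and `x-max-age` arguments; the exact accepted age units should be checked against LavinMQ's documentation (this sketch assumes the `"7D"`-style notation).

```python
def retention_args(max_bytes=None, max_age=None):
    """Queue arguments for a stream with optional retention limits."""
    args = {"x-queue-type": "stream"}
    if max_bytes is not None:
        # Trim oldest segments once the stream exceeds this many bytes.
        args["x-max-length-bytes"] = max_bytes
    if max_age is not None:
        # Trim messages older than this, e.g. "7D" for seven days
        # (assumed unit notation - verify against the LavinMQ docs).
        args["x-max-age"] = max_age
    return args
```

These arguments would be passed to `queue_declare(..., arguments=retention_args(...))` when the stream is created; trimming happens on the server side, so consumers need no extra logic.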
This part will focus on server-side offset tracking and stream filtering.
LavinMQ Streams, Kafka, and RabbitMQ Streams were all built for a similar use case: high-throughput message streaming. But how do they compare?