Setting up a LavinMQ cluster with Docker Compose

This guide will help you set up a 3-node LavinMQ cluster with Docker Compose.

This will set up three LavinMQ nodes and three etcd nodes. LavinMQ uses etcd for leader election and to keep track of which nodes are in sync. For more information on clustering, see the clustering docs.

Configuration

docker-compose.yml

name: lavinmq-cluster
services:
  lavinmq1:
    image: cloudamqp/lavinmq:latest
    hostname: lavinmq1
    container_name: lavinmq1
    ports:
      - 5672:5672
      - 5679:5679
      - 15672:15672
    volumes:
      - lavinmq1-data:/var/lib/lavinmq/data
    configs:
      - source: lavinmq_config
        target: /etc/lavinmq/lavinmq.ini
    depends_on:
      etcd1:
        condition: service_healthy
      etcd2:
        condition: service_healthy
      etcd3:
        condition: service_healthy
    networks:
      - network

  lavinmq2:
    image: cloudamqp/lavinmq:latest
    hostname: lavinmq2
    container_name: lavinmq2
    ports:
      - 5673:5672
      - 15679:5679
      - 25672:15672
    volumes:
      - lavinmq2-data:/var/lib/lavinmq/data
    configs:
      - source: lavinmq_config
        target: /etc/lavinmq/lavinmq.ini
    depends_on:
      etcd1:
        condition: service_healthy
      etcd2:
        condition: service_healthy
      etcd3:
        condition: service_healthy
    networks:
      - network

  lavinmq3:
    image: cloudamqp/lavinmq:latest
    hostname: lavinmq3
    container_name: lavinmq3
    ports:
      - 5674:5672
      - 25679:5679
      - 35672:15672
    volumes:
      - lavinmq3-data:/var/lib/lavinmq/data
    configs:
      - source: lavinmq_config
        target: /etc/lavinmq/lavinmq.ini
    depends_on:
      etcd1:
        condition: service_healthy
      etcd2:
        condition: service_healthy
      etcd3:
        condition: service_healthy
    networks:
      - network

  etcd1:
    image: bitnami/etcd:latest
    container_name: etcd1
    ports:
      - 2379:2379
      - 2380:2380
    environment:
      - ETCD_NAME=etcd1
      - ETCD_DATA_DIR=/bitnami/etcd/data
      - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
      - ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
      - ETCD_INITIAL_ADVERTISE_PEER_URLS=http://etcd1:2380
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd1:2379
      - ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - ETCD_INITIAL_CLUSTER_STATE=new
      - ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-0
      - ALLOW_NONE_AUTHENTICATION=yes
    volumes:
      - etcd1-data:/bitnami/etcd
    networks:
      - network
    healthcheck:
      test: [ "CMD-SHELL", "etcdctl --endpoints=http://etcd1:2379 endpoint health" ]
      interval: 5s
      timeout: 5s
      retries: 3

  etcd2:
    image: bitnami/etcd:latest
    container_name: etcd2
    ports:
      - 12379:2379
      - 12380:2380
    environment:
      - ETCD_NAME=etcd2
      - ETCD_DATA_DIR=/bitnami/etcd/data
      - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
      - ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
      - ETCD_INITIAL_ADVERTISE_PEER_URLS=http://etcd2:2380
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd2:2379
      - ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - ETCD_INITIAL_CLUSTER_STATE=new
      - ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-0
      - ALLOW_NONE_AUTHENTICATION=yes
    volumes:
      - etcd2-data:/bitnami/etcd
    depends_on:
      etcd1:
        condition: service_started
    networks:
      - network
    healthcheck:
      test: [ "CMD-SHELL", "etcdctl --endpoints=http://etcd2:2379 endpoint health" ]
      interval: 5s
      timeout: 5s
      retries: 3

  etcd3:
    image: bitnami/etcd:latest
    container_name: etcd3
    ports:
      - 22379:2379
      - 22380:2380
    environment:
      - ETCD_NAME=etcd3
      - ETCD_DATA_DIR=/bitnami/etcd/data
      - ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
      - ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
      - ETCD_INITIAL_ADVERTISE_PEER_URLS=http://etcd3:2380
      - ETCD_ADVERTISE_CLIENT_URLS=http://etcd3:2379
      - ETCD_INITIAL_CLUSTER=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - ETCD_INITIAL_CLUSTER_STATE=new
      - ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster-0
      - ALLOW_NONE_AUTHENTICATION=yes
    volumes:
      - etcd3-data:/bitnami/etcd
    depends_on:
      etcd1:
        condition: service_started
    networks:
      - network
    healthcheck:
      test: [ "CMD-SHELL", "etcdctl --endpoints=http://etcd3:2379 endpoint health" ]
      interval: 5s
      timeout: 5s
      retries: 3

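# The same lavinmq.ini is mounted into all three LavinMQ containers. Inside the
# Compose network every etcd node listens on its default client port 2379, so the
# in-network addresses are used below rather than the host-published 12379/22379 ports.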
configs:
  lavinmq_config:
    content: |
        [clustering]
        enabled = true
        bind = 0.0.0.0
        etcd_endpoints = etcd1:2379,etcd2:2379,etcd3:2379

networks:
  network:
    driver: bridge

volumes:
  lavinmq1-data:
  lavinmq2-data:
  lavinmq3-data:
  etcd1-data:
  etcd2-data:
  etcd3-data:
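
Save the file as docker-compose.yml. If you want to double-check it before starting anything, Compose can validate the file and print the fully resolved configuration; this is a generic Compose feature, nothing LavinMQ-specific:

docker compose config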

Running the cluster

Start the containers by running:

docker compose up
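
Compose waits for the etcd health checks before starting the LavinMQ containers, so the first startup can take a few seconds. Two optional sanity checks once everything is up, using only plain docker and etcdctl commands (nothing LavinMQ-specific is assumed):

docker compose ps

docker exec etcd1 etcdctl --endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 endpoint health

The first command should list all six containers, and the second should report each etcd endpoint as healthy.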

You can now publish and consume messages at amqp://guest:guest@localhost:5672 (the second and third nodes are reachable on ports 5673 and 5674), and access the management UI at http://localhost:15672.
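
The other nodes serve their management interfaces on the host ports mapped above, http://localhost:25672 and http://localhost:35672. If you prefer the command line over the UI, the same ports answer HTTP API requests; a quick check against each node could look like this (assuming a RabbitMQ-style /api/overview endpoint and the default guest/guest credentials):

curl -u guest:guest http://localhost:15672/api/overview
curl -u guest:guest http://localhost:25672/api/overview
curl -u guest:guest http://localhost:35672/api/overview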

Visit cloudamqp/lavinmq on Docker Hub to learn more.


Ready to take the next steps? Here are some things you should keep in mind:

Managed LavinMQ instance on CloudAMQP

LavinMQ is built with performance and ease of use in mind; we've benchmarked a throughput of about 1,000,000 messages/sec. You can try LavinMQ without any installation hassle by creating a free instance on CloudAMQP. Signing up is a breeze.

Help and feedback

We welcome your feedback and are eager to address any questions you may have about this piece or using LavinMQ. Join our Slack channel to connect with us directly. You can also find LavinMQ on GitHub.