Docker Swarm Logging with ELK and the Log Collector


If you’re running containers at scale, you are most likely either already using a container orchestration tool or are in the process of deliberating which one to use. To be able to build and run hundreds of containers, a management layer on top of your Docker hosts is necessary to be able to orchestrate the launching, scaling and updating of your containers efficiently.

Docker Swarm is Docker’s built-in orchestration service. Included in the Docker Engine since version 1.12, Docker Swarm allows you to natively manage a cluster of Docker Engines, easily create a “swarm” of Docker hosts, deploy application services to this swarm, and manage the swarm’s behavior.

Logging in Swarm mode is not that different than logging in a “non-swarm” mode — container logs are written to stdout and stderr and can be collected using any of the logging drivers or log routers available. If you’re using the ELK Stack for centralized logging, you can use any of the methods outlined in this article.

The Docker Log Collector is a good option to use in Docker Swarm since it allows you to get a comprehensive picture of your swarm by a) providing three layers of information from your Docker nodes — container logs, daemon events and Docker stats from your hosts and b) allowing you to monitor cluster activity and performance using environment variables.

Let’s take a look.

Creating a Docker Swarm Cluster

If you already have Docker Swarm set up with running services, you can skip the following two sections, which explain how to set up Docker Swarm and install a demo app.

To create a Docker Swarm, I created three different Docker hosts — one to act as a manager node and the two others as workers.

On the host designated to be the manager, run the following command (replacing <MANAGER-IP> with the public IP of the host):

sudo docker swarm init --advertise-addr <MANAGER-IP>

You should get the following output:

Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token <TOKEN> \
    <MANAGER-IP>:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The --advertise-addr flag configures the manager to publish its address as the IP you specified, so before continuing, verify that the other nodes in the swarm can reach the manager at this address.

Next, SSH into your other nodes and use the command supplied in the output above to join the Swarm cluster:

docker swarm join \
  --token <TOKEN> \
  <MANAGER-IP>:2377

This node joined a swarm as a worker.

On your manager, enter the following command to see the list of nodes in your cluster:

sudo docker node ls

ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
qdf1ipmtijgnti0n8ie1uaeo2 *  ip-172-31-53-87   Ready   Active        Leader
t5rineip3z01na6t7s3qwftit    ip-172-31-63-228  Ready   Active
wvqzw4384nyj8yzz0zx1exnkx    ip-172-31-60-187  Ready   Active
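If you are scripting cluster health checks, the same listing can be verified programmatically. Here is a minimal sketch (the `count_ready` helper is hypothetical, not part of Docker) that counts the nodes reporting a Ready status; against a live swarm you would pipe in the real `sudo docker node ls` output instead of the sample:

```shell
# Hypothetical helper: count the nodes whose STATUS column is "Ready"
# in `docker node ls` output (the header line is skipped first).
count_ready() {
  tail -n +2 | grep -c ' Ready '
}

# Sample output matching the listing above; in a live swarm, use:
#   sudo docker node ls | count_ready
sample_output='ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
qdf1ipmtijgnti0n8ie1uaeo2 *  ip-172-31-53-87   Ready   Active        Leader
t5rineip3z01na6t7s3qwftit    ip-172-31-63-228  Ready   Active
wvqzw4384nyj8yzz0zx1exnkx    ip-172-31-60-187  Ready   Active'

printf '%s\n' "$sample_output" | count_ready   # prints 3
```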

Deploying the Demo App

Now that we have a Docker Swarm set up, we’re going to deploy the sample voting app on it.

To do this, use the following commands:

git clone https://github.com/dockersamples/example-voting-app.git
cd example-voting-app
sudo docker stack deploy --compose-file docker-stack.yml vote

Within a few seconds you will have multiple services up and running, replicated as defined in the application’s docker-stack.yml file.

View the services using:

docker service ls

ID            NAME             MODE        REPLICAS  IMAGE
6j76wdkt63a0  vote_vote        replicated  2/2       dockersamples/examplevotingapp_vote:before
78vgc1t221kn  vote_db          replicated  1/1       postgres:9.4
noj8ujaxsrx1  vote_result      replicated  1/1       dockersamples/examplevotingapp_result:before
nvek40nqdvgn  vote_worker      replicated  1/1       dockersamples/examplevotingapp_worker:latest
nx5g9ln4uxb0  vote_redis       replicated  2/2       redis:alpine
pfud4pp24ret  vote_visualizer  replicated  1/1       dockersamples/visualizer:stable
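Before moving on, you may also want to confirm that every service has converged to its desired replica count. The following is a minimal sketch (the `pending_services` helper is an assumption for illustration, not a Docker command) that prints the name of any service whose running replicas lag behind the desired count; against a live swarm you would pipe in the real `sudo docker service ls` output:

```shell
# Hypothetical helper: print the NAME of any service whose REPLICAS
# column ("running/desired") has not yet converged.
pending_services() {
  tail -n +2 | awk '{ split($4, r, "/"); if (r[1] != r[2]) print $2 }'
}

# Sample listing in which vote_vote has only 1 of 2 replicas running;
# in a live swarm, use: sudo docker service ls | pending_services
sample='ID            NAME             MODE        REPLICAS  IMAGE
6j76wdkt63a0  vote_vote        replicated  1/2       dockersamples/examplevotingapp_vote:before
78vgc1t221kn  vote_db          replicated  1/1       postgres:9.4'

printf '%s\n' "$sample" | pending_services   # prints vote_vote
```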

To take a look at the app, access it using port 5000 on ANY of the cluster nodes.



Results can be seen using port 5001:


Running the Log Collector

Our next step is to run the Docker log collector.

Wrapping docker-loghose and docker-stats, and running as a separate container per Docker host, the log collector fetches logs and monitors stats from your Docker environment and ships them to the ELK Stack.

As specified above, the log collector ships the following types of messages:

  • Docker container logs — logs produced by the containers themselves (the equivalent of the output of the ‘docker logs’ command)
  • Docker events — Docker daemon “admin” actions (e.g., kill, attach, restart, and die)
  • Docker stats — monitoring statistics for each of the running Docker containers (e.g., CPU, memory, and network)

Using the -a flag, we can add any number of labels to the data coming in from the containers. In Docker Swarm, we can use this option to add the name or ID of the cluster node.

For example, on the manager node, use the following command (replacing <TOKEN> with your account token):

sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=master

Repeat this command for the other cluster nodes.

For worker 1:

sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=worker1

For worker 2:

sudo docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <TOKEN> -a env=dev -a swarm-node=worker2

Within a few minutes, you will have data streaming into the ELK Stack.

Analyzing the Data

To help you begin analyzing the data being shipped from your Docker Swarm, here are a few tips and tricks.

First, decide which type of data you wish to focus on.

To analyze container logs only, use:

tags:docker-logs

To analyze container stats only, use:

type: docker-stats


Select some of the fields from the list of available fields on the left. This will give you some visibility into the data being displayed in the main viewing area. For example, add the ‘swarm-node’, ‘name’ and ‘env’ fields.


You can focus on logs for a specific node using:

tags:docker-logs AND swarm-node:worker1

Visualizing the Data

Our final step in this article is to visualize the data coming from our Swarm nodes. Kibana is renowned for its visualization capabilities, and the sky’s the limit with what you can do with your data.

Here are a few examples.

Metric visualizations are ideal for displaying a single stat. You can use them, for example, to show the number of worker nodes in your Swarm cluster:


Another example is to show a breakdown of logs per Swarm node using a pie chart visualization:


You can show the same data over time using a line chart visualization:


Using the Docker stats data, we can create a series of visualizations analyzing performance and resource consumption of our Swarm nodes. Here is an example of showing memory consumption over time, per node:



Once you have your visualizations lined up, you can combine them into one comprehensive dashboard for monitoring your Docker Swarm:


While the methodology for logging in Docker Swarm mode does not differ from logging in a regular Docker environment, analysis and visualization can vary based on the node data we decide to ship with the logs. The log collector makes this pretty easy to do.
