
In a previous post, we introduced the Docker log collector. This log collector does an excellent job of collecting and shipping container logs, Docker daemon events, and container monitoring metrics to the ELK Stack. But what about the machine hosting Docker? How can one determine the health of that system or monitor what is happening right now?

The Docker performance agent is dedicated to monitoring the performance of hosts and can be used in a Docker environment together with the log collector to give you a comprehensive picture of all of the different layers that comprise your Docker environment.

The agent collects performance data using collectl, an open-source tool that allows you to monitor various resource utilization metrics such as CPU, disk, memory, and inode use. The data is written to a log file, which is then picked up by rsyslog and forwarded into the ELK Stack (for more information on how this agent works, read this article on monitoring performance with ELK).
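To get a feel for the kind of raw metrics involved, you can run collectl directly on the host. The flags below are standard collectl options chosen for illustration, not necessarily the exact invocation the agent uses:

```shell
# Take 2 readings at a 5-second interval from the CPU, disk, memory,
# and network subsystems (-s selects subsystems: c=CPU, d=disk,
# m=memory, n=network; -i is the interval; -c is the sample count)
collectl -scdmn -i 5 -c 2
```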

This guide will describe how to install the agent and use its shipped data to analyze the performance of a Docker host in the ELK Stack.

Note: To follow the procedures described below, you’ll need Docker installed (preferably with some containers running) and a Logz.io account (create a free one here).

Installing the agent

First, pull the image from Docker Hub:

$ docker pull logzio/logzio-perfagent

Before we run the image, here is a brief explanation of the various environment variables used in the run command — both the mandatory and optional ones.

  • LOGZ_TOKEN (mandatory). This variable defines the account to which the data will be shipped. You can find your token in the Settings section of the Logz.io user interface.
  • USER_TAG (optional). This variable assigns an entered string to the user_tag field. This is useful when monitoring a number of Docker hosts and helps when creating visualizations in Kibana. One recommended use case for this variable is to denote the host’s role.
  • HOSTNAME (optional). This variable defines the hostname with which to associate the performance data that is sent by the container. This string will be provided in the syslog5424_host field of each entry.
  • INSTANCE (optional). This variable defines the IP address that will be provided in the instance field of each entry.

Now, let’s get down to business. Here is an example of the run command used for running the image:

$ docker run -d --net="host" -e LOGZ_TOKEN="UfKqCazQjUYnBNcJqSryIRyDIjExjwIZ" -e USER_TAG="workers" -e HOSTNAME=$(hostname) -e INSTANCE="" --restart=always logzio/logzio-perfagent
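If data does not show up, you can verify that the agent container is up and inspect its output. These are standard Docker commands; the image name is the one used above:

```shell
# Check that the agent container is running
docker ps --filter "ancestor=logzio/logzio-perfagent" --format "{{.ID}}: {{.Status}}"

# Inspect the container's own output for errors
docker logs $(docker ps -q --filter "ancestor=logzio/logzio-perfagent")
```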

Analyzing the data

After running the image, data should begin to show up in Kibana in a matter of seconds.

Usually, you’ll be shipping a number of other log types into Elasticsearch. To filter out the noise, enter the value of the USER_TAG variable that we used when running the image into the Kibana search field:
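For example, with the run command above, where USER_TAG was set to "workers", the query is simply:

```
user_tag:workers
```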


Now, you can begin to analyze the logs by adding some fields. For example, add the ‘type’, ‘instance’, and ‘mem_used’ fields. This will give you some more insight into the list of logs:


Select one of the entries to view all of the available fields. This will give you a better idea of what data is being shipped into the system and indexed by Elasticsearch.
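To illustrate the structure of these entries, here is a hypothetical example of pulling a single metric out of an entry with standard shell tools. The field names match those used in this guide, but the entry itself and its values are invented for illustration:

```shell
# A hypothetical performance entry -- field names (user_tag, instance,
# mem_used) are from this guide; the values are invented
entry='{"user_tag":"workers","instance":"10.0.0.5","mem_used":2048000}'

# Extract the mem_used value (prints 2048000)
echo "$entry" | grep -o '"mem_used":[0-9]*' | cut -d: -f2
```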

Visualizing the data

Next, let’s see how to transform the data into a more user-friendly visualization. To do this, first save the search above. The saved search can now be the basis of any visualization or dashboard that you create.

Next, select the Visualize tab. You will be presented with a selection of visualization types from which you can choose. In this example, we’re going to go with the line-chart visualization type.

What we’re going to visualize is the average CPU usage over time. To do this, the configuration of the X and Y axes is as follows:

  • Y axis – Aggregation by the average value of the ‘cpu_sys_percent’ field
  • X axis – Aggregation by a date histogram using the ‘@timestamp’ field

Hit the green Play button to see a preview of the visualization:


This is just one example of how to visualize the performance data that is collected by the agent. Read on to learn how to hit the ground running with a ready-made monitoring dashboard.

Installing the Docker Performance Dashboard

Logz.io provides Docker users with a ready-made dashboard for monitoring the performance of the host machine. This dashboard is available in the ELK Apps tab within the user interface. ELK Apps is a free collection of pre-made and customized Kibana searches, visualizations, and dashboards for specific log types.

To install the Docker Performance Dashboard, select the ELK Apps tab and search for Docker:


Click the Install button in the performance tab, and the dashboard will be displayed in Kibana:


The dashboard contains the following visualizations:

  • CPU User Mode %
  • CPU Wait % (Disk IO)
  • CPU Avg. Load 1
  • Memory Free
  • Net RX Total KB
  • Net TX Total KB
  • CPU System %
  • Total Sockets Used
  • CPU Idle %
  • Disk Total Write KB
  • Disk Total Read KB
  • Memory Used
  • Total Inodes Used
  • User Tags

In just a few seconds, you will have an entire monitoring dashboard up and running that paints a real-time picture of how your Docker host is performing. As mentioned in the introduction, this agent should be used together with the Docker log collector to get a comprehensive view of your Docker environment. Logz.io is a predictive, cloud-based log management platform that is built on top of the open-source ELK Stack and can be used for log analysis, application monitoring, business intelligence, and more. Start your free trial today!

Daniel Berman is Product Evangelist at Logz.io. He is passionate about log analytics, big data, cloud, and family and loves running, Liverpool FC, and writing about disruptive tech stuff.