Introducing the Docker Log Collector

Docker environments produce a large number of log messages, system events, and statistical data that, taken together, can provide an accurate picture of how our containers are performing.

Docker-specific constraints, however, make extracting insights from this valuable data a real challenge.

To overcome this challenge, Logz.io has released a new Docker log collector that collects logs and monitoring statistics from your Docker environment and continuously streams them into the cloud-based, enterprise-grade ELK Stack.

The challenge of Docker logging

The difficulty of Docker log analysis and management stems from the very fact that Docker is a distributed system. A typical Docker setup consists of multiple containers and produces numerous types of logs, including Docker container logs and Docker service logs. Also, containers are not static entities. They are constantly on the move — starting, restarting, and dying. When a container shuts down, any files saved inside it are lost — making logging to a file in the container extremely risky. Add the fact that sometimes there are multiple services being executed within a single container — each of which is producing its own set of logs — and you can understand why Docker logging is quite the task.

Unfortunately, the tools currently used by the Docker community offer only partial solutions.

Docker logging drivers can output container logs to a specified endpoint such as Fluentd, Syslog, or Journald. Logspout can route container logs to Syslog or a third-party module such as Redis, Kafka, or Logstash. Despite the risks specified above, the good old method of application logging, in which an application (Java, PHP, etc.) writes application-specific log messages to a file within a container, is still very much in use. Data volumes are another method, allowing you to share data between the host machine and a dedicated container that stores the data.

All of these methods require additional setup and can ship only container logs.

The approach

This post introduces a new Docker image that provides a unified and comprehensive logging solution for Docker environments.

Wrapping docker-loghose and docker-stats, and running as a separate container, this log collector fetches logs and monitoring statistics from your Docker environment and ships them to the ELK Stack.

The log collector ships the following types of messages:

  • Docker container logs — logs produced by the containers themselves (the equivalent of the output of the ‘docker logs’ command)
  • Docker events — Docker daemon “admin” actions (e.g., kill, attach, restart, and die)
  • Docker stats — monitoring statistics for each of the running Docker containers (e.g., CPU, memory, and network)

Note: To follow the procedure below and analyze Docker logs in this manner, you will need to install Docker and create a free Logz.io account.

Running the Docker container

The first step is to pull the Docker Image:

$ docker pull logzio/logzio-docker

Next, run the container. The most important parameter in the following command is the token parameter (-t), as this defines the endpoint to which you are shipping the data (you can locate your token in the Settings section of the user interface):

$ docker run -d --restart=always -v /var/run/docker.sock:/var/run/docker.sock logzio/logzio-docker -t <YourLogz.ioToken>
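Once the container is up, you can verify that the collector is running and healthy with standard Docker commands (this is a quick sanity check, not part of the official setup):

```shell
# Confirm the collector container is running
docker ps --filter ancestor=logzio/logzio-docker

# Tail the collector's own output to check for shipping errors
docker logs -f $(docker ps -q --filter ancestor=logzio/logzio-docker)
```

If the container exits immediately, double-check that the Docker socket was mounted correctly and that the token is valid.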

There are several additional options you can use when running the image.

You can select which type of information to ship. By default, the container is configured to send all three types of information specified above. However, you can limit this as follows:

  • Pass the --no-logs flag if you do not want Docker logs to be shipped
  • Pass the --no-dockerEvents flag if you do not want Docker events to be shipped
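For example, assuming the same token placeholder as above, the following invocation ships only Docker stats by suppressing both container logs and events:

```shell
# Ship monitoring stats only: suppress container logs and Docker events
docker run -d --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  logzio/logzio-docker -t <YourLogz.ioToken> \
  --no-logs --no-dockerEvents
```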

You can also create a whitelist or blacklist of containers and images for which you want to ship logs.

If you want to ship the logs of only a specific container/image, add these parameters:

--matchByName REGEX
--matchByImage REGEX

If you would like to exclude the logs of a specific container/image, add:

--skipByName REGEX
--skipByImage REGEX
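Putting these options together, here is a sketch that ships logs only from containers built from an nginx image while skipping any container whose name matches a test pattern (the regular expressions here are illustrative):

```shell
# Whitelist by image, blacklist by container name (example regexes)
docker run -d --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  logzio/logzio-docker -t <YourLogz.ioToken> \
  --matchByImage "nginx" \
  --skipByName "test.*"
```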

There are additional configuration options available, and I recommend referring to the image’s Docker Hub page for more information.

Analyzing the data

Once you run the container, Docker data will begin shipping to the ELK Stack.

Access the Logz.io user interface, and open Kibana. If you have other shipping pipelines actively sending logs in, the best way to filter the logs is to search for the three different log types using the OR logical operator:

type:docker-logs OR type:docker-events OR type:docker-stats

Add some fields from the list of fields on the left. This will help you to read the various entries and understand the available information indexed by Elasticsearch.

For example, start by adding the “image,” “name,” and “type” fields.

Expand the entries and take a look at the data as ingested into Elasticsearch. Select the JSON tab to view all of the available data in JSON format:

{
  "_index": "logz-dkdhmyttiiymjdammbltqliwlylpzwqb-160501_v1",
  "_type": "docker-events",
  "_id": "AVRsEEG-hmInuJ9vaOze",
  "_score": null,
  "_source": {
    "image": "sha256:2359fa12fdedef2af79d9b836a26175808d4b1433b5e7022d2d73c72b2a43b60",
    "action_type": "attach",
    "type": "docker-events",
    "execute": "bash ",
    "tags": ["_logz_http_bulk_json_8070"],
    "@timestamp": "2016-05-01T11:20:04.859+00:00",
    "name": "tiny_williams",
    "host": "c9d742de8d0a",
    "from": "linode\/lamp",
    "id": "9ee3562bbd6104711b2faf7c588e9127299b9db6a843e50327d99545ec63476a"
  },
  "fields": { "@timestamp": [1462101604859] },
  "highlight": { "type": ["@kibana-highlighted-field@docker-events@\/kibana-highlighted-field@"] },
  "sort": [1462101604859]
}

Once you have a general idea of what information is available, you can start to think about how to aggregate and visualize the data — which is the next step required to be able to create correlations between the various container messages.

Visualizing the data

Visualizations are one of the most popular features in the entire ELK Stack, and the ability to create graphical depictions of your Docker container data is extremely useful. You could, for example, use the docker-stats logs to create a chart of CPU usage over time for each of your running containers. Or, you could create a table listing the five containers that send and receive the largest amounts of data. The sky’s the limit.

To illustrate this point, we’re going to create a new table that lists the containers that are consuming the most resources.

To do this, first enter a search for the “docker-stats” type:

type:docker-stats


Save the new search by clicking the Save Search icon in the top-left corner of the page, and then select the Visualize tab in Kibana.

For the visualization type, select the Data Table, and use your newly-saved search as the source for the new visualization.

Our next step is to configure the various metrics and aggregations for the graph’s X and Y axes.

Using the “docker-stats” search as our data source, we’re going to configure our table columns by defining metrics aggregated by “Sum” for each of the three resource types: network, CPU, and memory.

Next, we’re going to cross-reference this information with the names of the top five containers.

Hit the Play button to see the end result.

You can save the new visualization for future use or create a dashboard with additional visualizations.

A Bonus Docker Dashboard!

To make the deal even sweeter, we at Logz.io have put together a Docker dashboard that contains a number of useful visualizations.

To install the dashboard, select the ELK Apps tab in the user interface and search for Docker (ELK Apps is a free gallery of pre-defined and customized Kibana searches, visualizations and dashboards). Click the Install button for the Docker dashboard ELK App.
