Docker Stats Monitoring: Taking Dockbeat for a Ride

There is no silver bullet. This is how I always answer those asking about the best logging solution for Docker. A bit pessimistic of me, I know. But what I’m implying with that statement is that there is no perfect method for gaining visibility into containers. Dockerized environments are distributed, dynamic, and multi-layered in nature, so they are extremely difficult to log.

That’s not to say that there are no solutions — to the contrary. From Docker’s logging drivers to logging containers to using data volumes, there are plenty of ways to log Docker, but all have certain limitations or pitfalls. Logz.io users, for example, rely on a dedicated container that acts as a log collector, pulling Docker daemon events, stats, and logs into our ELK Stack (Elasticsearch, Logstash, and Kibana).

That’s why I was curious to hear about the first release of Dockbeat (called Dockerbeat prior to Docker’s new repo naming conventions) — the latest addition to Elastic’s family of beats, a group of log collectors developed for different environments and purposes. Dockbeat was contributed by the ELK community and focuses on using the docker stats API to push container resource usage metrics such as memory, IO, and CPU to either Elasticsearch or Logstash.

Below is a short review of how to get Dockbeat up and running as well as a few of my first impressions. My environment was a locally installed ELK Stack and Docker on Ubuntu 14.04.

Installing Dockbeat

To get Dockbeat up and running, you can either build the project yourself or use the binary release on the GitHub repository. The former requires some additional setup steps (installing Go and Glide, for starters), and I eventually opted for the latter. It took just a few steps and proved to be pretty painless (an additional method is to run Dockbeat as a container — see the repo’s readme for more details).
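For reference, the containerized route looks roughly like the sketch below. Treat it only as a sketch: the image name (ingensi/dockbeat) and the in-container config path are assumptions based on the repository’s namespace, so check the readme for the exact instructions. The rest of this post follows the binary route.

# mounting the Docker socket gives Dockbeat access to the docker stats API;
# the image name and the config path inside the container are assumptions
$ docker run -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD/dockbeat.yml:/etc/dockbeat/dockbeat.yml" \
    ingensi/dockbeat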

You will first need to download the source code and release package from: https://github.com/Ingensi/dockbeat/releases

$ git clone https://github.com/Ingensi/dockbeat.git
$ wget https://github.com/Ingensi/dockbeat/releases/download/v1.0.0/dockbeat-v1.0.0-x86_64

Configuring and Running Dockbeat

Before you start Dockbeat, there is the matter of configuration. Since I used a vanilla installation with Docker and ELK installed locally, I did not need to change a thing in the supplied dockbeat.yml file.

Dockbeat is configured to connect to the default Docker socket:

dockbeat:
   socket: ${DOCKER_SOCKET:unix:///var/run/docker.sock}

My local Elasticsearch was already defined in the output section:

  ### Elasticsearch as output
  elasticsearch:
    hosts: ["localhost:9200"]

Of course, if you’re using a remotely installed Elasticsearch or Logstash instance, you will need to change these configurations accordingly.
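For example, here is a minimal sketch of what the output section could look like when shipping to a remote Logstash instance instead; the hostname is a placeholder, and 5044 is the conventional Beats input port on the Logstash side:

output:
  ### Logstash as output
  logstash:
    # replace the placeholder host with your own Logstash instance
    hosts: ["logstash.example.com:5044"]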

Before you start Dockbeat, you will need to grant execute permissions to the binary file:

$ chmod +x dockbeat-v1.0.0-x86_64

Then, to start Dockbeat, use the following run command:

$ ./dockbeat-v1.0.0-x86_64 -c dockbeat-1.0.0/dockbeat.yml -v -e

Please note: I used the two optional flags (‘-v’ for verbose logging and ‘-e’ to log to stderr) so that I could see the output of the run command, but they are not mandatory.

Dockbeat then runs, and if all goes as expected, you should see the following lines in the debug output:

2016/09/18 09:13:40.851229 beat.go:173: INFO dockbeat successfully setup. Start running.
2016/09/18 09:13:40.851278 dockbeat.go:196: INFO dockbeat%!(EXTRA string=dockbeat is running! Hit CTRL-C to stop it.)
2016/09/18 09:14:47.101231 dockbeat.go:320: INFO dockbeat%!(EXTRA string=Publishing %v events, int=5)
...

It seemed like all was working as expected, so my next step was to query Elasticsearch:

$ curl localhost:9200/_cat/indices

The output lists the Elasticsearch indices, including the newly created Dockbeat index:

yellow open dockbeat-2016.09.18 5 1 749 0 773.7kb 773.7kb
yellow open .kibana             1 1   1 0   3.1kb   3.1kb
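As an additional sanity check, you can pull back a single raw document to see exactly what Dockbeat is shipping (the wildcard matches the dated index created above):

$ curl 'localhost:9200/dockbeat-*/_search?size=1&pretty'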

The next step is to define the index pattern in Kibana. After clicking on the Settings tab in Kibana, I entered dockbeat-* in the index name/pattern field and selected @timestamp as the time-field name to create the new index pattern:

[Screenshot: configuring the Kibana index pattern]

Now, all the metrics collected by Dockbeat and stored by Elasticsearch are listed in the Discover tab in Kibana:

[Screenshot: Dockbeat data indexed in Elasticsearch, viewed in Kibana]

Analyzing Container Statistics

The metrics collected by Dockbeat via the docker stats API are pretty extensive and include container attributes, CPU usage, network statistics, memory statistics, and IO access statistics.

To gain some more visibility into the log messages, I added some of the fields from the menu shown on the left in the above screenshot. I started with the ‘type’ and ‘containerName’ fields and then explored some of the other indexed fields.

Querying options in Kibana are varied — you can start with a free-text search for a specific string or use a field-level search. Field-level searches let you look for specific values within a given field using the fieldName: value syntax, as in the example below.

To search for logs for a specific container, I entered the following query in Kibana:

containerName: jolly_yalow

The result:

[Screenshot: Dockbeat logs for a single container]

Free-text search is the simplest query method, but because we are analyzing metrics, it is not the most effective approach unless you are looking for a specific container name.
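Field-level queries can also be combined to narrow things down further. The queries below are illustrative; the exact values of the ‘type’ field (cpu, memory, net, and so on) are easiest to confirm by browsing your own indexed documents:

type: memory AND containerName: jolly_yalow
type: net AND NOT containerName: jolly_yalow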

Visualizing Container Stats

Visualizations are one of the biggest advantages of working with ELK, and the sky’s the limit as far as the number of container log visualizations is concerned. You can slice and dice the stats any way you like — it all boils down to what data you want to see.

Building visualizations in Kibana does require a certain amount of expertise, but the result is worthwhile: you can end up with a nice monitoring dashboard for your Docker containers.

Here are a few examples.

Number of Containers

Sometimes it’s easy to lose track of the number of containers we have running. In a production environment, this number can easily reach twenty or more per host. To see a unique count of the running containers, I used the Metric visualization to display a unique count of the ‘containerName’ field:

[Screenshot: number of Dockbeat-monitored containers]
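If you want to sanity-check this number outside of Kibana, roughly the same figure can be produced with a cardinality aggregation directly against Elasticsearch. This is just a sketch, and it assumes that ‘containerName’ is indexed as a single (not_analyzed) term:

$ curl -XPOST 'localhost:9200/dockbeat-*/_search?pretty' -d '
{
  "size": 0,
  "aggs": {
    "running_containers": {
      "cardinality": { "field": "containerName" }
    }
  }
}'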

Average CPU/Memory/Network Over Time

Another example is to create a line chart that visualizes the average resource consumption per container over time. The configuration below is for network stats, but the same configuration can be applied to all types of metrics — all you have to do is change the field that is used to aggregate the Y Axis in the chart.

The configuration:

[Screenshot: configuration for average CPU/memory/network over time]

The resulting line chart:

[Screenshot: Dockbeat line chart]
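For reference, the equivalent query against Elasticsearch is a terms aggregation on the container name with a nested date histogram and average. This is a sketch only: the ‘type’ value and the metric field name (net.rxBytes_ps) are placeholders, so substitute the actual field names from your own Dockbeat mapping:

$ curl -XPOST 'localhost:9200/dockbeat-*/_search?pretty' -d '
{
  "size": 0,
  "query": { "term": { "type": "net" } },
  "aggs": {
    "per_container": {
      "terms": { "field": "containerName" },
      "aggs": {
        "over_time": {
          "date_histogram": { "field": "@timestamp", "interval": "1m" },
          "aggs": {
            "avg_rx": { "avg": { "field": "net.rxBytes_ps" } }
          }
        }
      }
    }
  }
}'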

Compiling these visualizations into one dashboard is easy — it’s simply a matter of selecting the Dashboard tab and then manually adding the visualizations.

The Bottom Line

Dockbeat was easy to install and get up and running — it worked right out of the box in my local sandbox environment and required no extra configurations! If you’re looking for a lightweight monitoring tool, Dockbeat is a good way to start.

As I said in the introduction, there is no perfect logging solution for Docker. Containers produce other useful output, including Docker daemon events (such as attach, commit, and copy) and Docker logs (where available), and these are not collected by Dockbeat even though they are needed for a more holistic monitoring view of a Dockerized environment.

Looking ahead, this seems like the logical next addition for this community beat.
