This guide explains how to build Docker containers and then explores how to use Filebeat to send logs to Logstash before storing them in Elasticsearch and analyzing them with Kibana.

The popular open source project Docker has completely changed service delivery by allowing DevOps engineers and developers to use software containers to house and deploy applications within single Linux instances automatically.

The ELK Stack is a collection of three open-source products: Elasticsearch, Logstash, and Kibana. Elasticsearch is a NoSQL database that is based on the Lucene search engine. Logstash is a log pipeline tool that accepts inputs from various sources, executes different transformations, and exports the data to various targets. Kibana is a visualization layer that works on top of Elasticsearch.

Filebeat is an application that quickly ships data directly to either Logstash or Elasticsearch. Filebeat is also useful because it helps to distribute loads from single servers by separating where logs are generated from where they are processed. In addition, Filebeat reduces CPU overhead by using prospectors to locate log files in specified paths, harvesters to read each log file, and a spooler that aggregates the data and sends it to the output that you have configured.

This guide, from a predictive, cloud-based log management platform built on top of the open-source ELK Stack, explains how to build Docker containers and then explores how to use Filebeat to send logs to Logstash before storing them in Elasticsearch and analyzing them with Kibana.

Creating a Dockerfile

This section will outline how to create a Dockerfile, assemble images for each ELK Stack application, configure the Dockerfile to ship logs to the ELK Stack, and then start the applications. If you are unsure about how to create a Dockerfile script, you can learn more here.

Docker containers are built from images that can range from basic operating system data to information from more elaborate applications. Each command that you write creates a new image that is layered on top of the previous (or base) image(s). You can then create new containers from your base images.

Let’s say that you need to create a base image (we’ll call it java_image) to pre-install a few required libraries for your ELK Stack. First, let’s make sure that you have all of the necessary tools, environments, and packages in place. For the sake of this article, you will use Ubuntu:16.10 with OpenJDK 7 and a user called esuser to avoid starting Elasticsearch as the root user.

The Dockerfile for java_image:
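A minimal sketch of such a Dockerfile follows. Note that Ubuntu 16.10's repositories ship OpenJDK 8 rather than 7, so openjdk-8-jdk is used here; adjust if you need a different JDK.

```dockerfile
# Base image for the ELK Stack containers
FROM ubuntu:16.10

# Install Java and wget (wget is needed later to download the ELK packages).
# Ubuntu 16.10 provides OpenJDK 8; the package name is an assumption.
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk wget && \
    rm -rf /var/lib/apt/lists/*

# Create a dedicated user so that Elasticsearch does not run as root
RUN useradd -ms /bin/bash esuser
```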

By running docker build -t java_image ., Docker will create an image with the custom tag java_image (-t sets the tag for the image).

To ensure that your image has been created successfully, type docker images into your terminal window; java_image will appear in the list that the command produces.

It will look something like this:
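The output of docker images is roughly the following; the IDs, dates, and sizes shown here are placeholders:

```
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
java_image          latest              <image_id>          2 minutes ago       <size>
ubuntu              16.10               <image_id>          3 weeks ago         <size>
```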

Assembling an Elasticsearch Image

After building the base image, you can move on to Elasticsearch and use java_image as the base for your Elasticsearch image. The rest of the Dockerfile will download Elasticsearch, unpack it, configure the permissions for the Elasticsearch folder, and then start Elasticsearch.

Because of the nature of Docker containers, once a container is closed, the data inside it is no longer available, and running the Docker image again will create a brand new container. You do not want to go into each new running container and manually configure the service. Instead, create a specific configuration for each service (if it requires one) and pass the configuration into your image using the ADD Dockerfile command.

Before you start to create the Dockerfile, you should create an elasticsearch.yml file. I usually do this in the same location as the Dockerfile for the relevant image:
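A minimal elasticsearch.yml sketch; the values below are reasonable choices for this setup, not a definitive configuration:

```yaml
# Minimal Elasticsearch configuration (illustrative)
cluster.name: elk-cluster
# Bind to all interfaces so that other containers can reach Elasticsearch
network.host: 0.0.0.0
http.port: 9200
```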

The Dockerfile for the Elasticsearch image (remember java_image is your base image) should look like this:
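A sketch of such a Dockerfile; the Elasticsearch version and the /opt/elasticsearch install path are assumptions:

```dockerfile
# Build on the base image created earlier
FROM java_image

# Download and unpack Elasticsearch (the version is an assumption; adjust as needed)
ENV ES_VERSION=5.2.2
RUN wget -q https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-${ES_VERSION}.tar.gz && \
    tar -xzf elasticsearch-${ES_VERSION}.tar.gz && \
    mv elasticsearch-${ES_VERSION} /opt/elasticsearch && \
    rm elasticsearch-${ES_VERSION}.tar.gz

# Ship the configuration created above into the image
ADD elasticsearch.yml /opt/elasticsearch/config/elasticsearch.yml

# Give esuser ownership so that Elasticsearch does not run as root
RUN chown -R esuser:esuser /opt/elasticsearch
USER esuser

EXPOSE 9200
CMD ["/opt/elasticsearch/bin/elasticsearch"]
```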

Docker creates an Elasticsearch image by executing a similar command to the one for java_image:
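For example (the es_image tag is an assumption; the original tag is not shown):

```shell
docker build -t es_image .
```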

Assembling a Logstash Image

Logstash image creation is similar to Elasticsearch image creation (as it is for all Docker image creations), but the steps in creating a Dockerfile vary. Here, I will show you how to configure a Docker container that uses NGINX installed on a Linux OS to track the NGINX and Linux logs and ship them out. This means that you will have to configure Logstash to receive these logs and then pass them onto Elasticsearch.

As mentioned above, we are using Filebeat to separate where logs are generated from where they are processed, and to ship the data quickly. So, we need Filebeat to send logs from their points of origin (NGINX, Apache, MySQL, Redis, and so on) to Logstash for processing.

Configure Logstash with the Beats input plugin (Beats is a platform that lets you build customized data shippers for Elasticsearch) so that it listens on port 5000:
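A minimal version of that input section:

```
input {
  beats {
    port => 5000
  }
}
```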

The output is easy to guess. You want Elasticsearch to store your logs, so the Logstash output configuration will be this:
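A minimal version of that output section (the es hostname is resolved later via Docker linking):

```
output {
  elasticsearch {
    hosts => ["es:9200"]
  }
}
```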

Do not be confused by the es:9200 inside hosts or by how Logstash will know the IP address for es. The answer will become clear when you start the Docker instances.

The complex part of this configuration is the filtering. You want to log NGINX and Linux logs. Filebeat will monitor the access.log and error.log files for NGINX and syslog files for Linux logs. I will explain how Filebeat monitors these files below.

You can learn more in our guide to parsing NGINX logs with Logstash. For the purposes of this guide, you will use the same Logstash filter. (For Linux logs, however, use the default pattern for syslog logs in Logstash — SYSLOGLINE — for filtering.)

The final filter configuration is this:
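A sketch of such a filter. The nginx-access and syslog type names are assumptions that must match the types set later in the Filebeat configuration; COMBINEDAPACHELOG is the grok pattern commonly used for NGINX access logs, and the geoip filter resolves IP locations for the map shown later:

```
filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}
```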

For the moment, it does not matter how type and input_type fit in — it will become clear when you start to configure Filebeat.

The complete logstash.conf looks like this:
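A sketch of the complete file; the type names and grok patterns are assumptions, as noted:

```
input {
  beats {
    port => 5000
  }
}

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
  }
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["es:9200"]
  }
}
```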

The Dockerfile for the Logstash image is this:
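A sketch, assuming the same download-and-unpack approach as the Elasticsearch image (the version and paths are assumptions):

```dockerfile
FROM java_image

# Download and unpack Logstash (the version is an assumption; adjust as needed)
ENV LS_VERSION=5.2.2
RUN wget -q https://artifacts.elastic.co/downloads/logstash/logstash-${LS_VERSION}.tar.gz && \
    tar -xzf logstash-${LS_VERSION}.tar.gz && \
    mv logstash-${LS_VERSION} /opt/logstash && \
    rm logstash-${LS_VERSION}.tar.gz

# Ship the pipeline configuration into the image
ADD logstash.conf /opt/logstash/logstash.conf

EXPOSE 5000
CMD ["/opt/logstash/bin/logstash", "-f", "/opt/logstash/logstash.conf"]
```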

Now, build the Logstash image with the same command that you had used for the previous image:
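For example (the logstash_image tag is an assumption):

```shell
docker build -t logstash_image .
```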

Creating a Kibana Configuration File

Next, create a Kibana configuration file, kibana.yml, next to your Dockerfile. Keeping configuration files next to the Dockerfile makes images easier to build, but in theory you can store configuration files (or any type of file) anywhere. Just make sure that the locations referenced inside the Dockerfile are stated properly.

A complete kibana.yml configuration file is this:

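A minimal sketch; the settings below use Kibana 5.x names, and the es hostname again relies on the Docker linking shown later:

```yaml
# Minimal Kibana configuration (illustrative)
server.port: 5601
server.host: "0.0.0.0"
# "es" resolves via the Docker --link used when starting the container
elasticsearch.url: "http://es:9200"
```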

Now, you can build the Kibana image with this:
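For example, assuming a Kibana Dockerfile built in the same download-and-unpack style as the previous images (the kibana_image tag is an assumption):

```shell
docker build -t kibana_image .
```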

Booting the ELK Stack

Once the ELK Stack configuration is complete, you can start it. First, start with Elasticsearch:
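For example, naming the container es so that later containers can link to it under the hostname used in the Logstash and Kibana configurations:

```shell
docker run -d --name es es_image
```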

If, for example, you have to stop and restart the Elasticsearch Docker container due to an Elasticsearch failure, you will lose data. By default, Docker filesystems are temporary and will not persist data if a container is stopped and restarted. Luckily, Docker provides a way to share volumes between containers and host machines (or any volume that can be accessed from a host machine).

The command to keep data persistent is:
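A sketch; the host path /home/user/es_data and the container data path are assumptions:

```shell
# host path first, then container path, separated by a colon
docker run -d --name es -v /home/user/es_data:/opt/elasticsearch/data es_image
```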

The host path always comes first in the command, and a colon separates it from the container path. After executing the run command, Docker prints the generated container ID to your terminal.

Next, start Logstash:
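For example (container names are assumptions):

```shell
docker run -d --name logstash --link es:es logstash_image
```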

Notice that there’s a new flag in the code: --link.

When you configured the Logstash output earlier, the property es:9200 was inside hosts (where Logstash is supposed to send logs). We promised to explain how Docker would resolve this host without touching the Linux network configuration.

The --link es:es flag is the answer: it links this container to the es container so that the hostname es resolves to it. You can learn more about container linking in Docker’s documentation.

The last piece in our stack is Kibana:
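For example:

```shell
docker run -d --name kibana --link es:es -p 5601:5601 kibana_image
```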

This, too, comes with a new flag, -p, which publishes container port 5601 on host machine port 5601.

After Kibana is started successfully, you can access it using: http://localhost:5601. However, your Elasticsearch is still empty, so we need to fill it.

The missing pieces to the puzzle are NGINX instances (in a Linux OS) that will generate NGINX logs together with Linux logs. Filebeat will then collect and ship the logs to Logstash.

Here’s how to create your Filebeat image. First, a Dockerfile for Filebeat looks like this:
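A sketch; the Filebeat version and install path are assumptions:

```dockerfile
FROM ubuntu:16.10

# Download and unpack Filebeat (the version is an assumption; adjust as needed)
ENV FB_VERSION=5.2.2
RUN apt-get update && apt-get install -y wget && \
    wget -q https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FB_VERSION}-linux-x86_64.tar.gz && \
    tar -xzf filebeat-${FB_VERSION}-linux-x86_64.tar.gz && \
    mv filebeat-${FB_VERSION}-linux-x86_64 /opt/filebeat && \
    rm filebeat-${FB_VERSION}-linux-x86_64.tar.gz

# Ship the Filebeat configuration into the image
ADD filebeat.yml /opt/filebeat/filebeat.yml

CMD ["/opt/filebeat/filebeat", "-e", "-c", "/opt/filebeat/filebeat.yml"]
```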

Whereas the filebeat.yml looks like this:
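A sketch using the Filebeat 5.x syntax; the paths and document_type values are assumptions that must line up with the types used in the Logstash filter:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/nginx/access.log
      - /var/log/nginx/error.log
    document_type: nginx-access
  - input_type: log
    paths:
      - /var/log/syslog
    document_type: syslog

# Remember which files were already read, so restarts do not re-ship old lines
filebeat.registry_file: /opt/filebeat/registry

output.logstash:
  # "logstash" resolves via --link when the container is started
  hosts: ["logstash:5000"]
```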

The answer to the question in the Logstash configuration section about the sources of the type and input_type properties is that Filebeat attaches the types defined in its configuration to each log entry that it ships. The registry_file setting stores the state of recently read files, which is useful when logs are persistent.

Use this command to build the Filebeat image:
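For example (the filebeat_image tag is an assumption):

```shell
docker build -t filebeat_image .
```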

Creating an NGINX Image

The last step is to create an NGINX image. However, you need to configure NGINX before you start:
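A minimal nginx.conf sketch; the first line keeps NGINX in the foreground:

```
daemon off;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
  }
}
```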

The most important part of this configuration is the first line, which tells NGINX not to daemonize after starting (otherwise the container will stop as soon as the foreground process exits).

The Dockerfile for NGINX is this:
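A sketch:

```dockerfile
FROM ubuntu:16.10

RUN apt-get update && apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/*

# Replace the default configuration with the one created above
ADD nginx.conf /etc/nginx/nginx.conf

EXPOSE 80
CMD ["nginx"]
```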

And finally, the NGINX image:
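For example (the nginx_image tag is an assumption):

```shell
docker build -t nginx_image .
```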

Now that the last piece of the puzzle is complete, it’s time to hook it up to the ELK Stack that you installed earlier:
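For example (container names are assumptions; note that these plain versions do not yet share any log files between the containers):

```shell
docker run -d --name nginx nginx_image
docker run -d --name filebeat --link logstash:logstash filebeat_image
```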

When you work with persistent logs, you need the -v flag. This is called logging via data volumes so that the modified versions of commands listed above are these:
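A sketch; the host log directory /home/user/logs/nginx is an assumption. Both containers mount the same host directory, so the files NGINX writes are the files Filebeat reads:

```shell
docker run -d --name nginx -v /home/user/logs/nginx:/var/log/nginx nginx_image
docker run -d --name filebeat --link logstash:logstash \
    -v /home/user/logs/nginx:/var/log/nginx filebeat_image
```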


Now, what should you do with Filebeat? Do you need two instances, or will one suffice? The answer is straightforward: one Filebeat instance is enough, because you can mount different volume locations from your host machine, which is sufficient to keep the logs of the NGINX instances separate.

Let’s say you start two instances with NGINX containers with these two commands, and one mapped with port 8080 and the other with port 8081. After some time, these two instances will generate enough logs, and we can see them in Kibana here: http://localhost:5601.
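For example (names and host paths are assumptions):

```shell
docker run -d --name nginx1 -p 8080:80 -v /home/user/logs/nginx1:/var/log/nginx nginx_image
docker run -d --name nginx2 -p 8081:80 -v /home/user/logs/nginx2:/var/log/nginx nginx_image
```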

Using Docker Log Data

Here are a few of the ways that you can use the data.


Figure 1: Kibana’s discover section after getting NGINX instances up and running

You can create a few charts on Kibana’s Visualize page and collect them in a customized dashboard.


Figure 2: Pie charts that represent the number of browser agents

You can execute a query to track different browser agents that have visited published sites via Docker containers.


Figure 3: Map of IP address locations

Kibana can create a map because Logstash resolves a location for each IP address within the logs before sending them to Elasticsearch.


Figure 4: IPs that visited the published site

The data in Figure 3 can be displayed in table form, which can be used to check and filter for server abuse.


Figure 5: A customized dashboard built from Figure 2 and Figure 4

Figure 5 represents one possible way to customize your dashboard. This example was based on the charts in Figure 2 and Figure 4. (You can see our post on how to create custom Kibana visualizations.)

Important note: custom, predefined dashboards are available in our free ELK Apps library. Our guide has more information about them.

How to Log Docker Container Activity

Now that you’re more familiar with Docker, you can start logging container activity. You can start by examining the basics of Docker’s Remote API, which provides a lot of useful information about containers that can be used for processing. Then, you can monitor container activity (events) and analyze statistics to detect instances with short lifespans.

By default, the Docker daemon listens on unix:///var/run/docker.sock, and you must have root access to interact with it. Docker can also be bound to other ports or hosts. Learn more.

Next, dump your Docker events into your ELK Stack by streaming data from the /events Docker endpoint. While there are probably several ways to do this, I will tell you two:

  • Download the Docker-API library (available in Python and Ruby) and create a small script that streams data from the /events Docker endpoint, then either redirects the data to the file logging system that you created or sends it directly to the exposed Logstash port.
  • Bind Docker to port 2375 by following the instructions here. (Ubuntu users have a different set of instructions.) Then use the wget command to stream data into shared volumes that will be monitored and shipped by Logstash to Elasticsearch. This second way is much easier than the first.
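The scripting option can be sketched in Python. The standard-library urllib is used here instead of a dedicated Docker client library, and the script assumes the TCP binding on port 2375 described in the second option; the URL and output file name are assumptions:

```python
import json
import urllib.request

# Assumes the Docker daemon has been bound to TCP port 2375 (see below)
DOCKER_EVENTS_URL = "http://localhost:2375/events"


def parse_event(line: bytes) -> dict:
    """Decode one JSON event line from the /events stream."""
    return json.loads(line.decode("utf-8"))


def stream_events(url: str = DOCKER_EVENTS_URL, out_path: str = "events") -> None:
    """Stream Docker events and append them, one JSON object per line,
    to a file that Logstash can monitor."""
    with urllib.request.urlopen(url) as response, open(out_path, "ab") as out:
        for line in response:
            if line.strip():
                event = parse_event(line)
                out.write((json.dumps(event) + "\n").encode("utf-8"))
```

Call stream_events() against a running daemon to start appending events to the shared file.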

The starting command:
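For example, run this in the directory shared with the Logstash container:

```shell
wget http://localhost:2375/events
```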

The wget command will create a file named events in the working directory. However, if you want to give the file another name or location, the -O flag will do the job.

It’s important to know where wget streams data because you will have to share the file with your container.

This is a modified Logstash configuration:
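A sketch; the path to the shared events file is an assumption, and the json codec matches the JSON-per-line format of the Docker events stream:

```
input {
  file {
    # the shared file that wget streams Docker events into
    path => "/var/log/docker/events"
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["es:9200"]
  }
}
```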

Starting the ELK Stack

Now you have everything that you need to monitor Docker events, so it is time to get your ELK Stack up and running.

The command to start ELK is the same as above:
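For example (names and paths follow the earlier assumptions, with the Docker events directory shared with Logstash):

```shell
docker run -d --name es -v /home/user/es_data:/opt/elasticsearch/data es_image
docker run -d --name logstash --link es:es -v /var/log/docker:/var/log/docker logstash_image
docker run -d --name kibana --link es:es -p 5601:5601 kibana_image
```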

After you enter a few commands to start and stop your containers and configure the shipped Docker event logs to Elasticsearch, Kibana will provide you with data. If you visit http://localhost:5601, you should see a similar screen to this one:


Figure 6: The Kibana discover section with fresh data from a Docker events stream

The main challenge here is to detect Docker instances with short lifespans. Before creating a query to solve this problem, we have to define what counts as “short” because it is a subjective term that means different things in different types of systems. So, a threshold needs to be defined first.

Now that you’ve equipped Elasticsearch with enough information to calculate these statistics, simply subtract the “start” event’s timestamp from the “die” event’s timestamp for a particular container ID. The result represents the lifespan of your Docker instance.
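The arithmetic itself is simple. A small Python illustration of pairing events per container ID (the event records below are illustrative placeholders, not real Docker output):

```python
# Pair "start" and "die" events per container ID and compute lifespans.
def lifespans(events):
    starts, spans = {}, {}
    for e in events:
        if e["status"] == "start":
            starts[e["id"]] = e["time"]
        elif e["status"] == "die" and e["id"] in starts:
            # die timestamp minus start timestamp, in seconds
            spans[e["id"]] = e["time"] - starts[e["id"]]
    return spans


# Placeholder events for illustration only
sample = [
    {"id": "c1", "status": "start", "time": 1487000000},
    {"id": "c1", "status": "die",   "time": 1487000042},
    {"id": "c2", "status": "start", "time": 1487000100},  # no "die" event yet
]
print(lifespans(sample))  # {'c1': 42}
```

Containers with no “die” event yet are simply omitted, which mirrors the NULL live_session values discussed below.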

This is a complex query that requires the introduction of a scripted metric aggregation. A scripted metric essentially allows you to define map and reduce jobs to calculate a metric, which in this case means subtracting each container’s “start” timestamp from its “die” timestamp.

A query that calculates a container’s lifespan looks like this:
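A sketch of the shape of such a query; the id, status, and time field names depend on your mapping, and the exact script syntax varies with the Elasticsearch version:

```json
{
  "size": 0,
  "aggs": {
    "containers": {
      "terms": { "field": "id.keyword" },
      "aggs": {
        "live_session": {
          "scripted_metric": {
            "init_script": "params._agg.times = [:]",
            "map_script": "params._agg.times[doc['status.keyword'].value] = doc['time'].value",
            "combine_script": "return params._agg.times",
            "reduce_script": "Map t = [:]; for (a in params._aggs) { t.putAll(a) } if (t.containsKey('start') && t.containsKey('die')) { return t['die'] - t['start'] } return null"
          }
        }
      }
    }
  }
}
```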

This query can be applied to the structures that Logstash previously shipped to Elasticsearch.

The result of the query looks like this:

Notice how the containers with live_session.value NULL either have not died yet or could be missing part of the “start/die” event pair.

The query above calculates the life span (in seconds) for each container in Elasticsearch. The query can be modified to dump containers with a certain lifespan by simply changing the last condition in the reduce script. If you are interested in getting statistics for a particular timestamp period, the filter property can be modified to contain the timestamp range.

Ideas for Future Improvements

There are two approaches to logging. The one described above uses data volumes, which means that containers share a dedicated space on a host machine to generate logs. This is a pretty good approach because the logs are persistent and can be centralized, but moving containers to another host can be painful and potentially lead to data loss.

The second approach uses a Docker logging driver. There are several ways to accomplish this, such as using the Fluentd logging driver, in which Docker containers forward logs to Docker, which then uses the logging driver to ship them to Elasticsearch. Another approach uses syslog/rsyslog, which removes the shared data volumes from the equation and gives containers the flexibility to be moved around easily.