
This series describes how to integrate the ELK Stack with Bro for network analysis and monitoring. Part 1 explains how to set up the integration, and Part 2 will show some examples of how to analyze and visualize the data in Kibana.

It’s no secret that cyber attacks are constantly on the rise. Ransomware, DDoS, data breaches — you name it. Organizations are scrambling to employ a variety of protocols and security strategies to bolster their ability to defend themselves from these attacks, and logging and monitoring play a key role in these attempts.

A large number of solutions and tools have been developed to help secure the different layers comprising an IT architecture, from the infrastructure level up to the application level. This series focuses on exploring the integration of one such tool, Bro, with the ELK Stack.


What is Bro?

Developed by its original author, Vern Paxson, together with a team of Berkeley researchers, Bro is a feature-rich and powerful open source network security monitor that tracks network traffic in real time.

Out of the box, Bro gives you immediate insight into all network activity, including file types, deployed software, and networked devices, just like any Network Intrusion Detection System (NIDS). Its real power, however, lies in its policy scripts and analyzers, which allow users to analyze the data, identify patterns, and take event-based action. Additionally, Bro plays nicely with external tools (e.g. Critical Stack) that help extract more insights from the monitored data.

As one would expect from such a solution, Bro logs absolutely everything. Depending on how you use Bro, you could potentially be writing to more than 50 different log files. I’ll dive deeper into Bro’s logging features later, but even a Bro deployment monitoring a small environment generates a huge amount of log data.

That’s where the ELK Stack can help, by allowing users to centralize Bro logging in one location and by providing analysis and visualization tools. In this first part of the series, we will explore how to hook up Bro logs with the ELK Stack. In the second part, we will provide tips and best practices for analyzing the data.

Setting up Bro

If you’ve already set up Bro, feel free to skip to the next section. If not, here are the instructions for installing Bro, from source, on an Ubuntu 16.04 server. The whole process should take about 15-20 minutes.

Preparing your environment

Start by updating your system:
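```
# refresh package lists and upgrade installed packages
sudo apt-get update
sudo apt-get upgrade -y
```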

Bro has quite a few dependencies that need to be installed:
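```
# build tools and libraries listed in the Bro documentation for Debian/Ubuntu
sudo apt-get install -y cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
```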

Installing Bro

We are now ready to install Bro from source.

First, clone the Bro repository from GitHub:
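```
# --recursive pulls in the submodules the build requires
# (the project has since been renamed Zeek; GitHub redirects bro/bro accordingly)
git clone --recursive https://github.com/bro/bro
```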

Access the directory and run Bro’s configuration:
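```
cd bro
./configure
```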

It’s time to build the program (this will take quite a while so you might want to go get a coffee):
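```
make
```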

Once the build process completes, install Bro with:
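```
sudo make install
```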

Bro is installed in the /usr/local/bro directory, and the last step to complete the installation process is to export the /usr/local/bro/bin directory into your $PATH:
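```
# add this line to your ~/.bashrc (or equivalent) to make it persistent
export PATH=/usr/local/bro/bin:$PATH
```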

Configuring Bro

Bro has a number of configuration files, all located under the /usr/local/bro/etc directory. Your first tweak needs to be made in the node.cfg file, which is used to configure which servers to monitor.

By default, Bro is configured to operate in standalone mode, which should suffice for a local installation. Still, take a look at the Bro section in the file and make sure the interface matches the public interface of your server (on Linux, you can run ifconfig to verify).
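In a default standalone installation, the relevant section of node.cfg looks something like this (eth0 below is a placeholder; enter your own interface name):

```
[bro]
type=standalone
host=localhost
interface=eth0
```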

Next, the networks.cfg file is where you configure the IP networks of the servers you wish to monitor.

Open the file:
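```
# or use your editor of choice
sudo vim /usr/local/bro/etc/networks.cfg
```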

Delete the existing entries, and enter the public and private IP space of your server (on Linux, use ip addr show to check your network addresses).

Your file should look something like this:
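```
203.0.113.0/24          Public IP space
192.168.1.0/24          Private IP space
```

(The addresses above are documentation placeholder ranges; substitute your server’s actual networks. Each line pairs a network prefix with a free-text description.)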

Logging and mailing are configured in the broctl.cfg file. Here, all we will do is enter a valid email address to receive notifications from Bro:
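```
# in /usr/local/bro/etc/broctl.cfg -- recipient address for all emails
# sent out by Bro and BroControl (admin@example.com is a placeholder)
MailTo = admin@example.com
```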

Running Bro

Finally, run Bro with the following command (also used to subsequently apply configuration changes):
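```
# installs the configuration and (re)starts Bro; run it again after any config change
broctl deploy
```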

To make sure all is running as expected, use:
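```
broctl status
```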

Bro logging 101

As mentioned above, one of Bro’s strongest features is its logging capabilities. Bro logs everything. You can take a look at some of the provided log files in the Bro documentation. There are also additional logs that may be created as a result of implementing an integration with an external plugin (e.g. Critical Stack).

By default, all Bro logs are written to /usr/local/bro/logs/current (on Linux) and are rotated on a daily basis. Take a look:
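```
$ ls /usr/local/bro/logs/current
conn.log  dns.log  files.log  http.log  loaded_scripts.log  packet_filter.log
ssl.log   stats.log  stderr.log  stdout.log  weird.log
```

(The exact files you see depend on the traffic Bro observes.)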

Take a closer look at a sample log:
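Here, for example, are the header lines of a conn.log (field and type lists truncated for brevity):

```
$ head conn.log
#separator \x09
#set_separator	,
#empty_field	(empty)
#unset_field	-
#path	conn
#open	2017-06-18-10-30-00
#fields	ts	uid	id.orig_h	id.orig_p	id.resp_h	id.resp_p	proto	service	duration	...
#types	time	string	addr	port	addr	port	enum	string	interval	...
```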

Each Bro log includes some definitions at the top of the file: a list of the different fields in the file and their types. Note the structure of the log file: the fields are separated by a tab ( \t ) character. You will see how this affects processing with Logstash later in the article.

Shipping the logs into ELK

To ship the logs into the ELK Stack, we will be using Filebeat to tail the log files and ship them via Logstash into a local Elasticsearch instance. For Logz.io users, I will also explain how to ship the logs directly from Filebeat into Logz.io.

Important! We will be focusing on only one Bro log file, the conn.log file. To collect all the log files, you will need to configure multiple pipelines.

Configuring Filebeat

Assuming you have Filebeat installed already, open your Filebeat configuration file:
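```
sudo vim /etc/filebeat/filebeat.yml
```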

In the Filebeat configuration file, define the path to the log files and your output destination.

The example below defines one prospector for Bro’s conn.log file, which contains data on a network’s TCP/UDP/ICMP connections. To track the other logs, you would need to add a prospector for each file in a similar fashion. The output is a local Logstash instance.
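A minimal sketch using the Filebeat 5.x prospector syntax (the document_type value, bro-conn, is what the Logstash filters below will match on):

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/bro/logs/current/conn.log
  document_type: bro-conn

output.logstash:
  hosts: ["localhost:5044"]
```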

Save your file.

Configuring Logstash

Next, we need to configure Logstash. I will provide the complete example at the end of the section, but let’s go over the different sections one by one.


The input section is pretty straightforward. We will use the beats input plugin and define the host and port accordingly:
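```
input {
  beats {
    port => 5044
  }
}
```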


The filter section is more complicated because of the structure of the different Bro log files.

We’ll start by using an if statement to tell Logstash to remove the comments at the top of the log file:
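```
# drop Bro's comment/header lines, which all begin with #
if [message] =~ /^#/ {
  drop { }
}
```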

Next, we will use the Logstash CSV filter plugin to process the data. We will first use an if statement to apply the subsequent filter configurations to the “bro-conn” log type (as defined in our Filebeat configuration file above).

The columns option allows us to define each column header as a field. The CSV filter plugin uses a comma as the default delimiter, but in this case we’re using the separator option to define a tab as the delimiter.
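Something like the following, where the column list mirrors conn.log’s #fields header and the separator value contains a literal tab character (the date, geoip, and mutate filters shown below also go inside this conditional):

```
if [type] == "bro-conn" {
  csv {
    columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents"]
    separator => "	"
  }
}
```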

Next, we’re going to define the provided ts field as our timestamp field using the date filter plugin and the built-in Unix pattern:
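```
date {
  match => ["ts", "UNIX"]
}
```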

To enrich our IPs with geographical information, we’re going to use the geoip filter plugin:
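```
geoip {
  # enrich based on the originating IP; the field name is as produced
  # by the csv filter above (renaming happens afterwards, in mutate)
  source => "id.orig_h"
}
```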

The final filter plugin we will use is the mutate plugin, to rename fields (Elasticsearch has issues with periods in field names) and define field types:
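```
mutate {
  # the renamed field names below are illustrative; any names without
  # periods will do
  rename => {
    "id.orig_h" => "id_orig_host"
    "id.orig_p" => "id_orig_port"
    "id.resp_h" => "id_resp_host"
    "id.resp_p" => "id_resp_port"
  }
  convert => {
    "id_orig_port" => "integer"
    "id_resp_port" => "integer"
    "orig_bytes"   => "integer"
    "resp_bytes"   => "integer"
    "duration"     => "float"
  }
}
```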


Like the input, the output section is also pretty simple. We point Logstash at the local Elasticsearch instance:
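```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```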

Complete configuration file

Here is the complete configuration file for inputting Bro conn logs from Filebeat, processing them, and sending them for indexing in Elasticsearch:
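```
input {
  beats {
    port => 5044
  }
}

filter {
  if [message] =~ /^#/ {
    drop { }
  }
  if [type] == "bro-conn" {
    csv {
      columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents"]
      separator => "	"
    }
    date {
      match => ["ts", "UNIX"]
    }
    geoip {
      source => "id.orig_h"
    }
    mutate {
      rename => {
        "id.orig_h" => "id_orig_host"
        "id.orig_p" => "id_orig_port"
        "id.resp_h" => "id_resp_host"
        "id.resp_p" => "id_resp_port"
      }
      convert => {
        "id_orig_port" => "integer"
        "id_resp_port" => "integer"
        "orig_bytes"   => "integer"
        "resp_bytes"   => "integer"
        "duration"     => "float"
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```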

Starting the data pipeline

Now that we’ve got all the pieces in place, it’s time to start the pipeline.

Start Logstash:
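```
# assuming the pipeline above was saved as /etc/logstash/conf.d/bro-conn.conf
sudo service logstash start
```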

Start Filebeat:
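```
sudo service filebeat start
```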

After a while, and if there are no errors in your configuration files, a new Logstash index will be created, and its pattern can be defined in Kibana:
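One way to verify that the index was created is to query Elasticsearch directly (logstash-* is the default index naming pattern):

```
curl -X GET "localhost:9200/_cat/indices?v"
```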


Enter the index pattern, select the timestamp field, and create the new index pattern.

Opening the Discover page in Kibana, you should see your Bro conn.log messages displayed. On the left, you will see a list of all the available fields for analysis, as processed by Logstash.


Shipping Bro logs into Logz.io

By making a few adjustments to the Filebeat configuration file, you can ship the logs directly into Logz.io’s hosted ELK Stack.

Before editing the Filebeat configuration file, though, we need to download an SSL certificate to encrypt the data:
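```
# certificate location current at the time of writing; check the
# Logz.io docs for up-to-date instructions
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/
```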

Open your Filebeat configuration file, and use the following configuration (retrieve your account token from the Settings page in Logz.io and enter it in the relevant field):
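```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /usr/local/bro/logs/current/conn.log
  fields:
    logzio_codec: plain
    # replace with the token from your Logz.io Settings page
    token: <ACCOUNT-TOKEN>
  fields_under_root: true

# listener host, port, and certificate path follow Logz.io's standard
# Filebeat instructions at the time of writing
output.logstash:
  hosts: ["listener.logz.io:5015"]
  ssl:
    certificate_authorities: ["/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt"]
```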

By the way, you can use the Filebeat wizard in the Log Shipping section in Logz.io to automatically generate a ready-made configuration file to use.

Restart Filebeat:
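```
sudo service filebeat restart
```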

Within a few moments, you should see your Bro logs appearing in Logz.io.


Note! Bro logs are currently not a supported log type in Logz.io, so to tweak the parsing, please use our 24/7 chat support.


Congrats! You’ve got your Bro logs indexed in Elasticsearch and available for analysis and visualization in Kibana. This leads us directly to the next part of this series, which will explain how to analyze the Bro log data.

To reiterate what was stressed above, the workflow here takes you through the steps for shipping one type of Bro log (conn.log). You will need to build multiple pipelines to ship the other log types. Also, logs are just the tip of the iceberg as far as Bro’s capabilities are concerned. Power users will want to dive deeper into the analyzers and policy scripts it supports.

Want to do more to protect your data and prevent DDoS attacks? Logz.io can help!

Daniel Berman is Product Evangelist at Logz.io. He is passionate about log analytics, big data, cloud, and family, and loves running, Liverpool FC, and writing about disruptive tech stuff. Follow him @proudboffin.