This series describes how to integrate the ELK Stack with Bro for network analysis and monitoring. Part 1 explains how to set up the integration; Part 2 will show some examples of how to analyze and visualize the data in Kibana.
It’s no secret that cyber attacks are constantly on the rise. Ransomware, DDoS, data breaches — you name it. Organizations are scrambling to employ a variety of protocols and security strategies to bolster their ability to defend themselves from these attacks, and logging and monitoring play a key role in these attempts.
A large number of solutions and tools have been developed to help secure the different layers comprising an IT architecture, from the infrastructure level up to the application level. This series focuses on the integration of one such tool, Bro, with the ELK Stack.
What is Bro?
Created by Vern Paxson and developed together with a team of Berkeley researchers, Bro is a feature-rich and powerful open source network security monitor that tracks network traffic in real time.
Out of the box, Bro gives you immediate insight into all network activity, including file types, deployed software, and networked devices, much like any Network Intrusion Detection System (NIDS). Its real power, however, lies in its policy scripts and analyzers, which allow users to analyze the data, identify patterns, and take event-based action. Additionally, Bro plays nicely with external tools (e.g. Critical Stack) that help extract more insights from the monitored data.
As one would expect from such a solution, Bro logs absolutely everything. Depending on how you use Bro, you could potentially be writing to more than 50 different log files. I’ll dive deeper into Bro’s logging features later, but even a Bro deployment monitoring a small environment generates a huge amount of log data.
That’s where the ELK Stack can be of help by allowing users to centralize Bro logging into one location, and providing analysis and visualization tools. In this first part of the series, we will explore how to hook up Bro logs with the ELK Stack. In the second part, we will provide tips and best practices to analyze the data.
Setting up Bro
If you’ve already set up Bro, feel free to skip to the next section. If not, here are the instructions for installing Bro, from source, on an Ubuntu 16.04 server. The whole process should take about 15-20 minutes.
Preparing your environment
Start by updating your system:
sudo apt-get update
Bro has quite a few dependencies that need to be installed first:
sudo apt-get install bison cmake flex g++ gdb make libmagic-dev libpcap-dev libgeoip-dev libssl-dev python-dev swig2.0 zlib1g-dev
Installing Bro
We are now ready to install Bro from source.
First, clone the Bro repository from GitHub:
git clone --recursive git://git.bro.org/bro
Access the directory and run Bro’s configuration:
cd bro
./configure
It’s time to build the program (this will take quite a while so you might want to go get a coffee):
make
Once the build process completes, install Bro with:
sudo make install
Bro is installed in the /usr/local/bro directory. The last step to complete the installation process is to add the /usr/local/bro/bin directory to your $PATH:
export PATH=/usr/local/bro/bin:$PATH
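Note that export only affects your current shell session. To make the change permanent, one option (assuming you use bash) is to append the same line to your ~/.bashrc:

echo 'export PATH=/usr/local/bro/bin:$PATH' >> ~/.bashrc
source ~/.bashrc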
Configuring Bro
Bro has a number of configuration files, all located under the /usr/local/bro/etc directory. The first tweak needs to be made to the node.cfg file, which is used to configure the node (or nodes) Bro runs on and the interface it monitors.
sudo vim /usr/local/bro/etc/node.cfg
By default, Bro is configured to operate in standalone mode, which should suffice for a local installation. Still, take a look at the [bro] section in the file and make sure the interface matches the public interface of your server (on Linux, you can run ifconfig to verify).
[bro]
type=standalone
host=localhost
interface=eth0
Next, the networks.cfg file is where you configure the IP networks of the servers you wish to monitor.
Open the file:
sudo vim /usr/local/bro/etc/networks.cfg
Delete the existing entries, and enter the public and private IP space of your server (on Linux, use ip addr show to check your network addresses).
Your file should look something like this:
172.31.63.255/20     Public IP space
172.31.54.208/20     Private IP space
Logging and mailing are configured in the broctl.cfg file. Here, all we will do is enter a valid email address to receive notifications from Bro:
sudo vim /usr/local/bro/etc/broctl.cfg

MailTo = YourEmailAddress
Running Bro
Finally, run Bro with the following command (also used to subsequently apply configuration changes):
sudo /usr/local/bro/bin/broctl deploy
To make sure all is running as expected, use:
sudo /usr/local/bro/bin/broctl status

Name         Type         Host        Status    Pid     Started
bro          standalone   localhost   running   23593   26 Feb 07:46:07
Bro logging 101
As mentioned above, one of Bro’s strongest features is its logging capabilities. Bro logs everything.
By default, all Bro logs are written to /usr/local/bro/logs/current (on Linux) and are rotated on a daily basis. Take a look:
ls /usr/local/bro/logs/current

capture_loss.log  conn.log  dhcp.log  dns.log  ssh.log  stats.log  stderr.log  stdout.log  weird.log
Take a closer look at a sample log:
cat conn.log

#separator \x09
#set_separator ,
#empty_field (empty)
#unset_field -
#path conn
#open 2018-02-26-09-00-15
#fields ts  uid  id.orig_h  id.orig_p  id.resp_h  id.resp_p  proto  service  duration  orig_bytes  resp_bytes  conn_state  local_orig  local_resp  missed_bytes  history  orig_pkts  orig_ip_bytes  resp_pkts  resp_ip_bytes  tunnel_parents
#types  time  string  addr  port  addr  port  enum  string  interval  count  count  string  bool  bool  count  string  count  count  count  count  set[string]
1519635604.811714  CdAdHe3goKQ8tqNnla  172.31.54.208  40831  172.31.0.2  53  udp  dns  0.001741  0  87  SHR  T  T  0  Cd  0  0  1  115  -
1519635604.830159  Ch3Mbz13ux7pB7qin2  172.31.54.208  56873  172.31.0.2  53  udp  dns  0.151780  0  94  SHR  T  T  0  Cd  0  0  1  122  -
1519635605.010701  Cq1w0t10CT93Gv0JV  172.31.54.208  46079  172.31.0.2  53  udp  dns  0.002579  0  81  SHR  T  T  0  Cd  0  0  1  109  -
1519635613.686409  Ck1FSFAvl6XX1BCL6  172.31.54.208  58678  172.31.0.2  53  udp  dns  0.001522  0  83  SHR  T  T  0  Cd  0  0  1
Each Bro log includes some definitions at the top of the file, namely a list of the different fields in the file and their types. Note the structure of the log file: the fields are separated by a tab (\t) character. You will see how this affects processing with Logstash later on in the article.
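If you want to inspect specific columns at the command line before shipping anything, Bro comes bundled with the bro-cut utility, which reads these tab-separated logs from standard input. For example, to print only the timestamp and the originating and responding hosts of each connection (field names taken from the #fields header shown above):

cat /usr/local/bro/logs/current/conn.log | bro-cut ts id.orig_h id.resp_h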
Shipping the logs into ELK
To ship the logs into the ELK Stack, we will be using Filebeat to tail the log files and ship them via Logstash into a local Elasticsearch instance. For Logz.io users, I will also explain how to ship the logs from Filebeat into Logz.io directly.
Important! We will be focusing on only one Bro log file, the conn.log file. To collect all the log files, you will need to configure multiple pipelines.
Configuring Filebeat
Assuming you have Filebeat installed already, open your Filebeat configurations file:
sudo vim /etc/filebeat/filebeat.yml
In the Filebeat configuration file, define the path to the log files and your output destination.
The example below defines one prospector for Bro’s conn.log file, which contains data on a network’s TCP/UDP/ICMP connections. To track the other logs, you would need to add a prospector for each file in a similar fashion (a sketch for an additional prospector follows the example). The output is a local Logstash instance.
filebeat.prospectors:
- input_type: log
  paths:
    - "/usr/local/bro/logs/current/conn.log"
  fields:
    type: "bro-conn"
  fields_under_root: true

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
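As a sketch of what tracking an additional log would look like, here is a second prospector, for Bro’s dns.log, that you would add under filebeat.prospectors alongside the conn.log prospector. The "bro-dns" type is a hypothetical name; you would then need to handle it with its own filter section in Logstash:

- input_type: log
  paths:
    - "/usr/local/bro/logs/current/dns.log"
  fields:
    type: "bro-dns"
  fields_under_root: true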
Save your file.
Configuring Logstash
Next, we need to configure Logstash. I will provide the complete example at the end of the section, but let’s go over the different sections one by one.
sudo vim /etc/logstash/bro-conn-01.conf
Input
The input section is pretty straightforward. We will use the beats input plugin and define the host and port accordingly:
input {
  beats {
    port => 5044
    host => "localhost"
  }
}
Filter
The filter section is more complicated because of the structure of the different Bro log files.
We’ll start by using an if statement to tell Logstash to remove the comments at the top of the log file:
if [message] =~ /^#/ {
  drop { }
}
Next, we will use the Logstash CSV filter plugin to process the data. We will first use an if statement to apply the subsequent filter configurations to the “bro-conn” log type (as defined in our Filebeat configuration file above).
if [type] == "bro-conn" { csv { columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents"] separator => " " }
The columns option allows us to define each column header as a field. The CSV filter plugin uses a comma as the default delimiter, but in this case we’re using the separator option to define a tab as the delimiter. Note that the separator value needs to be an actual tab character typed into the configuration file, not the two-character sequence \t.
Next, we’re going to define the provided ts field as our timestamp field using the date filter plugin and the built-in Unix pattern:
date { match => [ "ts", "UNIX" ] }
To enrich our IPs with geographical information, we’re going to use the geoip filter plugin:
geoip { source => "id.orig_h" }
The final filter plugin we will use is the mutate plugin, to rename fields (Elasticsearch has issues with periods in field names) and define field types:
mutate {
  convert => { "id.orig_p" => "integer" }
  convert => { "id.resp_p" => "integer" }
  convert => { "orig_bytes" => "integer" }
  convert => { "duration" => "float" }
  convert => { "resp_bytes" => "integer" }
  convert => { "missed_bytes" => "integer" }
  convert => { "orig_pkts" => "integer" }
  convert => { "orig_ip_bytes" => "integer" }
  convert => { "resp_pkts" => "integer" }
  convert => { "resp_ip_bytes" => "integer" }
  rename => { "id.orig_h" => "id_orig_host" }
  rename => { "id.orig_p" => "id_orig_port" }
  rename => { "id.resp_h" => "id_resp_host" }
  rename => { "id.resp_p" => "id_resp_port" }
}
Output
Like the input, this section is also pretty simple:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Complete configuration file
Here is the complete configuration file for receiving Bro conn logs from Filebeat, processing them, and sending them to Elasticsearch for indexing:
input {
  beats {
    host => "localhost"
    port => 5044
  }
}

filter {
  if [message] =~ /^#/ {
    drop { }
  }
  if [type] == "bro-conn" {
    csv {
      columns => ["ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto","service","duration","orig_bytes","resp_bytes","conn_state","local_orig","local_resp","missed_bytes","history","orig_pkts","orig_ip_bytes","resp_pkts","resp_ip_bytes","tunnel_parents"]
      separator => "	"
    }
    date {
      match => [ "ts", "UNIX" ]
    }
    geoip {
      source => "id.orig_h"
    }
    mutate {
      convert => { "id.orig_p" => "integer" }
      convert => { "id.resp_p" => "integer" }
      convert => { "orig_bytes" => "integer" }
      convert => { "duration" => "float" }
      convert => { "resp_bytes" => "integer" }
      convert => { "missed_bytes" => "integer" }
      convert => { "orig_pkts" => "integer" }
      convert => { "orig_ip_bytes" => "integer" }
      convert => { "resp_pkts" => "integer" }
      convert => { "resp_ip_bytes" => "integer" }
      rename => { "id.orig_h" => "id_orig_host" }
      rename => { "id.orig_p" => "id_orig_port" }
      rename => { "id.resp_h" => "id_resp_host" }
      rename => { "id.resp_p" => "id_resp_port" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
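Before starting the pipeline, it is worth verifying the configuration syntax. Recent Logstash versions can check a configuration file and exit without running it:

cd /usr/share/logstash
sudo bin/logstash -f /etc/logstash/bro-conn-01.conf --config.test_and_exit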
Starting the data pipeline
Now that we’ve got all the pieces in place, it’s time to start the pipeline.
Start Logstash:
cd /usr/share/logstash
sudo bin/logstash -f /etc/logstash/bro-conn-01.conf
Start Filebeat:
sudo service filebeat start
After a while, and if there are no errors in your configuration files, a new logstash-* index will be created in Elasticsearch, and its pattern can be defined in Kibana:
Enter the index pattern, select the timestamp field, and create the new index pattern.
Open the Discover page in Kibana and you should see your Bro conn.log messages displayed. On the left, you will see a list of all the fields available for analysis, as processed by Logstash:
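If no messages show up, one way to check whether the index was created at all is to query Elasticsearch directly using its cat indices API:

curl -XGET 'localhost:9200/_cat/indices?v'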
Shipping Bro logs into Logz.io
By making a few adjustments to the Filebeat configuration file, you can ship the logs directly into the Logz.io ELK Stack.
Before editing the Filebeat configuration file though, we need to download an SSL certificate to encrypt the data:
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/
Open your Filebeat configuration file, and use the following configuration (retrieve your account token from the Settings page in Logz.io and enter it in the relevant field):
filebeat.prospectors:
- input_type: log
  paths:
    - "/usr/local/bro/logs/current/conn.log"
  fields:
    logzio_codec: plain
    token: <yourAccountToken>
    type: bro-conn
  fields_under_root: true
  ignore_older: 3h

output.logstash:
  hosts: ["listener.logz.io:5015"]
  ssl:
    certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
By the way, you can use the Filebeat wizard in the Filebeat Log Shipping section to automatically generate a ready-made configuration file to use.
Restart Filebeat.
sudo service filebeat restart
Within a few moments, you should be seeing your Bro logs appear in Logz.io.
Note! Bro logs are currently not a supported log type in Logz.io, so if you need to tweak the parsing, please use our 24/7 chat support.
Summary
Congrats! You’ve got your Bro logs indexed in Elasticsearch and available for analysis and visualization in Kibana. This leads us directly to the next part of this series which will explain how to analyze the Bro log data.
To reiterate what was stressed above, the workflow described here takes you through the steps for shipping only one type of Bro log (conn.log). You will need to build additional pipelines for shipping the other log types. Also, logs are just the tip of the iceberg as far as Bro’s capabilities are concerned. Power users will want to dive deeper into the analyzers and policy scripts it supports.