Network Analysis with Packetbeat and the ELK Stack


Packetbeat is an open-source data shipper and analyzer for network packets that integrates with the ELK Stack (Elasticsearch, Logstash, and Kibana). A member of Elastic’s family of shippers (alongside Filebeat, Topbeat, and Winlogbeat, all built on the Libbeat framework), Packetbeat provides real-time metrics on web, database, and other network protocols by inspecting the actual packets being transferred across the wire.

Monitoring data packets with the ELK Stack can help to detect unusual levels of network traffic and unusual packet characteristics, identify packet sources and destinations, search for specific data strings in packets, and create a user-friendly dashboard with insightful statistics. Packet monitoring can complement other security measures (such as the creation of SIEM dashboards) and help to improve your response times to malicious attacks.

In this article, I will demonstrate most of the above-mentioned benefits. Specifically, we will use Packetbeat to monitor the HTTP transactions of an e-commerce web application and analyze the data using the cloud-based, enterprise ELK Stack.

Installing and configuring Packetbeat

Our first step is to install and configure Packetbeat (full installation instructions are here):

$ sudo apt-get install libpcap0.8
$ curl -L -O 
$ sudo dpkg -i packetbeat_1.2.2_amd64.deb

Open the configuration file at /etc/packetbeat/packetbeat.yml:

$ sudo vim /etc/packetbeat/packetbeat.yml

The Sniffer section of the configuration file determines which network interface to “sniff” (i.e., monitor). In our case, we’re going to listen to all the messages sent or received by the server:

interfaces:
  device: any
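If you would rather sniff a single interface than `any`, you can first list the device names available on the host. A quick check on Linux (the sysfs path below is standard):

```shell
# List available network interfaces (Linux) so you can pick a
# specific device (e.g. eth0) for Packetbeat instead of "any":
ls /sys/class/net
```
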

In the Protocols section, we need to configure the ports on which Packetbeat can find each protocol. Usually, the default values in the configuration file will suffice, but if you are using non-standard ports, this is the place to add them.

My e-commerce application is serviced by an Apache web server and a MySQL database, so my protocols are defined as follows:

protocols:
  dns:
    ports: [53]

    include_authorities: true
    include_additionals: true

  http:
    ports: [80, 8080, 8081, 5000, 8002]

  mysql:
    ports: [3306]

The Output section is the next section we need to configure. Here, you can define the outputs to use to export the data. You can output to Elasticsearch or Logstash, for example, but in our case, we’re going to output to a file:

output:
  ### File as output
  file:
    path: "/tmp/packetbeat"
    filename: packetbeat
    rotate_every_kb: 10000
    number_of_files: 7

An output configuration to Elasticsearch would look something like this:

output:
  elasticsearch:
    hosts: [""]

And last but not least, we’re going to configure the Logging section to define a log file size limit that, once reached, will trigger an automatic rotation:

logging:
  files:
    rotateeverybytes: 10485760

Once done, start Packetbeat:

$ sudo /etc/init.d/packetbeat start
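Once Packetbeat is running, each transaction is appended to the output file as one JSON document per line. A quick sanity check is to pretty-print a line from that file. The sample event below is hypothetical and far smaller than a real one, but it shows the shape of the data:

```shell
# Pretty-print one event to confirm the output is valid JSON.
# On a live install you would read the real file instead:
#   tail -1 /tmp/packetbeat/packetbeat | python3 -m json.tool
# Hypothetical minimal HTTP event for illustration:
sample='{"type":"http","status":"OK","http":{"code":200},"responsetime":12}'
echo "$sample" | python3 -m json.tool
```
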

Installing and configuring Filebeat

Packetbeat data can be ingested directly into Elasticsearch or forwarded to Logstash before ingestion into Elasticsearch. Since we do not yet have native support for shipping from Packetbeat, we’re going to use Filebeat to ship the file exported by Packetbeat into Logz.io.

First, download and install the Public Signing Key:

$ curl | sudo apt-key add -

Then, save the repository definition to /etc/apt/sources.list.d/beats.list:

$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list

Now, update the system and install Filebeat:

$ sudo apt-get update && sudo apt-get install filebeat

The next step is to download a certificate and move it to the correct location, so first, run:

$ wget

And then:

$ sudo mkdir -p /etc/pki/tls/certs
$ sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/

We now need to configure Filebeat to ship our Packetbeat file into Logz.io.

Open the Filebeat configuration file:

$ sudo vim /etc/filebeat/filebeat.yml

Defining the Filebeat Prospector

Prospectors are where we define the files that we want to tail. You can tail JSON files and simple text files. In our case, we’re going to define the path to our Packetbeat JSON file.

Please note that when harvesting JSON files, you need to add ‘logzio_codec: json’ to the fields object. Also, the fields_under_root property must be set to ‘true’. Be sure to enter your account token in the token field.

A complete list of known types is available here, and if your type is not listed there, please let us know.

filebeat:
  prospectors:
    - paths:
        - /tmp/packetbeat/*
      fields:
        logzio_codec: json
        token: UfKqCazQjUYnBN***********************
      fields_under_root: true
      ignore_older: 24h
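With fields_under_root set to true, Filebeat places the custom fields at the top level of each shipped event instead of nesting them under a `fields` key. A minimal sketch of the effect (the event and the token placeholder are hypothetical):

```shell
# Simulate fields_under_root: merge the custom fields into the event root.
event='{"type":"http","status":"OK"}'
python3 -c '
import json, sys
event = json.loads(sys.argv[1])
# These are the custom fields from the prospector configuration:
event.update({"logzio_codec": "json", "token": "<your-token>"})
print(json.dumps(event, sort_keys=True))
' "$event"
# → {"logzio_codec": "json", "status": "OK", "token": "<your-token>", "type": "http"}
```
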

Defining the Filebeat Output

Outputs are responsible for sending the data in JSON format to Logstash. In the configuration below, the Logstash host is already defined along with the location of the certificate that you downloaded earlier and the log rotation setting:

output:
  logstash:
    # The Logstash hosts
    hosts: [""]
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

logging:
  # To enable logging to files, to_files option has to be set to true
  to_files: true
  files:
    # Configure log file size limit.
    rotateeverybytes: 10485760 # = 10MB

Like before, be sure to put your token in the required fields.

Once done, start Filebeat:

$ sudo service filebeat start

Analyzing the data

To verify the pipeline is up and running, access the user interface and open the Kibana tab. After a minute or two, you should see a stream of events coming into the system.

You may be shipping other types of logs into Logz.io, so the best way to filter out the other logs is to open one of the incoming Packetbeat messages and filter via the ‘source’ field.

The messages list is then filtered to show only the data outputted by Packetbeat:

log data output by Packetbeat

To help identify the different types of messages, add the ‘type’ field from the list of available fields on the left. In our case, we can see Apache, MySQL, and DNS messages.
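The same per-type filtering that Kibana does can be sketched locally against the raw output file with grep (the sample events below are hypothetical):

```shell
# Filter HTTP events out of a mixed stream of Packetbeat documents --
# the command-line equivalent of filtering on the "type" field in Kibana:
events='{"type":"http","http":{"code":200}}
{"type":"mysql","method":"SELECT"}
{"type":"dns","method":"QUERY"}'
echo "$events" | grep '"type":"http"'
# → {"type":"http","http":{"code":200}}
```
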

I’m going to focus on HTTP traffic by entering a query on the ‘type’ field:

type:http

Our next step is to visualize the data. To do this, we’re going to save the search and then select the Visualize tab in Kibana.

We’re going to create a new line chart based on the saved search that depicts the amount of HTTP transactions over time.

The specific configuration of this visualization looks like this:

kibana visualization configuration
Hit the Play button to see a preview of the visualization:

kibana visualization preview

Save the visualization.

Another way to use Kibana to visualize Packetbeat data is to create a vertical bar chart stacking the different HTTP codes over time.

The specific configuration of this visualization looks like this:

http status code visualization configuration

The end result:

http status code dashboard

As this image shows, this visualization helps to identify traffic peaks in conjunction with HTTP codes.
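Under the hood, a stacked bar chart like this corresponds to an Elasticsearch aggregation: a date histogram bucketed over time, with a terms sub-aggregation on the HTTP response code. A sketch of the equivalent query body (the field names assume Packetbeat’s `http.code` field and the default `@timestamp`; interval is illustrative):

```json
{
  "size": 0,
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "interval": "1m" },
      "aggs": {
        "codes": { "terms": { "field": "http.code" } }
      }
    }
  }
}
```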

After saving the visualization, it’s time to create your own personalized dashboard. To do this, select the Dashboard tab, and use the + icon in the top-right corner to add your two visualizations.

If you’re using Logz.io, you can use a ready-made dashboard that will save you the time spent on creating your own set of visualizations.

Select the ELK Apps tab:

elk apps

ELK Apps are free and pre-made visualizations, searches and dashboards customized for specific log types. (You can see the library directly or learn more about them.) Enter ‘Packetbeat’ in the search field:

elk apps packetbeat
Install the HTTP dashboard, and then open it in Kibana:

http status code dashboard kibana

In just a few seconds, you can have your own network monitoring dashboard up and running, giving you a real-time picture of the packets being transmitted over the wire.

Learn More