Shipping Logs to Logz.io with Filebeat

Replacing the Logstash Forwarder, Filebeat is the ELK Stack’s next-generation shipper for log data. It tails log files and sends the traced information to Logstash for parsing or to Elasticsearch for storage. Logz.io, our enterprise-grade ELK as a service with added features, allows you to ship logs from Filebeat easily using an automated script. Once the logs are shipped and loaded in Kibana, you can use Logz.io’s features to monitor your logs and predict issues.

Here, I will explain how to establish a pipeline for shipping your logs to Logz.io using Filebeat. (Note: You can also ship logs to Logz.io using Topbeat, Packetbeat, or Winlogbeat; see this knowledge base article for more information.)


To complete the steps below, you’ll need the following:

  • A common Linux distribution, with TCP traffic allowed to port 5000
  • An active Logz.io account. If you don’t have one yet, create a free account here.
  • 5 minutes of free time!

Step 1: Installing Filebeat

I’m running Ubuntu 12.04, and I’m going to install Filebeat 1.1.1 from the repository. If you’re using a different OS, additional installation instructions are available here.

First, I’m going to download and install the Public Signing Key:
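A sketch of that command, assuming the Elastic package-signing key is still hosted at the URL used for the Beats 1.x repositories:

```shell
# Download the Elastic public signing key and add it to apt's trusted keys
curl https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
```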

Next, I’m going to save the repository definition to /etc/apt/sources.list.d/beats.list:
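For example, assuming the Beats 1.x apt repository location that Elastic published at the time:

```shell
# Write the Beats repository definition to a dedicated sources file
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee /etc/apt/sources.list.d/beats.list
```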

Finally, I’m going to run apt-get update and install Filebeat:
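These two commands refresh the package index and install the Filebeat package from the repository added above:

```shell
# Refresh package lists so apt sees the new Beats repository, then install Filebeat
sudo apt-get update
sudo apt-get install filebeat
```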

Step 2: Downloading the Certificate

Our next step is to download a certificate and move it to the correct location, so first, run:
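A sketch of that download, assuming the public certificate is hosted in Logz.io’s public-certificates repository on GitHub (check the Logz.io UI for the current location):

```shell
# Download the Logz.io public certificate to the current directory
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt
```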

And then:
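Assuming the conventional certificate location referenced later in the Filebeat configuration:

```shell
# Create the certificate directory (if missing) and move the certificate into place
sudo mkdir -p /etc/pki/tls/certs
sudo mv COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/
```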

Step 3: Configuring Filebeat

Our next step is to configure Filebeat to ship logs to Logz.io by tweaking the Filebeat configuration file, which on Linux is located at /etc/filebeat/filebeat.yml.

Before you begin to edit this file, make a backup copy just in case of problems.

Below is an example configuration that you can use as a reference, though I highly recommend using the configuration supplied in the Logz.io UI: Log Shipping –> Filebeat.
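A minimal sketch of such a file for Filebeat 1.x; the listener address (listener.logz.io), the account-token placeholder, and the certificate path are assumptions — substitute the values supplied in your own Logz.io UI:

```yaml
filebeat:
  prospectors:
    -
      # Tail any log file under /var/log/ ending with .log
      paths:
        - /var/log/*.log
      fields:
        # Use 'plain' for text files, 'json' for JSON files
        logzio_codec: plain
        # Placeholder: replace with your Logz.io account token
        token: YOUR-ACCOUNT-TOKEN
      # Place the fields above at the top level of each event
      fields_under_root: true
      # Identifies the log type for parsing
      document_type: syslog

output:
  logstash:
    # Assumed Logz.io listener host; port 5000 must be open for TCP
    hosts: ["listener.logz.io:5000"]
    tls:
      # The certificate downloaded in Step 2
      certificate_authorities: ["/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt"]
```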

Defining the Filebeat Prospector

Prospectors are where we define the log files that we want to tail. You can tail JSON files and simple text files. In the example above, I’ve defined a path that tails any log file under the /var/log/ directory ending with .log.

Please note that when harvesting JSON files, you need to add ‘logzio_codec: json’ to the fields object. When harvesting plain text lines, you need to add ‘logzio_codec: plain’ instead.
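The two variants side by side (the token value is a placeholder):

```yaml
# For JSON log files:
fields:
  logzio_codec: json
  token: YOUR-ACCOUNT-TOKEN

# For plain-text log files:
# fields:
#   logzio_codec: plain
#   token: YOUR-ACCOUNT-TOKEN
```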

Two additional properties are important for defining the prospector:

  • First, the fields_under_root property should always be set to true
  • Second, the document_type property is used to identify the type of log data and should be defined. While not mandatory, defining this property will help optimize Logz.io’s parsing and grokking of your data

A complete list of known types is available here; if your type is not listed, please let us know.

Defining the Filebeat Output

Outputs are responsible for sending the data in JSON format to a destination of your choice. In the example above, we have defined the Logz.io host along with the location of the certificate that we downloaded earlier and the log rotation setting.

Be sure to use your Logz.io token in the required fields (you can find your account token in the Settings section, in the top-right corner of the Logz.io UI).

Step 4: Verifying the Pipeline

That’s it. You’ve successfully installed Filebeat and configured it to ship logs to Logz.io!

Make sure Filebeat is running:
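On Ubuntu, for example:

```shell
# Check whether the Filebeat service is running
sudo service filebeat status
```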

And if not, enter:
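Again assuming the standard Ubuntu service wrapper:

```shell
# Start the Filebeat service
sudo service filebeat start
```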

To verify the pipeline, head over to your Kibana and see if the log files are being shipped. It may take a minute or two for the pipeline to work — but once you’re up and running, you can start to analyze your logs by performing searches, creating visualizations, using the alerting feature to get notifications on events, and using our free ELK Apps library.

Please note that Filebeat saves the offset of the last data read from the file in the registry, so if the agent restarts, it will continue from the saved offset.

Easily Configure and Ship Logs with ELK as a Service.