Shipping Logs to Logz.io with Filebeat

Filebeat, the replacement for Logstash Forwarder, is the ELK Stack’s next-generation shipper for log data: it tails log files and sends the traced information to Logstash for parsing or to Elasticsearch for storage.

Logz.io, our enterprise-grade ELK as a service with added features, allows you to ship logs from Filebeat easily using an automated script. Once the logs are shipped and loaded in Kibana, you can use Logz.io’s features to monitor your logs and predict issues.

Here, I will explain how to establish a pipeline for shipping your logs to Logz.io using Filebeat. (Note: You can also ship logs to Logz.io using Topbeat, Packetbeat, or Winlogbeat — see this knowledge base article for more information.)

Prerequisites

To complete the steps below, you’ll need the following:

  • A common Linux distribution, with outbound TCP traffic to port 5015 allowed (the Logz.io listener port used in the configuration below)
  • An active Logz.io account. If you don’t have one yet, create a free account here.
  • 5 minutes of free time!

Step 1: Installing Filebeat

I’m running Ubuntu 12.04, and I’m going to install Filebeat 1.1.1 from the repository. If you’re using a different OS, additional installation instructions are available here.

First, I’m going to download and install the Public Signing Key:

curl https://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -

Next, I’m going to save the repository definition to /etc/apt/sources.list.d/beats.list:

echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list

Finally, I’m going to run apt-get update and install Filebeat:

sudo apt-get update && sudo apt-get install filebeat

Step 2: Downloading the Certificate

Our next step is to download a certificate and move it to the correct location, so first, run:

wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt

And then:

sudo mkdir -p /etc/pki/tls/certs

sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/

Step 3: Configuring Filebeat

Our next step is to configure Filebeat to ship logs to Logz.io by tweaking the Filebeat configuration file, which on Linux is located at: /etc/filebeat/filebeat.yml

Before you begin to edit this file, make a backup copy just in case of problems.
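A single copy command is enough to keep a restorable backup next to the original (the path below is the default location mentioned above):

```shell
# Keep a restorable copy of the Filebeat config before editing it;
# /etc/filebeat/filebeat.yml is the default location on Linux.
sudo cp /etc/filebeat/filebeat.yml /etc/filebeat/filebeat.yml.bak
```

If an edit breaks the YAML, restoring is just the reverse copy.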

Below is an example configuration that you can use as a reference, though I highly recommend using the configuration supplied in the Logz.io UI: Log Shipping –> Filebeat.

################### Filebeat Configuration Example #########################
#
############################# Filebeat #####################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
     # This is a text lines files harvesting definition 
    -
      # Path like /var/log/*/*.log can be used.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /stack/apache2/logs/*.log
      # Additional fields.
      fields:
        logzio_codec: plain
        token: <your logz.io token>
        type: SOME_LINE_LOG_TYPE
      fields_under_root: true
      encoding: utf-8
      # Ignore files which were modified more than the defined timespan in the past
      # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
      ignore_older: 3h
    # This is a JSON files harvesting definition 
    -
      paths:
        - /path/to/json/file.json
      fields:
        logzio_codec: json
        token: <your logz.io token>
        type: MY_JSON_LOG_TYPE
      fields_under_root: true
      encoding: utf-8
      ignore_older: 3h
  # Name of the registry file, which is used to keep track of the location 
  # of logs in the files that have already been sent between restarts
  # of the filebeat process.
  registry_file: /var/lib/filebeat/registry
###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features
############################# Output ##########################################
# Configure what outputs to use when sending the data collected by the beat.
output:
  logstash:
    # The Logstash hosts
    hosts: ["listener.logz.io:5015"]

    # The below configuration is used for Filebeat 1.3 or lower
    tls:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

    # The below configuration is used for Filebeat 5.0 or higher
    ssl:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

############################# Logging #########################################
# default to syslog.
logging:
  # To enable logging to files, to_files option has to be set to true
  files:
    # Configure log file size limit.
    rotateeverybytes: 10485760 # = 10MB
  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error

Defining the Filebeat Prospector

Prospectors are where we define the log files that we want to tail. You can tail JSON files and simple text files. In the example above, the first prospector tails any file ending with .log under the /stack/apache2/logs/ directory via its paths setting.

Please note that when harvesting JSON files, you need to add ‘logzio_codec: json’ to the fields object, and when harvesting plain text lines, you need to add ‘logzio_codec: plain’ instead.

Two additional properties are important for defining the prospector:

  • First, the fields_under_root property should always be set to true
  • Second, the type field is used to identify the type of log data and should be defined. While not mandatory, defining it will help optimize Logz.io’s parsing and grokking of your data

A complete list of known types is available here; if your type is not listed, please let us know.
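Putting the prospector properties together, a minimal plain-text prospector might look like the following sketch (the apache_access type and the token placeholder are illustrative):

```yaml
- paths:
    - /var/log/apache2/access.log
  fields:
    logzio_codec: plain           # plain text lines, not JSON
    token: <your logz.io token>   # placeholder: your account token
    type: apache_access           # illustrative known type
  fields_under_root: true
```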

Defining the Filebeat Output

Outputs are responsible for sending the data in JSON format to a destination of your choice. In the example above, we have defined the Logz.io listener host and port under the logstash output, along with the location of the certificate that we downloaded earlier; the log file rotation limit is set separately in the logging section.

Be sure to use your Logz.io token in the required fields (you can find your account token in the Logz.io settings section, in the top-right corner of the UI).

Step 4: Verifying the pipeline

That’s it. You’ve successfully installed Filebeat and configured it to ship logs to Logz.io!

Make sure Filebeat is running:

$ cd /etc/init.d
$ ./filebeat status

And if not, enter:

$ sudo ./filebeat start

To verify the pipeline, head over to your Kibana and see if the log files are being shipped. It may take a minute or two for the pipeline to work — but once you’re up and running, you can start to analyze your logs by performing searches, creating visualizations, using the Logz.io alerting feature to get notifications on events, and using our free ELK Apps library.

Please note that Filebeat saves the offset of the last data read from the file in the registry, so if the agent restarts, it will continue from the saved offset.
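To see what Filebeat has recorded, you can print the registry file directly; it is a small JSON document mapping each harvested file to its last shipped byte offset:

```shell
# Print the registry; each entry records the last byte offset shipped
# per harvested file (path matches registry_file in the config above).
cat /var/lib/filebeat/registry
```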
