A Beginner’s Guide to Logstash Grok


The ability to efficiently analyze and query the data being shipped into the ELK Stack depends on the information being readable. This means that as unstructured data is being ingested into the system, it must be translated into structured message lines.

This thankless but critical task is usually left to Logstash (though there are other log shippers available; see our comparison of Fluentd vs. Logstash as one example). Regardless of the data source you define, pulling the logs and performing some magic to beautify them is necessary to ensure that they are parsed correctly before being output to Elasticsearch.

Data manipulation in Logstash is performed using filter plugins. This article focuses on one of the most popular and useful filter plugins — the Logstash grok filter, which is used to parse unstructured data into structured data.

What is grok?

The term itself is actually pretty new. Coined by Robert A. Heinlein in his 1961 book Stranger in a Strange Land, it means to understand something so thoroughly that you have effectively immersed yourself in it. It's an appropriate name for the grok language and the Logstash grok plugin, which take information in one format and immerse it in another (JSON, specifically). There are already a couple hundred grok patterns for logs available.

How does it work?

Put simply, grok is a way to match a line against a regular expression, map specific parts of the line into dedicated fields, and perform actions based on this mapping.

Logstash ships with over 200 built-in patterns for filtering items such as words, numbers, and dates from sources like AWS, Bacula, Bro, Linux-Syslog, and more. If you cannot find the pattern you need, you can write your own custom pattern. There are also options for multiple match patterns, which simplify the writing of expressions to capture log data.

Here is the basic syntax format for a Logstash grok filter:

%{PATTERN:FieldName}

This will match the predefined pattern and map it to a specific identifying field. Since grok is essentially based upon a combination of regular expressions, you can also create your own regex-based grok filter. For example:

(?<field_name>\d\d-\d\d-\d\d)

This will match a value such as 22-22-22 (or any other combination of two-digit groups separated by hyphens) and map it to the custom field_name field.
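A custom pattern like this can be dropped straight into a grok filter. Here is a minimal sketch (the field name and pattern are purely illustrative):

filter {
  grok {
    # the named capture (?<field_name>...) maps the matched digits into "field_name"
    match => { "message" => "(?<field_name>\d\d-\d\d-\d\d)" }
  }
}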

A Logstash grok example

To demonstrate how to get started with grokking, I’m going to use the following application log:

2016-07-11T23:56:42.000+00:00 INFO [MySecretApp.com.Transaction.Manager]:Starting transaction for session -464410bf-37bf-475a-afc0-498e0199f008

The goal I want to accomplish with a grok filter is to break down the logline into the following fields: timestamp, log level, class, and then the rest of the message.

The following grok pattern will do the job:

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
}

_grokparsefailure

This will try to match the incoming log to the given pattern. In case of a match, the log will be broken down into the specified fields, according to the defined patterns in the filter. In case of a mismatch, Logstash will add a tag called _grokparsefailure.
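If you want to act on these unmatched events, you can check for the tag in a conditional inside the filter block. Here is a minimal sketch that simply drops any event that failed to parse (you could just as easily route such events to a separate output instead):

if "_grokparsefailure" in [tags] {
  # this event did not match the grok pattern; drop it (or handle it differently)
  drop { }
}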

However, in our case, the filter will match and result in the following output:

{
     "message" => "Starting transaction for session -464410bf-37bf-475a-afc0-498e0199f008",
     "timestamp" => "2016-07-11T23:56:42.000+00:00",
     "log-level" => "INFO",
     "class" => "MySecretApp.com.Transaction.Manager"
}
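To put this in context, here is a rough sketch of how the filter might sit inside a complete Logstash pipeline. The file path and Elasticsearch host below are placeholders, not values taken from the example:

input {
  file {
    # placeholder path - point it at your actual application log
    path => "/var/log/myapp/app.log"
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
  }
}

output {
  elasticsearch {
    # placeholder host - replace with your Elasticsearch instance
    hosts => ["localhost:9200"]
  }
}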

Manipulating the data

On the basis of a match, you can define additional Logstash grok configurations to manipulate the data. For example, you can make Logstash 1) add fields, 2) override fields, or 3) remove fields.

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
  overwrite => [ "message" ]
  add_tag => [ "My_Secret_Tag" ]
}

In this case, we are using the 'overwrite' action to overwrite the 'message' field. This way, the original full log line does not linger alongside the other fields we defined ('timestamp', 'log-level' and 'class'); instead, 'message' contains only the part captured by GREEDYDATA. Also, we are using the 'add_tag' action to add a custom tag field to the log.

A full list of the available actions you can use to manipulate your logs, together with their input type and default value, is available in the Logstash grok filter documentation.
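For instance, two commonly used options alongside 'overwrite' and 'add_tag' are 'add_field' and 'remove_field'. A small sketch (the added field name and value are purely illustrative):

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
  # add a static field to every matched event (name and value are illustrative)
  add_field => { "environment" => "staging" }
  # remove a field you do not need downstream (here, the parsed class field)
  remove_field => [ "class" ]
}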

The grok debugger

A great way to get started with building your grok filters is this grok debug tool: https://grokdebug.herokuapp.com/

This tool allows you to paste your log message and gradually build up the grok pattern while continuously testing it against the sample. As a rule, I recommend starting with the %{GREEDYDATA:message} pattern and slowly adding more and more patterns as you proceed.

In the case of the example above, I would start with:

%{GREEDYDATA:message}

Then, to verify that the first part is working, proceed with:

%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message}
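If that matches, the next step would be to add the log level in the same way, and so on, until the full pattern from the example above is assembled:

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{GREEDYDATA:message}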

Common examples

Here are some examples that will help you to familiarize yourself with how to construct a grok filter:

Syslog

Parsing syslog messages with grok is one of the more common demands of new users. There are several different syslog formats in the wild, so keep in mind that you may need to write your own custom grok patterns. Here is one example of a common syslog parse:

grok {
  match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
}

If you are using rsyslog, you can configure it to forward logs to Logstash.
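On the Logstash side, a minimal input for receiving those forwarded messages might look like the sketch below. The port number is a placeholder; the default syslog port 514 requires root privileges, so a higher port is often used:

input {
  syslog {
    # placeholder port - match it to whatever port rsyslog forwards to
    port => 5514
  }
}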

Apache access logs

grok {
  match => { "message" => "%{COMBINEDAPACHELOG}" }
}
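This single built-in pattern breaks an Apache combined-format access log into its standard fields, such as clientip, verb, request, response, bytes, referrer, and agent, so you do not have to spell out the full expression yourself.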

Elasticsearch

grok {
  match => [
    "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}%{SPACE}\]\[%{DATA:source}%{SPACE}\]%{SPACE}\[%{DATA:node}\]%{SPACE}\[%{DATA:index}\] %{NOTSPACE} \[%{DATA:updated-type}\]",
    "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}%{SPACE}\]\[%{DATA:source}%{SPACE}\]%{SPACE}\[%{DATA:node}\] (\[%{NOTSPACE:Index}\]\[%{NUMBER:shards}\])?%{GREEDYDATA}"
  ]
}
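When several patterns are supplied to match like this, grok tries them in order and, by default, stops at the first one that matches the incoming log line.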

Summing it up

Logstash grok is just one type of filter that can be applied to your logs before they are forwarded into Elasticsearch. Because it plays such a crucial part in the logging pipeline, grok is also one of the most commonly-used filters.


Happy grokking!
