The ability to efficiently analyze and query the data being shipped into the ELK Stack depends on the information being readable. This means that as unstructured data is being ingested into the system, it must be translated into structured message lines.

This thankless but critical task is usually left to Logstash (though there are other log shippers available; see our comparison of Fluentd vs. Logstash as one example). Regardless of the data source that you define, pulling the logs and performing some magic to beautify them is necessary to ensure that they are parsed correctly before being output to Elasticsearch.

Data manipulation in Logstash is performed using filter plugins. This article focuses on one of the most popular and useful filter plugins – the Logstash grok filter, which is used to parse unstructured data into structured data.

Before we get going, we’re obligated to tell you that you can avoid parsing altogether with Logz.io’s parsing-as-a-service – where Logz.io users simply reach out to our Customer Support Engineers via chat to get their logs parsed. This service plays a small part in our platform that manages the entire log data pipeline out-of-the-box – for zero maintenance logging and fast analysis. See our log management product based on OpenSearch to learn more.

What is grok?

The term itself is relatively new. It was coined by Robert A. Heinlein in his 1961 book Stranger in a Strange Land, where it means to understand something so thoroughly that you have effectively immersed yourself in it. It’s an appropriate name for the grok language and the Logstash grok plugin, which take information in one format and immerse it in another (JSON, specifically). There are already a couple hundred grok patterns for logs available.

How does it work?

Put simply, grok is a way to match a line against a regular expression, map specific parts of the line into dedicated fields, and perform actions based on this mapping.

Logstash ships with over 200 built-in patterns for filtering items such as words, numbers, and dates in AWS, Bacula, Bro, Linux-Syslog, and more. If you cannot find the pattern you need, you can write your own custom pattern. There are also options for multiple match patterns, which simplify writing expressions to capture log data.

Here is the basic syntax format for a Logstash grok filter:

%{SYNTAX:SEMANTIC}

The SYNTAX is the name of the pattern that will match the text in your log. The SEMANTIC is the identifier you give to the matched text in your parsed logs. In other words:

%{PATTERN:FieldName}

This will match the predefined pattern and map it to a specific identifying field.

For example, a value like 127.0.0.1 will be matched by the grok IP pattern, in this case via its IPv4 alternative.

Grok has separate IPv4 and IPv6 patterns, but they can be filtered together with the syntax IP.

The standard IPV4 pattern, for example, is defined as follows:

IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])

If there were no unifying IP syntax, you would simply grok both with the same semantic field name:

%{IPV4:client_ip} %{IPV6:client_ip}

Again, just use the IP syntax, unless for any reason you want to separate these respective addresses into separate fields.
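For example, to capture the address from the sample above into a single field (the field name client_ip is just an illustration), you would write:

%{IP:client_ip}

The built-in IP pattern is itself defined as (?:%{IPV6}|%{IPV4}), which is why it matches either address family.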

Since grok is essentially based upon a combination of regular expressions, you can also create your own custom regex-based grok filter with this pattern:

(?<custom_field>custom pattern)

For example:

(?<custom_field>\d\d-\d\d-\d\d)

This grok pattern will match a string such as 22-22-22 (or any other digits in that shape) and map it to the custom_field field.
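To put this to work inside an actual filter, you can embed the custom pattern directly in the match string. Here is a minimal sketch; the source field and the surrounding GREEDYDATA are assumptions for illustration:

grok {
   # capture a 22-22-22 style token from the start of the message into custom_field
   match => { "message" => "(?<custom_field>\d\d-\d\d-\d\d) %{GREEDYDATA:rest}" }
}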

Logstash Grok Pattern Examples

To demonstrate how to get started with grokking, I’m going to use the following application log:

2016-07-11T23:56:42.000+00:00 INFO [MySecretApp.com.Transaction.Manager]:Starting transaction for session -464410bf-37bf-475a-afc0-498e0199f008

The goal I want to accomplish with a grok filter is to break down the logline into the following fields: timestamp, log level, class, and then the rest of the message.

The following grok pattern will do the job:

grok {
   match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
 }

# NOTE: GREEDYDATA is the way Logstash grok expresses the regex .*
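If you want to experiment with this locally, here is a minimal, self-contained pipeline sketch; the stdin input and rubydebug output are just for testing, not a production setup:

input {
   stdin {}
}
filter {
   grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
   }
}
output {
   stdout { codec => rubydebug }
}

Paste the sample log line into the terminal and the parsed fields are printed back to the console.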

Grok Data Type Conversion

By default, all SEMANTIC entries are saved as strings, but you can convert the data type with a small addition to the pattern. The following Logstash grok example matches the NUMBER pattern, maps it to the field num, and casts it to a float:

%{NUMBER:num:float}

It’s a pretty useful capability, even though conversions are currently only available to int and float.
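For example, assuming a hypothetical log line that ends with something like took 127 ms, you could capture the number and store it as a float in one step (the field name duration is made up for the example):

grok {
   # duration is indexed as a float rather than a string
   match => { "message" => "%{GREEDYDATA} took %{NUMBER:duration:float} ms" }
}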

_grokparsefailure

The grok filter will try to match the incoming log line against the given grok pattern. In case of a match, the log will be broken down into the specified fields, according to the grok patterns defined in the filter. In case of a mismatch, Logstash will add a tag called _grokparsefailure.
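If you want to act on such failures, you can check for the tag with a conditional. Here is a minimal sketch, assuming you simply want to discard events that could not be parsed (routing them to a separate index for inspection is another common choice):

filter {
   # placed after the grok filter in the pipeline
   if "_grokparsefailure" in [tags] {
      drop {}
   }
}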

However, in our case, the filter will match and result in the following output:

{
     "message" => "Starting transaction for session -464410bf-37bf-475a-afc0-498e0199f008",
     "timestamp" => "2016-07-11T23:56:42.000+00:00",
     "log-level" => "INFO",
     "class" => "MySecretApp.com.Transaction.Manager"
}

Manipulating the data

On the basis of a match, you can define additional Logstash grok configurations to manipulate the data. For example, you can make Logstash 1) add fields, 2) override fields, or 3) remove fields.

grok {
   match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
   overwrite => [ "message" ]
   add_tag => [ "My_Secret_Tag" ]
}

In our case, we are using the ‘overwrite’ option so that the captured message portion replaces the original full log line in the ‘message’ field; otherwise, both values would be kept there alongside the other fields we defined (timestamp, log-level, and class). We are also using the ‘add_tag’ option to add a custom tag to the log.
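Along the same lines, here is a sketch of adding and removing fields; the environment value and the choice to drop the raw timestamp are made up for the example:

grok {
   match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} \[%{DATA:class}\]:%{GREEDYDATA:message}" }
   add_field => { "environment" => "staging" }
   remove_field => [ "timestamp" ]
}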

A full list of the options you can use to manipulate your logs, together with their input types and default values, is available in the Logstash grok filter documentation.

The grok debugger

A great way to get started with building your grok filters is this grok debug tool: https://grokdebug.herokuapp.com/

This tool allows you to paste your log message and gradually build the grok pattern while continuously testing the compilation. As a rule, I recommend starting with the %{GREEDYDATA:message} pattern and slowly adding more and more patterns as you proceed.

In the case of the example above, I would start with:

%{GREEDYDATA:message}

Then, to verify that the first part is working, proceed with:

%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:message}
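Once that matches, add the next element, the log level, and continue until the full pattern from the example above is assembled:

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log-level} %{GREEDYDATA:message}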

Common Logstash grok examples

Here are some examples that will help you to familiarize yourself with how to construct a grok filter:

Syslog

Parsing syslog messages with grok is one of the more common requests we see from new users. Syslog also comes in several different formats, so keep in mind that you may need to write your own custom grok patterns. Here is one example of a common syslog parse:

grok {
   match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
}

If you are using rsyslog, you can configure it to send its logs to Logstash.
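On the Logstash side, that typically means opening a syslog input for rsyslog to forward to. A minimal sketch follows; the port is arbitrary and must match whatever you configure in rsyslog (for example, with the legacy forwarding rule *.* @@<logstash-host>:5514 for TCP):

input {
   syslog {
      port => 5514
   }
}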

Apache Access logs

grok {
   match => { "message" => "%{COMBINEDAPACHELOG}" }
}
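For reference, COMBINEDAPACHELOG expects the standard Apache combined log format, so a (made-up) access line such as the following would be broken into fields such as clientip, verb, request, response, bytes, referrer, and agent:

203.0.113.42 - frank [11/Jul/2016:23:56:42 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://www.example.com/start.html" "Mozilla/5.0"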

Elasticsearch

grok {
      match => ["message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}%{SPACE}\]\[%{DATA:source}%{SPACE}\]%{SPACE}\[%{DATA:node}\]%{SPACE}\[%{DATA:index}\] %{NOTSPACE} \[%{DATA:updated-type}\]",
                "message", "\[%{TIMESTAMP_ISO8601:timestamp}\]\[%{DATA:loglevel}%{SPACE}\]\[%{DATA:source}%{SPACE}\]%{SPACE}\[%{DATA:node}\] (\[%{NOTSPACE:Index}\]\[%{NUMBER:shards}\])?%{GREEDYDATA}"
      ]
   }

Redis

grok {
   match => [
      "redistimestamp", "\[%{MONTHDAY} %{MONTH} %{TIME}\]",
      "redislog", "\[%{POSINT:pid}\] %{REDISTIMESTAMP:timestamp}",
      "redismonlog", '\[%{NUMBER:timestamp} \[%{INT:database} %{IP:client}:%{NUMBER:port}\] "%{WORD:command}"\s?%{GREEDYDATA:params}'
   ]
}

MongoDB

MONGO_LOG %{SYSLOGTIMESTAMP:timestamp} \[%{WORD:component}\] %{GREEDYDATA:message}
MONGO_QUERY \{ (?<={ ).*(?= } ntoreturn:) \}
MONGO_SLOWQUERY %{WORD} %{MONGO_WORDDASH:database}\.%{MONGO_WORDDASH:collection} %{WORD}: %{MONGO_QUERY:query} %{WORD}:%{NONNEGINT:ntoreturn} %{WORD}:%{NONNEGINT:ntoskip} %{WORD}:%{NONNEGINT:nscanned}.*nreturned:%{NONNEGINT:nreturned}..+ (?<duration>[0-9]+)ms
MONGO_WORDDASH \b[\w-]+\b
MONGO3_SEVERITY \w
MONGO3_COMPONENT %{WORD}|-
MONGO3_LOG %{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:message}

AWS

ELB_ACCESS_LOG %{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:elb} %{IP:clientip}:%{INT:clientport:int} (?:(%{IP:backendip}:?:%{INT:backendport:int})|-) %{NUMBER:request_processing_time:float} %{NUMBER:backend_processing_time:float} %{NUMBER:response_processing_time:float} %{INT:response:int} %{INT:backend_response:int} %{INT:received_bytes:int} %{INT:bytes:int} "%{ELB_REQUEST_LINE}"
CLOUDFRONT_ACCESS_LOG (?<timestamp>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}\t%{TIME})\t%{WORD:x_edge_location}\t(?:%{NUMBER:sc_bytes:int}|-)\t%{IPORHOST:clientip}\t%{WORD:cs_method}\t%{HOSTNAME:cs_host}\t%{NOTSPACE:cs_uri_stem}\t%{NUMBER:sc_status:int}\t%{GREEDYDATA:referrer}\t%{GREEDYDATA:agent}\t%{GREEDYDATA:cs_uri_query}\t%{GREEDYDATA:cookies}\t%{WORD:x_edge_result_type}\t%{NOTSPACE:x_edge_request_id}\t%{HOSTNAME:x_host_header}\t%{URIPROTO:cs_protocol}\t%{INT:cs_bytes:int}\t%{GREEDYDATA:time_taken:float}\t%{GREEDYDATA:x_forwarded_for}\t%{GREEDYDATA:ssl_protocol}\t%{GREEDYDATA:ssl_cipher}\t%{GREEDYDATA:x_edge_response_result_type}

Summing it up

Logstash grok is just one type of filter that can be applied to your logs before they are forwarded into Elasticsearch. Because it plays such a crucial part in the logging pipeline, grok is also one of the most commonly-used filters.


Happy grokking!
