It’s 3 AM and your phone is ringing.
Rubbing your eyes, you take a look at the alert you just got from PagerDuty.
A critical service has just gone offline. Angry customers are calling support. Your boss is on the phone, demanding the issue be resolved ASAP.
You open up your log management tool only to be faced by 5 million log messages.
The scenario above may sound somewhat dramatic, but for engineers monitoring modern applications and systems, it is a recurring nightmare. The reason is simple — log data is big data. The growing volume, velocity, and variety of log data mean it’s not enough to collect, process, and store the data; you need advanced tools to analyze it and find the needle in the haystack.
Enter Log Patterns!
Recently, we announced Log Patterns, our latest AI-powered analytics tool.
Simply put, Log Patterns crunches millions of log messages into much smaller, manageable groups of logs. This lets you quickly cut through the noise and identify both unique or unusual events and recurring, repetitive ones.
In just a few clicks, you will be able to identify the different bales that make up your haystack.
How does it work? Using advanced clustering algorithms, Log Patterns dissects indexed log messages into variables and constants to identify recurring patterns. These patterns are automatically associated with incoming logs as they are ingested into the system and are displayed, in real time, within Kibana.
The machine learning algorithms used to dissect the logs work continuously to analyze the indexed data to ensure existing patterns are perfected and new patterns are added.
For each pattern identified, you can see how many log messages are associated with the pattern, their ratio out of the total data logged, and the exact pattern they follow.
By default, the noisiest patterns are displayed first, but you can sort the list of identified patterns by count or ratio.
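To make the idea concrete, here is a minimal, hypothetical sketch in Python. It is not the actual Log Patterns algorithm (which uses proprietary clustering over indexed data), but it illustrates the principle: mask the variable tokens in each message, group messages by the resulting template, and report each pattern’s count and its ratio of the total.

```python
import re
from collections import Counter

# Sample log messages (made up for illustration).
logs = [
    "Account 358 was created , waiting for kibana indexes to be created",
    "Account 1265 was created , waiting for kibana indexes to be created",
    "Account 871 was created , waiting for kibana indexes to be created",
    "User admin logged in",
]

def to_pattern(line):
    # Replace runs of digits with a "Number" placeholder, keeping
    # everything else as a constant.
    return re.sub(r"\b\d+\b", "Number", line)

counts = Counter(to_pattern(line) for line in logs)
total = len(logs)

# Noisiest patterns first, each with its count and ratio of all logs.
for pattern, count in counts.most_common():
    print(f"{count:>3}  {count / total:5.1%}  {pattern}")
```

Running this prints the “Account Number was created …” template first with a count of 3 and a ratio of 75.0%. A real implementation must also decide which tokens are variables by comparing many logs against each other, which is where the clustering comes in.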
The makings of Log Patterns
Naturally, patterns differ from one another. Some will contain only constants, others constants and variables.
Constants are displayed as is, whereas variables are categorized (e.g., Number, Ip, Url, Date) and highlighted. If the type of a specific variable cannot be identified, it is marked with a colored wildcard expression: .*
Here are a few examples.
The following logs follow a very basic repetitive pattern:
Account 358 was created , waiting for kibana indexes to be created
Account 1265 was created , waiting for kibana indexes to be created
Account 871 was created , waiting for kibana indexes to be created
Account 1291 was created , waiting for kibana indexes to be created
Account 309 was created , waiting for kibana indexes to be created
The corresponding pattern would be displayed as follows:
Account Number was created , waiting for kibana indexes to be created
The following AWS ELB logs also follow a recurring pattern:
2019-10-12T21:59:57.543344Z production-site-lb 22.214.171.124:6658 172.31.62.236:80 0.000049 0.268097 0.000041 200 200 0 20996 "GET http://site.logz.io:80/blog/kibana-visualizations/ HTTP/1.1" "Amazon CloudFront" - -
2019-10-12T21:59:55.518955Z production-site-lb 126.96.36.199:41421 172.31.62.236:80 0.000054 0.104063 0.000029 200 200 0 1 "GET http://site.logz.io:80/wp-admin/admin-ajax.php HTTP/1.1" "Amazon CloudFront"
2019-10-12T21:59:55.268688Z production-site-lb 188.8.131.52:44944 172.31.62.236:80 0.000042 0.121069 0.000037 200 200 0 1 "GET http://site.logz.io:80/wp-admin/admin-ajax.php HTTP/1.1" "Amazon CloudFront" - -
2019-10-12T21:59:52.186208Z production-site-lb 184.108.40.206:6658 172.31.62.236:80 0.000051 0.248411 0.000041 200 200 0 20996 "GET http://site.logz.io:80/blog/kibana-visualizations/ HTTP/1.1" "Amazon CloudFront"
2019-10-12T21:59:51.803543Z production-site-lb 220.127.116.11:21170 172.31.62.236:80 0.000023 0.00079 0.000017 200 200 0 73831 "GET http://site.logz.io:80/wp-content/uploads/2015/12/kibana-visualizations.png HTTP/1.1" "Amazon CloudFront"
In this case, the pattern comprises two constants and a series of variables, all highlighted:
Date production-site-lb Ip:Number Number Number Number Number .* Url HTTP/Number"
A production environment produces thousands of these log messages, and Log Patterns condenses them all into a single pattern.
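The typed-variable masking described above can be sketched as follows. The recognizers below are assumptions for illustration only, not the actual Log Patterns classifiers; each one maps a token to a category such as Date, Ip, Url, or Number, and anything unrecognized passes through as a constant.

```python
import re

# Hypothetical token recognizers, checked in order. These regexes are
# illustrative assumptions, not the real Log Patterns classifiers.
RECOGNIZERS = [
    ("Date", re.compile(r"^\d{4}-\d{2}-\d{2}T[\d:.]+Z$")),
    ("Ip:Number", re.compile(r"^\d{1,3}(\.\d{1,3}){3}:\d+$")),
    ("Url", re.compile(r"^https?://\S+$")),
    ("Number", re.compile(r"^-?\d+(\.\d+)?$")),
]

def classify(token):
    for category, pattern in RECOGNIZERS:
        if pattern.match(token):
            return category
    return token  # unrecognized tokens pass through as constants

# A shortened ELB-style log line (fields trimmed for readability).
line = ('2019-10-12T21:59:57.543344Z production-site-lb '
        '22.214.171.124:6658 172.31.62.236:80 0.000049 200 '
        'http://site.logz.io:80/blog/kibana-visualizations/')

print(" ".join(classify(tok) for tok in line.split()))
# → Date production-site-lb Ip:Number Ip:Number Number Number Url
```

Note that this per-token classification is only half the job: deciding which positions are variables at all (and which fall back to the .* wildcard) requires comparing many logs that share the same shape, which the clustering step handles.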
Speeding up troubleshooting
Going back to the doomsday scenario above, sifting through millions of logs when trying to troubleshoot an issue in production is a daunting task. Sure, if you know exactly what you’re looking for, you could enter a beautifully constructed Kibana query. But often enough, you will not know exactly what to query.
With Log Patterns, those millions of log messages are suddenly condensed into a much smaller group of patterns.
You can then discard the patterns that you recognize as being irrelevant to your investigation using the filter out option. These filters are added at the top of the Discover page, just like any other Kibana filter.
Alternatively, you could reorder the list to look at patterns that are unique. A unique pattern could indicate what actually transpired, and filtering for the pattern will move you over to the Logs tab automatically, displaying the logs associated with the pattern.
Opening up the log, you can then begin understanding the specific event that the log is reporting on. On top of that, if you’ve structured your logs correctly, you will be able to track the root cause to the actual line in your code generating the log. To help understand the context, you can click View surrounding documents to see all the logs generated before and after the log.
Optimizing your logging costs
After reviewing your patterns, you may identify logs that are especially noisy but entirely unwarranted. Logs cost money, and Log Patterns helps you identify the component in your environment generating these log messages. Remove the lines of code generating them and you will reduce the overall operational costs of your logging pipelines.
AIOps to the rescue
Monitoring modern IT environments is first and foremost a big data challenge. Without advanced analysis tools to help them see through millions of logs, the engineers tasked with keeping their company’s applications up, running, and performant at all times are simply ill-equipped to do their job effectively.
To help engineers overcome this big data challenge, Logz.io designed a suite of AIOps tools. Cognitive Insights™ was the first tool in this suite, followed by Application Insights™. Log Patterns is the latest addition, using advanced clustering techniques to transform big data into small data.
So what are you waiting for? Log Patterns is available now in all our plans, at no extra charge. You can sign up for a free trial here.