The bad news is that as awesome as the ELK Stack is for centralized logging and monitoring, it can also be a tricky beast to handle. Sometimes, all it takes is one simple search queried against a big pool of data to bring the whole stack tumbling down on your head.
The good news is that it’s easy to avoid these crashes by applying some best practices.
Learning from our users' mistakes as well as our own, we have compiled a concise list of five things to avoid doing in your ELK deployment. As a company running an ELK cloud service, we have blocked these dangerous behaviors in our UI. But if you're managing your own ELK deployment, you should take special note of this list.
1. Leading Wildcard Searches
Querying Elasticsearch from Kibana is an art because many different types of searches are available. From free-text searches to field-level and regex searches, there are many options, and this variety is one of the reasons that people opt for the ELK Stack in the first place. But as implied in the opening statement above, some Kibana searches can crash Elasticsearch under certain circumstances.
For example, a leading wildcard search (one that begins with a wildcard, such as *error) run against a large dataset can stall the entire system and should therefore be avoided.
Best practice: Avoid using wildcard queries if possible, especially when performed against very large data sets.
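To make the difference concrete, here is a sketch of a leading wildcard query in the Elasticsearch query DSL next to a trailing-wildcard alternative (the field name message is a hypothetical example):

```python
# Query bodies follow the Elasticsearch query DSL; "message" is a
# hypothetical field name used for illustration.

# Leading wildcard: Elasticsearch must scan every term in the field's
# term dictionary to find matches, which can stall a large cluster.
leading = {"query": {"wildcard": {"message": "*error"}}}

# Trailing wildcard: the term dictionary is sorted, so Elasticsearch can
# seek straight to the "error" prefix -- far cheaper on big indices.
trailing = {"query": {"wildcard": {"message": "error*"}}}
```

If you can rephrase a leading wildcard as a prefix match (or avoid the wildcard entirely with a field-level search), the query becomes dramatically cheaper.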
2. Term Aggregation on Analyzed Fields
In Elasticsearch, analyzed fields are broken into individual terms by a tokenizer. The examples below assume a whitespace tokenizer, which splits text on white space (the default standard analyzer additionally splits on word boundaries and lowercases terms).
This means that if we were analyzing the following fields:
“This is a sentence a b c”
“This is a sentence a b c”
“This is a sentence a b c d”
Our terms output would look as follows:
“a” – 6
“This” – 3
“is” – 3
“sentence” – 3
“b” – 3
“c” – 3
“d” – 1
Instead of having only two results:
“This is a sentence a b c” – 2
“This is a sentence a b c d” – 1
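The counts above can be reproduced with a quick simulation of whitespace tokenization (a sketch of the behavior, not Elasticsearch itself):

```python
from collections import Counter

docs = [
    "This is a sentence a b c",
    "This is a sentence a b c",
    "This is a sentence a b c d",
]

# Whitespace tokenization: each document becomes a bag of individual
# terms, and a terms aggregation counts every term across all documents.
terms = Counter(token for doc in docs for token in doc.split())

print(terms["a"])     # 6 -- "a" appears twice in every document
print(terms["This"])  # 3
print(terms["d"])     # 1
```

Every distinct term gets its own bucket, which is exactly why high-cardinality analyzed fields blow up memory usage.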
The end result of this Elasticsearch behavior is that running terms aggregations on analyzed fields in large data sets can consume a very large amount of memory, which could potentially crash your Elasticsearch cluster.
Best practice: If you need to run a terms aggregation on a text field, you most likely want that field to be not analyzed. This can be configured in the mapping when creating an index or via index templates.
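As a sketch, such a mapping might look like this (the index type "log" and field "sentence" are hypothetical; the "index": "not_analyzed" syntax shown is for Elasticsearch 2.x string fields — later versions use the keyword field type instead):

```python
# Elasticsearch 2.x-style mapping sketch; type "log" and field
# "sentence" are hypothetical names for illustration.
mapping = {
    "mappings": {
        "log": {
            "properties": {
                # not_analyzed: the whole string is indexed as a single
                # term, so a terms aggregation returns whole sentences
                # ("This is a sentence a b c" -- 2) instead of tokens.
                "sentence": {"type": "string", "index": "not_analyzed"}
            }
        }
    }
}
```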
3. Cardinality Aggregation
Cardinality aggregation is used to count distinct values in a data set. For example, if you want to know the number of IPs used in your system, you can use this aggregation on an IP field and then count the results.
Despite the usefulness, cardinality can also be a touchy Elasticsearch feature to use. Performing a unique count on a field with a multitude of possible values when configuring a visualization, for example, can bring Elasticsearch to a halt.
Best practice: Only use cardinality (unique count) when you are sure that the field cardinality is not too big. If you are not sure, there are usually different ways in Elasticsearch to achieve the same purpose.
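For reference, here is a sketch of a cardinality aggregation body (the field name client_ip and aggregation name unique_ips are hypothetical):

```python
# Body follows the Elasticsearch aggregation DSL; "client_ip" and
# "unique_ips" are hypothetical names used for illustration.
body = {
    "size": 0,  # we only want the aggregation, not the matching documents
    "aggs": {
        "unique_ips": {
            "cardinality": {
                "field": "client_ip",
                # precision_threshold trades accuracy for memory: above
                # this count the result becomes approximate rather than
                # exact, keeping memory usage bounded.
                "precision_threshold": 1000,
            }
        }
    },
}
```

Keeping precision_threshold modest is one way to bound the memory cost when you are unsure how many distinct values a field holds.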
4. Frequent Mapping Changes
Since Elasticsearch 2.x, any mapping change "locks" the cluster for indexing, and the elected master node is responsible for applying all mapping changes.
If, for example, an incoming document introduces a new field, the master node halts all indexing on the cluster, syncs the mapping change to the data nodes, and then resumes.
This means that if we apply frequent mapping changes, Elasticsearch indexing can come to a halt. At Logz.io, we see this occurring when our customers, for example, ignore the distinction between keys and values or when an arbitrary URL is used as a field name.
Best practice: If you identify keys that change according to values, take a careful look at your documents and restructure them to use fixed keys.
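For example, a document that encodes a value in its key can be restructured so the key set stays fixed (the status_404 field and restructure helper are hypothetical illustrations):

```python
# Before: the field name changes with the data, so every new status
# code triggers a mapping update on the master node.
bad_doc = {"status_404": 17}

# After: fixed keys -- the mapping never changes, whatever the values.
def restructure(doc):
    """Convert hypothetical 'status_<code>' keys into fixed-key docs."""
    out = []
    for key, count in doc.items():
        _prefix, _, code = key.partition("_")
        out.append({"status_code": int(code), "count": count})
    return out

print(restructure(bad_doc))  # [{'status_code': 404, 'count': 17}]
```

With fixed keys, new status codes arrive as values rather than as fields, and indexing never has to pause for a mapping sync.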
5. Kibana Advanced Settings
Some Kibana-specific configurations can cause your browser to crash. For example, depending on your browser and system settings, changing the value of the discover:sampleSize setting to a high number can easily cause Kibana to freeze.
That is why the good folks at Elastic have placed a warning at the top of the page that is supposed to convince us to be extra careful. Anyone with a guess on how successful this warning is?
Best practice: Pay heed to the warning in Kibana, and be extremely careful when making changes to advanced settings.
At Logz.io, we’ve applied safeguards to make sure that users cannot bring down the stack. If you’ve got your own stack deployed, be sure to keep these pitfalls in mind.
We’d be happy to update the list above with other examples, so if you have a war story to share, please feel free to comment below.
you’ve provided examples of crashing the E in ELK but in our usage Logstash has been the biggest source of instability. any attempt to apply grok filters to our incoming stream of beats has made our Logstash boxes unstable, with little assistance in troubleshooting from the logs. We’re hoping for big improvements in the 5.0 version of logstash or else we might ditch it in favor of kafka + custom written consumers to parse messages and forward them to Elastic for indexing.
Hi Michael. Yes, Logstash can prove to be a huge pain in the pipeline, especially around groking and filtering, but also performance wise. We will share our Logstash experiences in future articles. Feel free to share your war story, always interested to hear how other companies are handling these challenges (email@example.com).