Elasticsearch Performance Tuning

Once you have Elasticsearch up and running, you’ll likely find that performance starts to suffer over time. This can be due to a variety of factors, ranging from changes in the way you’re using your cluster to how much, and what types of, data are being sent in. To maintain your cluster, you’ll need to set up monitors that alert you to warning signs so that you can handle issues proactively during available maintenance windows.

Understanding tradeoffs

Since it’s impossible to optimize for everything, you should start by evaluating your business priorities and how you’re using the cluster. For example, you might find that your queries are more or less memory intensive, that you need data available in near real time, or that your use case favors longer-term retention over urgency. Optimizing for one or two higher-priority needs means sacrificing performance for the other tasks the cluster handles. Only you and your team can decide which trade-offs to make, so regularly compare your desired results and business needs with the actual configuration of your cluster. To help you get started, here are a few considerations to take into account.

Keep track of your queues

A good place to start when keeping track of cluster performance is the Elasticsearch queues, in particular the index, search, and bulk queues. Elasticsearch reports these in its node stats. Ideally, you want nearly empty queues, since that means requests are being handled immediately. To ensure that you and your team are notified of issues with queue depth, monitor it with a tool such as Marvel or, on newer versions of Elasticsearch, X-Pack. If you notice that your queues aren’t draining quickly, it’s an indication that something is amiss, whether problematic logs or something else.
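
As a rough illustration, here is a minimal sketch, in Python with the requests library, of polling queue depth through the _cat/thread_pool API. The host, the alert threshold, and the exact pool names (index, bulk, and search, or write on newer versions) are assumptions you would adjust for your own cluster and version; in practice you would wire a check like this into whatever alerting you already use.

```python
import requests

ES_URL = "http://localhost:9200"                  # assumption: local, unauthenticated cluster
POOLS = {"index", "bulk", "search", "write"}      # pool names vary by Elasticsearch version
QUEUE_ALERT_THRESHOLD = 50                        # assumption: tune to your workload

def check_thread_pools():
    """Fetch per-node thread pool stats and flag pools with deep queues or rejections."""
    resp = requests.get(
        f"{ES_URL}/_cat/thread_pool",
        params={"format": "json", "h": "node_name,name,active,queue,rejected"},
    )
    resp.raise_for_status()
    for row in resp.json():
        if row["name"] not in POOLS:
            continue
        queue = int(row["queue"])
        rejected = int(row["rejected"])
        if queue > QUEUE_ALERT_THRESHOLD or rejected > 0:
            print(f"WARNING {row['node_name']}/{row['name']}: "
                  f"queue={queue} rejected={rejected}")

if __name__ == "__main__":
    check_thread_pools()
```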

Configured too much memory? OOPS!

Speaking of memory: when building out system resources, many adhere to the “more is better” approach. That is, if the budget allows it, you’d rather have too many resources than too few. HEAP memory is an exception to this line of thinking.

The basic reason for this is that the JVM stores objects in what is called HEAP memory and references them with object pointers. To be more efficient, Java uses what are called compressed ordinary object pointers (OOPs) while the HEAP is small enough. Above roughly 32 GB of HEAP, Java has to fall back to regular 64-bit pointers, which drastically decreases how many objects can be stored in HEAP, to the point where around 50 GB of HEAP holds about the same as around 30 GB.

Another caveat for configuring HEAP: you may have noticed that you can configure a max and a min value for HEAP. With Elasticsearch, you generally want the max and min HEAP values to match so that HEAP is not resized at runtime. So when you’re testing HEAP values with your cluster, make sure that both values match.

Elasticsearch’s current guide states that there is an “ideal sweet spot” at around 64 GB of RAM. If you find that your implementation differs, keep the following in mind: no more than half of your available memory should be configured as HEAP, up to a maximum of around 30 GB, unless you have a total of more than 128 GB of RAM, in which case you could go for 64 GB of HEAP.
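
If you want to sanity-check those rules on a running cluster, a sketch along these lines can pull each node’s configured HEAP from the nodes info API and warn when min and max differ, or when max creeps past the compressed-OOP range. The URL and the ~31 GB cutoff are assumptions.

```python
import requests

ES_URL = "http://localhost:9200"          # assumption: local cluster
COMPRESSED_OOP_LIMIT = 31 * 1024 ** 3     # assumption: stay safely under the ~32 GB boundary

def check_heap():
    """Warn if heap min != max, or if heap max exceeds the compressed-OOP range."""
    resp = requests.get(f"{ES_URL}/_nodes/jvm")
    resp.raise_for_status()
    for node_id, node in resp.json()["nodes"].items():
        mem = node["jvm"]["mem"]
        heap_init = mem["heap_init_in_bytes"]   # corresponds to -Xms
        heap_max = mem["heap_max_in_bytes"]     # corresponds to -Xmx
        name = node.get("name", node_id)
        if heap_init != heap_max:
            print(f"{name}: heap min ({heap_init}) != heap max ({heap_max})")
        if heap_max > COMPRESSED_OOP_LIMIT:
            print(f"{name}: heap max {heap_max} bytes is above the compressed-OOP range")

if __name__ == "__main__":
    check_heap()
```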

Refresh: but not too often

Indexing is the process of storing a document and making it searchable; however, just because a document is indexed does not mean that it is available in search – yet. To make a document available for search, the index needs to be refreshed, which writes recently indexed documents into a new searchable segment. By default, this interval (the refresh_interval setting) is set to 1s. Although the short interval makes indexed documents available in near real time, the resources involved slow indexing down and can cause performance lag. Raising the value to as much as half a minute or more can drastically improve indexing throughput. The tradeoff in this case is between indexing performance and availability for search.
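
As an example, raising the refresh interval on a write-heavy index through the index settings API might look like the sketch below. The index name my-logs and the 30s value are assumptions; pick a value that matches how fresh your searches actually need to be.

```python
import requests

ES_URL = "http://localhost:9200"   # assumption: local cluster
INDEX = "my-logs"                  # hypothetical index name

# Trade search freshness for indexing throughput by refreshing every 30s instead of 1s.
resp = requests.put(
    f"{ES_URL}/{INDEX}/_settings",
    json={"index": {"refresh_interval": "30s"}},
)
resp.raise_for_status()
print(resp.json())   # {"acknowledged": true} if the setting was applied
```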

Considerations for disk sizing

When sizing the total disk capacity of your cluster, it’s important to know what factors contribute to disk utilization beyond how much data you send.

  • Low watermark: when the disk usage on any individual node reaches 85% (the default), Elasticsearch will stop allocating new shards to that node. It’s especially important to realize that while no new shards are being allocated, existing shards on that node can still receive data and may still grow in size as data is indexed into them (see the disk-usage sketch after this list).
  • High watermark: data will still be written to existing shards until the node reaches its high watermark (90% by default). Once this happens, the cluster will try to relocate shards off that node onto other nodes. Depending on the state of your cluster and how much capacity is available on your other nodes, this can be detrimental to performance.
  • Replicas: the default configuration for Elasticsearch is a single replica. Depending on your business needs and application, you might need to configure more than this. (To prevent data loss you should have at least one replica.) Each replica is a full copy of each index, so you’ll need the same amount of space per replica as you need per index.
  • Sharding: larger shards, up to a point, are more efficient at storing indexed data. Since sharding needs can vary quite significantly between use cases, you will need to experiment with how many shards are appropriate for your needs.
    • An added concern is how resilient you need your cluster to be to node failure. When a node fails, its shards are reallocated to other nodes, provided there are replicas and enough available disk.
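
As a starting point for keeping an eye on those thresholds, the sketch below reads per-node disk usage from the _cat/allocation API and flags nodes approaching the default watermarks. The host and the 85%/90% values (which you may have changed via cluster.routing.allocation.disk.watermark.low and .high) are assumptions.

```python
import requests

ES_URL = "http://localhost:9200"   # assumption: local cluster
LOW_WATERMARK = 85                 # default low watermark, percent
HIGH_WATERMARK = 90                # default high watermark, percent

def check_disk_usage():
    """Flag nodes whose disk usage is near or past the allocation watermarks."""
    resp = requests.get(
        f"{ES_URL}/_cat/allocation",
        params={"format": "json", "h": "node,shards,disk.percent,disk.used,disk.avail"},
    )
    resp.raise_for_status()
    for row in resp.json():
        pct_raw = row.get("disk.percent")
        if not pct_raw:            # e.g. the UNASSIGNED shards row has no disk stats
            continue
        pct = int(pct_raw)
        if pct >= HIGH_WATERMARK:
            print(f"{row['node']}: {pct}% disk used, above the high watermark")
        elif pct >= LOW_WATERMARK:
            print(f"{row['node']}: {pct}% disk used, above the low watermark")

if __name__ == "__main__":
    check_disk_usage()
```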

Budgeting your cache carefully

Elasticsearch has a concept called field data that essentially un-inverts the inverted index. It works like this: if you need to know which values a particular field holds, e.g. the values of http_status_code, so that you can see all the 200s, 4xxs, 5xxs, and so on, you need the reverse of the inverted index. This structure is called field data; it is built at query time and stored in memory – specifically HEAP memory. Because building it is resource intensive, once field data is generated it remains in HEAP for the remainder of the segment’s life cycle by default. As you might imagine, if you are not careful this means you could easily chew through your available HEAP with field data alone. There are a couple of ways to avoid this scenario:

  • Limit how much HEAP memory field data can use with the indices.fielddata.cache.size setting. Note that you can configure this as either a percentage or a static value (see the monitoring sketch after this list).
  • Rely on doc values, the on-disk alternative to field data. Although doc values are the default for most fields, some may need to be manually mapped. Note that text fields do not support doc values.
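
To see whether field data is eating into HEAP, and which fields are responsible, one option is a sketch along these lines using the _cat/fielddata API. The host and the per-field size threshold are assumptions; any field that shows up here repeatedly is a candidate for remapping to a type backed by doc values.

```python
import requests

ES_URL = "http://localhost:9200"          # assumption: local cluster
FIELDDATA_ALERT_BYTES = 256 * 1024 ** 2   # assumption: flag fields using more than 256 MB

def check_fielddata():
    """Report which fields hold the most field data in HEAP on each node."""
    resp = requests.get(
        f"{ES_URL}/_cat/fielddata",
        params={"format": "json", "bytes": "b", "h": "node,field,size"},
    )
    resp.raise_for_status()
    for row in resp.json():
        size = int(row["size"])
        if size > FIELDDATA_ALERT_BYTES:
            print(f"{row['node']}: field '{row['field']}' uses {size} bytes of field data")

if __name__ == "__main__":
    check_fielddata()
```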

The other type of cache to be aware of is the query cache, or node query cache as of Elasticsearch 6.x. The query cache is an LRU cache shared across queries, so once the cache fills, the least recently used entries are evicted. As with field data, it is a good idea to bound how much memory can be devoted to your query cache using the indices.queries.cache.size setting. And just like field data, you can set the limit as either a percentage or a static value.
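
A quick way to judge whether that bound is holding is to watch query cache memory, hit ratio, and evictions in the node stats, along the lines of this sketch; the host is an assumption. A steadily growing eviction count with a low hit ratio suggests the cache is churning rather than helping.

```python
import requests

ES_URL = "http://localhost:9200"   # assumption: local cluster

def check_query_cache():
    """Print per-node query cache memory use, hit ratio, and evictions."""
    resp = requests.get(f"{ES_URL}/_nodes/stats/indices")
    resp.raise_for_status()
    for node_id, node in resp.json()["nodes"].items():
        qc = node["indices"]["query_cache"]
        hits, misses = qc["hit_count"], qc["miss_count"]
        total = hits + misses
        hit_ratio = hits / total if total else 0.0
        print(f"{node.get('name', node_id)}: "
              f"memory={qc['memory_size_in_bytes']} bytes, "
              f"hit_ratio={hit_ratio:.2f}, evictions={qc['evictions']}")

if __name__ == "__main__":
    check_query_cache()
```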

Just getting started

Improving the performance of Elasticsearch clusters is a bit of an art, because logging workloads can differ so wildly between environments. That said, the advice compiled above, drawn from both our own experience and that of the community, should serve as a good starting point.

As always, when making changes to your cluster, make sure you are monitoring it with the available tools so you can see the impact of your changes. Depending on how much time you wish to invest in cluster performance, take a look at Elasticsearch’s benchmarking tool Rally or our own Elasticsearch benchmarking tool to analyze the impact of each change.
