Redis Performance Monitoring with the ELK Stack

Redis, the popular open source in-memory data store, can also persist data to disk and supports a variety of data structures such as lists, sets, sorted sets (with range queries), strings, geospatial indexes (with radius queries), bitmaps, hashes, and HyperLogLogs.

Today, the in-memory store is used to solve various problems in areas such as real-time messaging, caching, and statistics calculation. In this post, I will look at how to monitor Redis performance with the ELK Stack by using it to ship, analyze, and visualize the data.


The missing pieces here are where Redis metrics are stored and how to get the specific ones that you need. Luckily, Redis exposes all of its metrics through the redis-cli info command. If you run redis-cli info in your terminal, the output (truncated here for brevity) will look something like this:

# Server
os:Linux 4.2.0-27-generic x86_64
...

# Clients
...

# Memory
...

# Persistence
...

# Stats
...

# Replication
...

# CPU
...

# Cluster
...

# Keyspace
...
You can use the redis-cli info command to see specific sections of data. For example, executing redis-cli info memory will return the information in the memory section.

The redis-cli info command returns a great deal of useful information on metrics including memory consumption, client connections, persistence, master and slave replication information, CPU consumption, and Redis command statistics. I will use this information to show you how to create a powerful monitoring tool with Kibana that will allow you to monitor your Redis performance and keep the data store up and running.
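This output is mechanically parseable: section headers begin with "#", and every metric is a colon-separated key:value pair. As a minimal Ruby sketch of that parsing (the two-line sample below is made up for illustration):

```ruby
# Minimal parser for `redis-cli info` output: section headers begin
# with "#", metrics are colon-separated key:value lines.
# The sample text below is made up for illustration.
def parse_info(text)
  metrics = {}
  section = nil
  text.each_line do |line|
    line = line.strip
    next if line.empty?
    if line.start_with?('#')
      section = line.delete('#').strip
    else
      key, value = line.split(':', 2)
      metrics[key] = { section: section, value: value }
    end
  end
  metrics
end

sample = <<~INFO
  # Clients
  connected_clients:1
  blocked_clients:0
INFO

puts parse_info(sample)['connected_clients'][:value] # prints "1"
```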

There are several ways to collect data from Redis. One is to use Collectd — which, by the way, is one of our favorite DevOps tools — and a Redis plugin; another is to use redis-stat, which dumps data into a file that you specify on the redis-stat command line. For the sake of this post, however, we will keep the solution relatively clean and simple without introducing any new tools.

Logstash is an advanced tool with a wide collection of plugins. The exec input plugin will do the job nicely here because it runs a command periodically and emits the output as events. When using the plugin, you specify the command to execute as well as the interval at which Logstash will run it.

Now that you have a way to obtain, ship, store, and analyze metrics, there are a few things you should know about the metrics that Redis provides and how we use them to monitor the data store.

The Performance Metrics Inside Redis

Let’s start with the clients section, which contains two specific metrics (out of four total) that can be used for monitoring: connected_clients and blocked_clients. The connected_clients metric contains the number of client connections, and the blocked_clients metric contains the number of clients that are waiting on the result of a blocking call (BLPOP, BRPOP, or BRPOPLPUSH).

Next is the memory section, which contains eight metrics on memory consumption. Two of them are redundant because they present the same value in two different ways. Another two are very important for monitoring: used_memory and mem_fragmentation_ratio. The used_memory metric reveals the total number of bytes that are being allocated by Redis and the mem_fragmentation_ratio metric shows the ratio between used_memory_rss (the number of bytes that Redis is allocating, according to the operating system) and used_memory.

How you should interpret these values depends on your level of expertise in maintaining services such as Redis, but used_memory_rss should ideally be only a bit higher than used_memory. If used_memory_rss is significantly greater than used_memory, the memory is likely fragmented. You can track the issue with the mem_fragmentation_ratio metric.
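To make the relationship concrete, the ratio can be recomputed by hand. A small Ruby sketch, with byte counts made up for illustration (the interpretation in the comments is a common rule of thumb, not a value prescribed by Redis):

```ruby
# mem_fragmentation_ratio is simply used_memory_rss / used_memory.
# The byte counts here are illustrative.
def fragmentation_ratio(used_memory_rss, used_memory)
  used_memory_rss.to_f / used_memory
end

# A ratio well above 1 suggests fragmentation; a ratio below 1
# suggests the OS has swapped part of Redis' memory to disk.
ratio = fragmentation_ratio(800_000, 532_376)
puts format('%.2f', ratio) # prints "1.50"
```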

In the stats section, you can find metrics including the number of commands processed per second (instantaneous_ops_per_sec), the total number of commands processed by the server (total_commands_processed), the total connections received (total_connections_received), the number of connections rejected after the maximum number of clients is reached (rejected_connections), and the number of keys evicted because of the maximum memory limit (evicted_keys).

The final section that provides valuable metrics for monitoring Redis is CPU. There, you will find information about CPU consumption: used_cpu_sys, the system CPU consumed by your Redis server, and used_cpu_user, the user CPU it has consumed. Note that these are cumulative counters of CPU time (in seconds) spent in kernel mode and user mode, so a usage rate has to be derived from the difference between successive samples.
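A sketch of that derivation in Ruby, assuming two samples of used_cpu_user taken two seconds apart (the numbers are invented for illustration):

```ruby
# Turn two cumulative used_cpu_* readings into a usage rate:
# CPU-seconds consumed per second of wall-clock time.
def cpu_rate(prev_seconds, curr_seconds, interval_seconds)
  (curr_seconds - prev_seconds) / interval_seconds
end

# used_cpu_user went from 12.50s to 12.54s over a 2-second interval
rate = cpu_rate(12.50, 12.54, 2.0)
puts format('%.0f%%', rate * 100) # prints "2%" (2% of one core)
```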

Shipping Redis Data to Elasticsearch

The next step involves shipping the numbers taken from the metrics above to Elasticsearch. The Logstash exec input plugin is your best candidate for the job. Before you start Logstash, however, you have to configure the software. The code looks like this:

input {
  exec {
    command => "redis-cli info clients"
    interval => 2
    type => "clients"
  }
  exec {
    command => "redis-cli info memory"
    interval => 2
    type => "memory"
  }
  exec {
    command => "redis-cli info cpu"
    interval => 2
    type => "cpu"
  }
  exec {
    command => "redis-cli info stats"
    interval => 2
    type => "stats"
  }
  exec {
    command => "redis-cli info replication"
    interval => 2
    type => "replication"
  }
}

filter {
  # Split the multi-line INFO output into one event per line
  split {
  }
  # Turn each "key:value" line into a numeric field on the event;
  # skip lines without a value (such as the "# Section" headers)
  ruby {
    code => "fields = event['message'].split(':')
             event[fields[0]] = fields[1].to_f unless fields[1].nil?"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
As you can see, the output of the redis-cli commands serves as Logstash’s input: command is the command to execute, interval is how often (in seconds) the command is run, and type tags each section so that Logstash adds it as a property of the document that will be stored in Elasticsearch.

The filter section first splits messages that are made up of multiple lines into separate events. Then, a short piece of Ruby code splits each line of the redis-cli output on the “:” character, so that everything to the left becomes a key and everything to the right becomes a value. For example, given the line used_memory:532376, used_memory becomes the key and 532376 the value. Filtering structures the data in a way that is easier to query in Kibana, thanks to the clear distinction between a value and what it represents.
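For illustration, once the split and ruby filters have run, a single event stored in Elasticsearch might look roughly like this (the metadata fields and the timestamp value are assumptions and vary with the Logstash version):

```json
{
  "@timestamp": "2016-06-01T12:00:00.000Z",
  "message": "used_memory:532376",
  "type": "memory",
  "used_memory": 532376.0
}
```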

Now, you’re ready to start your ELK Stack. After a few moments, Logstash will begin shipping events, and the Discover section of Kibana should look like this screenshot:


Kibana’s Discover section with statistics from Redis


Kibana’s Discover section with a detailed view of CPU statistics from Redis

Now, all you have to do is create charts to help you better visualize and analyze your overall Redis performance. You should first create charts that display information on your CPU usage in both kernel mode and user mode. Your Y-axis can be either your used_cpu_user property or your used_cpu_sys property, and your X-axis can reflect a period of time.

Your chart will look something like this:


CPU usage in kernel mode


CPU usage in user mode

Next, you should create a chart for memory consumption. The most important metrics that should be displayed are memory used and the memory fragmentation ratio, which you can visualize throughout a given time span:


Used memory consumption

Remember, the values that redis-cli returns are in bytes, which can make the charts hard to read. You can use the JSON input section in Kibana to convert your Y-axis value into KBs or MBs.
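One commonly used form of that JSON input is a value script on the metric aggregation. The exact syntax varies between Kibana and Elasticsearch versions, so treat this as a sketch:

```json
{ "script": "_value / 1048576" }
```

Dividing by 1,048,576 converts bytes to megabytes; use 1024 for kilobytes.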

Moving right along, you’ll want to create two charts for your clients section, which will represent the information regarding your connected clients and blocked clients over a period of time:


Connected clients at a specific moment in time


Blocked clients at a specific moment in time

Finally, the stats section has very useful monitoring metrics, but for the sake of this post, let’s single out the total_commands_processed metric, which is helpful in measuring latency. Creating a line chart for this metric is pretty straightforward, and the end result should look something like this:


Total processed commands line chart

At this point, you can stop creating charts. The technique is the same for the other values, and you now have enough to set up a custom dashboard. Other metrics, such as those in the replication section, can be used to monitor a distributed Redis environment.

Your saved charts can easily be added to a single place in your dashboard. They’ll look something like this:


Kibana dashboard to monitor Redis performance

Interpreting Stats Metrics

Having the metrics readily available is all well and good, but they are of little use if you don’t know how to spot performance issues with them. Here’s how you can interpret specific metrics to identify issues:

  • The used_memory metric shows you the total number of bytes that Redis has allocated. If a Redis instance exceeds its available memory, the OS will start swapping old and unused sections of memory to disk to make room for newer, active pages. Writing to and reading from disk is far slower than working from RAM, and this will degrade Redis’ performance, along with that of every application or service that depends on Redis.
  • The total_commands_processed metric provides you with the total number of commands processed by the Redis server. This metric can help diagnose latency (the time it takes clients to receive a response from the server), which is the most direct way to detect changes in Redis’ performance.
  • If there is a decrease in performance, you will see that the total_commands_processed metric either drops or stalls more than usual. This is when Kibana can give you a clear overview of changes that are occurring over a period of time.
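A rough Ruby sketch of that kind of stall detection, computing commands-per-interval from successive total_commands_processed samples (the sample values are invented for illustration):

```ruby
# Differences between successive total_commands_processed samples
# give the number of commands handled in each interval.
def throughput(samples)
  samples.each_cons(2).map { |prev, curr| curr - prev }
end

# Samples taken every 2 seconds; note the sudden drop at the end.
samples = [10_000, 10_450, 10_900, 10_905, 10_910]
puts throughput(samples).inspect # prints "[450, 450, 5, 5]"
```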


Developers love using Redis because it is a fast, easy-to-use in-memory data store that lets them resolve issues efficiently. It is therefore beneficial to understand the factors that can lead to decreased Redis performance. Knowing how to interpret these metrics is important as well, since much of that knowledge can be applied to other messaging systems such as Kafka, RabbitMQ, and ActiveMQ.

While we didn’t get into the details of every metric that Redis exposes, the information above provides a solid base for figuring out which metrics you should be monitoring. Finally, it is important to do your homework on which tool you’d like to use to monitor Redis; for the sake of this article, we simply chose the ELK Stack.
