Logging Redis with ELK and Logz.io


Redis is an extremely fast, in-memory NoSQL data store. While it is used mainly as a cache, it can be applied to uses as diverse as graph representation and search. Client libraries are available in all of the major programming languages, and it is provided as a managed service by all of the top cloud service providers. For the past three years, Redis has been named the most loved database by the Stack Overflow Developer Survey.

While Redis’ popularity might be related to its ease of use, administering a Redis server or cluster often comes with challenges. As with any other system, failures can stem from issues in Redis itself or from the resources Redis depends on (e.g., disk failures or running out of memory). Analyzing Redis logs can provide a variety of information about a Redis server’s operation, including its running lifetime (e.g., starting or stopping), warnings (e.g., an unoptimized OS configuration), persistence (e.g., loading and saving a Redis database from or to disk), and errors.
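
To give an idea of what will be shipped, here are a few representative log lines (the PIDs and timestamps are illustrative). Each entry consists of the process ID, a role letter (e.g., M for master, C for a background-save child), a timestamp, a level marker (* for notice, # for warning), and the message itself:

1234:M 25 Nov 2019 10:00:00.123 * Ready to accept connections
1234:M 25 Nov 2019 10:05:00.456 * 100 changes in 300 seconds. Saving...
1301:C 25 Nov 2019 10:05:00.789 * RDB: 0 MB of memory used by copy-on-write
1234:M 25 Nov 2019 10:00:00.120 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition.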

This article will describe how to ship Redis logs into an ELK stack—either your own or Logz.io’s—so that you can easily analyze them, regardless of how many servers Redis is running on. We will start off describing how to set up and test Redis before providing instructions for setting up log shipping.

Deploying Redis

If the option is available, the easiest way to install Redis is through a package manager. On Linux distributions such as Ubuntu, which use the APT package manager, you can install Redis using the following command:

sudo apt-get install redis-server 

Alternatively, you can also compile Redis from source. This process is recommended, since it ensures that you are using the latest version. However, if you are simply testing Redis out, this step is not required. Follow the instructions in the Redis Quick Start documentation to compile Redis from source.
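
For reference, the compilation process boils down to something like the following (check the Quick Start page for the current steps, as the download URL may change over time):

wget http://download.redis.io/redis-stable.tar.gz
tar xzf redis-stable.tar.gz
cd redis-stable
make
sudo make install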

Testing Redis

If you installed Redis using apt-get, you can use the following command to ensure that the Redis server is running:

sudo service redis-server status 

Next, you can use the redis-cli client tool to send some basic commands to the Redis server. This tool should have been installed along with redis-server; if for some reason it was not, you can install the redis-tools package separately via apt-get. To run redis-cli and execute basic commands to verify that Redis is working correctly, use the following:

$ redis-cli
127.0.0.1:6379> SET x 1
OK
127.0.0.1:6379> GET x
"1"
127.0.0.1:6379> DEL x
(integer) 1

The above example shows how to set the value of a key called x to 1, retrieve that value, and then delete it. Press Ctrl+C to exit redis-cli when you are done.
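
You can also verify the server without entering the interactive prompt; a PING should be answered with PONG:

$ redis-cli ping
PONG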

Installing ELK

Since we’re planning on shipping our logs to an ELK stack, we have two options. We can either set up our own ELK stack or use a managed service, such as the one provided by Logz.io. We’ll take a look at both options, beginning with installing our own ELK stack.

Installing Elasticsearch

Elasticsearch can be installed from a Debian package when working on Ubuntu or a similar platform.

To do this, first, download and install the Elasticsearch Signing Key using the following command:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Then, ensure that the apt-transport-https package is installed:

sudo apt-get install apt-transport-https 

Add the repository definition:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

Update the package list based on the new repository and install Elasticsearch:

sudo apt-get update && sudo apt-get install elasticsearch 

Finally, start the Elasticsearch service:

sudo service elasticsearch start 

Verify that Elasticsearch is running by hitting the endpoint at port 9200. You should get a JSON response with the well-known “You Know, for Search” tagline:

$ curl localhost:9200
{
 "name" : "elkserver",
 "cluster_name" : "elasticsearch",
 "cluster_uuid" : "Tc5mRXYASparwWeRDgtd3A",
 "version" : {
   "number" : "7.5.0",
   "build_flavor" : "default",
   "build_type" : "deb",
   "build_hash" : "e9ccaed468e2fac2275a3761849cbee64b39519f",
   "build_date" : "2019-11-26T01:06:52.518245Z",
   "build_snapshot" : false,
   "lucene_version" : "8.3.0",
   "minimum_wire_compatibility_version" : "6.8.0",
   "minimum_index_compatibility_version" : "6.0.0-beta1"
 },
 "tagline" : "You Know, for Search"
}

Installing Kibana

Install Kibana using the following command:

sudo apt-get install kibana 

Kibana’s default configuration is to run on port 5601 and interact with Elasticsearch on localhost:9200. If, for any reason, you need to change this behavior, then edit the configuration file at /etc/kibana/kibana.yml as follows, replacing the values as necessary:

server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]

Then, start Kibana as follows:

sudo service kibana start 

Open your browser at http://localhost:5601/ to ensure that Kibana is running. If you get the message, “Kibana server is not ready yet,” just give it a few more seconds. Eventually, you should see Kibana’s welcome screen:

[Screenshot: Kibana welcome screen]
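
If you are working on a headless server instead, you can perform a similar check from the command line via Kibana’s status API, which returns a JSON document describing the server’s state:

curl -s localhost:5601/api/status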

Installing Filebeat

Install Filebeat as follows:

sudo apt-get install filebeat 

Filebeat will be used to ship the logs into Elasticsearch. The actual configuration and commands will vary depending on whether you are targeting a self-managed ELK stack or a managed service such as Logz.io, as will be described later.

While Filebeat supports a number of different outputs (including Logstash), shipping logs directly into Elasticsearch should be sufficient for most use cases.
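
For a self-managed stack, the relevant section of /etc/filebeat/filebeat.yml looks something like the snippet below. These are the defaults, so if Elasticsearch is running locally on port 9200, no changes are required:

output.elasticsearch:
  hosts: ["localhost:9200"]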

Shipping Redis Logs to ELK

The Redis module for Filebeat makes it really easy to ship Redis logs to Elasticsearch and visualize them in Kibana. It sets default configurations for Filebeat (including the path to log files and the Redis server endpoint), sets up the ingest pipeline to automatically parse out the structure of Redis logs into Elasticsearch fields, and deploys visualizations and dashboards to facilitate the analysis of the log data in Kibana.

The Redis module configuration can be found in /etc/filebeat/modules.d/redis.yml.disabled and, by default, it looks like this:

# Module: redis
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.5/filebeat-module-redis.html

- module: redis
  # Main logs
  log:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: ["/var/log/redis/redis-server.log*"]

  # Slow logs, retrieved via the Redis API (SLOWLOG)
  slowlog:
    enabled: true

    # The Redis hosts to connect to.
    #var.hosts: ["localhost:6379"]

    # Optional, the password to use when connecting to Redis.
    #var.password:

No configuration changes are necessary to get started, since sensible defaults apply. However, you may update this file if you need to customize the settings, as shown below.
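
For example, if Redis writes its log to a non-default location, you could uncomment and adjust var.paths (the path below is purely illustrative):

- module: redis
  log:
    enabled: true
    var.paths: ["/data/redis/redis-server.log*"]
  slowlog:
    enabled: true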

With Elasticsearch and Kibana already running, use the following commands to enable the Redis module for Filebeat and set up the required resources in Elasticsearch and Kibana. These include an index pattern (filebeat-*), an ingest pipeline, and Kibana visualizations and dashboards:

sudo filebeat modules enable redis
sudo filebeat setup -e

This may take a little while to complete. When the process is done, restart Filebeat as follows:

sudo service filebeat restart 

Soon after, you should be able to see the shipped logs in Kibana, where they appear under the filebeat-* index pattern:

[Screenshot: shipped logs under the filebeat-* index pattern in Kibana]
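
If the logs do not show up, Filebeat’s built-in test subcommands are a quick way to verify both the configuration file and the connection to Elasticsearch:

sudo filebeat test config
sudo filebeat test output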

Shipping Redis Logs into Logz.io

Logz.io is a managed service, so while you can still use Filebeat to ship your logs, you cannot use the Filebeat Redis module approach described earlier. Instead, you must configure Filebeat to send additional metadata that tells Logz.io which account the logs belong to and how they should be parsed.

The setup for Filebeat is described in the Shipping with Filebeat documentation. The first step in the process is installing the Logz.io certificate, which allows logs to be shipped securely over an encrypted (TLS) connection:

sudo wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt -P /etc/pki/tls/certs/

Once that is done, Filebeat must be configured to pick up the logs we want and send them to Logz.io. For this, we need to edit the Filebeat configuration file, which is found at /etc/filebeat/filebeat.yml.

The filebeat.inputs section should look something like this:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/redis/redis-server.log*
  exclude_lines: ["^\\s+[\\-`('.|_]"]
  exclude_files: ['.gz$']
  fields:
    logzio_codec: plain
    token: [your_logz_io_token]
    type: redis
  fields_under_root: true
  encoding: utf-8

Perhaps the most interesting part of the above configuration is the section under fields. This configuration provides the account token, which can be found on the Logz.io General Settings page (open the menu in the top right corner and go to Settings -> General). The account token identifies the account to which the logs will be sent. We are also setting the logzio_codec field (which tells Logz.io whether the incoming logs are JSON or raw text) and the type field (which is used to identify the structure of the logs at a later parsing stage). The exclude_lines pattern, meanwhile, filters out the lines of ASCII art that Redis prints to its log on startup.

The output section of the Filebeat configuration file should be as follows:

output:
  logstash:
    hosts: ["listener.logz.io:5015"]  
    ssl:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

The logs are sent to Logz.io’s Logstash at the specified endpoint. This endpoint may vary slightly depending on the region you selected when setting up your account (EU-hosted accounts, for example, use listener-eu.logz.io).

At this stage, restart Filebeat so that the new configuration takes effect:

sudo service filebeat restart 

If your configuration is correct, you should see logs appear in your Logz.io account. Thanks to the redis type we specified in the Filebeat configuration, Logz.io is able to parse the log data into separate and meaningful fields, as shown in the screenshot below:

[Screenshot: Redis logs parsed into fields in Logz.io]

Analyzing the Data

By looking at Redis logs, administrators can see how well their Redis servers are operating and potentially identify issues that need to be addressed. If you consider that Redis can run as a cluster on several different machines, it is easy to imagine how shipping all those logs into a central ELK stack can quickly and easily provide insights into Redis’ operation.

Looking at the visualizations and dashboards installed by the Redis Filebeat module (which can similarly be set up in Logz.io) gives you an idea of the kinds of insights that can be provided.

[Screenshot: Redis logs-over-time visualization]

The above visualization shows a timeline of Redis logs. The ratio between different log levels (in this case warning and notice) is made evident by the color coding in the stacked chart.

The visualization below, on the other hand, shows a two-level pie chart meant to depict error levels per role (e.g., master and child).

[Screenshot: error levels per role pie chart]

Finally, the dashboard shown below combines the above two visualizations with a view of the latest logs in order to give administrators a snapshot of the overall state of Redis.

[Screenshot: Redis overview dashboard]

Summary

Redis is an important part of many enterprise architectures, and ensuring its smooth operation should be a priority for DevOps engineers. The insights gained from Redis logs can result in better performance (e.g., by resolving warnings via configuration), but, more importantly, they can help staff troubleshoot critical issues that impact application stability.

When such issues arise, shipping logs to a centralized ELK stack is essential for quickly finding the relevant logs and restoring the system to normal operation with minimal disruption. Because it is a managed ELK stack, Logz.io eliminates the effort required to set up and maintain an in-house deployment, allowing teams to focus on ensuring that their applications work as required.
