Logging Redis with ELK and Logz.io


Redis is an extremely fast NoSQL data store. While it is used mainly as a cache, it can be applied to uses as diverse as graph representation and search. Client libraries are available in all of the major programming languages, and it is provided as a managed service by all of the top cloud service providers. For the past three years, Redis has been named the most loved database by the Stack Overflow Developer Survey.

While Redis’ popularity might be related to its ease of use, administering a Redis server is a challenge, let alone a Redis cluster. As with other systems, failures occur from issues within Redis itself or from its dependencies (e.g., disk failures or running out of memory). Analyzing Redis logs provides a variety of information about a Redis server’s operation. That data covers things like the server’s lifecycle (e.g., starting or stopping) and warnings (e.g., an unoptimized OS configuration). It also gives insight into Redis persistence (e.g., loading and saving a Redis database from or to disk) and various errors.
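For reference, a typical Redis log line is prefixed with the process ID, a role character (M for master, C for a writing child process, S for a replica), a timestamp, and a level marker (* for notice, # for warning), followed by the message. A startup line, for example, looks something like this:

1234:M 25 Nov 2019 10:00:00.000 * Ready to accept connections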

This article describes how to ship Redis logs into an ELK stack—either your own or Logz.io’s—so that you can easily analyze them, regardless of how many servers Redis is running on. We will start off describing how to set up and test Redis before providing instructions for setting up log shipping.


Installing Redis

If the option is available, the easiest way to install Redis is through a package manager. On Linux distributions such as Ubuntu, which use the APT package manager, you can install Redis using the following command:

sudo apt-get install redis-server 

Alternatively, you can also compile Redis from source. This process is recommended, since it ensures that you are using the latest version. However, if you are simply testing Redis out, this step is not required. Follow the instructions in the Redis Quick Start documentation to compile Redis from source.

Next, set up Redis to start automatically with the system. It’s also recommended to restart Redis at this point, as shown below.

sudo systemctl enable redis-server.service 
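To restart Redis on a systemd-based distribution such as Ubuntu:

sudo systemctl restart redis-server.service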

Configure Redis

To change any relevant parameters, configure Redis by editing its configuration file, redis.conf (located at /etc/redis/redis.conf when installed via APT on Ubuntu).
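Since this article is about logging, the two directives most worth checking are logfile and loglevel. The values below are the Ubuntu package defaults; adjust them if your setup differs:

# Where Redis writes its log (an empty string means logging to stdout)
logfile /var/log/redis/redis-server.log

# One of: debug, verbose, notice, warning
loglevel notice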

Testing Redis

If you installed Redis using apt-get, use the following command to ensure that the Redis server is running:

sudo service redis-server status 

Next, use the redis-cli client tool to send some basic commands to the Redis server. This tool should have been installed along with redis-server, but, if for some reason it was not, you will need to separately install redis-tools via apt-get. To run redis-cli and execute basic commands to verify that Redis is working correctly, use the following:

$ redis-cli
127.0.0.1:6379> SET x 1
OK
127.0.0.1:6379> GET x
"1"
127.0.0.1:6379> DEL x
(integer) 1

The above example shows how to set the value of a key called x to 1, retrieve that value, and then delete it. Press Ctrl+C to exit redis-cli when you finish.
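Since we will be shipping Redis’ log file, it is also worth confirming that it is being written to. Assuming the default Ubuntu log path shown in redis.conf earlier:

sudo tail /var/log/redis/redis-server.log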

Installing ELK

Since we’re planning on shipping our logs to an ELK stack, we have two options. We can either set up our own ELK stack or use a managed service, such as the one provided by Logz.io. We’ll take a look at both options, beginning with installing our own ELK stack.

Installing Elasticsearch

You can install Elasticsearch using a Debian package when working on Ubuntu or a similar platform.

To do this, first download and install the Elasticsearch signing key using the following command:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Then, ensure that the apt-transport-https package is installed:

sudo apt-get install apt-transport-https 

Add the repository definition:
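For the Elasticsearch 7.x releases used in this article, Elastic’s documented repository definition is:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list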

Update the package list based on the new repository and install Elasticsearch:

sudo apt-get update && sudo apt-get install elasticsearch 

Finally, start the Elasticsearch service:

sudo service elasticsearch start 

Verify that Elasticsearch is running by hitting the endpoint at port 9200. You should get a JSON response with the well-known “You Know, for Search” tagline:

$ curl localhost:9200
{
 "name" : "elkserver",
 "cluster_name" : "elasticsearch",
 "cluster_uuid" : "Tc5mRXYASparwWeRDgtd3A",
 "version" : {
   "number" : "7.5.0",
   "build_flavor" : "default",
   "build_type" : "deb",
   "build_hash" : "e9ccaed468e2fac2275a3761849cbee64b39519f",
   "build_date" : "2019-11-26T01:06:52.518245Z",
   "build_snapshot" : false,
   "lucene_version" : "8.3.0",
   "minimum_wire_compatibility_version" : "6.8.0",
   "minimum_index_compatibility_version" : "6.0.0-beta1"
 },
 "tagline" : "You Know, for Search"
}

Installing Kibana

Install Kibana using the following command:

sudo apt-get install kibana 

Kibana’s default configuration is to run on port 5601 and interact with Elasticsearch on localhost:9200. If, for any reason, you need to change this behavior, then edit the configuration file at /etc/kibana/kibana.yml as follows, replacing the values as necessary:

server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]

Then, start Kibana as follows:

sudo service kibana start 

Open your browser at http://localhost:5601/ to ensure that Kibana is running. If you get the message, “Kibana server is not ready yet,” just give it a few more seconds. Eventually, you should see Kibana’s welcome screen:

[Screenshot: Kibana welcome screen]

Installing Filebeat

Install Filebeat as follows:

sudo apt-get install filebeat 

Filebeat will be used to ship the logs into Elasticsearch. The actual configuration and commands will vary depending on whether you are targeting a self-managed ELK stack or a managed service such as Logz.io, as will be described later.

While Filebeat supports a number of different outputs (including Logstash), shipping logs directly into Elasticsearch should be sufficient for most use cases.
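For a self-managed stack, the direct-to-Elasticsearch output is configured in /etc/filebeat/filebeat.yml and, assuming Elasticsearch is listening on localhost, looks something like this:

output.elasticsearch:
  hosts: ["localhost:9200"]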

Shipping Redis Logs to ELK

The Redis module for Filebeat makes it really easy to ship Redis logs to Elasticsearch and visualize them in Kibana. For one, it sets default configurations for Filebeat (including the path to log files and the Redis server endpoint). It also sets up the ingest pipeline to automatically parse out the structure of Redis logs into Elasticsearch fields. Finally, it deploys visualizations and dashboards to facilitate the analysis of the log data in Kibana.

You’ll find the Redis module configuration in /etc/filebeat/modules.d/redis.yml.disabled. By default, it looks like this:

# Module: redis
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.5/filebeat-module-redis.html

- module: redis
  # Main logs
  log:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: ["/var/log/redis/redis-server.log*"]

  # Slow logs, retrieved via the Redis API (SLOWLOG)
  slowlog:
    enabled: true

    # The Redis hosts to connect to.
    #var.hosts: ["localhost:6379"]

    # Optional, the password to use when connecting to Redis.
    #var.password:

No configuration changes are necessary to make this work, since sensible defaults apply. However, you may update this file if you need to customize the settings, as in the example below.
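For instance, if your Redis log files live in a non-default location, you could uncomment var.paths and point it at them (the path below is purely illustrative):

- module: redis
  log:
    enabled: true
    var.paths: ["/var/data/redis/redis-server.log*"]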

Next, make sure Elasticsearch and Kibana are already running for this step. Then, use the following commands to enable the Redis module for Filebeat and set up the required resources in Elasticsearch and Kibana. These include an index pattern for Filebeat (filebeat-*), an ingest pipeline, and Kibana visualizations and dashboards:

sudo filebeat modules enable redis
sudo filebeat setup -e

This may take a little while to complete. When the process is done, restart Filebeat as follows:

sudo service filebeat restart 
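If you would rather not wait for logs to show up in order to confirm connectivity, Filebeat ships with a built-in check that tests the configured output:

sudo filebeat test output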

Soon after, you should be able to see the shipped logs in Kibana, where they appear under the filebeat-* index pattern:

[Screenshot: the filebeat-* index pattern in Kibana]

Shipping Redis Logs into Logz.io

Logz.io is a managed service, so while you can still use Filebeat to ship your logs, the Filebeat Redis module approach described above does not apply. Instead, you must configure Filebeat to send additional metadata that tells Logz.io which account the logs belong to and how to parse the Redis logs.

You can find the setup for Filebeat in the Shipping with Filebeat documentation, or check out our Filebeat tutorial. The first step in the process is installing the Logz.io certificate, which secures the shipment of logs over HTTPS:

sudo wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt -P /etc/pki/tls/certs/

Once that is done, you have to configure Filebeat to pick up the logs we want and send them to Logz.io. For this, we need to edit the Filebeat configuration file, which resides at /etc/filebeat/filebeat.yml.

filebeat.inputs

The filebeat.inputs section should look something like this:

filebeat.inputs:

- type: log

  enabled: true

  paths:
    - /var/log/redis/redis-server.log*

  exclude_lines: ["^\\s+[\\-`('.|_]"]

  exclude_files: ['.gz$']

  fields:
    logzio_codec: plain
    token: [your_logz_io_token]
    type: redis

  fields_under_root: true
  encoding: utf-8

Perhaps the most interesting part of the above configuration is the section under fields. This configuration provides the account token, which is found on the Logz.io General Settings page (click the button in the top right corner, then go to Settings -> General). The account token identifies the account to which the logs will be sent. We are also setting the logzio_codec field (which tells Logz.io whether the incoming logs are JSON or raw text) and the type field (which identifies the structure of the logs at a later parsing stage).

Filebeat Output

The output section of the Filebeat configuration file should be as follows:

output:
  logstash:
    hosts: ["listener.logz.io:5015"]  
    ssl:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']

The logs are sent to Logz.io’s Logstash at the specified endpoint. This endpoint may vary slightly depending on the region that you selected when setting up your account.
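For example, accounts hosted in Logz.io’s EU region use a regional listener host; the snippet below is illustrative only, so confirm the exact host in your account settings:

output:
  logstash:
    # Illustrative only: EU-region accounts use a regional listener host
    hosts: ["listener-eu.logz.io:5015"]
    # (ssl settings as shown above)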

At this stage, restart Filebeat:

sudo service filebeat restart 

If your configuration is correct, you should see logs appear in your Logz.io account. Thanks to the redis type we specified in the Filebeat configuration, Logz.io is able to parse the log data into separate and meaningful fields, as shown in the screenshot below:

[Screenshot: Redis logs parsed into fields in Logz.io]
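If nothing shows up, a quick way to generate fresh log lines for testing is to restart the Redis server, which writes several startup messages to its log:

sudo service redis-server restart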

Analyzing the Data

By looking at Redis logs, administrators can see how well their Redis servers are operating and potentially identify issues that need to be addressed. If you consider that Redis can run as a cluster spanning several different machines, it’s easy to imagine how shipping all of those logs into a central ELK Stack can quickly and easily provide insights into Redis’ operation.

Looking at the visualizations and dashboards that the Redis Filebeat module installed (which can similarly be set up in Logz.io) gives you an idea of the kinds of insights that can be provided.

[Screenshot: timeline of Redis logs, stacked by log level]

The above visualization shows a timeline of Redis logs. The ratio between different log levels (in this case, warning and notice) is evident from the color coding in the stacked chart.

The visualization below, on the other hand, shows a two-level pie chart meant to depict error levels per role (e.g., master and child).

[Screenshot: two-level pie chart of log levels per Redis role]

Finally, the dashboard below combines the two visualizations, giving admins a snapshot of the overall state of Redis.

[Screenshot: combined Redis logging dashboard]

Summary

Redis is an important part of many enterprise architectures. Consequently, ensuring its smooth operation should be a priority for DevOps engineers. The insights gained from Redis logs can result in better performance (e.g., by resolving warnings via configuration changes). More importantly, they help staff troubleshoot critical issues that impact application stability.

When there are critical issues that impact application stability, shipping logs to a centralized ELK stack is essential for quickly finding relevant logs and restoring the system to its normal operations with minimal disruption. Because it is a managed ELK stack, Logz.io eliminates the effort required to set up and maintain an in-house ELK stack. This allows teams to focus on ensuring that their applications work as required.


