Monitoring Puppet Server with the ELK Stack

This post is part 1 in a 2-part series about Puppet server logging with the ELK Stack. Part 2 explores how to analyze and visualize the data, as well as how to configure alerts.

Puppet is a popular open source configuration management platform that helps organizations automate the deployment, configuration, and management of their infrastructure. With Puppet, servers can be provisioned and their configurations propagated automatically, without the need for manual configuration.

In today’s cloud environments, which can consist of hundreds of distributed machines, this kind of tool is a must for DevOps teams, saving development time and resources that can be diverted elsewhere.

But as with any other tool in the DevOps toolkit, things can go wrong. Manifest configurations might be incorrect, or agents might go offline. This is where Puppet’s logging features come in handy, helping users monitor and troubleshoot the performance of the different components in a Puppet setup.

This blog series will explore how to use the logging features provided in an open source Puppet deployment together with the ELK Stack for centralized logging, analysis, and visualization.

Puppet Logging in a Nutshell

Puppet has an extensive logging architecture that produces quite a few different log files containing valuable information about the performance of both the Puppet master and its agents. Generally speaking, if you’re using an open source Puppet deployment, the two log files that should interest you are the Puppet server logs and the HTTP request logs.

The Puppet server logs messages and errors to the /var/log/puppetlabs/puppetserver/puppetserver.log file. Logs are routed to this file using the Logback logging library and can be configured via the /etc/puppetlabs/puppetserver/logback.xml file (as shown below). These logs contain a variety of information on the different processes being run and can be used to monitor the general health of the server. The default logging level is INFO.

HTTP traffic routed through your Puppet deployment is logged to a separate file, /var/log/puppetlabs/puppetserver/puppetserver-access.log. This logging is handled via a separate configuration file: /etc/puppetlabs/puppetserver/request-logging.xml. The data in these logs can be used to monitor the different requests being sent by the Puppet agents to the master.

Prerequisites

Before we start, a note on some assumptions I’m making about your setup:

  • Existing Puppet setup. The instructions provided here are meant for an open source Puppet installation on Ubuntu 16.04.
  • Existing ELK Stack (Elasticsearch, Kibana, and Filebeat; Logstash is not needed) or a Logz.io account. The instructions below explain how to hook up Puppet with either an open source ELK Stack or Logz.io. Both methods use Filebeat.

Logging Puppet in JSON

Before we begin building our logging pipeline, there is some preparation to do. By default, Puppet logs in plain text. This is fine, but it would require an extra parsing effort on our side using Logstash. A much easier, and lighter, approach is to configure Puppet to log in JSON, a format that Elasticsearch handles natively.

So our first step is to modify Logback’s appenders for both log types.

Modifying Puppet server logging

Open the /etc/puppetlabs/puppetserver/logback.xml file and add the following appender (at the same level as the other appenders):
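Something along these lines should work. The file path matches Puppet's default log directory, the rotation settings are illustrative, and the LogstashEncoder class assumes the logstash-logback-encoder library that ships with recent Puppet Server versions:

```xml
<appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- Write JSON-formatted server logs alongside the default plain-text log -->
    <file>/var/log/puppetlabs/puppetserver/puppetserver.log.json</file>
    <append>true</append>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>/var/log/puppetlabs/puppetserver/puppetserver-%d{yyyy-MM-dd}.log.json</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <!-- Emit each log event as a single-line JSON object -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```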

Then, add a new JSON appender-ref to the <root> section, and comment out the FILE appender-ref:
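The <root> section should then look something like this (the default file appender may be named FILE or F1, depending on your Puppet Server version):

```xml
<root level="info">
    <!-- The original plain-text file appender, commented out -->
    <!-- <appender-ref ref="FILE"/> -->
    <appender-ref ref="JSON"/>
</root>
```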

Modifying Puppet server access logs

Open the /etc/puppetlabs/puppetserver/request-logging.xml file and add the following appender (at the same level as the other appenders):
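Here's a sketch mirroring the server log appender above. Access logs are logback-access events, so they need the access variant of the encoder; again, the rotation settings are illustrative:

```xml
<appender name="JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/var/log/puppetlabs/puppetserver/puppetserver-access.log.json</file>
    <append>true</append>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>/var/log/puppetlabs/puppetserver/puppetserver-access-%d{yyyy-MM-dd}.log.json</fileNamePattern>
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <!-- LogstashAccessEncoder is the logback-access counterpart of LogstashEncoder -->
    <encoder class="net.logstash.logback.encoder.LogstashAccessEncoder"/>
</appender>
```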

And, as before, add an appender-ref in the configuration section:
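Unlike logback.xml, the request logging configuration references appenders directly under the <configuration> element:

```xml
<configuration debug="false">
    <!-- ... existing appenders ... -->
    <appender-ref ref="JSON"/>
</configuration>
```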

Be sure to save the files, and restart Puppet to apply the changes.
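On Ubuntu 16.04, for example:

```sh
sudo systemctl restart puppetserver
```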

You should see two new files under /var/log/puppetlabs/puppetserver: puppetserver.log.json and puppetserver-access.log.json.

Configuring Filebeat

Now that Puppet is logging in JSON, we can easily plug into the ELK Stack using Filebeat. We will configure Filebeat to track these two files and ship them directly to Elasticsearch for indexing, using JSON decoding in the Filebeat configuration to make sure the logs are parsed correctly.

Open your /etc/filebeat/filebeat.yml configuration file and enter the following configuration (be sure to change the paths if you used a different output destination for your Puppet logs):
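Here's a minimal sketch of what that configuration can look like, using Filebeat 5.x syntax (on Filebeat 6 and later, filebeat.prospectors becomes filebeat.inputs and input_type becomes type):

```yaml
filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/puppetlabs/puppetserver/puppetserver.log.json
  # Decode each line as JSON and place the parsed fields at the top level of the event
  json.keys_under_root: true
  json.add_error_key: true
  fields:
    log_type: puppetserver

- input_type: log
  paths:
    - /var/log/puppetlabs/puppetserver/puppetserver-access.log.json
  json.keys_under_root: true
  json.add_error_key: true
  fields:
    log_type: puppetserver-access

output.elasticsearch:
  # Point this at your own Elasticsearch instance
  hosts: ["localhost:9200"]
```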

Once you start Filebeat, you should see a new index created in Elasticsearch. You can then define a new index pattern in Kibana and begin analyzing the logs.
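For example, assuming Elasticsearch is listening on localhost:

```sh
sudo systemctl start filebeat

# Verify that a new filebeat-* index was created
curl "localhost:9200/_cat/indices?v"
```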


Shipping to Logz.io

With a few tweaks to your Filebeat configuration file, you can ship the same Puppet logs to your Logz.io account instead.

First, though, download and move an SSL certificate into place to encrypt the data in transit:
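The certificate URL and target path below follow Logz.io's shipping instructions at the time of writing; check the instructions in your account for the current certificate:

```sh
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt
sudo mkdir -p /etc/pki/tls/certs
sudo cp COMODORSADomainValidationSecureServerCA.crt /etc/pki/tls/certs/
```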

The easiest way to make the changes to the Filebeat configuration file is to use the Filebeat wizard, available in the Filebeat section of the Log Shipping page.


The resulting configuration should look something like this (enter your account token in the designated field):
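A sketch, again in Filebeat 5.x syntax; the listener host, port, and Logz.io-specific fields are taken from the wizard's output for your account, and <ACCOUNT-TOKEN> is a placeholder for your own token:

```yaml
filebeat.prospectors:

- input_type: log
  paths:
    - /var/log/puppetlabs/puppetserver/puppetserver.log.json
  json.keys_under_root: true
  fields:
    # logzio_codec tells Logz.io how to parse the shipped lines;
    # replace <ACCOUNT-TOKEN> with the token from your account settings
    logzio_codec: json
    token: <ACCOUNT-TOKEN>
  fields_under_root: true

- input_type: log
  paths:
    - /var/log/puppetlabs/puppetserver/puppetserver-access.log.json
  json.keys_under_root: true
  fields:
    logzio_codec: json
    token: <ACCOUNT-TOKEN>
  fields_under_root: true

output.logstash:
  # Logz.io's TLS listener, verified against the certificate downloaded earlier
  hosts: ["listener.logz.io:5015"]
  ssl:
    certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
```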

After restarting Filebeat, the logs will begin streaming into Logz.io.


We’ve managed to set up a pipeline of Puppet logs into the ELK Stack. What’s next? In part 2 of this series, we will explore how to analyze the data in Kibana, and also how to be more proactive using alerts.