Logging Golang Apps with ELK and Logz.io
The abundance of programming languages available today gives programmers plenty of tools with which to build applications. Whether written in long-established giants like Java or newcomers like Go, applications need monitoring after deployment. In this article, you will learn how to log from Golang applications and ship those logs to the ELK Stack and Logz.io.
It’s usually possible to get an idea of what an application is doing by looking at its logs. However, log data has a tendency to grow exponentially over time, especially as more applications are deployed and spread across multiple servers. The ELK Stack, with its ability to store enormous amounts of data and search through it quickly and easily, comes in handy here.
In this article, you will learn how to ship logs written by Go applications. The Go programming language (also known as Golang or GoLang) is a relatively new, yet mature, general purpose programming language enjoying widespread adoption both within programming communities and by major cloud providers.
Overview of GoLang Logging
There are several different options you can use to log to a file from a Go program. One of these is the Logrus library, which is very easy to use and has all of the features necessary to write information-rich logs that can easily be shipped to Elasticsearch.
First, obtain the logrus package by running the following command in a terminal:
go get github.com/sirupsen/logrus
Then, use code such as the following to write logs to a file in JSON format:
package main

import (
	"os"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Emit JSON and rename the default fields to the names Elasticsearch expects.
	log.SetFormatter(&log.JSONFormatter{
		FieldMap: log.FieldMap{
			log.FieldKeyTime: "@timestamp",
			log.FieldKeyMsg:  "message",
		},
	})
	log.SetLevel(log.TraceLevel)

	// Append to out.log, creating the file if it does not exist.
	file, err := os.OpenFile("out.log", os.O_RDWR|os.O_CREATE|os.O_APPEND, 0666)
	if err == nil {
		// Write logs to the file; otherwise logrus keeps its default output (stderr).
		log.SetOutput(file)
		defer file.Close()
	}

	// Enrich the entry with structured context before writing it.
	fields := log.Fields{"userId": 12}
	log.WithFields(fields).Info("User logged in!")
}
This snippet opens a file for writing and sets it as the destination for the logrus logger. Now, when you call log.Info(...), the information you log is written to that file. You can also optionally enrich the log data with other relevant information (such as a user identifier) that could assist in troubleshooting a problem by providing additional context.
The output of the above program looks like this:
{"@timestamp":"2020-08-22T17:10:46+02:00","level":"info","message":"User logged in!","userId":12}
It has a JSON structure because a JSON formatter was set up at the beginning of the program. When the option is available, formatting the logs in JSON makes it much easier to ship them to Elasticsearch without additional configuration, since the JSON properties and values map directly to fields in Elasticsearch. In contrast, you have to tell Elasticsearch how to parse data from text logs that do not have any obvious structure.
Being able to so easily write logs to a file in JSON format—with the possibility of including additional fields as needed—puts you in a good position to ship logs to Elasticsearch.
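For instance, here is a minimal sketch of enriching a single entry with several fields and an error. The field names and the error are hypothetical, and for brevity the snippet relies on logrus defaults (text output to stderr); with the JSON setup shown above, the extra fields simply appear as additional JSON properties.

package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Hypothetical context fields; any values relevant to troubleshooting work here.
	entry := log.WithFields(log.Fields{
		"userId":    12,
		"requestId": "req-42",
	})

	// WithError attaches the error message under an "error" field.
	entry.WithError(errors.New("session expired")).Warn("User forced to log in again")
}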
Logrus
Logrus, like virtually every other logging library, allows you to write logs using a number of different severity levels, including Info, Warning, Error, and others. This is done by calling the appropriate function (e.g., Info()). It’s also possible to configure the minimum level.
For instance, when you call log.SetLevel(log.TraceLevel), only logs with a level of Trace or above will be written. Since Trace is the lowest level, this call indicates that you want to write all logs, regardless of their levels. You could, for instance, change this to log.InfoLevel to ignore logs with the Trace or Debug level.
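As a quick, hypothetical illustration of how the level filter behaves:

package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// With the minimum level set to Info, Trace and Debug entries are discarded.
	log.SetLevel(log.InfoLevel)

	log.Trace("not written")
	log.Debug("not written either")
	log.Info("written")
	log.Warn("written")
	log.Error("written")
}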
GoLang Logging and Shipping to ELK
Writing logs to files has various benefits. The process is fast and robust, and the application doesn’t need to know anything about the type of storage in which the logs will ultimately end up. Elasticsearch provides Beats, which help collect data from various sources (including files) and ship them reliably and efficiently to Elasticsearch. Once log data is in Elasticsearch, you can use Kibana to analyze it.
The log data sent to Elasticsearch needs parsing so that Elasticsearch can structure it correctly. Elasticsearch is able to process JSON data with ease. You can set up more complex parsing for other formats.
Setting Up the ELK Stack
If you’d like to run your own ELK stack and have the resources to maintain it, then follow the official documentation to set up:
- Elasticsearch,
- Kibana, and
- Filebeat.
Once these are installed, you can go ahead and run Elasticsearch and Kibana. Before starting Filebeat, you’ll need to set it up. This process is described in the next section.
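For example, if you installed Elasticsearch and Kibana from the official packages on a systemd-based distribution, starting them looks roughly like this (adjust to your installation method):

sudo systemctl start elasticsearch
sudo systemctl start kibana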
Shipping JSON Logs with Filebeat
Because Filebeat has a JSON processor that can ship JSON logs directly to Elasticsearch without any intermediate steps, Filebeat configuration is quite easy. Simply set up your /etc/filebeat/filebeat.yml file as follows:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path_to_logs/*.log
  json:
    keys_under_root: true
    overwrite_keys: true
    message_key: 'message'

output.elasticsearch:
  hosts: ["localhost:9200"]

processors:
- decode_json_fields:
    fields: ['message']
    target: json
If you’re not testing this configuration on your local machine, you can replace localhost:9200 with a different Elasticsearch endpoint. Be sure to also replace path_to_logs with the actual path to where your log files are stored. The same Filebeat instance can read from several different folders.
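For example, the paths list can hold several globs at once (the directories below are purely hypothetical):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log       # hypothetical application logs
    - /var/log/another-app/*.log # a second hypothetical source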
With this in place, start Filebeat by running the following command:
sudo service filebeat start
If you set up everything correctly, your logs should be shipped to Elasticsearch within seconds. You can see these in Kibana.
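If you want to double-check the documents outside of Kibana, you can also query Elasticsearch directly. Assuming the default Filebeat index pattern, something like the following should return the entry logged earlier:

curl "localhost:9200/filebeat-*/_search?pretty&q=userId:12"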
Shipping Raw Text Logs with Filebeat
As you can see, it’s easy to set up log shipping when logs are structured in a JSON format and have fields expected by Elasticsearch, such as @timestamp or message. However, there are situations where it might not be feasible to change existing software to conform to this structure.
It is still possible to ship logs that have a different structure, but the fields need to be identified by a grok expression, and either Logstash or an Elasticsearch ingest pipeline has to parse them. Writing and testing grok expressions, plus setting up additional infrastructure, requires significant effort. You should weigh this against the effort of changing the log structure to JSON before deciding to ship anything non-JSON.
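To give an idea of the work involved, here is a rough sketch of an Elasticsearch ingest pipeline with a grok processor for a hypothetical plain-text line such as 2020-08-22T17:10:46+02:00 INFO User logged in! (the pipeline name and pattern are illustrative only, not part of the setup described above):

PUT _ingest/pipeline/golang-plaintext
{
  "description": "Hypothetical pipeline that parses plain-text Go logs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}"]
      }
    }
  ]
}

Every variation in the log format needs its own pattern work and testing, which is exactly the overhead the JSON approach avoids.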
Shipping GoLang Logs to Logz.io
If you prefer to invest time and resources in developing your products rather than maintaining infrastructure, you might prefer a managed ELK stack, such as Logz.io. We take care of the underlying hardware and software so that you can focus on getting the insights you need from the ELK stack.
Shipping JSON Logs to Logz.io with Filebeat
You can use Filebeat to ship logs to Logz.io just like you can to any other ELK stack, using the same method we outlined earlier with a slightly different configuration.
First, follow the instructions in our previous article, Shipping Logs to Logz.io with Filebeat. After that, open /etc/filebeat/filebeat.yml and set up the inputs like so:
filebeat.inputs:
- type: log
  paths:
    - /path_to_logs/*.log
  fields:
    logzio_codec: json
    token: your_logzio_token
    type: golang
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
In addition to adding the path to the logs, you’ll also need to add your Logz.io token to the configuration. You can find this by clicking on the cogwheel icon in the top right area of the Logz.io interface, then going to Settings, and then to General.
You should have an output section (from the setup instructions) that looks something like this:
output:
  logstash:
    hosts: ["listener.logz.io:5015"]
    ssl:
      certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
If you’re set up in a different region, you’ll also need to change the listener URL accordingly.
Start Filebeat. Your logs should begin appearing in Logz.io after a few seconds.
Analyzing the Data
If your logs are shipping into Elasticsearch, then the hard part is over. You can now focus on using the log data to find the information you need. Kibana is extremely useful for this, as it helps you analyze your data in different ways, ranging from simple searches written on demand to interactive dashboards for regular monitoring.
The Discover section of the Kibana interface is where you’ll probably spend most of your time sifting through data. You can narrow down your search based on any field, including time periods, sets of log levels (e.g., warnings and errors), specific metadata (e.g., a particular user ID), or global correlation IDs, which allow you to trace requests across different services.
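For example, with the JSON fields shown earlier in this article, a query such as the following in the Discover search bar would narrow the results down to error-level entries for one user (the field names assume the Logrus setup from this article):

level:error AND userId:12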
If you find yourself performing the same queries time and time again, you may want to save your searches or, better yet, build visualizations and dashboards out of them. These allow you to get an idea of the overall status of your system at a glance. Kibana has a lot more to offer, and it’s worth taking the time to explore its wealth of features.
Conclusion
Although Go is a young programming language, it’s a solid one with a great community. Logging is very easy to set up using either Logrus, as we did in this article, or another available option.
However, writing logs is only the beginning of a journey. Those same logs need to be monitored regularly, and, occasionally, you will find yourself needing to dig deep into them to investigate a particular problem. When this happens, you will thank yourself for having an ELK stack at your disposal.
By shipping logs to an ELK stack (either self-hosted or managed, like Logz.io), you can leverage Kibana to search across large quantities of data, narrow down the information you need, and minimize the time it takes to track down and solve a problem.