How to Log a Log: Application Logging Best Practices

Ran Ramati

The importance of implementing logging in our application architecture cannot be overstated. If — and this is a big "if" — structured correctly, logs can contain a wealth of information about a specific event. Logs can tell us not only when the event took place but also provide details about its root cause.

But to ensure our application logs are indeed useful, there are some basic requirements. First and foremost, they need to be digestible, meaning they need to be structured in a way that both humans and machines can read. Meeting this requirement takes careful planning and strategizing, but it also ensures that searching and analyzing the logs will be easier.

Easier said than done, right?

Below is a list of some tips and best practices we advise our users to adhere to when setting up logging for their applications.

Define your goal

A good starting point when devising a logging strategy is understanding your goal. What is it that you are seeking to achieve with these logs? Is the purpose development and debugging? Is it perhaps business intelligence?

This might seem like a banal piece of advice, but in reality, failing to accurately define the endgame results in ill-formatted logs and resources wasted on analyzing them.

For one, your goal will affect how you want to format the logs themselves. Field names are one example of this, log levels another. Second, your goal will define what data to log. There is no need to log exceptions if all you want is to monitor metrics.

Which leads us to point number 2.

Deciding what to log

Some recommend logging as much as possible as a best practice. The fact is that in some cases, we've seen this "log as if there is no tomorrow" approach result in: a) log noise comprised of unimportant data, and b) needlessly expensive logging costs for storage and retention.

Once you’ve understood what you’re aiming to achieve with your logging, you can begin devising a strategy that defines what you want to log, and perhaps even more importantly — what you don’t want to log. If you are planning to build alerts on top of the logs, for example, try and log actionable data.
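As an illustrative Python sketch, a decision about what not to log can be encoded as a logging filter that drops known-noisy records before they reach storage. The health-check endpoint and logger name below are made-up examples, not part of the article:

```python
import logging

class DropHealthChecks(logging.Filter):
    """Discard records for endpoints we decided not to log, e.g. /healthz."""
    def filter(self, record):
        # Returning False drops the record before any handler sees it
        return "/healthz" not in record.getMessage()

logger = logging.getLogger("access")
logger.setLevel(logging.INFO)
logger.addFilter(DropHealthChecks())

# Collect emitted messages in a list so we can see what survived the filter
captured = []
handler = logging.Handler()
handler.emit = lambda record: captured.append(record.getMessage())
logger.addHandler(handler)

logger.info("GET /healthz 200")   # dropped: pure noise, never actionable
logger.info("GET /checkout 500")  # kept: an actionable failure
```

The same idea applies at the shipping or storage layer; the point is that the "don't log this" decision lives in one explicit place.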

Selecting a logging framework

Logging frameworks give developers a mechanism to implement logging in a standard and easy-to-configure way. They allow developers to control verbosity, define log levels, configure multiple connections or targets, devise a log rotation policy, and so forth.

You could build your own logging framework, but why do that if there are already tools out there that are easy to use, have community support, and most importantly — do the job perfectly well?

So one of the most important decisions you will make is which logging library, or framework, to use. This task can be complicated at times and pretty time consuming as there are a large number of these tools available, but key considerations here should be ease of use, community, feature-richness, and the impact on your application’s performance.  
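As a minimal sketch of what such a framework gives you, here is Python's standard logging module configured with a verbosity level, a target, and a message layout. The logger name "myapp" is illustrative:

```python
import logging

# Create a named logger for the application (the name "myapp" is illustrative)
logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)  # control verbosity with a single setting

# Send records to stderr with a standard layout: timestamp, level, name, message
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"
))
logger.addHandler(handler)

logger.debug("not emitted - below the configured level")
logger.info("service started")
```

Swapping the handler for a rotating-file or network handler changes the target without touching any call site, which is exactly the kind of configurability a framework buys you.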

Standardize your logs

The more standardized your logs, the easier it is to parse them and subsequently analyze them.  The logging framework you end up using will help you with this, but there is still plenty of work to be done to ensure your log messages are all constructed the same way.

For starters, be sure developers understand when to use each log level. This will help avoid situations in which the same event is logged at different severities, or in which critical events go unnoticed because they were assigned the wrong severity.

Second, create a standard for formatting and naming fields. We've come across the same error logged in totally different ways. Decide, for example, if the field containing the request is "request" or "requestUrl". Decide the format you want to use for the timestamp field. Decide whether you will format your logs in JSON or as key=value pairs.
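One lightweight way to enforce such a standard, sketched here in Python, is to funnel every message through a single helper that owns the field names and timestamp format. The helper itself is hypothetical; the "@timestamp" and "requestUrl" names follow the conventions discussed above:

```python
import json
from datetime import datetime, timezone

def make_log(level, message, **fields):
    """Build a JSON log line with standardized field names and timestamp format."""
    record = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "level": level,
        "message": message,
    }
    record.update(fields)  # extra fields must use the agreed-on names, e.g. requestUrl
    return json.dumps(record)

line = make_log("error", "connection refused", requestUrl="/api/v1/users")
```

Because no developer writes field names by hand, two services can never spell the same field two different ways.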

Which leads us to our next point.

Formatting

Formatting structures your logs. Structuring, in turn, helps both machines and humans read the data more efficiently.

In this context, the most commonly used formatting methods are JSON and KVPs (key=value pairs). Below are examples of the same log message written in both format types.

JSON:

{
  "@timestamp": "2017-07-25 17:02:12",
  "level": "error",
  "message": "connection refused",
  "service": "listener",
  "thread": "125",
  "customerid": "776622",
  "ip": "34.124.233.12",
  "queryid": "45"
}

KVP:

2017-07-25 17:02:12 level=error message="connection refused"
service="listener" thread=125 customerid=776622 ip=34.124.233.12
queryid=45

Both formats achieve the same purpose — making the logs human readable and enabling more efficient parsing and analysis — but which one you choose will depend on the analysis tool you plan to use. If it's the ELK Stack, for example, JSON is the format you will want to use.
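As an illustrative Python sketch, both outputs can be produced from the same record, which makes switching formats cheap. The quoting rule in the KVP helper (quote only values containing spaces) is a simplifying assumption:

```python
import json

record = {
    "level": "error",
    "message": "connection refused",
    "service": "listener",
    "thread": 125,
}

# JSON: a single json.dumps call yields a machine-parseable object
json_line = json.dumps(record)

def to_kvp(rec):
    """Render a record as key=value pairs, quoting values that contain spaces."""
    parts = []
    for key, value in rec.items():
        text = str(value)
        if " " in text:
            text = f'"{text}"'
        parts.append(f"{key}={text}")
    return " ".join(parts)

kvp_line = to_kvp(record)
```

Keeping the record as structured data until the last moment means the format decision stays reversible.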

Provide context

Being concise and logging short messages is, in general, a good rule to abide by. But there is a huge difference between writing concise logs and writing incomprehensible logs.

Consider this log message:

12-19-17 13:40:42:000 login failed.

Not very insightful, right? But how about:

12-19-17 13:40:42:000 userId=23 action=login status=failure

In logging, context is everything. Adding contextual information to your log messages creates a story and allows you, and any other party in your organization, to more easily understand and analyze the data.

Part of the context that can be added to logs are fields containing metadata. Common examples are application name, function name, class name, and so on.    
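As one illustrative way to do this in Python, the standard library's LoggerAdapter can stamp contextual metadata onto every record automatically. The service name, field names, and in-memory handler below are made up for the demonstration:

```python
import logging

class ListHandler(logging.Handler):
    """Collect formatted records in memory (handy for demos and tests)."""
    def __init__(self):
        super().__init__()
        self.lines = []
    def emit(self, record):
        self.lines.append(self.format(record))

base = logging.getLogger("orders")
base.setLevel(logging.INFO)
handler = ListHandler()
# Include the contextual field in the layout alongside the message
handler.setFormatter(logging.Formatter("%(levelname)s app=%(app)s %(message)s"))
base.addHandler(handler)

# The adapter merges this dict into every record it emits
logger = logging.LoggerAdapter(base, {"app": "billing-service"})
logger.warning("userId=23 action=login status=failure")
```

Every message now carries the application name without any call site having to remember to add it.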

Add unique identifiers

When troubleshooting a specific event using logs, one can easily get lost in the data. Without some kind of map to use as a reference, especially in microservice-based architectures, it's virtually impossible to track specific actions across all the different services involved in the transaction.

Adding unique tags or IDs to the logs, when possible, will help you navigate within the data by following specific identifying labels that are passed through the different processing steps. These could be user IDs, transaction IDs, account IDs, and so on.
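One common way to thread such an identifier through a Python service is a contextvars variable that every log call reads. The sketch below is illustrative; in practice, tracing libraries often handle this propagation for you:

```python
import contextvars
import json
import uuid

# Holds the current transaction's ID; set once when a request enters the system
transaction_id = contextvars.ContextVar("transaction_id", default="unknown")

def log(message, **fields):
    """Emit a JSON log line stamped with the current transaction ID."""
    record = {"transactionId": transaction_id.get(), "message": message}
    record.update(fields)
    return json.dumps(record)

def handle_request():
    # Tag everything this request does with one shared identifier
    transaction_id.set(str(uuid.uuid4()))
    first = log("fetching account", accountId=776622)
    second = log("charge authorized", amount=42)
    return first, second

first, second = handle_request()
```

Searching your log store for one transactionId then returns every step of that transaction, across every service that forwarded the ID.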

Summing it up

Structuring logs requires planning and strategizing, a step that many organizations skip, either intentionally or because logging is simply not a top priority.

The result, as we at Logz.io have seen in many cases, is log messages that are simply not cost effective. Creating more noise than anything else, ill-structured logs end up costing an organization resources that could have been saved had some simple best practices been implemented at an early stage.

The list above summarizes some of these basic steps. Additional best practices apply to subsequent logging processes, specifically the transmission of the logs and their management. We plan on covering these in future posts.

Easily Configure and Ship Logs with Logz.io ELK as a Service.

 
