5 Filebeat Pitfalls To Be Aware Of

Wait. What? Filebeat has pitfalls?  

That is probably the first question you as a reader might be asking yourself right now.  And rightly so.  

Filebeat is a solid piece of engineering that has evolved over the past few years into a reliable, go-to log shipper for logging with ELK. We covered the story of how Filebeat evolved from Lumberjack and Logstash-Forwarder in a previous post, and one can safely claim that this rich history contributed to Filebeat's maturity.

Of course, nothing is perfect, and Filebeat is no exception to this rule. Below is a list of some caveats that users need to be aware of when using this shipper to ensure the integrity of their logging pipelines.  

1. YAML, YAML, YAML 

Granted, this is not a Filebeat-specific pitfall and applies to any YAML-based configuration file. Cutting to the chase, YAML syntax is a pain. It is extremely sensitive to indentation (DO NOT USE TABS!) and structure, and a single formatting mistake can invalidate the entire configuration and break the pipeline that depends on it.

In this Musings in YAML piece, I detailed some ways to avoid the most common mistakes when creating your Filebeat configuration file: using a YAML validator and making use of the example configurations, to name just a few.

To quote that article, “It ain’t rocket science, but a small and simple mistake can make all the difference between a bad day and an even worse day.” 
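To make the indentation rules concrete, here is a minimal sketch of a valid filebeat.yml (the paths and hosts are placeholders, not taken from any specific setup). Note the consistent two-space indentation, spaces only:

```yaml
# Minimal filebeat.yml sketch; paths and hosts are illustrative only.
filebeat.inputs:
  - type: log          # each input starts a new list item with "-"
    paths:
      - /var/log/syslog

output.elasticsearch:
  hosts: ["localhost:9200"]
```

Replacing any of those leading spaces with a tab, or misaligning a nested key, is enough to make Filebeat reject the file.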

2. The Filebeat Registry File  

Filebeat is designed to remember the previous reading for each log file being harvested by saving its state. This helps Filebeat ensure that logs are not lost if, for example, Elasticsearch or Logstash suddenly go offline (that never happens, right?).  

This position is saved to your local disk in a dedicated registry file, and under certain circumstances, when a large number of new log files are created, for example, this registry file can grow quite large and begin to consume too much memory.

It's important to note that there are some good options for making sure you don't run into this issue. You can use the clean_removed option, for example, to tell Filebeat to clean entries for files that no longer exist on disk out of the registry file. The clean_inactive option similarly removes the state of files that have not been active for a defined period.
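As a rough sketch, assuming a hypothetical application logging to /var/log/myapp, the relevant input options might look like this (note that clean_inactive requires ignore_older to be set and must be greater than ignore_older plus scan_frequency):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # hypothetical path
    clean_removed: true        # drop registry entries for deleted files
    ignore_older: 48h          # stop harvesting files older than 48h
    clean_inactive: 72h        # drop registry state for files inactive for 72h
```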

3. Removed or Renamed Log Files 

Another issue that can exhaust disk space is open file handlers for removed or renamed log files. As long as a harvester is open, its file handler is kept open as well, meaning that if a file is removed or renamed, Filebeat continues to read it and the handler keeps consuming resources, preventing the operating system from freeing the disk space. If you have multiple harvesters working, this comes at a cost.

Again, there are workarounds for this. You can use the close_inactive configuration setting to tell Filebeat to close a file handler after it identifies inactivity for a defined duration, and the close_removed setting can be enabled to tell Filebeat to shut down a harvester when a file is removed (as soon as the harvester is shut down, the file handler is closed and this resource consumption ends).
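Sketched out, again with a hypothetical path, those two settings sit alongside the other input options:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # hypothetical path
    close_inactive: 5m         # close the handler after 5 minutes with no new lines
    close_removed: true        # close the handler as soon as the file is removed
```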

There are other options for closing file handlers, and I recommend reading the documentation before using them. 

4. Configuring Multiple Pipelines 

Filebeat is quite simple to configure. In the Inputs section (remember, “Inputs” were formerly called “Prospectors”), you will be required to specify the path to the log file you wish to “harvest” and subsequently export into ELK.  

But what happens when you want to track multiple log files?  

While Filebeat allows you to define multiple file paths in one input, one thing to remember (and this is not obvious to all users) is that in most cases you will want to apply some specific settings to each log file. The simplest example is adding a log type field to each file so you can easily distinguish between the log messages.

This requires configuring an input for each log type, and while this is not a pitfall in itself, it does add additional points of failure when configuring Filebeat.  
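A sketch of such a configuration, using two hypothetical log files, might look like this:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache2/access.log   # hypothetical path
    fields:
      log_type: apache-access         # distinguishes these messages downstream
  - type: log
    paths:
      - /var/log/mysql/error.log      # hypothetical path
    fields:
      log_type: mysql-error

output.logstash:
  hosts: ["localhost:5044"]           # placeholder Logstash endpoint
```

Each additional input is another block to indent correctly and another place for a typo to hide.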

Please note that there can also be errors loading the config file if it is owned by an unauthorized user or if the wrong permissions are in place, since Filebeat checks the file's ownership and permissions at startup.

5. CPU Usage 

Filebeat is an extremely lightweight shipper with a small footprint, and while it is extremely rare to find complaints about Filebeat, there are some cases where you might run into high CPU usage.  

One factor that affects the amount of computation power used is the scanning frequency, that is, the frequency at which Filebeat is configured to scan for files. This frequency can be defined for each input using the scan_frequency setting in your Filebeat configuration file, so if you have a large number of inputs running with a tight scan frequency, this may result in excessive CPU usage.

Try loosening the scan frequency in filebeat.yml. Keeping scan_frequency at one second or above is recommended; if you are seeing high CPU usage, try increasing the interval.
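For illustration, with the same hypothetical path as above, loosening the default 10-second scan interval to 30 seconds would look like this:

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log   # hypothetical path
    scan_frequency: 30s        # default is 10s; never go below 1s
```

The trade-off is latency: new files are picked up less quickly, in exchange for a lighter CPU load.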

Summing It Up 

Filebeat is as reliable as a log shipper gets and should be the backbone of any ELK-based logging architecture.  

The list of gotchas above should not worry you as a user. In most cases, Filebeat's default settings will suffice. However, if your setup consists of complex pipelines, with multiple inputs and an elaborate log rotation policy, you will want to verify whether there are any potential soft spots you need to take care of.

Configuring Filebeat is straightforward enough but as mentioned above, can get a bit complicated when multiple pipelines are involved. For this reason, Logz.io developed a Filebeat wizard to help users avoid the common YAML pitfalls — you can read more about this wizard here.
