The Filebeat Wizard

As a company offering the ELK Stack as an end-to-end service on the cloud, we do everything in our power to make life easier for our users. Sometimes this means adding a brand-new artificial intelligence layer on top of the stack to identify critical log messages easily, and sometimes it’s a simple wizard for writing a Filebeat configuration file.

If you’ve used Filebeat before, the following might sound familiar.

After installing Filebeat on your server, you open the /etc/filebeat/filebeat.yml file. You then proceed to define the prospectors and output destinations. You might even add some encryption to the configuration. You then save your configuration and start Filebeat.
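To make the manual workflow concrete, a hand-written configuration of this kind might look roughly like the following. This is only a sketch: the paths, hostname, and certificate location are placeholders, and the `prospectors` syntax shown here matches the older Filebeat versions this post refers to.

```yaml
filebeat:
  # Prospectors define which log files Filebeat should tail
  prospectors:
    -
      paths:
        - /var/log/app/*.log
      document_type: app_logs

# The output section defines where the log lines are shipped
output:
  logstash:
    hosts: ["logstash.example.com:5044"]
    # Optional TLS encryption for the connection
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
```

A single misplaced indent anywhere in this file is enough to keep Filebeat from starting, which is exactly the pain point described below.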

One of three scenarios will then occur.

You might be a YAML and Filebeat wizard, in which case Filebeat will start without a hiccup and data will begin flowing into the destination you defined. Or, as happens more often than not, Filebeat might warn you of a syntax error and point you to the line in the file that is causing the issue. The problem is that these messages are somewhat vague and difficult to decipher. On some occasions, Filebeat will not even warn you that there is a problem, so good luck with that!

This is where the Filebeat wizard comes into the picture.

The wizard can be opened from the Filebeat section under the Log Shipping tab in the UI:

(Screenshot: the Filebeat wizard in the Logz.io UI)

The wizard is a foolproof way to configure shipping to ELK with Filebeat — you enter the path for the log file you want to trace, the log type, and any other custom field you would like to add to the logs (e.g., env = dev). You can add as many log types as you want.

Here’s an example of the wizard setup for tracking NGINX logs and the configuration file that it generates:

(Screenshot: the wizard setup for NGINX logs and the generated configuration file)

As you can see, the end result of this wizard is a ready-made Filebeat configuration file that can be used out of the box. The only remaining steps are to download a certificate and start Filebeat!
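For illustration only, a generated configuration for NGINX access logs with a custom field (env = dev) might look something like the sketch below. The exact file depends on what you enter in the wizard, and the token, listener endpoint, certificate path, and field names here are placeholders rather than the literal output.

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/nginx/access.log
      fields:
        # Account token and codec fields added automatically by the wizard
        token: <ACCOUNT-TOKEN>
        logzio_codec: plain
        # Custom field entered in the wizard
        env: dev
      fields_under_root: true
      document_type: nginx_access

output:
  logstash:
    hosts: ["listener.logz.io:5015"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logzio.crt"]
```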

There’s beauty in simplicity, isn’t there?

Note: If you’re using your own ELK deployment, you’ll want to remove the fields that are automatically added to the configuration file (the token and codec fields) and define your own Logstash or Elasticsearch destination in the output section.
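In that case, the output section would simply point at your own deployment instead. A minimal sketch, with the host and port as placeholders:

```yaml
# Ship directly to your own Logstash instance
output:
  logstash:
    hosts: ["your-logstash-host:5044"]
```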
