Heka and Logstash are two data processing tools that are similar but still have important differences.
Heka, developed by Mozilla and written in Go, is an all-in-one, open source tool for data processing with built-in plugins to input, decode, filter, encode, and output data. Input plugins are the entry point for data, and decoder plugins transform incoming data into Heka's internal message format so that the rest of the pipeline can work with it. Before data moves on, filters perform the core processing such as analytics and aggregation, encoders format the data into an appropriate structure, and outputs deliver it to its destination.
Logstash is also a data processing tool, and its architecture is one of the simplest out there. The pipeline consists of three kinds of Logstash plugins: inputs, which consume data from a source; filters, which transform data according to your specifications; and outputs, which write data to an endpoint.
To highlight more of the differences between Heka and Logstash, we at Logz.io have prepared the following summary. If there are other significant differences that you think we should add, please let us know in the comments!
Any work with any data processing tool begins with a configuration file. Heka uses TOML configuration files that consist of one or more sections, each associated with one of the Heka plugin types mentioned at the beginning. A simple Heka configuration file that reads files and dumps their content to the standard output looks like this:
[LogstreamerInput]
log_directory = "/var/log/nginx"
file_match = 'access\.log'

[PayloadEncoder]
append_newlines = false

[LogOutput]
message_matcher = "TRUE"
encoder = "PayloadEncoder"
Logstash, on the other hand, uses its own Ruby-like configuration syntax (Logstash itself is written in Ruby) to accomplish the same task of reading a file and dumping its content to the standard output:
input {
  file {
    path => "/var/log/nginx/access.log"
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
Logstash’s configuration is actually easier to work with because it is a JSON-like structure with a clear separation between internal objects.
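To illustrate that separation, a filter section slots in between the input and output sections as its own block. The sketch below assumes the standard grok filter plugin and its built-in %{COMBINEDAPACHELOG} pattern:

```
filter {
  grok {
    # Parse a combined-format access log line into named fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```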
As mentioned above, Heka is written in Go, which generally means better performance than Logstash, which is written in Ruby. However, writing plugins for Heka can be more difficult than writing them for Logstash because learning the less popular Go (or Lua, the other language you can use to write Heka plugins) can be challenging depending on your background and level of experience. Nevertheless, both tools offer a wide range of plugins that will satisfy nearly any requirement.
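To give a feel for what a Lua plugin involves, here is a minimal sketch of a Heka sandbox filter that counts incoming messages. It assumes Heka's Lua sandbox API (process_message, timer_event, and inject_payload); the payload name is just an illustration:

```
-- Sketch of a Heka Lua sandbox filter: counts matched messages
-- and publishes the running total on every ticker interval.
local count = 0

function process_message()
    count = count + 1
    return 0  -- 0 tells Heka the message was processed successfully
end

function timer_event(ns)
    -- Emit the count as a text payload named "message_count"
    inject_payload("txt", "message_count", count)
end
```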
The Plugin System
Heka’s plugin system may seem harder to understand than Logstash’s, especially since Logstash’s plugins are quite easy for beginners to configure. However, Heka’s system allows you to do a lot before data moves along to output. For example, Heka sandboxes provide isolated environments for modules such as anomaly detection, which can be configured inside filters.
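Wiring such a sandbox module into the pipeline is done with a SandboxFilter section in the TOML configuration. The sketch below assumes a Lua script saved at lua_filters/anomaly.lua and a hypothetical message matcher:

```
[AnomalyFilter]
type = "SandboxFilter"
filename = "lua_filters/anomaly.lua"
message_matcher = "Type == 'nginx.access'"
ticker_interval = 60
preserve_data = true
```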
At this point, Logstash still has a broader selection of plugins compared to Heka, and there is virtually no need to write custom plugins in Logstash.
ELK vs. EHK
One of the most popular stacks today is ELK (Elasticsearch, Logstash, and Kibana). But you may want to think twice if you are considering replacing Logstash with Heka. Adjusting and converting simple configurations suddenly becomes difficult when you try to translate them into Go or Lua for Heka. It is especially hard to translate Logstash's filter plugins into Heka's decoder and encoder plugins. While the architectures of the two tools are different, Heka's solid documentation provides a stable platform for fast learning.
In my own research, I’ve compared Heka to Logstash in a few simple use cases and found that Logstash is a fairly easy tool to understand and configure — even for large amounts of data, logs, or whatever you want to process.
The Logstash community is significantly stronger than that of Heka, meaning that more resources are available for quicker problem-solving. Nevertheless, it would still be worth your while to keep an eye on Heka, especially due to its great performance and features.