Securing Elasticsearch Clusters Following the Recent Ransom Attacks


If you’re an ELK user, there is little chance that you missed the news about the recent ransom attacks on Elasticsearch clusters. Following similar attacks on MongoDB, it seems that it is open season on open source datastores. This most recent attack on Elasticsearch clusters has left hundreds, if not thousands, of indices empty, with a demand for payment in bitcoin to recover the data.

This is what victims of the attack have been seeing in their Elasticsearch clusters:

Note: “SEND 0.2 BTC TO THIS WALLET: 1DAsGY4Kt1a4LCTPMH5vm5PqX32eZmot4r IF YOU WANT RECOVER YOUR DATABASE! SEND TO THIS EMAIL YOUR SERVER IP AFTER SENDING THE BITCOINS p1l4t0s@sigaint.org”

Yep, that’s what kidnapping for ransom looks like in 2017.

Logz.io provides ELK as a service, so we put a huge focus on security. In fact, that is why we received SOC-2 and ISO-27001 compliance certifications. Our entire architecture was designed and continues to be developed with security in mind. For some of the measures we put in place, take a look here.

A number of articles have been written over the past few days documenting the various methods of securing Elasticsearch, the most notable of which is this piece by Itamar Syn-Hershko. For all of our readers who use Elasticsearch, especially those running it in production, and who are not necessarily aware of the various pitfalls that need to be taken into consideration, we’ve summed up some of the methods that we recommend employing.

The good news is that with these tweaks, you can make your data safer and more immune to attacks:

  1. Do not expose Elasticsearch to the Internet. There is a reason why the default Elasticsearch configuration binds nodes to localhost. Within the elasticsearch.yml configuration file, there is a setting called network.host that you can use to bind the nodes in your cluster to private IPs or secured public IPs.
  2. Add authentication to Elasticsearch. Use a proxy server such as NGINX as a security buffer between Elasticsearch and any client that accesses your data. This will enable you to add user access control to Kibana or authorization to the REST API. Of course, there are also paid options, such as hosted ELK solutions or Elastic’s Shield plugin.
  3. Use the latest Elasticsearch version. This is more of a general best practice, since older versions contained specific vulnerabilities that were taken care of in the 5.x releases. If you are still using 1.x or 2.x, be sure to disable dynamic scripting. Elasticsearch allows the use of scripts to evaluate custom expressions, but as documented by Elastic, using non-sandboxed scripting languages can be an issue.
  4. Back up your data! If your cluster does get compromised, make sure you have a failsafe mechanism in place so that you can easily restore your data. Again, there are paid solutions for this, but you can just as easily use the snapshot API to back up your data to an AWS S3 bucket or a shared filesystem, for example.
  5. Don’t expose Elasticsearch to the Internet! Yes, this is a repeat of the first tip — and I’m repeating it to emphasize the point. Even in development and testing, there is no reason to have your clusters exposed to public IPs. Just in case you’re not fully convinced, check out this site that lists all the open Elasticsearch clusters across the globe.
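To make the first (and fifth) tip concrete, here is a minimal sketch of what binding a node to non-public addresses might look like in elasticsearch.yml. The IP addresses below are illustrative placeholders, not recommendations, and assume a node that should only be reachable over loopback and one private interface:

```yaml
# elasticsearch.yml -- bind the node to loopback plus a private interface only.
# The addresses below are placeholders; substitute your own private IPs.
network.host: ["127.0.0.1", "10.0.0.5"]
http.port: 9200

# In 5.x you can also split the bind and publish addresses if needed:
# network.bind_host: "10.0.0.5"
# network.publish_host: "10.0.0.5"
```

With this in place, the REST API is not reachable from public IPs at all, which is exactly what keeps a cluster off the lists of open Elasticsearch instances mentioned above.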
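For the authentication tip, a hedged sketch of an NGINX reverse proxy with HTTP Basic authentication in front of Elasticsearch might look like the following. The hostname, certificate paths, and password file location are assumptions for illustration; Elasticsearch itself is assumed to be bound to localhost as recommended above:

```nginx
# /etc/nginx/conf.d/elasticsearch.conf -- a minimal reverse proxy with
# Basic auth in front of Elasticsearch. Paths and hostname are placeholders.
server {
    listen 443 ssl;
    server_name es.example.com;                   # hypothetical hostname

    ssl_certificate     /etc/nginx/ssl/es.crt;    # your certificate
    ssl_certificate_key /etc/nginx/ssl/es.key;    # your private key

    auth_basic           "Elasticsearch";
    auth_basic_user_file /etc/nginx/.htpasswd;    # created with htpasswd

    location / {
        proxy_pass http://127.0.0.1:9200;         # ES bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The password file can be created with, for example, `htpasswd -c /etc/nginx/.htpasswd someuser`. Clients then reach the cluster only through the proxy, never directly on port 9200.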
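And for the backup tip, here is a rough sketch of using the snapshot API with a shared-filesystem repository. The repository name, path, and snapshot name are placeholders; the path must also be whitelisted under `path.repo` in elasticsearch.yml, and backing up to S3 instead requires installing the S3 repository plugin:

```shell
# Register a shared-filesystem snapshot repository.
# The location must be listed under path.repo in elasticsearch.yml.
curl -XPUT 'localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Take a snapshot of all indices and wait for it to complete.
curl -XPUT 'localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# Restore from that snapshot if the cluster is ever compromised.
curl -XPOST 'localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
```

Running snapshots on a schedule (via cron, for instance) turns this into the failsafe mechanism described above: even if an attacker wipes your indices, the data can be restored in minutes.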

As I wrote earlier, security is one of the main challenges that people running ELK on their own need to come to terms with. Understanding the various vulnerabilities is the first step to resolution, and the next is to decide what your priorities are. Using a hosted ELK solution or paying for a plugin that adds security to your stack should not be ruled out as an option.
