Transitioning from the ELK Stack to Logz.io in 5 Quick Steps


At Logz.io, we’ve built our Log Management solution on the ELK Stack because we know it’s what modern engineering teams prefer. It’s familiar, powerful, and integrates easily with other DevOps and cloud technologies. That’s what makes migrating from ELK to Logz.io a seamless process.

This means current ELK users can transition easily: you can ship the same data to Logz.io using exactly the same shipping mechanisms, and you can monitor that data on the same Kibana objects. The difference is that you’ll no longer need to maintain your log data pipeline, and you can leverage the features we’ve built on top of Kibana.

Many of our customers are former ELK users, so we’re quite familiar with this transition process. Based on this collective experience, below is a step-by-step guide for migrating from your ELK Stack to ours.

Step 1: Redirect your Log Shipping to Logz.io

Wondering if you can ship your current logs to Logz.io? A good rule of thumb is: if you can ship it to the ELK Stack, you can ship it to Logz.io.

One of the benefits of being based on open source is that you can leverage the options provided by the community to ship your data, rather than being locked in with a single proprietary agent. Below are some popular log shipping methods, followed by instructions on how to fork your log shipping to Logz.io.

There are many other ways to ship log data that aren’t covered here. Feel free to check out additional shipping methods here or ask your Logz.io contact for additional methods.


Filebeat

Most of our customers use Filebeat to ship their logs. Simply edit your Filebeat config file to add your Logz.io token for every input (data source) your Filebeat uses to ship data. Find the code here. This will immediately begin shipping your logs to Logz.io.
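For reference, a redirected Filebeat config might look like the sketch below. The token placeholder, listener host, and certificate path are assumptions based on our shipping docs at the time of writing — confirm them against the instructions linked above (the listener address is region-specific).

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log      # hypothetical app log path
    fields:
      logzio_codec: plain
      token: <YOUR-LOGZIO-TOKEN>   # account token from your Logz.io settings
    fields_under_root: true

output.logstash:
  hosts: ["listener.logz.io:5015"]   # region-specific listener; check your account
  ssl:
    certificate_authorities: ["/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt"]
```

The only change from a typical self-hosted ELK setup is the token field per input and pointing the output at our listener instead of your own Logstash.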

Ship from the Cloud

AWS: Our AWS CloudWatch integration uses a Lambda shipper to automatically forward logs from CloudWatch to Logz.io. Or, you can just pull directly from S3 by defining your S3 bucket and IAM policy from within Logz.io.

Azure: Our Azure Deployment template automatically deploys a namespace and an Event Hub to collect log data from an Azure region, and uses a Function to forward that data to Logz.io. Learn about the details for deploying the template here.

GCP: You can use Google Cloud Pub/Sub to forward your logs from Stackdriver to Logz.io. Learn how to configure your Pub/Sub forwarder here.

Ship Straight from your Code:

Java: The Logz.io Log4j 2 appender sends logs in bulks over HTTPS (port 8071) using non-blocking threads. Find the code for Log4j 2 here.

Node.js: logzio-nodejs collects log messages in an array, which is sent asynchronously when it reaches its size limit or time limit (100 messages or 10 seconds), whichever comes first. Find the code for the logger here.
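To make that flush policy concrete, here’s a generic sketch of a buffer that ships when either the message-count or the time limit is reached. This is illustrative Python, not the library’s actual JavaScript code, and the limits are parameters you’d tune:

```python
import time

class BatchBuffer:
    """Sketch of a size-or-time flush policy (e.g. 100 messages or 10 seconds,
    whichever comes first), as described for logzio-nodejs above."""

    def __init__(self, send, max_messages=100, max_seconds=10.0):
        self.send = send                  # callable that ships a list of messages
        self.max_messages = max_messages
        self.max_seconds = max_seconds
        self.buffer = []
        self.started = None               # timestamp of the batch's first message

    def add(self, message, now=None):
        now = time.monotonic() if now is None else now
        if not self.buffer:
            self.started = now            # a new batch starts the clock
        self.buffer.append(message)
        if (len(self.buffer) >= self.max_messages
                or now - self.started >= self.max_seconds):
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(self.buffer)
            self.buffer = []
```

The real logger does this asynchronously; the point is simply that a batch leaves the buffer as soon as either limit is hit.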

Python: The Logz.io Python Handler sends logs in bulk over HTTPS to Logz.io. Logs are grouped into bulks based on their size. Find the code for the Handler here.
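The handler is typically wired up through Python’s standard logging configuration. The sketch below follows the handler’s documented dictConfig shape; the class path, option names, and listener URL are taken from its README at the time of writing, so verify them against the linked docs for your version:

```python
# Sketch of a logging dictConfig for the Logz.io Python handler.
# Class path and option names follow the handler's docs -- verify
# against your installed version before use.
LOGGING = {
    "version": 1,
    "formatters": {
        "logzioFormat": {"format": '{"source": "my-app"}'},  # hypothetical source tag
    },
    "handlers": {
        "logzio": {
            "class": "logzio.handler.LogzioHandler",
            "level": "INFO",
            "formatter": "logzioFormat",
            "token": "<YOUR-LOGZIO-TOKEN>",
            "logs_drain_timeout": 5,
            "url": "https://listener.logz.io:8071",  # region-specific listener
        },
    },
    "loggers": {
        "": {"level": "INFO", "handlers": ["logzio"]},
    },
}

# In your app: logging.config.dictConfig(LOGGING), then log as usual
# with logging.getLogger(__name__).
```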


Kubernetes: Many of our customers on Kubernetes use Fluentd to ship their logs to us. If you’re currently using Fluentd, simply add your Logz.io token to the parameters, redeploy the daemonset, and watch your Kubernetes logs stream to Logz.io. Find the parameters here.
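The relevant Fluentd change is a match block pointing at our listener. The plugin type and parameter names below follow the fluent-plugin-logzio docs at the time of writing — treat this as a sketch and confirm against the parameters linked above:

```
<match **>
  @type logzio_buffered
  endpoint_url https://listener.logz.io:8071?token=<YOUR-LOGZIO-TOKEN>&type=kubernetes
  output_include_time true
  output_include_tags true
</match>
```

Everything else in your daemonset — the tailing of container logs, the Kubernetes metadata filter — stays as it is.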

Docker: docker-collector-logs is a Docker container that uses Filebeat to collect logs from other Docker containers and forward those logs to your Logz.io account. To use this container, simply set environment variables in your docker run command and run the container.
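A typical run command looks like the sketch below. The environment variable names and mounts follow the image’s README at the time of writing; check the current docs before copying:

```
docker run --name docker-collector-logs \
  --env LOGZIO_TOKEN="<YOUR-LOGZIO-TOKEN>" \
  --env LOGZIO_URL="https://listener.logz.io:8071" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers \
  logzio/docker-collector-logs
```

The Docker socket mount is what lets the collector discover your other containers and tail their logs.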

Step 2: Parsing Logs

In many cases, this step can be skipped – Logz.io provides out-of-the-box parsing for many popular log sources, such as Apache, Kafka, SQL, and cloud services like AWS CloudTrail, Fargate, ELB, and S3. Find the full list of OOB parsing types here.

If you’re using Logstash, just send us the Logstash config file and we’ll apply the same data transformation you were using.
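To show the kind of transformation a Logstash config encodes, here’s an illustrative grok filter for a hypothetical app log (a timestamp, a level, and a message) — not one of our out-of-the-box parsers, just an example of what we’d replicate from your config:

```
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
}
```

Whatever filters your pipeline applies today — grok, mutate, geoip, and so on — we apply the equivalent parsing on our side.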

In some cases, you’ll need to parse log data that isn’t supported by our automatic parsers, or you’ll want to add custom parsing – no problem. Either parse the logs yourself in our parsing wizard, or spend 20 minutes with our 24/7 Customer Support team and they’ll add the parsing for you. They’ve worked with hundreds of customers to make sure their logs are parsed in a way that maximizes their value. We call it “Parsing-as-a-Service.”

Step 3: Migrate your Kibana Objects

Don’t let your previous work in Kibana go to waste. Even though you’re moving to Logz.io, you can still migrate all your Kibana objects — visualizations, dashboards, and saved searches — from your Kibana to ours. You can export your current Kibana objects as JSON files and bring them over with the Logz.io import feature.
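If your Kibana version exposes the saved objects API, the export step can even be scripted; this is a sketch against a local self-hosted Kibana, and older versions can export the same objects from the Management > Saved Objects screen instead:

```
# Export dashboards (and the objects they reference) from a recent Kibana.
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"type": ["dashboard", "visualization", "search"], "includeReferencesDeep": true}' \
  -o kibana-objects.ndjson
```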

From there, add your Kibana objects to existing dashboards or create new ones. If you need help with this, one of our Customer Support Engineers would be happy to assist. The end result is that you don’t need to redo your informative logging dashboards.

Step 4: Decommission your Internal ELK Stack after Migrating

You’ll no longer need to run and maintain your own instances of Logstash, Elasticsearch, Kibana, or other components you’ve added to your stack like Kafka, RabbitMQ, Redis, or NGINX.

All you need to do is make sure your log data is streaming to Logz.io. Our fully managed service will take care of scaling, sharding, parsing, index management, storage, security, upgrades, and everything else.

It’s time to return your focus to your business, rather than maintaining your ELK Stack!

Step 5: Learn what we’ve built alongside the ELK Stack

Many of our customers move to Logz.io because they want more out-of-the-box capabilities than the ELK Stack provides. Below are some features you can use to make the ELK Stack faster, easier to use, and more integrated.

Alerts: Real-time alerting based on thresholds and time frames.

Log Patterns: Cluster similar logs together into smaller, manageable groups so you can easily make sense of all your log data.

Cognitive Insights: Find the needle in the haystack. Cognitive Insights uses AI-powered crowdsourcing to scrape information from StackOverflow, GitHub, and other forums to predict which logs are worth looking at.

Application Insights: Correlate application exceptions with recent deployments to understand which code change caused specific issues.

ELK Apps: Prebuilt, community-driven Kibana visualizations and dashboards for popular cloud and DevOps technologies.

That’s It!

If you’d like to see how easy it was for Sisense to migrate from ELK to Logz.io, check out this Sisense case study. To learn more about getting started with Logz.io, check out our documentation.

If you’re interested in gaining broader observability as opposed to just logging, we have two other products as well:

Cloud SIEM

With Cloud SIEM, you can enrich the logs you’re already sending with security insights like malicious IPs, URLs, and DNS names. Use rules and dashboards to consolidate security events from technologies like HashiCorp Vault, Check Point, Palo Alto Networks, and AWS/Azure log data.

Infrastructure Monitoring

Just like we offer a fully managed version of the ELK Stack, we also offer services to monitor metrics. Now, you can use the best open source monitoring tools on the same managed platform.

Get started for free

Completely free for 14 days, no strings attached.