In a previous post we looked at 6 key considerations to keep in mind when selecting a log management solution: data collection, search experience, scalability, security, advanced analytics and cost effectiveness. Hopefully, you’ve managed to use this list to finally select your solution. What now?
If you thought selecting a log management solution was the most difficult step of the process, you’re in for a nasty surprise. The actual migration to this solution will prove just as much of a challenge and must be factored into your team’s planning. Your team will have to figure out how to collect and ship the data, how to migrate existing logging pipelines and visualizations, how to put a DRP (Disaster Recovery Plan) in place, and more.
Sounds like a lot, right? The goal of this article is not to put the fear of God into you, but to provide you with a list of the things you need to plan for. Not all the points listed here suit everyone’s use case, but most of you will be able to create an outline of a migration project based on this list.
Standardizing your log data
While this is more of a general logging best practice than anything else, the more standardized your data is, the easier the migration to a new log management solution will be.
If your logs are formatted and structured consistently throughout your environment, ingestion into any new log management tool will be much simpler. The last thing you want your team spending time on is parsing five differently formatted timestamp fields coming from different hosts.
So: use a logging framework if possible, stick to the same field naming and log level conventions, output in a single logging format (preferably JSON), and use tags where possible. These steps will help ensure your data requires a minimum of pre- and post-ingestion processing, and will make analysis in your new log management tool much more efficient.
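As one way to apply these conventions, here is a minimal sketch of a JSON formatter built on Python’s stdlib logging module. The field names (`timestamp`, `level`, `message`, `tags`) are illustrative choices, not a vendor requirement:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render every record as one JSON object with consistent field names."""

    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%S"),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "tags": getattr(record, "tags", []),  # attached via the `extra` kwarg
        }
        return json.dumps(payload)

logger = logging.getLogger("checkout")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every line now ships in the same shape, whatever the host or service.
logger.info("order placed", extra={"tags": ["orders", "payment"]})
```

With one formatter shared across services, the new tool sees a single timestamp format and a single level field instead of five variants.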
Log collection and shipping
Regardless of the log management solution you’re migrating to, how well you handle the collection of log data and shipping it into the new tool will greatly influence the transition process.
A key factor here is understanding what log data you want to collect in the first place. Hopefully, you already have a good answer to this question, but if not, dedicate some time to formulating a wishlist of the logs you intend to ship.
Once you have this wishlist in place, you will need to figure out the method for collecting and forwarding the data into the tool. The specific method will vary from data source to data source and from tool to tool. For example, if you’re planning on shipping all your AWS logs, a Lambda that extracts the logs from CloudWatch into your log management solution might be your weapon of choice. If you’re logging a Kubernetes cluster deployed on GKE, you might be using Fluentd.
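To make the Lambda route concrete, here is a minimal sketch assuming a CloudWatch Logs subscription filter as the trigger. CloudWatch delivers subscription records base64-encoded and gzip-compressed, so unpacking them is the function’s first job; `forward` is a hypothetical stand-in for the HTTP call into your chosen tool:

```python
import base64
import gzip
import json

def decode_cloudwatch_event(event):
    """CloudWatch Logs subscription data arrives base64-encoded and gzipped."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))["logEvents"]

def forward(message):
    """Hypothetical stand-in for your tool's ingestion call (usually HTTPS)."""
    print(message)

def handler(event, context):
    """Lambda entry point for a CloudWatch Logs subscription filter."""
    for log_event in decode_cloudwatch_event(event):
        forward(log_event["message"])
```

The real `forward` would batch events and retry on failure; the decoding step, however, is fixed by how CloudWatch packages the data.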
Many log management tools provide agents that need to be deployed on hosts or appenders that need to be coded into your applications. Some data sources support plugins that integrate with specific log management tools. In any case, be sure you’re familiar with these methods and are confident your logging pipelines will be resilient and robust enough to handle your log traffic. Logstash crashing in the middle of the night because you didn’t provision enough resources for it is not something you want to be woken up for.
If you’re building a logging pipeline from scratch, you will not have to worry about migrating an existing pipeline. Based on the considerations above, you will have built a wishlist of the logs you want to ship and a plan for collecting them.
But what if you’re already shipping TBs of logs into an existing solution? How do you make sure you don’t lose any data during the migration process? To migrate existing pipelines, you will need to implement a phased process:
In phase 1, you will ship in parallel to both your existing solution and the new one. This could be as simple as running two agents per host or pointing to two endpoint URLs at the same time. In more complicated scenarios, you might need to run multiple collectors per data source.
In phase 2, you will need to gradually disengage from the existing pipeline. Once you’ve made sure all your logs are being collected properly and forwarded into the new tool, disable log collection and forwarding to your existing tool.
An optional phase here is importing historical data. Depending on your environment and data sources, you will need to think about a way to import old log data. You might, for example, need to think about archiving into S3 buckets and ingesting this into the new tool at a later stage. In case your solution is ELK-based, both Filebeat and Logstash support various options to reingest old data.
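The two phases can be sketched with Python’s stdlib logging, using local files as stand-ins for the old and new tools’ endpoints; in a real pipeline the handlers would be your agents or HTTP appenders:

```python
import logging

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Phase 1: ship every log line to both destinations in parallel.
# FileHandlers stand in here for the old and new tools' agents/endpoints.
old_handler = logging.FileHandler("old_pipeline.log")
new_handler = logging.FileHandler("new_pipeline.log")
logger.addHandler(old_handler)
logger.addHandler(new_handler)

logger.info("order placed")  # arrives in both pipelines

# Phase 2: once the new pipeline is verified, detach the old destination.
logger.removeHandler(old_handler)
old_handler.close()

logger.info("order shipped")  # arrives only in the new pipeline
```

Because the application logs to one logger, switching destinations is an operational change, not a code change, which is exactly what makes the phased cutover low-risk.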
Securing your data
You’ve probably already done a fair amount of due diligence before selecting your log management solution. That is, you’ve made sure the solution adheres to strict security rules and complies with the relevant regulatory requirements. When planning the migration process, there are some security measures you need to think of on your end too.
Opening up ports, granting permissions to log files, and making sure your logs are sanitized (i.e., that they don’t include credit card numbers) are some basic security steps to take. If you’re moving to a cloud-based solution, be sure your log data is encrypted in transit. This means the agent or collector you are using has to support SSL/TLS. Encryption at rest is another key requirement to consider, especially if you are migrating to a do-it-yourself deployment.
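A sanitization step can be as simple as a regex filter applied before logs leave the host. The pattern below is a deliberately rough sketch for card-like digit runs; production scrubbing needs a vetted pattern set (and ideally a Luhn check):

```python
import re

# Rough PAN pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def sanitize(message: str) -> str:
    """Mask anything that looks like a card number before the log is shipped."""
    return CARD_RE.sub("[REDACTED]", message)

print(sanitize("payment with card 4111 1111 1111 1111 approved"))
# → payment with card [REDACTED] approved
```

Running this in the shipper (rather than the application) gives you one enforcement point, so a single forgotten log statement can’t leak a card number.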
Most log management solutions today provide role-based access and user management out of the box. This is most likely one of the reasons you’ve chosen to migrate to a new tool in the first place. SSO support, for example, is a standard requirement now, and you will need to be sure the new tool is able to integrate properly with your SSO provider, whether Okta, Active Directory or CA.
Planning for growth
As your business continues to grow, new services and apps will be developed, and additional infrastructure will be provisioned to support this growth. All of this means one thing — more logs. Add to this the fact that logs can be extremely bursty in nature, spiking during a crisis or at busy times of the year, and planning for capacity becomes critical.
When migrating to a new log management tool, make sure you have extra wiggle room. How you do this depends on your solution, of course. For example, if you’ve opted to migrate to a SaaS solution, be sure your plan provides for data bursts and overages. If it’s a do-it-yourself ELK Stack, be sure you have enough storage capacity, whether you’re deploying Elasticsearch on-prem or in the cloud.
Preparing for disaster
If you’re responsible for monitoring business-critical applications, you cannot afford to lose a single log message. Implementing the phased approach described above ensures a smooth transition. But are you totally safe once you’ve completed the process? You should put a DRP (Disaster Recovery Plan) in place in case something goes wrong. Archiving logs to an S3 bucket or Glacier, for example, is a common backup workflow.
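As a sketch of that backup workflow, assuming boto3 and a pre-created bucket (the bucket name and key layout here are illustrative, and the client is injected as a parameter so the function can be exercised locally with a fake):

```python
import gzip
import json
import time

def archive_batch(events, bucket, s3=None):
    """Gzip a batch of log events and write them to a backup bucket as NDJSON."""
    if s3 is None:
        import boto3  # real AWS client; an injected fake keeps local testing easy
        s3 = boto3.client("s3")
    key = time.strftime("logs/%Y/%m/%d/%H%M%S.ndjson.gz", time.gmtime())
    body = gzip.compress("\n".join(json.dumps(e) for e in events).encode())
    # STANDARD_IA keeps costs down; use StorageClass="GLACIER" for cold archives.
    s3.put_object(Bucket=bucket, Key=key, Body=body, StorageClass="STANDARD_IA")
    return key

# Usage (hypothetical): archive_batch(batch_of_events, "my-log-backups")
```

Keying archives by date also makes the later reingestion step selective: you can replay a single day’s logs into the new tool without touching the rest.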
Migrating dashboards and visualizations
Again, if you’re starting from scratch and have no dashboards to migrate, then it’s just a matter of creating new objects. Granted, this is no simple task, but conveniently, some solutions provide you with canned dashboards to help you hit the ground running.
If you already have dashboards set up, the last thing you want is to start building them out again in your new tool. Sadly, there is no easy workaround. Some tools support exporting objects but that does not help with importing them into the new tool.
There is one exception here and that is if you’re migrating from one ELK-based solution to another. Kibana allows you to export and import JSON configurations of your dashboards and visualizations, and this makes the migration process much simpler.
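For example, here is a sketch of the export side using Kibana’s saved-objects API (the `/api/saved_objects/_export` endpoint and the mandatory `kbn-xsrf` header are as documented for Kibana 7.x; the host URL is illustrative):

```python
import json
import urllib.request

def export_request(kibana_url, types=("dashboard", "visualization")):
    """Build the POST that asks Kibana to export saved objects as NDJSON."""
    body = json.dumps({"type": list(types), "includeReferencesDeep": True}).encode()
    return urllib.request.Request(
        f"{kibana_url}/api/saved_objects/_export",
        data=body,
        headers={"kbn-xsrf": "true", "Content-Type": "application/json"},
        method="POST",
    )

# ndjson = urllib.request.urlopen(export_request("http://localhost:5601")).read()
# The NDJSON response can then be POSTed to /api/saved_objects/_import
# on the new stack's Kibana.
```

Setting `includeReferencesDeep` pulls in the visualizations and index patterns each dashboard depends on, so the import on the other side doesn’t land with broken references.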
Summing it up
Adding a new tool into your stack is always a daunting task, and log management tools are no exception to this rule. Log data is super-critical for organizations and this adds to the pressure of making sure the onboarding and transition process is successful.
As complex as it is, the process is also well defined and the points above provide you with an idea of what this process needs to include. Ideally, the log management solution you have selected can provide you with support and training to help you but if you’re on your own and have opted for a do-it-yourself solution, use the list above as a blueprint.