Our product strategy this year was relatively simple. Many observability practitioners we spoke with complained that observability was often slow, heavy, complex, and costly – themes summed up in our CEO's recent blog post on modern observability challenges.
While our customers didn’t report similar challenges, we wanted to further distance ourselves from this typical observability experience.
We aimed to make Logz.io easier, faster, and more cost efficient by focusing on a few key product initiatives.
We’ve released many updates throughout the year like enhanced alerting and Synthetic Monitoring. However, some of our most exciting new capabilities are being announced this week with the launch of Logz.io Open 360™ – our observability platform that uses leading open source technologies to unify log, metric, and trace analytics in one place.
Read on to learn about the new Logz.io features and capabilities that drive a better observability experience.
As we announced recently, Kubernetes 360 is an out-of-the-box dashboard that automatically populates as K8s data is sent to Logz.io—delivering a full overview of cluster health and performance in minutes.
By combining key infrastructure metrics such as CPU and memory per pod with other signals like error rates, engineering teams get a complete picture of Kubernetes health at a glance. From there, customers can drill down into the individual logs that explain what's causing problems in their infrastructure.
Rather than setting up and managing separate technologies to collect logs, metrics, and traces, Logz.io now offers the Telemetry Collector—a single agent that can send logs, metrics, and traces to Logz.io in one installation.
After filling out some basic parameters, Logz.io will generate a script that deploys the agent – making data collection seamless.
Telemetry Collector currently supports Kubernetes, local hosts, AWS EC2, and CloudWatch, with more to come soon.
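To illustrate the "one agent, three signals" idea, here is a generic OpenTelemetry Collector pipeline that receives logs, metrics, and traces through a single OTLP receiver and ships all three to one backend. This is a sketch of the underlying pattern, not the exact configuration the Telemetry Collector installer generates, and the endpoint is a placeholder:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp:
    endpoint: https://listener.example.com   # placeholder backend endpoint

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

A single pipeline definition like this is what replaces three separately installed and separately configured shippers.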
Data Optimization Hub provides a single interface to inventory all of the incoming observability data and filter out any unneeded information—making it easier than ever to monitor, control, and reduce costs.
Other observability vendors require customers to identify unneeded data by manually exploring it, and then reconfiguring their agent to filter out the unneeded information. By making it painless to identify and remove junk data, we've found that the average Logz.io customer filters out 32% of their total observability data after removing the excess information!
Go to the ‘Data Hub’ tab in the product toolbar to find the feature.
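That 32% figure translates directly into cost savings. The quick calculation below uses hypothetical ingest volume and per-GB pricing (these numbers are assumptions for illustration, not Logz.io rates):

```python
# Hypothetical monthly figures to illustrate the impact of filtering.
# The volume and price below are assumptions, not Logz.io pricing.
monthly_ingest_gb = 5_000      # total observability data ingested per month
cost_per_gb = 1.20             # assumed indexing cost per GB
filtered_fraction = 0.32       # average share filtered out, per the text

monthly_savings = monthly_ingest_gb * filtered_fraction * cost_per_gb
print(f"${monthly_savings:,.2f} saved per month")  # $1,920.00 saved per month
```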
LogMetrics provides an alternative to indexing all your data, which can often inflate costs.
Log data tells a story – often more than one – whether it describes a production incident, errors surfaced during troubleshooting, overall trends, security events, or something else.
Searching and analyzing logs is a great way to troubleshoot. Metrics, by contrast, don't provide the text needed to troubleshoot, but they are more cost-effective and enable long-term analysis and monitoring of trends.
For log data that is best suited to showing trends over time, you can now use LogMetrics, which converts the right logs into metrics for monitoring on a graph.
For example, you don’t need to read every HTTP log to understand response code trends – rather, you build a visualization and monitor the changes.
The result is significantly reduced costs, while providing the same critical insights.
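The HTTP example above can be sketched in a few lines: instead of storing every access log, aggregate the lines into a count per status-code class. The log lines and parsing here are illustrative (common log format assumed), not how LogMetrics is implemented internally:

```python
from collections import Counter

# Hypothetical sample of HTTP access log lines; in practice these
# would stream through your log pipeline.
log_lines = [
    '10.0.0.1 - - [01/Dec/2022:10:00:01] "GET /api/users HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Dec/2022:10:00:02] "GET /api/orders HTTP/1.1" 500 128',
    '10.0.0.3 - - [01/Dec/2022:10:00:03] "POST /api/login HTTP/1.1" 200 256',
    '10.0.0.4 - - [01/Dec/2022:10:00:04] "GET /missing HTTP/1.1" 404 64',
]

def status_code(line: str) -> str:
    # In common log format, the status code follows the quoted request.
    return line.rsplit('"', 1)[1].split()[0]

# Aggregate logs into a metric: request counts per status class.
counts = Counter(status_code(line)[0] + "xx" for line in log_lines)
print(counts)  # e.g. Counter({'2xx': 2, '5xx': 1, '4xx': 1})
```

Graphing those counters over time shows the response-code trend without indexing the raw text of every request.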
Archive & Restore is one of Logz.io’s most popular features to reduce the cost of log storage. Rather than indexing all of the data, Logz.io customers can store some of their data in cheap cloud storage (AWS S3 or Azure Blob), and restore that data into Logz.io for analysis any time.
It is now faster and easier than ever to restore your data. We’ve improved the Restore engine—making it possible to restore large log volumes in minutes, rather than hours. Plus, we’ve added filters so customers can narrow down the data they decide to restore, as opposed to restoring all the data in a certain timeframe.
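Conceptually, a filtered restore means pulling back only the archived records that match a time range and a search term, rather than re-indexing a whole timeframe. The sketch below illustrates that idea with in-memory records; the field names and data are hypothetical, and real archives live in object storage such as AWS S3 or Azure Blob:

```python
from datetime import datetime

# Hypothetical archived log records; in practice these would be read
# back from cheap cloud storage (AWS S3 or Azure Blob).
archived = [
    {"ts": "2022-12-01T09:58:00", "msg": "payment service started"},
    {"ts": "2022-12-01T10:01:30", "msg": "ERROR payment timeout"},
    {"ts": "2022-12-01T10:02:10", "msg": "request served in 42ms"},
    {"ts": "2022-12-01T11:30:00", "msg": "ERROR payment timeout"},
]

def restore(records, start, end, term=None):
    """Return only records in [start, end] matching an optional term,
    mirroring how filters narrow down what gets restored."""
    out = []
    for r in records:
        ts = datetime.fromisoformat(r["ts"])
        if start <= ts <= end and (term is None or term in r["msg"]):
            out.append(r)
    return out

hits = restore(
    archived,
    start=datetime(2022, 12, 1, 10, 0),
    end=datetime(2022, 12, 1, 10, 15),
    term="ERROR",
)
print(len(hits))  # 1
```

Restoring one matching record instead of the full day's data is what turns a multi-hour restore into minutes.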
Configuring OpenTelemetry to instrument application services is often neither trivial nor fast. Part of the process is implementing the desired sampling configuration to prevent huge volumes of data from overwhelming your storage back end.
To simplify configuring sampling for OpenTelemetry, we created the Trace Sampling Wizard.
After entering a few parameters, the Trace Sampling Wizard produces a YAML configuration file with the sampling policy needed to store only the required trace data, based on sampling rules defined by the user.
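The wizard's output depends on the parameters you choose, but a representative sampling policy of this kind – expressed with the OpenTelemetry Collector's `tail_sampling` processor, with illustrative policy names and thresholds – looks like this:

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      # Always keep traces that contain an error.
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      # Always keep unusually slow traces.
      - name: keep-slow
        type: latency
        latency:
          threshold_ms: 500
      # Keep a small baseline sample of everything else.
      - name: baseline
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```

Policies like these keep the traces you actually troubleshoot with (errors and slow requests) while discarding the bulk of routine traffic.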
We don’t see our product initiatives changing a whole lot going into 2023.
As exploding observability data volumes inflate costs, distributed systems increase complexity, and telemetry insights become harder to grasp, Logz.io will continue to focus on making observability easier, faster, and more cost efficient.
You can get started with these new capabilities today. If you need assistance getting started with any of them, contact a Customer Support Engineer through the chat bot in the bottom right corner. Or, schedule some time with your Logz.io Account Manager for some formal team training.