Customer obsession is in our DNA. We're here to make cloud observability easy, valuable, and cost effective. Our team of experts has helped thousands of engineers better monitor their cloud environments and is available 24/7 to help you with:
Sending logs, metrics & traces
Log parsing
Creating alerts
Searching & filtering
Using our API
Account management
“Logz.io is a great tool for analyzing logs...but the special thing is the support. Any time I had an issue, even if I needed help with regex expressions, the support chat helped me with understanding and patience, and was very fast.”
Zaken Gal
Big Data Engineer, Kenshoo
“Logz.io has the best support I’ve encountered in this arena, from initial proof-of-concept discussions onward.”
Nathaniel E
"Any time I get stuck I have an instant way to communicate with support. They are immediately responsive, and even get back within a few minutes no matter what timezone or time of night I send a message."
Milan S
“Our relationship with Logz.io support is extremely transparent. Tickets are always being shared and there’s always a follow up from the account manager to make sure that our issues have been taken care of. This provides a good sense of ease when trying to reach out to support for any queries we may have.”
Manish Sejpal
DevOps Engineer, Bambora
Are you a Logz.io user who needs to report a bug or submit a request? Submit a ticket through Zendesk!
Submit
Would you like to learn more about our products and services? Reach out to our sales department.
See our Plans
Looking to upgrade? Visit our billing page or reach out to sales.
Go to Billing Page
Logz.io supports a variety of shipping methods to cater to the different ways you access your logs:
"type" is a logical field used to differentiate between logs (if you're coming from using base Elasticsearch, you might know these as "documents"). in Logz.io, "type" also acts as the main condition for log parsing.
Ideally, you want to set the type based on log format, not log source. For example, if you have five servers sending Apache access logs, they should all ship under the "apache_access" type, with an additional field to indicate each log's environment.
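To make this concrete, here is a minimal sketch of shipping logs over the bulk HTTPS listener with the type set once for all servers and the environment carried as a field. The listener URL and query parameters are assumptions based on Logz.io's documented HTTPS shipping method; check the "send your data" page for your region's listener before using.

```python
import json
from urllib import request

# Assumption: region-specific bulk HTTPS listener (verify for your account).
LISTENER = "https://listener.logz.io:8071"

def build_log_request(token, log_type, records):
    """Build an HTTPS bulk request: one JSON object per line,
    with the shipping token and type passed as query parameters."""
    url = f"{LISTENER}?token={token}&type={log_type}"
    body = "\n".join(json.dumps(r) for r in records).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

# Every Apache server ships the same type; environment is a field, not a type.
req = build_log_request("MY-SHIPPING-TOKEN", "apache_access", [
    {"message": "GET /index.html 200", "env": "staging"},
    {"message": "GET /index.html 200", "env": "production"},
])
# request.urlopen(req)  # uncomment to actually ship
```

Keeping the type per-format rather than per-server means all five servers share one parsing pipeline, and you can still slice by environment in Kibana using the `env` field.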
The Data Parsing Wizard allows you to use Grok to parse out your own custom logs. If the log type you’ve been shipping is grayed out, it most likely means one or both of the following:
If you’d like to change or update the way these logs are parsed, just reach out to our support team with: some log samples (around a hundred lines is usually a good amount), the type you're sending them as, a breakdown of how you'd like the logs parsed, and the timezone they were written in. Once we have those, we'll get to work on parsing them and update you once it's done.
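For a feel of what Grok does under the hood, here is a toy sketch that expands a few common Grok tokens into named regex groups and parses a simplified access-log line. The pattern names and the line format are illustrative only; real Grok ships hundreds of built-in patterns.

```python
import re

# A handful of common Grok patterns, reduced to plain regexes for illustration.
GROK_PATTERNS = {
    "IP": r"\d{1,3}(?:\.\d{1,3}){3}",
    "WORD": r"\w+",
    "URIPATH": r"/[^\s]*",
    "NUMBER": r"\d+",
}

def grok_to_regex(pattern):
    """Replace each %{NAME:field} token with a named capture group."""
    def repl(m):
        name, field = m.group(1), m.group(2)
        return f"(?P<{field}>{GROK_PATTERNS[name]})"
    return re.sub(r"%\{(\w+):(\w+)\}", repl, pattern)

grok = "%{IP:client} %{WORD:method} %{URIPATH:path} %{NUMBER:status}"
line = "203.0.113.7 GET /index.html 200"
fields = re.match(grok_to_regex(grok), line).groupdict()
# fields: {"client": "203.0.113.7", "method": "GET",
#          "path": "/index.html", "status": "200"}
```

Each `%{NAME:field}` becomes a searchable field in your parsed logs, which is exactly what the Data Parsing Wizard builds for you interactively.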
Well, that depends on the service — we recommend taking a look at our “send your data” page for a list of existing integrations. If you can’t find your service there, don’t worry! Just reach out to the support team and we’ll see how we can help you out.
No, it won’t — each Distributed Tracing and Infrastructure Monitoring account exists as a unique entity, with its own quota, retention, and shipping token; this compartmentalization means that a spike in one service does not cause interruptions in others.
Logz.io uses daily indices to store logs of all types, using the first logs to come in after midnight UTC to dynamically determine each indexed field’s data type.
If a field later arrives with a different data type, the system identifies the discrepancy and flattens the log rather than lose the data; the flattened log is indexed as a mapping exception, with an added field explaining the source of the exception.
The Logz.io support team can help in resolving these conflicts on our end while you address them at the source.
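The mechanics can be sketched in a few lines. This is a toy model of the behavior described above, not Logz.io's actual indexer: the first log of the day fixes each field's type, and later logs with a mismatched type are flagged as would-be mapping exceptions.

```python
def detect_type_conflicts(index_mapping, log):
    """Compare each field's type against the mapping fixed by the first
    log of the day; return the names of conflicting fields."""
    conflicts = []
    for field, value in log.items():
        seen = index_mapping.setdefault(field, type(value))
        if type(value) is not seen:
            conflicts.append(field)
    return conflicts

mapping = {}
detect_type_conflicts(mapping, {"status": 200})       # first log: "status" is numeric
bad = detect_type_conflicts(mapping, {"status": "200"})  # later log sends a string
# bad == ["status"] -> this log would be flattened and
# indexed as a mapping exception
```

The usual fix at the source is to make the shipper emit a consistent type (here, always a number or always a string) for each field within a given log type.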
You can send alerts to email addresses, HTTP/S endpoints, or a combination of the two; we offer a list of integrated endpoints (Slack, OpsGenie, PagerDuty, etc.), and you can configure your own custom endpoints if the list doesn’t contain the one you’re using.
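A custom endpoint is just an HTTP/S URL that receives a POST when the alert fires. Here is a minimal sketch of what such a delivery looks like from the receiving side's perspective; the URL and the JSON body shape are illustrative assumptions, not Logz.io's exact alert schema.

```python
import json
from urllib import request

def build_alert_webhook(url, alert_title, samples):
    """Build a POST to a custom HTTP/S endpoint. The payload fields
    here (alert_title, samples) are hypothetical, for illustration."""
    body = json.dumps({"alert_title": alert_title,
                       "samples": samples}).encode("utf-8")
    return request.Request(url, data=body,
                           headers={"Content-Type": "application/json"})

req = build_alert_webhook("https://hooks.example.com/logzio-alerts",
                          "High 5xx rate", [{"status": 503}])
# request.urlopen(req)  # uncomment to actually deliver
```

When configuring a custom endpoint in the UI, you define the URL, method, and body template, so the actual payload can be shaped to whatever your receiver expects.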
We understand that sometimes, things go wrong — perhaps a part of your application starts misbehaving, or maybe a developer leaves the logging level at debug and all of a sudden you find yourself shipping twice as many logs and burning through your daily usage quota at an alarming rate.
Just reach out to the support team (preferably via the live chat, in the interest of speed) and we’ll do whatever we can to make sure your account doesn’t get suspended while you work to resolve the issue internally.
Just go easy on the developer who forgot to comment out the debug logging, ok? We’re sure it was an honest mistake.