Fast-Track Kubernetes Observability with Logz.io and OpenTelemetry: A quick getting started guide

Migrating from DIY ELK to a full SaaS platform

Introduction to OTel

In formal terms, OpenTelemetry is an open source framework for instrumenting, generating, collecting, and exporting telemetry data for applications, services, and infrastructure. It provides vendor-neutral tools, SDKs, and APIs for working with traces, metrics, and logs and exporting them to any observability backend, open source or commercial.

While some concepts might seem straightforward to experienced engineers, it’s always worth sharing best practices in a way that’s inclusive and approachable. With that in mind, think of OpenTelemetry (a.k.a. OTel) as a universal translator for data from various applications and systems. Imagine you’re managing a group of machines or software programs, each speaking its own language. To monitor their performance and spot issues, you need to understand what they’re saying.

This is where OTel steps in: it gathers and standardizes this data (things like error logs or performance metrics) and organizes it so you can send it to a central location, or backend, for analysis. OTel transforms raw information into something clear and actionable, giving users deep visibility into their workloads and helping them observe, monitor, troubleshoot, and optimize software systems.

What to expect in this guide

This article will guide you through sending logs, metrics, and traces from a Kubernetes-deployed application to Logz.io using OTel. Whether you’re a first-time user or an experienced engineer looking for a fast, hands-on setup, this is your chance to sharpen your OTel and Kubernetes observability skills while trying Logz.io for yourself. With OTel’s contributor base growing and the project ranking as the second-highest-velocity project in the CNCF ecosystem, there’s never been a better time to dive in and explore its potential for optimizing observability.

In this guide we’ll use the OpenTelemetry Demo App and the Logz.io exporter. It’s not mandatory to use the OTel Demo, but it’s a nice starting point if you don’t have a real-world implementation or if it’s your first time trying Logz.io and OTel.

The OpenTelemetry Demo includes microservices written in multiple programming languages that communicate over gRPC and HTTP, plus a load generator that uses Locust to simulate user traffic automatically, eliminating the need to create scenarios manually.

You can check the Demo architecture here.

Prerequisites 

  • Logz.io account. Don’t have one? You can start a free 14-day trial.
  • Any Kubernetes cluster 1.24+ with kubectl configured (for this guide I’m using EKS, but Minikube or Kind also works); you can verify your setup with the commands shown after this list
  • 6 GB of free RAM for the application
  • Helm 3.14+ installation (for Helm installation method only)
  • OpenTelemetry Collector (for this guide, I’m using the official OpenTelemetry Demo for Kubernetes, which already includes the Collector)
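
To make sure the basics are in place before you start, you can run a quick check of your cluster connection and Helm installation (a minimal sketch; the exact output will vary by environment):

# Confirm kubectl can reach the cluster
kubectl cluster-info
kubectl version
# Confirm Helm 3.14+ is installed
helm version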

OpenTelemetry Core Components – quick explanation  

Instrumentation libraries: Tools and SDKs integrated into applications to automatically or manually generate telemetry data.

Collector: Vendor-agnostic “proxy” that can receive, process, and export telemetry data. It supports receiving telemetry data in multiple formats (e.g., Jaeger, Prometheus, Fluent Bit) and sending it to one or more open source or commercial backends. The local Collector agent is the default location to which instrumentation libraries export their telemetry data. It also supports processing and filtering telemetry data before it gets exported.

Exporters: Take the processed data and send it to your chosen observability platform, such as Logz.io, Prometheus, or Jaeger.

Context for this guide: The OTel Demo App will handle instrumentation, and the OTel Collector (deployed by default with the OTel Demo Helm chart) will send telemetry data to Logz.io using the Logz.io exporter.

Demo Application → OTel SDK → OTel Collector with Logz.io Exporter → Logz.io Backend

Now let’s see how it works in practical terms…

Deploying the OTel Demo App

Add the OpenTelemetry Helm chart repository:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
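
If you have added this repository before, refresh your local chart index so Helm picks up the latest version of the demo chart:

helm repo update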

Deploy the app (in my case I deployed with the release name my-otel-demo):

helm install my-otel-demo open-telemetry/opentelemetry-demo

Verify that app pods are running:

kubectl get pods 
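
The demo spins up quite a few pods, so it can take a minute or two for everything to become Ready. Instead of polling, you can optionally block until all pods in the current namespace are Ready (or the command times out):

# Wait up to 5 minutes for all demo pods to become Ready
kubectl wait --for=condition=Ready pods --all --timeout=300s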

Accessing the OTel app

After you’ve deployed the Helm chart, the demo application’s services need to be exposed outside the Kubernetes cluster so you can use and navigate them. You can expose the services to your local system using the kubectl port-forward command, or by configuring service types (e.g., LoadBalancer), optionally together with ingress resources.

The easiest way to expose the services is with kubectl port-forward, which is what I’m using in this guide:

kubectl port-forward svc/my-otel-demo-frontendproxy 8080:8080

With the frontendproxy port-forward set up, you can access:

Web store: http://localhost:8080/

Grafana: http://localhost:8080/grafana/

Load Generator UI: http://localhost:8080/loadgen/

Jaeger UI: http://localhost:8080/jaeger/ui/
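
If you’d rather not keep a port-forward session open, one alternative (a sketch, assuming your cluster can provision external load balancers, as EKS can) is to switch the frontendproxy Service to type LoadBalancer and browse to the external address it receives:

# Change the frontendproxy Service type (assumes the release name my-otel-demo)
kubectl patch svc my-otel-demo-frontendproxy -p '{"spec": {"type": "LoadBalancer"}}'
# Look up the external hostname/IP once it has been provisioned
kubectl get svc my-otel-demo-frontendproxy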

Bringing your own backend

Now it’s time to configure the OTel Collector for Logz.io, using the Logz.io exporter and some additional Logz.io parameters. This will allow us to start sending telemetry from the OTel App to Logz.io. 

The OpenTelemetry Collector’s configuration is exposed in the Helm chart we deployed in the previous steps. Any additions you make are merged into the default configuration, and you can choose whichever backend you like; that’s the main idea behind OTel: vendor neutrality.

Create a configuration file named my-values-file.yaml with the following content:

opentelemetry-collector:
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
          http:
            endpoint: "0.0.0.0:4318"
   
    exporters:
      logzio/logs:
        account_token: "YOUR-LOGS-SHIPPING-TOKEN"
        region: "your-region-code"
        headers:
          user-agent: logzio-opentelemetry-logs
      prometheusremotewrite:
        endpoint: https://listener.logz.io:8053
        headers:
          Authorization: "Bearer YOUR-METRICS-SHIPPING-TOKEN"
          user-agent: logzio-opentelemetry-metrics
        target_info:
            enabled: false
      logzio/traces:
        account_token: "YOUR-TRACES-SHIPPING-TOKEN"
        region: "your-region-code"
        headers:
          user-agent: logzio-opentelemetry-traces
      prometheusremotewrite/spm:
        endpoint: "https://listener.logz.io:8053"
        add_metric_suffixes: false
        headers:
          # Metrics account token is also used for span metrics
          Authorization: "Bearer YOUR-METRICS-SHIPPING-TOKEN"
          user-agent: "logzio-opentelemetry-apm"


    processors:
      batch:
      tail_sampling:
        policies:
         [
            {
              name: policy-errors,
              type: status_code,
              status_code: {status_codes: [ERROR]}
           },
            {
              name: policy-slow,
              type: latency,
              latency: {threshold_ms: 1000}
           },
            {
              name: policy-random-ok,
              type: probabilistic,
              probabilistic: {sampling_percentage: 10}
            }       
          ]

    extensions:
      pprof:
        endpoint: :1777
      zpages:
        endpoint: :55679
      health_check:
         
    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logzio/logs]
        metrics:
          receivers: [otlp,spanmetrics]
          exporters: [prometheusremotewrite]
        traces:
          receivers: [otlp]
          processors: [tail_sampling, batch]
          exporters: [logzio/traces,logzio/logs,spanmetrics]
      telemetry: # log verbosity for the Collector logs
        logs:
          level: "debug"     

To finalize, apply the YAML configuration changes to start sending telemetry to Logz.io.

This command will apply the changes to the current OTel Helm release without requiring a fresh installation.

helm upgrade my-otel-demo open-telemetry/opentelemetry-demo --values my-values-file.yaml
❗Notes: 

➤ Receivers: Defines how telemetry data is received.

➤ otlp: Specifies the protocols (grpc and http) for receiving logs, metrics, or traces from applications.

➤ Exporters: Specifies where and how telemetry data is sent.

➤ service: Defines the data flow pipelines for processing telemetry.

➤ tail_sampling defines which traces to sample after all spans in a request are completed. With the policies above, it keeps all traces containing an error span, traces slower than 1000 ms, and 10% of all other traces.

➤ The extensions section is optional.

➤ When merging YAML values with Helm, objects are merged and arrays are replaced. The spanmetrics exporter must be included in the array of exporters for the traces pipeline if overridden. Not including this exporter will result in an error.

➤ You can find all your personal parameters and data shipping tokens by logging into the Logz.io platform and going to Settings > Data shipping tokens, or to Integrations > OpenTelemetry.

➤ You can also find the full OTel configuration directly in the Logz.io platform, under Integrations (search for OpenTelemetry), or in the Logz.io exporter GitHub documentation.
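
Once the upgrade completes, you can confirm that the Collector picked up the new configuration by tailing its logs and watching for exporter errors (a sketch; the exact Deployment name depends on your release name and chart version, so list the deployments first):

# Find the Collector Deployment created by the demo chart
kubectl get deployments
# Tail its logs (assuming it is named my-otel-demo-otelcol)
kubectl logs deployment/my-otel-demo-otelcol --tail=100 -f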

Optional – Collecting Cluster-level data

As an optional step, you can also collect infrastructure data from the Kubernetes cluster itself by deploying the Logz.io Helm chart, which works as a unified data shipper. This gives you full visibility, from application to cluster data, enhancing the observability experience, and it only takes a couple of extra minutes.

Add the Logz.io Helm repo:

helm repo add logzio-helm https://logzio.github.io/logzio-helm && helm repo update

Install the chart to get all Kubernetes cluster telemetry data with a single command:

helm install -n monitoring --create-namespace \
--set logs.enabled=true \
--set logzio-logs-collector.secrets.logzioLogsToken="YOUR-LOGS-SHIPPING-TOKEN" \
--set logzio-logs-collector.secrets.logzioRegion="us" \
--set logzio-logs-collector.secrets.env_id="<<CLUSTER-NAME>>" \
--set metricsOrTraces.enabled=true \
--set logzio-k8s-telemetry.metrics.enabled=true \
--set logzio-k8s-telemetry.secrets.MetricsToken="YOUR-METRICS-SHIPPING-TOKEN" \
--set logzio-k8s-telemetry.secrets.ListenerHost="https://listener.logz.io:8053" \
--set logzio-k8s-telemetry.secrets.p8s_logzio_name="<<ENV-ID>>" \
--set logzio-k8s-telemetry.traces.enabled=true \
--set logzio-k8s-telemetry.secrets.TracesToken="YOUR-TRACES-SHIPPING-TOKEN" \
--set logzio-k8s-telemetry.secrets.LogzioRegion="us" \
--set logzio-k8s-telemetry.spm.enabled=true \
--set logzio-k8s-telemetry.secrets.env_id="<<ENV-ID>>" \
--set logzio-k8s-telemetry.secrets.SpmToken="<<SPM-METRICS-SHIPPING-TOKEN>>" \
--set logzio-k8s-telemetry.serviceGraph.enabled=true \
--set logzio-k8s-telemetry.k8sObjectsConfig.enabled=true \
--set logzio-k8s-telemetry.secrets.k8sObjectsLogsToken="YOUR-LOGS-SHIPPING-TOKEN" \
--set securityReport.enabled=true \
--set logzio-trivy.env_id="<<ENV-ID>>" \
--set logzio-trivy.secrets.logzioShippingToken="YOUR-LOGS-SHIPPING-TOKEN" \
--set logzio-trivy.secrets.logzioListener="listener.logz.io" \
--set deployEvents.enabled=true \
--set logzio-k8s-events.secrets.logzioShippingToken="YOUR-LOGS-SHIPPING-TOKEN" \
--set logzio-k8s-events.secrets.logzioListener="listener.logz.io" \
--set logzio-k8s-events.secrets.env_id="<<ENV-ID>>" \
logzio-monitoring logzio-helm/logzio-monitoring
❗Note: All the configuration steps can be found inside the Logz.io platform, under Integrations > Kubernetes.
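
Once the chart is installed, a quick sanity check is to make sure the monitoring pods came up cleanly (pod names will vary by chart version):

kubectl get pods -n monitoring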

Validating and exploring your OpenTelemetry data in Logz.io 

After deploying the OTel Demo App and configuring the collector to send data to Logz.io, it’s important to validate that the telemetry data is flowing correctly. After a few seconds, you can start exploring all your logs, metrics, and traces quickly within the Logz.io platform! 
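
Before opening the Logz.io UI, you can also inspect the Collector itself through the zpages extension we enabled in the configuration above (a sketch; adjust the resource name to whatever kubectl get deployments shows for the Collector in your release):

# Forward the zpages port of the Collector (assuming the name my-otel-demo-otelcol)
kubectl port-forward deployment/my-otel-demo-otelcol 55679:55679
# Then browse to http://localhost:55679/debug/tracez to see recently sampled spans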

The following describes where you can find your OpenTelemetry data in the Logz.io platform.

Logs

Use Logs to view the incoming data and interact with it. 

App 360

App 360 is where you will find all the deployed OpenTelemetry microservices, with the ability to dive into a specific service for app-level details.

K8s 360

Check the K8s 360 section for any metric data that has been collected from your Kubernetes cluster, segmenting your view by different Kubernetes objects or areas of analysis (for example CPU, Memory, Restarts…).

Next step: Unlocking extra insights with Logz.io AI Agent

Once your data is flowing to Logz.io, the next step is to harness the power of GenAI to translate raw telemetry into actionable insights. The Logz.io AI Agent leverages GenAI to help users detect anomalies, spot trends, and gain deeper visibility into their systems.

Ask about trends and anomalies

You can ask the AI Agent questions in natural language:
– “Why is my CPU usage increasing on Service X?”
– “What is causing high latency in Service Y?”
– “Tell me about the error spikes in the past 24 hours.”

The AI Agent analyzes the data and provides you with detailed explanations.

Real-Time Insights

The AI Agent continuously analyzes your data in real time. For example, if the system detects an anomaly, like a sudden spike in response times or a change in error rates, it can notify you with an Exception alert and analyze the likely cause of the issue.

Root Cause Analysis (RCA)

When issues arise, the AI Agent can help with Root Cause Analysis by analyzing the factors contributing to the problem. For instance, if an application is experiencing a performance bottleneck, the AI Agent might point out a correlated issue, such as high network traffic, show you the affected dependencies and problematic deployments, and provide recommended steps on how to fix or avoid the problem in the future. This also allows you to take a more proactive approach.

Go hands-on with Logz.io AI Agent for RCA

Wrapping Up 

By following the steps laid out in this guide, you’ve taken the critical first steps in using OpenTelemetry and Logz.io. You’ve learned how to collect telemetry data from applications deployed in a Kubernetes environment using the OTel demo app and send it to Logz.io using the Logz.io exporter, as well as how to gather cluster-level metrics by deploying the native Logz.io Helm shipper. In just a few simple steps, you’ve set up logs, metrics, and traces streaming into a unified observability platform, enabling seamless monitoring and troubleshooting of your systems.

Logz.io goes far beyond data collection. With features like the AI Agent, you can unlock real-time insights, detect anomalies, identify root causes and isolate trends across your telemetry data, all within a conversational interface. This further empowers you to focus less on manual data analysis and concentrate more on proactive problem-solving, making observability not only easier, but smarter.

Further Reading

OpenTelemetry documentation
OpenTelemetry Demo App for Kubernetes
Sending OTel data to Logz.io documentation
Send Kubernetes Data with Logz.io Telemetry Collector
Logz.io Exporter GitHub
Logz.io Free trial
Setting up your local Kubernetes environment

Get started for free

Completely free for 14 days, no strings attached.