How to Run the OpenTelemetry Demo with Logz.io

OpenTelemetry is the emerging standard for collecting observability data, namely logs, metrics and traces, from cloud-native applications. It is the most active project in the Cloud Native Computing Foundation after Kubernetes, and is widely adopted by end users and vendors alike.

To make experimenting with OpenTelemetry easy, the community has developed the OpenTelemetry Demo. This community demo gives users an easy way to deploy Astronomy Shop, a polyglot microservices demo app, fully instrumented with OpenTelemetry client libraries, alongside an OpenTelemetry Collector that collects all the telemetry, then aggregates, processes and forwards it to a backend of your choice. The demo also comes with several preset simulated issues, such as memory leaks, with which users can experience observability-enabled investigations.

OpenTelemetry Demo was released in general availability during KubeCon NA 2022, and is ready for you to try out. In this post I will introduce the OpenTelemetry demo, its architecture and components, and show how to run it and use it to get hands-on with OpenTelemetry.

I will also show how you can configure OpenTelemetry to forward your telemetry to Logz.io, so you can use it to store and analyze your telemetry and to troubleshoot demo incident scenarios. Logz.io provides Open 360™, an observability SaaS backend based on a best-of-breed open source stack including Prometheus, OpenSearch and Jaeger. Logz.io's platform is fully compatible with OpenTelemetry, and you can run the OpenTelemetry demo with Logz.io with a few simple configuration steps.

To learn more about OpenTelemetry and its components, check out the Essential guide to OpenTelemetry.

Introduction to OpenTelemetry Demo

The OpenTelemetry demo features Astronomy Shop, a mock online retailer offering various astronomy products. The application comprises over a dozen microservices, written in many different programming languages, to showcase the wide range of languages supported by OpenTelemetry's SDK suite. The application also uses PostgreSQL, Kafka and Redis, to mock a real stateful application with data ingestion and persistence.

The demo app provides several built-in scenarios that simulate issues to investigate, and a set of feature flags, with a dedicated microservice and an easy UI to turn them on and off.

The demo can be deployed on Docker or Kubernetes. In addition to deploying the OpenTelemetry Collector (at the time of writing it's v0.76.1), the default installation also deploys a local Prometheus, Jaeger and Grafana as a backend to store and analyze the observability data locally. This backend can easily be replaced, and you can direct the telemetry data to any local or remote backend supported by the OpenTelemetry Collector and its rich suite of exporters.
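For example, assuming you wanted to direct traces to some other OTLP-compatible backend, a minimal Collector override could look like the following sketch (the otlphttp exporter is one of the Collector's standard exporters; the endpoint below is a placeholder, not a real backend):

```yaml
exporters:
  otlphttp:
    endpoint: "https://example-backend:4318"   # placeholder OTLP/HTTP endpoint

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ otlphttp ]
```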

The demo also provides a load generator to easily simulate user traffic. The load generator is based on the Python load testing framework Locust, and by default it will simulate users requesting several different routes from the frontend.
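As a rough, stdlib-only sketch of what such a load generator does, the snippet below picks frontend routes by weight and issues GET requests. The routes and weights here are illustrative assumptions, not the demo's actual Locust configuration:

```python
import random
import urllib.request

# Illustrative weighted route table -- the demo's actual routes and
# weights live in its Locust file and may differ.
ROUTES = {
    "/": 10,
    "/api/products": 5,
    "/api/cart": 3,
    "/api/checkout": 1,
}

def pick_route(rng):
    """Pick a route with probability proportional to its weight."""
    routes, weights = zip(*ROUTES.items())
    return rng.choices(routes, weights=weights, k=1)[0]

def hit_frontend(base_url="http://localhost:8080", rng=None):
    """Issue a single GET against a randomly chosen frontend route.
    Requires the demo to be running locally."""
    rng = rng or random.Random()
    with urllib.request.urlopen(base_url + pick_route(rng)) as resp:
        return resp.status
```

Locust does essentially this at scale, spawning many simulated users concurrently at a configurable spawn rate.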

How to Run the OpenTelemetry Demo

Before starting, make sure you’ve got the following:

  • Docker, Docker Compose, and Git installed
  • A Logz.io account set up with Logs, Metrics and Traces enabled (you can always open a free trial account)

With the above prerequisites covered, let’s go over the setup steps, and I’ll expand on each step in the subsequent sections:

  1. Clone the demo app from GitHub
  2. Retrieve your data shipping tokens from your Logz.io account 
  3. Configure OpenTelemetry Collector to send telemetry to the Logz.io backend
  4. Deploy the demo app (we will use Docker in this tutorial)
  5. Simulate traffic with Load generator 
  6. Verify that the data reaches your backend
  7. Explore your telemetry and import the demo app’s monitoring dashboards

Clone the OpenTelemetry Demo from the community GitHub

Clone the Demo repository:
git clone https://github.com/open-telemetry/opentelemetry-demo.git

Then change to the demo folder:
cd opentelemetry-demo/

Retrieve data shipping tokens from your Logz.io account

If you don't have a Logz.io account, follow this link to open a free trial account in just a few clicks.

Once your account is set, make sure Logs, Metrics and Traces tabs are all enabled, as we’ll be exploring all these telemetry types in this tutorial.

Next you’ll need your data shipping tokens, for sending logs, metrics and traces to your account of choice (you may manage multiple accounts). Go to Settings→Manage Tokens→Data shipping tokens (or click here), to grab the tokens for your logs, metrics and traces respectively. On the same screen you will find the “Listener URL”. We will use these tokens and Listener URL in the next step.

Lastly, look up your account region at Settings→General. Make note of the two-letter region code at the start of the region designation, such as ‘us’ for AWS us-east-1 or ‘eu’ for AWS eu-central-1 (users can open accounts in various regions around the globe). You will use this two-letter region code later.
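As a sketch of how that region code typically maps to a listener hostname, here is a hypothetical helper. It assumes Logz.io's usual convention, where the default 'us' region uses the bare listener host and other regions get a suffix; always verify the exact Listener URL shown on your Manage Tokens screen:

```python
def listener_host(region_code):
    """Build the Logz.io listener hostname from a two-letter region code.

    Assumption: the default 'us' region uses listener.logz.io, and other
    regions append '-<code>'. Check the Manage Tokens screen to confirm
    the host for your account.
    """
    region_code = region_code.lower()
    if region_code == "us":
        return "listener.logz.io"
    return f"listener-{region_code}.logz.io"
```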

Your observability backend is now all set to receive logs, metrics and traces. Now let’s get the application up and running to start generating some telemetry.

Configure OpenTelemetry Collector to export telemetry to the Logz.io backend

OpenTelemetry Collector is in charge of collecting all the telemetry data generated by the various microservices and components of our app, processing the telemetry, and then exporting it to the backend of choice. In this tutorial I will use the Logz.io exporter, which can be found in the OpenTelemetry Collector Contrib repository on GitHub.

The collector comes with a default configuration for the demo, and you can override and extend that configuration by editing the override file src/otelcollector/otelcol-config-extras.yml:

You can copy the configuration from the below gist, or from GitHub here:

exporters:
  logzio/traces:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"

  prometheusremotewrite:
    endpoint: "https://<<LISTENER-HOST>>:8053"
    headers:
      Authorization: "Bearer <<METRICS-SHIPPING-TOKEN>>"

  logzio/logs:
    account_token: "<<LOGS-SHIPPING-TOKEN>>"

processors:
  batch:
    send_batch_size: 10000
    timeout: 1s

service:
  pipelines:
    traces:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ logzio/traces, logzio/logs, spanmetrics ]
    metrics:
      receivers: [ otlp, spanmetrics ]
      exporters: [ prometheusremotewrite ]
    logs:
      receivers: [ otlp ]
      processors: [ batch ]
      exporters: [ logzio/logs ]

You can review the default configuration in otelcol-config.yml in the same folder.

Deploy the demo app to Docker

Now it’s time to deploy the app and start generating some interesting telemetry.

We will deploy with Docker, using Docker Compose:
docker compose up --no-build

Note: If you're running on Apple Silicon, run docker compose build before deploying, in order to build the images locally instead of pulling them from the registry.

Wait for the deployment to finish successfully and for all the services to reach a running state. Then open the Astronomy Shop application at http://localhost:8080/

Simulate traffic with Load Generator

While you can click through the app manually in the browser to get some activity going, you most likely want heavier traffic, to have more interesting telemetry data to observe and investigate. That's what the Load Generator is for.

Open the Load Generator UI in the browser at http://localhost:8080/loadgen/

It should already be on by default, and you can start, stop and configure the load test with the desired peak concurrency and spawn rate:

Verify that the data reaches your Logz.io account

Go to your Logz.io account. If the data reached the backend, you should see the 'Data Received' indicator on the top-left corner. You can also open the Home Dashboard by clicking 'Home' on the left-hand navigation bar (or click here), and see an overview of the amount of logs, metrics (measured in unique time series, or UTS), and distributed tracing spans received.

Explore your logs, metrics and traces

You can now explore your logs by clicking Logs → OpenSearch Dashboards → Discover on the left-hand navigation bar (or click here). Logz.io's Log Management service is based on the OpenSearch open source project, so if you are familiar with OpenSearch or with Elasticsearch and Kibana, you will feel at home in this Discover view:

We also run Artificial Intelligence to surface errors in the current log search results, which you can find on the “Exceptions” tab inside the Discover view.

You can also create your own OpenSearch dashboards and visualizations on top of the log data, quite the same as you would in the open source OpenSearch or Kibana.

You can explore your traces with Logz.io's Distributed Tracing service, which is based on the Jaeger open source project, by clicking Traces → Jaeger on the left-hand navigation bar.

If your traces arrived successfully, you will be able to select a specific microservice of the demo app to see its traces. Try the Frontend service, as this is the entry point to the app. Once you click one of the traces, you can explore it using Jaeger UI's popular Timeline View.

You can also get an application performance monitoring view of the Astronomy Shop app, by looking at the performance metrics of each of the running services and their individual service operations. These metrics are aggregated from the tracing data, and are referred to as Span Metrics in Jaeger and in OpenTelemetry.
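To make the span-metrics idea concrete, here is a minimal, hypothetical sketch (not the demo's or Jaeger's actual implementation) of how span records can be aggregated into per-service R.E.D. metrics:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Span:
    service: str
    operation: str
    duration_ms: float
    is_error: bool

def red_metrics(spans, window_s):
    """Aggregate spans into R.E.D. metrics per service:
    Rate (requests/sec), Errors (error ratio), Duration (median latency)."""
    out = {}
    for service in {s.service for s in spans}:
        svc_spans = [s for s in spans if s.service == service]
        errors = [s for s in svc_spans if s.is_error]
        out[service] = {
            "rate_rps": len(svc_spans) / window_s,
            "error_ratio": len(errors) / len(svc_spans),
            "latency_ms_p50": median(s.duration_ms for s in svc_spans),
        }
    return out
```

Real span-metrics pipelines work along these lines, but bucket latencies into histograms and attach dimensions such as operation name and status code.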

To see this APM data, click App 360 on the left hand navigation bar (or click here). You can explore the service list with all the services of the Astronomy Shop app, whether your own or third party (such as Kafka and Redis in this case), each with its R.E.D. performance metrics (request rate, error rate and latency) and the overall calculated impact. You can click a service of interest to drill down and see its individual operations and their performance.

List of services and their performance overview in App 360

You can explore your metrics data with Logz.io's Prometheus-as-a-Service, Infrastructure Monitoring. Click Metrics → Metrics on the left-hand navigation bar to open it. If you are familiar with open source Grafana, you will feel at home here. The home screen is another good place to verify that your metrics data arrived at the backend and to see its volume.

Now let’s import OpenTelemetry Demo’s Grafana dashboards, to explore the application further. You can find a premade dashboard for monitoring the microservices here, from which you can copy the dashboard’s JSON.

To import this dashboard, go to the Metrics tab, and then go to Dashboards → Browse (or click here) and click the “Import” button to import the JSON (you can simply paste the JSON in the text editor. Alternatively, you can save it locally and then upload the file).

When importing, you will be prompted to choose the datasource, which is essentially your Metrics account to which you sent the data. You can create a designated folder and place the imported dashboards there, as I did, by clicking “New folder”.

Then you can select the new dashboard, on the same “Browse” tab, to monitor your system.

You can also try out the OpenTelemetry Collector Data Flow dashboard here, by following the same steps to import it.

Next you can compose your own Grafana dashboards to suit your own observability preferences and to explore the data in greater depth, as well as define alerts on your metrics and logs.

But what are we trying to investigate, really?

Explore OpenTelemetry Demo with built-in simulated scenarios 

Now that all the telemetry data is collected, stored, and visualized, we can get to the fun part – investigating our application. OpenTelemetry Demo provides built-in scenarios to experience how to solve problems with OpenTelemetry. These scenarios walk you through some pre-configured problems and show you how to interpret OpenTelemetry data to solve them.

The current release of the OpenTelemetry Demo provides the following built-in scenarios:

  • Discover a memory leak and diagnose it using metrics and traces. 
  • A Product Catalog error for GetProduct requests with a specific product ID.
  • Sporadic failure of requests to the ad service, and similarly to the cart service. 

The simulated issues can be turned on via feature flags, and you can easily turn them on/off via the Feature Flag service’s web UI at http://localhost:8080/feature/.

You can add your own scenarios and feature flags, or just explore the system’s behavior from various perspectives.

Advanced demo: Using Logz.io's Telemetry Collector for Full Observability

So far, we've used the vanilla OpenTelemetry Collector, and with a few minor configurations of its YAML we were able to send the telemetry to the Logz.io backend and use the basic functionality of the managed Jaeger, Prometheus and OpenSearch analytics backends.

But Logz.io Open 360™ offers much more than that. It offers unified cloud-native observability, with a 360-degree view of your system across infrastructure and application, including Kubernetes workloads, serverless and more. 

The demo app visualized in the Service Map, App 360

To help you leverage the full observability power and accelerate onboarding, Logz.io released the Telemetry Collector, an open source agent based on an open source distribution of the OpenTelemetry Collector, preconfigured to work with Logz.io. It bundles in all the setup around the OpenTelemetry Collector needed to ship logs, metrics and traces to Logz.io, across different operating systems and data sources, such as managed Kubernetes cloud providers. The Telemetry Collector comes with many more amenities: instead of dealing with the OpenTelemetry Collector's YAML, it provides an easy-to-use configuration wizard within the Logz.io UI. Simply fill in a few basic parameters, and Logz.io generates a script to deploy the agent across your system.

To further simplify data collection with the Telemetry Collector, Logz.io recently launched Easy Connect, which automatically discovers all your services running on Kubernetes, while providing the option to collect logs, metrics, and/or traces from each service. 

Easy Connect also provides automatic detection of the programming languages running in the pods and auto-instrumentation of the apps in these pods with a single click. Logz.io comes bundled with premade dashboards for popular frameworks and much more, to save you much of the hassle. The Telemetry Collector provides an out-of-the-box configuration for monitoring your Kubernetes workloads, whether managed or self-hosted. Once you have it running and shipping telemetry to Logz.io, you can monitor and analyze your Kubernetes clusters with the Kubernetes 360™ dashboard, which automatically surfaces critical health and performance insights from your clusters.

These are just some of Logz.io's observability capabilities; make sure to check out the Open 360™ platform page to find out more. 

For more details on the Telemetry Collector, check out the user guide.

Final Notes

The OpenTelemetry demo is a useful way to start exploring the OpenTelemetry project and its fit with your observability needs and practices. The demo is maintained by the community, along with the rest of this highly active project, and offers a wide range of supported backends.

Try it out to get hands-on with OpenTelemetry. Don't hesitate to tweak the code base, the application instrumentation, or the Collector, to experiment with different setups. And check out Logz.io's offering to enjoy OpenTelemetry as part of a holistic observability platform, based on the best open source stack. 
