In this tutorial, we will go through a working example of a Python application auto-instrumented with OpenTelemetry. To keep things simple, we will create a basic “Hello World” application using Flask, instrument it with OpenTelemetry’s Python client library to generate trace data and send it to an OpenTelemetry Collector. The Collector will then export the trace data to an external distributed tracing analytics tool of our choice.
If you are new to the OpenTelemetry open source project, or if you are looking for more application instrumentation options, check out this guide.
Our Example Application
Our example application is a locally hosted server that responds with “Hello, World!“ every time we access it. Let’s first create a dedicated directory for this application:
mkdir python-hello-world-otel
cd python-hello-world-otel
Now, we can create a Python script with the following configuration:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Web App with Python Flask!'

app.run(host='0.0.0.0', port=81)
Let’s save this file as server.py.
Installing a General Python Package
As we are using the Flask framework in our example, we need to install this package in the directory of our app, python-hello-world-otel:
pip3 install flask
Installing OpenTelemetry Components
In our next step, we will need to install all OpenTelemetry components that are required to auto-instrument our app:
Note that this example was tested with OpenTelemetry version 0.32b0.
To install these packages, we run the following commands from our application directory:
pip3 install opentelemetry-distro
pip3 install opentelemetry-instrumentation
pip3 install -Iv protobuf==3.20.1
These packages provide automatic instrumentation of our web requests, which in our case are handled by Flask. This means that we don’t need to change anything in our Python script to capture and emit trace data. In some cases, you may want to augment the auto-instrumentation with manual instrumentation in your Python code in order to collect more fine-grained trace data on specific pieces of your code.
Note: For the sake of simplicity, we used the opentelemetry-instrumentation library, which covers all supported Python libraries. Of course, if you want to keep things as light as possible, you can selectively install only those packages that are applicable to your application (such as opentelemetry-instrumentation-flask for Flask instrumentation).
Installing Application-Specific OpenTelemetry Packages
In this step, we will install the instrumentation packages for the libraries used in our application. To do this, we run the following command from our application directory; it detects the libraries installed in our environment (such as Flask) and installs the matching instrumentation packages:

opentelemetry-bootstrap -a install
Installing and Configuring the OpenTelemetry Exporter
Now, we need to install the OpenTelemetry exporter and configure it to send traces from our application to the required endpoint on our local machine. Let’s install the exporter first:
pip3 install opentelemetry-exporter-otlp
Now, we are going to configure environment variables specific to our exporter:
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_RESOURCE_ATTRIBUTES="service.name=test"
export OTEL_METRICS_EXPORTER=""
In the above configuration, we export our trace data using the OpenTelemetry Protocol (OTLP). In addition, we set the export endpoint to localhost:4317 and assign the name “test” to our tracing service as a resource attribute.
Downloading and Configuring the OpenTelemetry Collector
The last component that we will need is the OpenTelemetry Collector, which we can download here.
In our example, we will be using the otelcontribcol_darwin_amd64 flavor, but you can choose any other version of the collector from the list, as long as the collector is compatible with your operating system.
The data collection and export settings in the OpenTelemetry Collector are defined by a YAML config file. We will create this file in the same directory (python-hello-world-otel) as the collector file that we have just downloaded and call it config.yaml. This file will have the following configuration:
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logzio:
    account_token: "<<TRACING-SHIPPING-TOKEN>>"
    #region: "<<LOGZIO_ACCOUNT_REGION_CODE>>" - (Optional)

processors:
  batch:

extensions:
  pprof:
    endpoint: :1777
  zpages:
    endpoint: :55679
  health_check:

service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logzio]
In this example, we will send the traces to Logz.io’s Distributed Tracing service, which is based on open source Jaeger. So, we will configure the collector with the Logz.io exporter, which sends traces to the Logz.io account defined by the account token (if you don’t have an account, you can get a free one here). However, you can also export the trace data from the OpenTelemetry Collector to any other tracing backend by adding the required exporter configuration to this file (you can read more about exporter options here).
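As an illustration of swapping in a different backend, the exporter section could be replaced with the collector’s built-in otlphttp exporter (the endpoint URL below is a placeholder, not a real backend):

```yaml
exporters:
  otlphttp:
    endpoint: "https://collector.example.com:4318"  # placeholder backend URL

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The rest of the config (receivers, processors, extensions) stays the same; only the exporter entry and the pipeline’s exporters list change.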
Running it All Together
Now that we have everything set up, let’s send some traces.
First, we need to start the OpenTelemetry Collector. We do this by specifying the path to the collector and the required config.yaml file. In our example, we run both files from our python-hello-world-otel directory as follows:
./otelcontribcol_darwin_amd64 --config ./config.yaml
The collector is now running and listening to incoming traces on port 4317.
Our next step is to start our application:
opentelemetry-instrument python3 server.py
All that is left for us to do at this point is to visit
http://localhost:81 as we specified in our original application script and then refresh the page, triggering our app to generate and emit a trace of that transaction (repeat that a few times to have several sample traces to look at). The Collector will then pick up these traces and send them to the distributed tracing backend defined by the exporter in the collector config file.
Let’s check the Jaeger UI to make sure our traces arrived OK:
The traces are in Jaeger, ready for us to visualize and analyze them.
Advanced: Adding Tracing Context to Python Logs
In the above example, we’ve added distributed tracing to our Python application. However, your application likely also emits another type of telemetry data: logs. How can you correlate your logs with your tracing data? Once you’ve added tracing instrumentation with OpenTelemetry (whether automatic or manual), a single configuration line in your Python logging setup is enough to automatically add the tracing metadata to your logs. The tracing metadata includes the trace ID, the span ID, and the service name. This can be achieved with OpenTelemetry’s Python Logging Instrumentation library. You can read more about it here.
As you can see, OpenTelemetry makes it pretty simple to automatically instrument Python applications. All we had to do was:
- Install required Python packages
- Install OpenTelemetry components
- Install application-specific OpenTelemetry packages
- Install and configure the exporter
- Download the OpenTelemetry Collector and configure it to receive the trace data and send it to our tracing analytics tool
And most importantly, we didn’t need to add a single line of code to our Python script! For more information on OpenTelemetry instrumentation, visit this guide. If you’re interested in manual instrumentation options for Python, check out the OpenTelemetry GitHub repository. If you’d like to try this integration out with the Logz.io backend, feel free to sign up for a free account and then follow this documentation to set up auto-instrumentation for your own Python application.