Distributed Tracing for C++ Applications with OpenTelemetry & Logz.io
Many organizations are moving from monolithic to microservices-based architectures. Microservices allow them to improve their agility and deliver features more quickly. Although developing a single microservice is simpler, the complexity of the overall system is much greater. Here, we’ll review how to add distributed tracing to a C++ application with the OpenTelemetry collector and send the traces to Logz.io.
One of the biggest challenges is finding efficient tools to quickly debug and solve production problems. This is done through logging, distributed tracing, and analyzing various metrics.
OpenTelemetry is a collection of tools to monitor applications and infrastructure by instrumenting, generating, collecting, and exporting telemetry data. This includes metrics, logs, and traces.
Check out our OpenTelemetry guide for more.
Typically, this data is streamed to some sort of user interface (UI), where it can be filtered, searched, and monitored from one place. Jaeger is one example of this type of tool.
The microservices approach promotes using various programming interfaces. While you can find many examples of how to use distributed tracing in high-level programming languages, including C#, Java, Go, and Node.js, many companies use C++ to create highly performant applications.
To keep the discussion clear, we’ll use an app composed of two microservices that communicate with each other through HTTP. Although we can choose from several different exporters, or make our own, we’re using Logz.io here.
You can find the code for the Logz.io OpenTelemetry collector on GitHub.
Getting Started
We’ll start by creating two HTTP services in C++ named service-a and service-b. Both expose a ping endpoint. Additionally, when the ping endpoint of service-a is invoked, it automatically sends a ping to service-b.
To implement HTTP server and client functionality, we’ll be using the Pistache framework. We can build the framework ourselves or install it as a dependency in various ways (including a CMake build configuration and Pkg-Config). The companion code to this tutorial uses a CMake build configuration to compile Pistache from the source code that we previously cloned from the Git repository (see the common/pistache subfolder).
The HTTP server-side code looks the same for both services (see service.hpp under the service-a or service-b subfolders). We’ll first set up routes, like this:
using namespace Rest;
class Service {
// …
Router router;
void setupRoutes()
{
Routes::Get(router, "/ping", Routes::bind(&Service::ping, this));
}
// …
}
In the above code, we configure the ping route to be handled by the method of the same name. The ping method writes a message to the console, then responds to the caller with HTTP Status code 200 and a Hello from service-a/b constant string:
const std::string serviceName = "service-a";
void ping(const Rest::Request& request, Http::ResponseWriter writer)
{
std::cout << "\n---=== " << serviceName << "===---\n";
writer.send(Http::Code::Ok, "Hello from " + serviceName);
}
Once we have this, we need to initialize the endpoint:
std::shared_ptr<Http::Endpoint> httpEndpoint;
void initEndpoint()
{
auto opts = Http::Endpoint::options().threads(4);
httpEndpoint->init(opts);
}
Then we’ll start serving, like so:
void initAndStart()
{
initEndpoint();
setupRoutes();
std::cout << "Listening at: " << address.host() <<
":" << address.port().toString() << std::endl;
httpEndpoint->setHandler(router.handler());
httpEndpoint->serve();
}
The entry points of each service then use all the above steps (see main.hpp):
int main() {
Address address(Ipv4::any(), Port(8081));
Service serviceA(address);
serviceA.initAndStart();
}
To send HTTP requests from service-a to service-b, we use the Pistache HTTP Client (see serviceA.hpp):
void sendPingToAnotherService(std::string hostName, std::string serviceName)
{
auto resp = httpClient.get(hostName).send();
resp.then(
[&](Http::Response response) {
std::cout << "Response from " << serviceName << ":\n";
std::cout << "Code = " << response.code() << std::endl;
auto body = response.body();
if (!body.empty())
std::cout << "Body = " << body << std::endl;
},
[&](std::exception_ptr exc) {
std::cout << "Error..." << std::endl;
});
}
The above method sends the GET request. Then, depending on the response, it either prints the response body and code to the console, or it displays an “Error…” message.
sendPingToAnotherService expects you to pass the destination host name and the associated service name. Additionally, it uses an instance of the HttpClient class that we previously initialized as follows:
Http::Client httpClient;
void initHttpClient()
{
auto opts = Http::Client::options().threads(1).maxConnectionsPerHost(8);
httpClient.init(opts);
}
Adding Tracing
With the application ready, we can add tracing to monitor the responses from service-b. This is a common need whenever our microservices interact with other microservices or external APIs.
To add tracing, we start by adding the OpenTelemetry C++ client (version 0.6 at the time of writing). We clone its repository under the common folder of our project. Then, we create the CMakeLists.txt to include both the OpenTelemetry C++ client and Pistache in our build:
cmake_minimum_required(VERSION 3.6)
# Project name and version
project(serviceA VERSION 1.0)
# Pistache requires C++ 17
set(CMAKE_CXX_STANDARD 17)
# Executable
add_executable(${PROJECT_NAME} main.cpp)
# Subdirectories
add_subdirectory("../common/opentelemetry-cpp"
"${CMAKE_CURRENT_BINARY_DIR}/opentelemetry-cpp_build")
add_subdirectory("../common/pistache"
"${CMAKE_CURRENT_BINARY_DIR}/pistache_build")
# Dependencies
include_directories("../common/opentelemetry-cpp/exporters/jaeger/include")
target_link_libraries(${PROJECT_NAME} PUBLIC
opentelemetry_trace
jaeger_trace_exporter
pistache
thrift::thrift_static)
To export the traces to Logz.io, we use the OpenTelemetry collector. This tool looks for spans generated on the local host and exports them to the Logz.io Jaeger backend. The OpenTelemetry collector works as a sidecar or ambassador.
Here, we use the Jaeger exporter from OpenTelemetry. It sends traces from the app over UDP port 6831 using the Thrift compact protocol. Hence, the OpenTelemetry Jaeger exporter depends on Apache Thrift.
To install it, we use Hunter in our CMakeLists.txt:
# Hunter is used to get thrift (a package required by jaeger-exporter)
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake")
include(HunterGate)
option(HUNTER_BUILD_SHARED_LIBS "Build Shared Library" ON)
HunterGate(
URL "https://github.com/cpp-pm/hunter/archive/v0.23.249.tar.gz"
SHA1 "d45d77d8bba9da13e9290a180e0477e90accd89b"
LOCAL # load `${CMAKE_CURRENT_LIST_DIR}/cmake/Hunter/config.cmake`
)
# Hunter is used to get thrift (a package required by jaeger-exporter)
hunter_add_package(thrift)
find_package(thrift CONFIG REQUIRED)
The CMakeLists.txt for service-b is the same; the two differ only in the project name (serviceA or serviceB).
Next, we need to configure the tracer. Here, we’ll keep the relevant code in common/tracer.hpp. This file has three methods: setUpTracer, getTracer, and trace.
The first one (shown below) is responsible for initializing the tracing. It configures the span exporter and processor and sets the global trace provider:
#include "opentelemetry/exporters/jaeger/jaeger_exporter.h"
#include "opentelemetry/sdk/trace/simple_processor.h"
#include "opentelemetry/sdk/trace/tracer_provider.h"
#include "opentelemetry/trace/provider.h"
namespace sdktrace = opentelemetry::sdk::trace;
using namespace opentelemetry::exporter::jaeger;
void setUpTracer(bool inCompose)
{
JaegerExporterOptions options;
options.server_addr = inCompose ? "jaeger" : "localhost";
auto exporter = std::unique_ptr<sdktrace::SpanExporter>(
new opentelemetry::exporter::jaeger::JaegerExporter(options));
auto processor = std::unique_ptr<sdktrace::SpanProcessor>(
new sdktrace::SimpleSpanProcessor(std::move(exporter)));
auto provider = opentelemetry::nostd::shared_ptr<opentelemetry::trace::TracerProvider>(
new sdktrace::TracerProvider(std::move(processor)));
opentelemetry::trace::Provider::SetTracerProvider(provider);
}
Note that we use JaegerExporterOptions to configure the endpoint. When the service runs in a separate container, as with Docker Compose, we set the endpoint to jaeger (the name of the container running the Jaeger tracer). Otherwise, we set the endpoint to localhost.
The second method, getTracer, obtains the tracer from the global provider:
opentelemetry::nostd::shared_ptr<opentelemetry::trace::Tracer> getTracer(
std::string serviceName)
{
auto provider = opentelemetry::trace::Provider::GetTracerProvider();
return provider->GetTracer(serviceName);
}
The trace method uses getTracer to report the span:
void trace(std::string serviceName, std::string operationName)
{
auto span = getTracer(serviceName)->StartSpan(operationName);
span->SetAttribute("service.name", serviceName);
// auto scope = getTracer(serviceName)->WithActiveSpan(span);
// do something with span, and scope then...
//
span->End();
}
Here, we just report the span with the tracer object’s StartSpan method. In general, however, given a reference to the span object, you can add child spans, handle context (like HTTP response codes), or add baggage. Here, we use the span object to set the service.name attribute to the service name. When you are done, just invoke span->End().
We add the setUpTracer method to the initAndStart method of each service:
void initAndStart()
{
initEndpoint();
setupRoutes();
initHttpClient();
setUpTracer(true);
std::cout << "Listening at: " << address.host() <<
":" << address.port().toString() << std::endl;
httpEndpoint->setHandler(router.handler());
httpEndpoint->serve();
}
Then, we need to update the ping methods. Here is the example for service-a:
void ping(const Rest::Request& request, Http::ResponseWriter writer)
{
std::cout << "\n---=== " << serviceName << "===---\n";
trace(serviceName, serviceName + ": received ping");
trace(serviceName, "sending ping to service-b");
sendPingToAnotherService("http://service-b:8082/ping", "service-b");
writer.send(Http::Code::Ok, "Hello from " + serviceName);
}
At this point, we can build the services. We’ll go to the subfolder of each service (that is, service-a or service-b), then invoke the following commands:
mkdir build && cd build
cmake -DBUILD_TESTING=OFF -DWITH_EXAMPLES=OFF -DWITH_JAEGER=ON .. && make
This starts the build operation, after which you can run the service: ./serviceA or ./serviceB. Once they start, send a GET request to localhost:8081/ping for service-a or localhost:8082/ping for service-b. We should get a response like in the screenshot below:
But, to make everything more portable and cloud native, we containerize both services using the following Dockerfile:
FROM ubuntu:20.10
# Working directory
WORKDIR /usr/src/
COPY . /usr/src/
# Install dependencies
RUN apt-get update && \
apt-get install -y cmake build-essential git rapidjson-dev && \
apt install -y software-properties-common
# Build the service
WORKDIR /usr/src/service-a/
RUN mkdir build && cd build && \
cmake -DBUILD_TESTING=OFF -DWITH_EXAMPLES=OFF -DWITH_JAEGER=ON .. && \
make
# Start the service
CMD ["./build/serviceA"]
As shown above, we use Ubuntu Groovy Gorilla as a base image. We then copy the source code to /usr/src in the Docker image.
Afterward, we install the dependencies (CMake, build-essential, software-properties-common, git, and rapidjson-dev). Subsequently, we build the service with CMake so that the binaries are accessible under the serviceA or serviceB build folder.
To make a multi-container app, we add the following docker-compose.yml that builds both services and adds the Jaeger backend:
version: '3'
services:
service-a:
build:
context: .
dockerfile: service-a/Dockerfile
ports:
- "8081:8081"
service-b:
build:
context: .
dockerfile: service-b/Dockerfile
ports:
- "8082:8082"
jaeger:
image: jaegertracing/all-in-one
container_name: jaeger
ports:
- "16686:16686" # Jaeger UI
- "6831:6831/udp" # Thrift Udp Compact
At this point, note several important elements. First, the services add mappings for ports 8081 and 8082, which are hardcoded in the implementation (refer to the service.hpp files). Second, remember that Docker Compose creates a network in which containers can be reached via the service names listed in docker-compose.yml. That is why service-a sends its ping to http://service-b:8082/ping, and why we need to set the localAgentHostPort entry in the config.yml files accordingly.
To test the solution, we’ll start the containers with one of the following, depending on which Docker version is being used:
docker compose up --build
or
docker-compose up --build
This starts the build, then instantiates your containers. Here is a screenshot of the build operation:
This screenshot is immediately after the containers start:
We can now send GET requests to the services:
We should observe output similar to:
Finally, we can navigate to the local Jaeger backend (localhost:16686) to see our traces:
Note that the current version of the OpenTelemetry C++ client does not properly report the service name. Hence, we see unknown_service in the Jaeger UI. However, we added the service.name tag to identify the services:
Using OpenTelemetry to Send C++ Traces to Logz.io
It’s now straightforward to switch from the local configuration to remote Logz.io distributed tracing. We’ll just supplement the docker-compose.yml with the OpenTelemetry collector (configured with the Logz.io exporter) and instruct the Jaeger agent to stream data to it as follows:
version: '3'
services:
service-a:
build:
context: .
dockerfile: service-a/Dockerfile
ports:
- "8081:8081"
service-b:
build:
context: .
dockerfile: service-b/Dockerfile
ports:
- "8082:8082"
jaeger:
image: jaegertracing/all-in-one
container_name: jaeger
ports:
- "16686:16686"
command: ["--reporter.grpc.host-port=otel-logzio:14250"]
otel-collector:
image: otel/opentelemetry-collector-contrib:0.17.0
container_name: otel-logzio
command: ["--config=/etc/otel-collector-config.yml"]
volumes:
- ./config.yml:/etc/otel-collector-config.yml
ports:
- "13133:13133" # health_check extension
depends_on:
- jaeger
Then, we’ll add config.yml with the collector configuration. It should include our account token and region, which we get from Logz.io. Here is a sample config.yml that receives the local Jaeger spans and exports them to Logz.io:
receivers:
jaeger:
protocols:
grpc:
thrift_compact:
exporters:
logzio:
account_token:
region:
processors:
batch:
extensions:
health_check:
service:
extensions: [health_check]
pipelines:
traces:
receivers: [jaeger]
processors: [batch]
exporters: [logzio]
After restarting the compose with docker compose up and sending some pings to one of the services, we should see our traces streaming to the remote Jaeger UI at Logz.io. You can see that in the figure below:
Alternatively, we can use the Jaeger Logz.io collector. In that case, our Docker Compose YAML file looks as follows:
version: '3'
services:
service-a:
build:
context: .
dockerfile: service-a/Dockerfile
ports:
- "8081:8081"
service-b:
build:
context: .
dockerfile: service-b/Dockerfile
ports:
- "8082:8082"
jaeger:
image: jaegertracing/all-in-one
container_name: jaeger
ports:
- "16686:16686" # Jaeger UI
- "6831:6831/udp" # Thrift Udp Compact
command: ["--reporter.grpc.host-port=jaeger-logzio:14250"]
# Alternative way of streaming traces to Logz.io
jaeger-logzio:
image: logzio/jaeger-logzio-collector:latest
container_name: jaeger-logzio
ports:
- 14268:14268
- 14269:14269
- 14250:14250
environment:
ACCOUNT_TOKEN:
REGION:
GRPC_STORAGE_PLUGIN_LOG_LEVEL:
Next Steps
In this tutorial, we learned how to instrument C++ microservices for distributed tracing and ship them with the OpenTelemetry collector. We followed the cloud-native approach and containerized microservices. Then, we streamed spans to a local and a remote Jaeger UI at Logz.io.
Now that you know how, you can try it yourself using the companion code, or add tracing to your own C++ applications. If you are looking for a service to store your telemetry data, sign up for a free trial of Logz.io.