eBPF and codeless Kubernetes

Update: A new eBPF Foundation was announced in August 2021, founded by Facebook, Google, Isovalent, Microsoft and Netflix. The new open source foundation will be hosted under The Linux Foundation.

Current observability practice is largely based on manual instrumentation, which requires adding code at relevant points in the user’s business logic to generate telemetry data. This can become quite burdensome and creates a barrier to entry for many wishing to implement observability in their environment. This is especially true in Kubernetes environments and microservices architectures.

eBPF is an exciting technology for Linux kernel level instrumentation, which bears the promise of no-code instrumentation and easier observability into Kubernetes environments (alongside other benefits for networking and security).

See the July episode of OpenObservability Talks on the topic. 

The Promise of Auto-Instrumentation

The industry is still early in its journey of defining the practices around instrumentation.

The most prominent candidate for that is the OpenTelemetry project under the CNCF. OpenTelemetry creates a standard for the different telemetry types and for the API, and offers per-language SDKs with which engineers can manually add code to instrument their applications.

OpenTelemetry is constantly expanding into automatic instrumentation (auto-instrumentation). It identifies popular libraries and frameworks, such as Spring for Java and Django for Python, and adds instrumentation hooks to them. This means that users of these libraries can enjoy out-of-the-box instrumentation with little to no additional coding effort.
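
Django offers a concrete example. As a minimal, hedged sketch (assuming the opentelemetry-instrumentation-django package is installed), a single instrument() call at startup is essentially the only code the application owner adds; the views and business logic stay untouched:

```python
# Assumes: pip install opentelemetry-instrumentation-django
# Typically called once at startup, e.g. in manage.py or wsgi.py.
from opentelemetry.instrumentation.django import DjangoInstrumentor

# One call hooks span creation into Django's request/response handling;
# no instrumentation code is added to the application's views.
DjangoInstrumentor().instrument()
```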

Service meshes offer another path to auto-instrumentation. A service mesh typically runs as a sidecar alongside the application container and routes its traffic. That positions it well to generate telemetry data from the HTTP traffic over ingress and egress.

It is important to note, however, that while these agents and service meshes offer a means for automatic instrumentation, they are not fully codeless – you still need to do some coding to instrument.

It is also important to note that automatic instrumentation may not be enough to satisfy all of your observability needs. Things such as system metrics or application profiling data may require manual instrumentation.

eBPF and the Promise of Zero-Code Instrumentation

eBPF (extended Berkeley Packet Filter) is a technology for the Linux kernel that lets you run your own code within the kernel without modifying the kernel itself.

It does that by adding hooks in the kernel, enabling you to attach probes that run whenever the OS performs a given operation.

For example, every time a file is opened, such a hook can trigger a function of your making, and similarly for other system operations. The probe can run in user space or in kernel space.
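
As a hedged sketch of what that looks like in practice, the following uses the bcc Python toolkit (assuming it is installed and the script is run as root) to attach a tiny probe to the openat syscall and print a trace line every time any process opens a file, without touching the applications being observed:

```python
# Assumes the bcc toolkit is installed and the script is run as root.
from bcc import BPF

# A tiny eBPF program, compiled by bcc and loaded into the kernel.
prog = r"""
int trace_open(struct pt_regs *ctx) {
    bpf_trace_printk("file opened\n");
    return 0;
}
"""

b = BPF(text=prog)
# Attach the probe to the kernel function behind the openat syscall.
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_open")

# Stream the kernel trace output; the observed applications are never modified.
b.trace_print()
```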

BPF started with the use case of filtering network packets, but has since been extended to cover a wide range of system calls and kernel events. This makes eBPF a very interesting technology for observability use cases, as it offers a path for extracting telemetry data without modifying the application code. With that, eBPF offers a new way of achieving auto-instrumentation.

eBPF can also work across different types of traffic, which serves the goal of unified observability. For example, you may use eBPF to collect full-body request traces, database queries, HTTP requests or gRPC streams.

You can also use eBPF to collect system metrics about resource utilization such as CPU usage or bytes sent, which can serve to calculate statistics, as well as profiling data to understand how many resources each function consumes. This sort of hardware or system information is much harder to access when instrumenting with agents or service mesh, which gives eBPF a clear advantage for these use cases. Another advantage of running in the kernel is that eBPF can handle encrypted traffic.
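
As another hedged sketch with bcc, the following counts the bytes each process hands to the kernel's tcp_sendmsg function, one simple way such resource metrics can be collected entirely in the kernel and then read out from user space:

```python
# Assumes the bcc toolkit is installed and the script is run as root.
from time import sleep
from bcc import BPF

prog = r"""
#include <uapi/linux/ptrace.h>
#include <net/sock.h>

// Per-process counter of bytes handed to tcp_sendmsg, kept in a kernel map.
BPF_HASH(sent_bytes, u32, u64);

int trace_tcp_sendmsg(struct pt_regs *ctx, struct sock *sk,
                      struct msghdr *msg, size_t size) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    sent_bytes.increment(pid, size);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_sendmsg", fn_name="trace_tcp_sendmsg")

sleep(10)  # sample for a short window

# Read the in-kernel map from user space and print a simple per-process metric.
for pid, total in b["sent_bytes"].items():
    print(f"pid {pid.value}: {total.value} bytes sent")
```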

eBPF is available in Linux starting with kernel version 4.14. Brendan Gregg has been a major champion who contributed to popularizing eBPF in Linux, and is a great source of in-depth information. With its growing popularity, there is now a new project to enable eBPF on Windows as well.

These Linux superpowers are great, but how can we make sure they are used for good and not evil? Indeed, the community has put a lot of effort into that: eBPF imposes strict requirements on the hooks and probes, and the kernel runs a thorough verification pass on any probe you try to install. For example, these checks will detect potentially infinite loops in your code.
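
As a hedged illustration, again with bcc, a probe containing a loop the kernel cannot prove will terminate is expected to be rejected at load time rather than ever spinning inside the kernel:

```python
# Assumes the bcc toolkit is installed and the script is run as root.
from bcc import BPF

# The kernel must be able to prove that every probe terminates; an
# unbounded loop like this one should be rejected when the program is loaded.
bad_prog = r"""
int spin(struct pt_regs *ctx) {
    while (1) { }
    return 0;
}
"""

try:
    b = BPF(text=bad_prog)
    b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="spin")
except Exception as err:
    print("probe rejected by the kernel:", err)
```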

Performance is another sensitive question that comes with these Linux superpowers. eBPF programs are compiled to bytecode, which the kernel JIT-compiles to native code so that probes execute efficiently. Nonetheless, writers of eBPF probes are guided to properly design and structure their probes, not just to pass the verification checks but also to be mindful of overhead.

Netflix Uses eBPF Flow Logs at Scale for Network Insight

In an excellent blog post published recently, Netflix engineering showed how it uses eBPF flow logs at scale for network insight.

According to the post, Netflix has developed a network observability sidecar that uses eBPF tracepoints to capture TCP flows in near real time. It works at high scale, ingesting and enriching billions of eBPF flow logs per hour.
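
Netflix's pipeline itself is not published as code, but as a rough, hedged sketch of what capturing TCP socket events from an eBPF tracepoint can look like with bcc, consider:

```python
# Assumes the bcc toolkit is installed and the script is run as root.
# Not Netflix's implementation; just a minimal sketch of reading TCP socket
# state transitions from the sock:inet_sock_set_state tracepoint.
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(sock, inet_sock_set_state) {
    // Field names come from the tracepoint's format definition.
    bpf_trace_printk("socket state change: %d -> %d\n",
                     args->oldstate, args->newstate);
    return 0;
}
"""

b = BPF(text=prog)
b.trace_print()
```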

The enriched data allows Netflix to analyze networks across a variety of dimensions (e.g. availability, performance, and security), to ensure applications can effectively deliver their data payload across a globally dispersed cloud-based ecosystem. 

Despite the scale, they report a highly performant sidecar, consuming less than 1% of CPU and memory on the instance. Check out the Netflix blog for more on that. 

Gaining Kubernetes Observability with Pixie OSS and eBPF

The cloud native community has been struggling with the challenge of Kubernetes observability, and is looking to eBPF for help. Pixie is a new open source project that uses eBPF to provide baseline observability into Kubernetes deployments. Right out of the gate, Pixie lets you understand which system talks to which, which is consuming the most resources, and where requests spend most of their time, to name a few examples. Pixie uses eBPF to access different telemetry data such as traces, resource utilization and application profiles.

Pixie was just accepted into the CNCF as a sandbox project. It was developed by Pixie Labs, a startup that was recently acquired by New Relic, which then open sourced and donated it to the CNCF.

As a cloud native project, Pixie is designed to run on a Kubernetes cluster and enriches the data with Kubernetes metadata such as pods, nodes and clusters. Now that it is an official CNCF project, the plan is to integrate with Prometheus and other parts of the CNCF ecosystem.

In terms of performance, Pixie probes reportedly consume no more than 5% CPU overhead, and typically under 2%. 

Endnote

eBPF is a very powerful technology, and the industry is still just scratching the surface of what can be done with it. Just as LXC existed for many years until Docker came along and used it to ignite the application containerization movement, we may very well see a similar movement around eBPF. I expect we shall see many more use cases for it in the observability space, as well as in security and networking. eBPF may bring us closer than ever before to the vision of automatic instrumentation.
