Collecting Metrics from Windows Kubernetes Nodes in AKS

Windows and AKS

Windows applications constitute a large portion of the services and applications that run in many organizations. When moving to a Kubernetes-based architecture, there is a need to support these as well. Up until April 2020, the lack of container support within the Windows operating system left Linux container images as the only viable option for Kubernetes container deployment. Following the release of Windows containers, there is now a Windows-native way to encapsulate processes and package dependencies, making it easier to use DevOps practices and follow cloud-native patterns for Windows containerized applications.

Differences Between Windows and Linux Kubernetes Containers

As a DevOps engineer who maintains or has maintained a Kubernetes cluster, you probably already understand the importance of full observability into your cluster, pods, and containers.

When running a cluster with only Linux nodes, gaining that visibility is relatively simple. You will need 4 main components:

  • Node Exporters – installed on each node as a privileged container (meaning the container has the capabilities of the host machine, allowing it to access resources that ordinary containers cannot) to collect the node's system metrics.
  • Kube-state-metrics – a component that exposes cluster-level metrics in Prometheus format.
  • Collector – Prometheus or any other tool (such as OpenTelemetry or Telegraf) to collect the metrics from kube-state-metrics and the node exporters.
  • Observability solution – a place to store your metrics, visualize them, and alert on any issue. The most popular self-managed option today is Prometheus; for a cloud-based option there are many solutions out there, such as Logz.io. A minimal scrape configuration for this setup is sketched below.
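
For reference, here is a minimal sketch of what the Prometheus scrape configuration for such a Linux-only stack might look like. It assumes node exporter pods labeled app: node-exporter and a kube-state-metrics service in the kube-system namespace; both names are illustrative placeholders:

scrape_configs:
  - job_name: node-exporter                 # host-level metrics from each Linux node
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        action: keep
        regex: node-exporter
  - job_name: kube-state-metrics            # cluster object metrics
    static_configs:
      - targets: ["kube-state-metrics.kube-system.svc:8080"]   # placeholder service address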

As of today, and until Kubernetes v1.23 is available in AKS (Azure Kubernetes Service), this solution unfortunately will not work for Windows containers, as they don't support the required privileged permissions, which means we cannot gather reliable information about the host machine.

Current Solutions for Exporting Metrics from AKS Windows Containers

In AKS, every node pool is a VMSS (virtual machine scale set), which in turn runs a VM for each node.

The available and viable solutions for this issue are scarce, and they are often not generic enough to be used by any user. The best of them are:

1. aidaspsibr’s solution – Provides an easy way to install an extension on the Windows node pools, which in turn requires additional customization in order to install the Windows exporter on every node that is currently running or will run in the future.

2. Octopus’s solution – Extending aidaspsibr’s solution, Octopus uses the Terraform provider for Azure to ensure all Windows nodes in the cluster run the node exporter, and a reverse proxy to expose the node endpoint for metrics gathering.

Unfortunately, neither solution is generic enough to be used in any cluster, as not everyone uses those specific CI/CD and infrastructure tools in their cluster.

We’ve Got a Better Way to Export Metrics from a Windows Node Host

To help the community, we released a Helm chart solution based on OpenTelemetry, so anyone who wants to collect metrics can do it easily. In our solution we pointed OpenTelemetry to Logz.io, but you can change it to any other backend supported by OpenTelemetry.

First, we need to establish a way to connect to a Windows node in a direct or indirect way, and install the Windows exporter in the node machine.

We can do this by opening an SSH connection to each Windows node from a privileged Linux container.

For this solution, there are 3 components:

1. A privileged Linux container job, which will use SSH to connect to all the Windows nodes and install the Windows exporter on them.

2. A reverse proxy (Credit to Octopus) which will allow us to expose the /metrics endpoint on the Windows node.

3. A collector, OpenTelemetry, to scrape the metrics automatically and forward them to the backend of your choice (in our example Logz.io).

Step 1: Privileged Linux Node – Windows Exporter Installer Job

In order to connect to a Windows node, we must use the network of another node in the cluster. We can achieve this by using a privileged Linux container.

The next step is connecting to the Windows node using SSH as an administrator. The authentication process to the node as administrator requires a username and password.

In AKS, we specify the username and password when creating the cluster. The defaults are:

Username: azureuser – if you didn’t specify a username

Password: AKS creates a random password.

In case you forgot or did not specify a password, you can change it using the following command:

az aks update \
--resource-group $RESOURCE_GROUP \
--name $CLUSTER_NAME \
--windows-admin-password $NEW_PW

Fortunately, in AKS all Windows nodes share the same username and password by default.

If there are different usernames/passwords between the nodes, we can rerun the job with different credentials.
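
Before looking at the script itself, here is a minimal sketch of what the wrapping Kubernetes Job might look like. The image name is a hypothetical placeholder for an image that bundles kubectl, Python, and the script, and the credentials are assumed to be stored in a Kubernetes Secret named windows-node-admin:

apiVersion: batch/v1
kind: Job
metadata:
  name: windows-exporter-installer
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux                # schedule the job on a Linux node
      containers:
        - name: installer
          image: <your-installer-image>        # hypothetical image with kubectl, Python and the script
          securityContext:
            privileged: true                   # privileged container, as described above
          env:
            - name: WIN_NODE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: windows-node-admin     # hypothetical Secret holding the node credentials
                  key: username
            - name: WIN_NODE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: windows-node-admin
                  key: password
      restartPolicy: Never

In practice, the job’s pod also needs a service account with permission to list nodes, since the script calls kubectl.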

We will run the following script as a one time job in the container:

import json
import logging
import subprocess

import paramiko
from paramiko import AutoAddPolicy
from paramiko.ssh_exception import AuthenticationException

# KUBECTL_WINDOWS_NODES_QUERY, install_windows_exporter and close_connection
# are defined in the full script (linked below).

def main(win_node_username, win_node_password):
    # List the Windows nodes in the cluster via kubectl and parse the JSON output
    windows_nodes = subprocess.check_output(
        KUBECTL_WINDOWS_NODES_QUERY).decode('utf-8')
    windows_nodes = json.loads(windows_nodes)
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(AutoAddPolicy())

    if len(windows_nodes['items']) == 0:
        logging.debug("No windows nodes found, skipping job")
        return
    for win_node in windows_nodes['items']:
        # Address used to reach the node over SSH
        win_node_hostname = win_node['status']['addresses'][1]['address']
        try:
            ssh_client.connect(win_node_hostname, username=win_node_username, password=win_node_password)
        except AuthenticationException:
            logging.error(f"SSH connection to node {win_node_hostname} failed, please check username and password")
            continue
        logging.debug(f"Connected to windows node {win_node_hostname}")
        # 'net start' lists the running Windows services on the node
        ssh_stdin, ssh_stdout, ssh_stderr = ssh_client.exec_command('net start')
        running_services = ssh_stdout.read()
        if running_services.decode("utf-8").find("windows_exporter") != -1:
            logging.debug(f"Node {win_node_hostname} already running windows_exporter, closing connection.")
            close_connection(ssh_stdin, ssh_stderr, ssh_stdout, ssh_client)
            continue
        install_windows_exporter(ssh_client, win_node_hostname)
        close_connection(ssh_stdin, ssh_stderr, ssh_stdout, ssh_client)

See the full script

Step 2: Reverse Proxy to Expose the Windows Node Endpoint

Thanks to Octopus, this part was relatively easy. We built a reverse proxy using nginx, packed it into a Windows Docker image, and ran it as a DaemonSet on the Windows nodes in the cluster.
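
A sketch of what such a DaemonSet might look like; the image name is a hypothetical placeholder for the Windows nginx reverse proxy image, and the Prometheus annotation shown here is the one referenced in Step 3:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: windows-metrics-proxy
spec:
  selector:
    matchLabels:
      app: windows-metrics-proxy
  template:
    metadata:
      labels:
        app: windows-metrics-proxy
      annotations:
        prometheus.io/scrape: "true"           # mark the pod for scraping (see Step 3)
    spec:
      nodeSelector:
        kubernetes.io/os: windows              # run only on Windows nodes
      containers:
        - name: nginx-proxy
          image: <your-windows-nginx-proxy-image>   # hypothetical Windows nginx image
          ports:
            - containerPort: 9100              # proxies /metrics to the node's windows_exporter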

Step 3: Scrape the Reverse Proxy Pods and Forward the Metrics

For this solution, we used an OpenTelemetry Collector with the Prometheus receiver.

We added a Prometheus scrape annotation to mark the pods which will be scraped.

Using Prometheus remote write exporter, we forward the data to Logz.io’s Infrastructure Monitoring service. You can follow the same pattern to export to other Prometheus-compatible backends of your choice.

scrape_configs:
  - job_name: windows-metrics
    honor_timestamps: true
    honor_labels: true
    metrics_path: /metrics
    scheme: http
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true|"true"
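
On the exporter side, a minimal sketch of the rest of the collector configuration might look like the following, assuming the standard prometheusremotewrite exporter; the endpoint and token are placeholders for your own backend (for example, your Logz.io listener and metrics shipping token):

exporters:
  prometheusremotewrite:
    endpoint: <your-remote-write-endpoint>       # e.g. your Logz.io listener URL
    headers:
      Authorization: "Bearer <your-metrics-shipping-token>"

service:
  pipelines:
    metrics:
      receivers: [prometheus]                    # the receiver holding the scrape config above
      exporters: [prometheusremotewrite]
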
Windows Exporter architecture

In the diagram, you can see that the Windows exporter installer job installs the Windows exporter onto the Windows node. The nginx reverse proxy allows us to send requests to the proxy’s :9100/metrics endpoint, which in turn redirects the request to the node host. This exposes the Windows host metrics to the OpenTelemetry collector.

It’s also worth noting that although the kubelet can report information about pod status and state, it cannot gather host-level metrics from the node itself, which is why the Windows exporter is needed.

Caveats

There are a few caveats to bear in mind for this suggested solution:

Running a one-time job for the Windows exporter installation is not entirely automatic, as we will need to rerun the job each time a new node starts running.

There are different approaches to address that, with different tradeoffs:

The first option is to run the job as a container that runs indefinitely, working in intervals: every X minutes, it checks for new nodes and installs the Windows exporter on them.

However, having a username and password exposed in a container that runs indefinitely is bad practice, even when using Kubernetes secrets.

Another option is to run the job every X minutes as a scheduled job using a CronJob. This ensures that the username and password are not exposed indefinitely.

However, running a job every X minutes means that a pod will be created and completed each time, which can leave a lot of pods in a finished state. This requires additional ‘cleaning’ of these pods from the kubectl pods list, which we can do either in the job’s own pod or with a separate pod dedicated to erasing those finished pods from the list.
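
For illustration, here is a minimal sketch of the CronJob variant, reusing the hypothetical installer image from Step 1. The schedule and history limits are placeholders; the history limits shown are one way to bound the number of finished pods left behind:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: windows-exporter-installer
spec:
  schedule: "*/30 * * * *"                 # run the installer every 30 minutes (placeholder interval)
  successfulJobsHistoryLimit: 1            # keep only the most recent finished pod
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/os: linux
          containers:
            - name: installer
              image: <your-installer-image>    # hypothetical installer image from Step 1
          restartPolicy: Never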

Final Notes and a Helm Chart

Monitoring Windows containers isn’t easy and requires more effort than its Linux counterpart.

Current solutions aren’t enough, as they don’t provide flexibility or customization options without relying on specific CI/CD tools.

Our solution offers a good way of enabling metrics collection while using modular and customizable components.

We also created a Helm chart that captures this pattern, available on GitHub. It is open source and can be used to send metrics to Logz.io, plus you can adapt it for other backends or any other use. 

Try Prometheus-as-a-Service and our Advanced Metrics UI!
