Logz.io has dedicated itself to encouraging and supporting cloud-native development. That has meant doubling down on support for AWS and Azure, but also increasing our tie-ins with Google Cloud Platform (GCP). Recently, our team added dozens of new metrics integrations covering the gamut of products in the GCP ecosystem.
Google Cloud comprises many services that generate their own metrics: BigQuery, Google Compute Engine, GKE (Kubernetes), Pub/Sub, and more. This data is exposed via a dedicated namespace for each service. With our new integration, you can ship metrics from any of these Google Cloud services using a simple Telegraf configuration.
We’ve added over two dozen integrations for Logz.io Infrastructure Monitoring. Here is the full list:
Google AI Platform
Google API Gateway
Google App Engine
Google Assistant Smart Home
Google BigQuery BI Engine
Google BigQuery Data Transfer Service
Google Certificate Authority Service
Google Cloud API
Google Cloud Armor
Google Cloud Bigtable
Google Cloud Composer
Google Cloud Data Loss Prevention
Google Cloud DNS
Google Cloud Functions
Google Cloud Healthcare API
Google Cloud IDS
Google Cloud Interconnect
Google Cloud Load Balancing
Google Cloud Logging
Google Cloud Monitoring
Google Cloud Router
Google Cloud Run
Google Cloud SQL
Google Cloud Storage
Google Cloud Tasks
Google Cloud TPU
Google Cloud Trace
Google Compute Engine
Google Compute Engine Autoscaler
Google Contact Center AI Insights
Google Dataproc Metastore
Google Firewall Insights
Google Identity and Access Management
Google IoT Core
Google Kubernetes Engine
Google Managed Service for Microsoft Active Directory
Google Memorystore for Memcached
Google Memorystore for Redis
Google Network Topology
Google reCAPTCHA Enterprise
Google Recommendations AI
Google Storage Transfer Service for on-premises data
Google Vertex AI
Google Virtual Private Cloud (VPC)
Google VM Manager
Telegraf is a common DevOps tool that enables you to ship a variety of metrics to Logz.io using a simple plugin configuration. In fact, many of Logz.io's integrations have been made possible thanks to Telegraf (like Apache Solr, InfluxDB, HAProxy, Puppet, and even YouTube, to name a few). Let's have a look at how this works for Google Cloud.
Configuring GCP Project Integration
To begin, we need to create a service account in the GCP project that we will be collecting metrics from. In the Service account details screen, we need to provide a unique name for the service account.
On the next screen – Grant this service account access to project – we add the following roles: Compute Viewer, Monitoring Viewer, and Cloud Asset Viewer. Note that you must be a Service Account Key Admin to select Compute Engine and Cloud Asset roles.
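If you prefer the gcloud CLI over the console, the same service account and role bindings can be created from the command line. The project ID and service-account name below are hypothetical placeholders; the three roles correspond to the Compute Viewer, Monitoring Viewer, and Cloud Asset Viewer roles listed above:

```shell
# Hypothetical names: adjust PROJECT_ID and SA_NAME for your own project.
PROJECT_ID="my-gcp-project"
SA_NAME="logzio-metrics"

# Create the service account (equivalent of the console steps above).
gcloud iam service-accounts create "$SA_NAME" \
  --display-name="Logz.io metrics reader"

# Grant the three viewer roles the integration needs.
for ROLE in roles/compute.viewer roles/monitoring.viewer roles/cloudasset.viewer; do
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="$ROLE"
done
```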
Creating the Project Key
Now we need to create a key that we will use to authorize the data export.
To create the key, we need to select our project in the Service accounts for project list, navigate to Keys > Add Key > Create new key, choose JSON as the type and save the file to a dedicated location on our local machine.
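Equivalently, the JSON key can be generated with the gcloud CLI instead of the console. The service-account email and file name below are hypothetical examples:

```shell
# Create a JSON key for the service account and save it locally.
# The service-account email is a hypothetical example; use your own.
gcloud iam service-accounts keys create ./gcp-key.json \
  --iam-account="logzio-metrics@my-gcp-project.iam.gserviceaccount.com"
```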
The next step is to set an environment variable that points to the key we have just created, replacing <<PATH-TO-YOUR-GCP-KEY>> with the path to the directory where we saved our key.
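On Linux and macOS, this is most likely the standard GOOGLE_APPLICATION_CREDENTIALS export that Google's client libraries (and Telegraf's stackdriver plugin) look for; the file name gcp-key.json below is an example:

```shell
# Point Google's application-default credentials at the key file.
# Replace <<PATH-TO-YOUR-GCP-KEY>> as described above; gcp-key.json
# is an example file name.
export GOOGLE_APPLICATION_CREDENTIALS="<<PATH-TO-YOUR-GCP-KEY>>/gcp-key.json"
```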
If you don’t have Telegraf on your machine, you will need to install it. As of this writing, the latest release is Telegraf v1.20.4, though the examples in this tutorial use Telegraf 1.19. You’ll need Telegraf v1.17 or higher to feed data into Logz.io. We install Telegraf as follows:
Windows
After downloading the archive, extract its contents into
macOS
brew install telegraf
Ubuntu & Debian
sudo apt-get update && sudo apt-get install telegraf
RedHat and CentOS
sudo yum install telegraf
SLES & openSUSE
# add go repository
zypper ar -f obs://devel:languages:go/ go
# install latest telegraf
zypper in telegraf
FreeBSD/PC-BSD
sudo pkg install telegraf
Telegraf is a plugin-driven agent orchestrated by the telegraf.conf file. The configuration file is located at C:\Program Files\Logzio\telegraf\ on Windows, /usr/local/etc/telegraf.conf on macOS, and /etc/telegraf/telegraf.conf on Linux. We need two plugins for this job: input and output.
The input plugin that we will be using across all Google Cloud services is inputs.stackdriver. This is what it looks like:

[[inputs.stackdriver]]
  project = "<<YOUR-PROJECT>>"
  metric_type_prefix_include = [
    "<<NAMESPACE>>",
  ]
  interval = "1m"
Here we need to define the name of our GCP project and a namespace to collect metrics from. We can specify as many namespaces as we need; the full list is available here.
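For example, to collect both Compute Engine and BigQuery metrics from a single project (the project name here is hypothetical), each namespace is simply another prefix in the list:

```toml
[[inputs.stackdriver]]
  # Hypothetical project name; use your own GCP project ID
  project = "my-gcp-project"
  # One prefix per Google Cloud namespace to collect
  metric_type_prefix_include = [
    "compute.googleapis.com/",
    "bigquery.googleapis.com/",
  ]
  interval = "1m"
```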
The outputs plugin looks like this:

[[outputs.http]]
  url = "https://<<LISTENER-HOST>>:8053"
  data_format = "prometheusremotewrite"
  [outputs.http.headers]
    Content-Type = "application/x-protobuf"
    Content-Encoding = "snappy"
    X-Prometheus-Remote-Write-Version = "0.1.0"
    Authorization = "Bearer <<PROMETHEUS-METRICS-SHIPPING-TOKEN>>"
Here we need to replace <<PROMETHEUS-METRICS-SHIPPING-TOKEN>> with a token for our Logz.io Metrics account and <<LISTENER-HOST>> with the Logz.io listener URL for our region. Port 8053 is used for HTTPS traffic.
Our last step is to start Telegraf, which will automatically pick up the config file where we configured our plugins. On macOS, however, we need to explicitly provide the path to the config file. This is what we need to run, depending on our operating system:
Windows
telegraf.exe --service start
macOS
telegraf --config telegraf.conf
Linux (sysvinit and upstart installations)
sudo service telegraf start
Linux (systemd installations)
systemctl start telegraf
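Before starting the service, it can be worth doing a one-shot dry run: Telegraf's --test flag gathers metrics once and prints them to stdout without shipping anything, which quickly confirms that the stackdriver credentials and config are valid. The path shown is the Linux default; adjust it per the locations listed above.

```shell
# Gather metrics once and print them to stdout (nothing is shipped).
telegraf --config /etc/telegraf/telegraf.conf --test
```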
That’s it. No additional configuration is needed, so now you can just navigate to your Logz.io account and see all your GCP metrics there. To assist you further, we created a dedicated set of instructions for each namespace, which you can find on our documentation page either in the app or on docs.logz.io. Just filter the list by Google Cloud and you will see all applicable documents.
Stay Up to Date
Stay up to date with Logz.io as we continue to amplify our support for Telegraf and other popular monitoring and observability services. Subscribe to our blog for the latest product news, DevOps tutorials, and thoughts from tech leaders around the industry.