Monitoring Google Cloud Platform with Stackdriver and Logz.io

We’re happy to announce a new integration with Google Stackdriver, allowing users to easily ship data from Google Cloud Platform into Logz.io via Google Pub/Sub! Early adopters of Google Cloud may recall that they were pretty much in the dark as far as logging their projects was concerned. Sure, they could access their virtual machines and manually grep log files, but that was pretty much it.

This changed dramatically in 2014 when Google acquired Stackdriver and integrated it as a managed service for collecting and storing logs from Google Cloud Platform services and applications. As good a tool as it is, though, Google Stackdriver falls short compared to other log management and log analysis tools in the market. The ELK Stack (Elasticsearch, Logstash, Kibana, and Beats) offers users a much more powerful experience, allowing them to perform advanced queries and build those beautiful Kibana dashboards we’re all accustomed to seeing. 

This article will explain how to integrate Stackdriver with Logz.io so you can easily monitor your Google Cloud projects using the world’s most popular open-source log management solution. The integration is designed to help you easily tap into all your Google Cloud projects and applications — it’s container-based, lightweight, and supports multiple Google Pub/Sub pipelines for more complex environments.

Let’s take a closer look.


You’ll need a few things to build the pipeline described below:

  • Docker
  • A GCP project
  • A Logz.io account (for the shipping token used later)

Step 1: Simulating some logs

If you’ve already got logs flowing into Stackdriver, great: you can skip to the next step. If not, no worries, the following instructions will help you generate some fake request logs using a simple Cloud Function.

In the Google Cloud console, open Cloud Functions and hit the Create new function button.

Select HTTP as the trigger type, and copy/paste the code below as the source code for the index.js file:


var faker = require('faker');

exports.helloWorld = (req, res) => {
  var count = req.body.count || 1;
  var lastLog = '';
  for (var i = 0; i < count; i++) {
    lastLog = generateFakeLog();
    console.log(lastLog); // stdout is collected by Stackdriver
  }
  res.status(200).send('Sent: ' + count + ' logs like ' + lastLog);
};

function generateFakeLog() {
  // faker's fileName() already includes a file extension
  var file = '/' + faker.system.fileName();
  return faker.internet.ip() + ' - - [' + new Date().toUTCString() +
    '] "GET ' + file + ' HTTP/1.1" ' + faker.internet.userAgent();
}

Use the following code for the package.json file (defining the function’s dependencies):


{
  "name": "sample-http",
  "version": "0.0.1",
  "dependencies": {
    "faker": "4.1.0"
  }
}

After creating the new function, we can use the URL displayed under the Trigger tab to call the function in our browser and generate some fake logs.
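If you’d rather script this than refresh the browser, the sketch below composes a curl call for the function. The URL is a placeholder for your own trigger URL, and the command is printed rather than executed, so nothing fires until you paste it in yourself:

```shell
# Placeholder; copy the real URL from the function's Trigger tab.
FUNCTION_URL="https://REGION-MY-PROJECT-ID.cloudfunctions.net/helloWorld"
# The function reads `count` from the request body, so POST a JSON payload.
printf 'curl -X POST %s -H "Content-Type: application/json" -d %s\n' \
  "$FUNCTION_URL" "'{\"count\": 25}'"
```

Running the printed command once generates 25 fake log lines in one call.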

Create the new function and use the URL under the Trigger tab to call the function

Clicking View Logs in the top-right corner of the console, we’re taken directly to Stackdriver where the logs we generated are displayed:

Click View Logs to go directly to Stackdriver where the generated logs are displayed


Step 2: Streaming to Pub/Sub

Next, we’re going to export the logs from Stackdriver to Google Pub/Sub. To do this, open the Exports tab in Stackdriver and then click Create Export.

In the pane displayed on the right, name your export (a.k.a. sink) and select Google Pub/Sub as the Sink Service. If you already have a Pub/Sub topic, select it from the drop-down menu; if not, create a new one. Then click the Create Sink button.

Create a new Pub/Sub topic with the Create Sink button

If you’ve already got a subscription for the topic, you can skip ahead to Step 3. If not, open the Google Pub/Sub console, and create a new subscription for the newly created topic. 

To make sure the pipeline is up and running, and the logs being collected by Stackdriver are streaming as expected into Pub/Sub, you can click the View Messages button at the top of the page:

Click View Messages


Step 3: Integrating Logs from Stackdriver with Logz.io

So, we have data being streamed from our “application” into Stackdriver, and from there into Pub/Sub. Our final step is to set up the integration with Logz.io.

We’ll start with creating and accessing a folder to hold the integration resources: 

mkdir logzio-pubsub && cd logzio-pubsub 

Next, we’re going to build a credentials file using the following command (be sure to replace my-project-id with your own project ID):


wget \
&& make PROJECT_ID=my-project-id


In the case of multiple GCP projects, repeat the process for each project and just change the project ID.
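If you manage several projects, the repetition can be scripted. A minimal sketch, assuming the same Makefile sits in the working directory; the project IDs are placeholders, and each make invocation is echoed so you can dry-run the loop first (remove the echo to actually execute it):

```shell
# Build one credentials file per GCP project.
# The IDs below are examples; `echo` prints each command for a dry run.
for PROJECT_ID in my-first-project my-second-project; do
  echo make PROJECT_ID="$PROJECT_ID"
done
```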


Our next step is to define our Pub/Sub topics and subscriptions. We will do this using a pubsub-input.yml file: 


sudo vim pubsub-input.yml


Below is an example of the configurations that need to be entered in this file:


listener: MY-LOGZIO-LISTENER-HOST
pubsubs:
- project_id: MY-PROJECT-ID
  topic_id: MY-PUBSUB-TOPIC
  token: MY-LOGZIO-SHIPPING-TOKEN
  credentials_file: ./credentials-file.json
  subscriptions: ["MY-PUBSUB-SUBSCRIPTION"]
  type: stackdriver


Let’s understand the different building blocks in this configuration:


  • listener – the URL of the Logz.io listener. This will differ depending on the region your Logz.io account is located in. For reference, check out this list of available regions.
  • project_id – the ID of your GCP project. This should be the same ID you used when creating your credentials file.
  • credentials_file – the location of the credentials-file.json created above.
  • token – your Logz.io account’s shipping token. It can be found in the Logz.io UI, on the General page.
  • topic_id – the ID of the Pub/Sub topic.
  • subscriptions – a comma-separated list of Pub/Sub subscriptions.
  • type – the data source type. In the case of Google Stackdriver, this will be stackdriver.


And yes, if you like, you can ship logs from multiple GCP projects into different Logz.io accounts. We just need to add another block under the pubsubs section in the file with the relevant configurations.
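For instance, a sketch of a two-project file might look like this (every value below is a placeholder):

```yaml
listener: MY-LOGZIO-LISTENER-HOST
pubsubs:
- project_id: FIRST-PROJECT-ID
  topic_id: FIRST-PUBSUB-TOPIC
  token: FIRST-ACCOUNT-SHIPPING-TOKEN
  credentials_file: ./first-credentials-file.json
  subscriptions: ["FIRST-PUBSUB-SUBSCRIPTION"]
  type: stackdriver
- project_id: SECOND-PROJECT-ID
  topic_id: SECOND-PUBSUB-TOPIC
  token: SECOND-ACCOUNT-SHIPPING-TOKEN
  credentials_file: ./second-credentials-file.json
  subscriptions: ["SECOND-PUBSUB-SUBSCRIPTION"]
  type: stackdriver
```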


All that’s left for us to do now is run the container that integrates Pub/Sub with Logz.io.

We’ll pull the image with:


docker pull logzio/logzio-pubsub 

And then, from within the same directory in which we created all our integration resources, run the container as follows (be sure to replace the paths with the full path to that local directory):


docker run --name logzio-pubsub \
  -v "$(pwd)/pubsub-input.yml:/logzio-pubsub/pubsub-input.yml" \
  -v "$(pwd)/credentials-file.json:/logzio-pubsub/credentials-file.json" \
  logzio/logzio-pubsub


The container will run a Beats-based agent that uses the credentials and configuration files we created to set up the pipeline of logs from Stackdriver, via Pub/Sub, into Logz.io. Within a minute or two, you should begin to see logs appearing in Logz.io.

Logs from Stackdriver appearing in Logz.io


Google Stackdriver is a great tool for centrally logging across Google Cloud Platform projects and applications, but it requires a complementary solution to give teams full analysis power. While it has improved over the years, Stackdriver still lacks the querying and visualization capabilities engineers are used to. The new integration gives Google Cloud users the option to easily plug into Logz.io’s managed ELK Stack and enjoy the best of both worlds: Stackdriver’s native integration with Google Cloud projects and ELK’s power of analysis.

