This is the first in our series of tutorials on Logz.io Infrastructure Monitoring. Also read Building Grafana Visualizations and Configuring Alerts and Log-Metric Correlation.

This Logz.io Infrastructure Monitoring tutorial covers our latest product: a new metrics solution based on Grafana. Engineers monitor metrics to understand CPU and memory utilization across their infrastructure, serverless execution duration, or network traffic. For more advanced metrics monitoring, teams can send custom metrics to track signals like the number of active users.

Logz.io’s flagship product is Log Management, which delivers a fully managed ELK Stack. Our customers use this product to monitor and investigate their logs, which tell them what’s going on with their applications and services.

But consider how many logs your system generates when it’s on fire or creaking at the edges. How do you identify specific problems?

Metrics are quantifiable data points about your application (application monitoring) or infrastructure (infrastructure monitoring). They signal when and where problems are occurring or have occurred. From there, you need your logs to diagnose the issue.

For example, a spike in CPU usage (which metrics would indicate) is not the problem itself; rather, it’s a side effect of the problem. You need to investigate your logs to actually identify the root cause of the issue.

Put simply, metrics are important to monitor the health of your system without waiting for end users to flood your support system.

This infrastructure monitoring tutorial will cover the basics of setup for your Logz.io account, AWS configuration, and shipping metrics into Logz.io for display in a Grafana dashboard on Logz.io Infrastructure Monitoring. A follow-up Grafana visualization tutorial will be published this week.

Infrastructure Monitoring Tutorial: Set Up AWS and Logz.io

Monitoring metrics with Logz.io starts with sending them from your environment to ours. Sticking with our inclination toward open source, we recommend sending metrics with Metricbeat (check out our Metricbeat tutorial). To simplify life, we’ve already produced a number of integrations for shipping metrics data.

You can take a look at https://docs.logz.io/shipping/#metrics-sources for the specific platform you want. In this blog, I’ll cover the feature-rich Docker image we’ve produced to send metrics. The collector can grab metrics from AWS, the system level, and even Docker itself; we’ll be looking at the AWS module here.

So, you’re now ready to ship metric data from AWS into Logz.io. How do you actually glue this together? First things first, you’ll need the Docker image. From the command line, run:

docker pull logzio/docker-collector-metrics

On the AWS side, you’ll need to configure an IAM user with the following permissions: cloudwatch:GetMetricData, cloudwatch:ListMetrics, ec2:DescribeInstances, ec2:DescribeRegions, iam:ListAccountAliases, and sts:GetCallerIdentity.

Once the IAM user is in place, create an access key ID and secret access key for the Metricbeat configuration.
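As a sketch, the permissions above can be captured in an IAM policy document like the following. The filename is arbitrary and the policy structure is a minimal illustration; attach the resulting policy to your IAM user however you normally manage IAM (console or CLI).

```shell
# Write a minimal IAM policy document granting the permissions listed above.
# The filename "metricbeat-policy.json" is illustrative.
cat > metricbeat-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "iam:ListAccountAliases",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```

Note that "Resource": "*" is the broadest option; you may want to scope it down to match your own security policies.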

Because the Docker collector works on a region-by-region basis, you will need the region code for the infrastructure you want to monitor.

You now have all the AWS information needed to configure the Docker collector. The last step is getting your metrics token, which you can find under Settings -> Manage accounts -> Metrics account plan.

Configure the Logz.io Infrastructure Monitoring account to ship metrics

Configure Docker Metrics Collector

Now you have everything to configure the Docker Metrics Collector.

Each of the required configuration parameters should be passed in as an environment variable, in the format --env ENV_VARIABLE_NAME="value".

docker run --name docker-collector-metrics \
--env LOGZIO_TOKEN="<<METRICS-TOKEN>>" \
--env LOGZIO_MODULES="aws" \
--env AWS_ACCESS_KEY="<<AWS-ACCESS-KEY>>" \
--env AWS_SECRET_KEY="<<AWS-SECRET-KEY>>" \
--env AWS_REGION="<<AWS-REGION>>" \
--env AWS_NAMESPACES="<<AWS-NAMESPACES>>" \
logzio/docker-collector-metrics

You need to replace each <<>> placeholder with the configuration values you gathered previously. The "<<AWS-NAMESPACES>>" value is a comma-separated list of the AWS CloudWatch namespaces you want to monitor.

For example, if you wanted to ship AWS EC2 metrics, use “AWS/EC2”; use “AWS/S3” to ship S3 metrics, and “AWS/Lambda” for Lambda.

If you want EC2, Lambda, and S3, it would be “AWS/EC2,AWS/Lambda,AWS/S3”.

After you run the container with the correct configuration, it will connect to the Logz.io listener endpoint and immediately start sending data. Give it a few minutes before heading to the dashboard, and you’ll see data appear, ready to use.
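If you want to confirm the collector is up before checking the dashboard, you can inspect the container locally. These commands assume the container was started with --name docker-collector-metrics, as in the docker run command above:

```shell
# Confirm the collector container is running.
docker ps --filter "name=docker-collector-metrics"

# Tail recent collector output to check for connection or credential errors.
docker logs --tail 20 docker-collector-metrics
```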

Once you’ve started sending metrics data to Logz.io, you can follow the rest of this infrastructure monitoring tutorial series. Check out building the Grafana dashboard for visualizations and other advanced analytics features to help with your operations processes. 

Read the entire series of tutorials on Logz.io Infrastructure Monitoring: