Tutorial: Ship AWS EC2 Metrics to Logz.io


Amazon Elastic Compute Cloud (a.k.a. EC2) is, without a doubt, core computing infrastructure today. It sits at the heart of AWS as the primary service for running virtual machines and containers for development and operations. Applying observability standards to EC2 logs and, of course, EC2 metrics will tell you whether you have the right types of instances in place (and whether those instances are appropriately sized).

Here, we’ll give you a brief overview of the EC2 instances that AWS offers, and then the types of metrics you should be importing from them. After that, we’ll walk through a quick tutorial on shipping EC2 metrics to Logz.io.

Different Kinds of AWS EC2 Instances

Besides being built for different scales of data, EC2 instances come in several varieties. The main general-purpose families include A1, M5, M5a, T3, and T3a, which are differentiated by their EBS optimization and number of computing units, and priced accordingly. T instances, for…well…instance…are burstable performance instances that can sustain high CPU performance while accommodating ‘bursting’ above the baseline.

Here are the other varieties of EC2 instances, specialized for different purposes (a quick CLI sketch for comparing instance specs follows the list):

  1. Computing (C4, C5, C5n) — This group is optimized for batch processing, web servers and HPC.
  2. Memory (R4, R5, R5a, X1, X1e, z1d, and High Memory), optimized for memory-intensive workloads such as in-memory databases.
  3. Accelerated computing (F1, G3, P2, P3) — These use hardware accelerators to perform complex functions like graphics processing, data pattern matching, and floating point calculations.
  4. Storage (D2, H1, I3) — These are designed for workloads with high sequential read & write access.
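
If you want to compare families before choosing one, the AWS CLI can pull the raw specs. The following is a minimal sketch, assuming AWS CLI v2 is installed and configured; the instance types queried are just examples:

aws ec2 describe-instance-types \
  --instance-types t3.medium c5.large r5.large \
  --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
  --output table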

Types of Amazon EC2 Metrics

There are various kinds of EC2 metrics that CloudWatch tracks (a CLI sketch for browsing them follows the list):

  • Instance metrics
  • EBS metrics (Nitro-based instances)
  • Traffic mirroring metrics
  • CPU credit metrics
  • Status check metrics
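
To see which of these metrics are actually being reported for your account, one option is the AWS CLI. This is a sketch, assuming the CLI is installed and your credentials allow cloudwatch:ListMetrics:

aws cloudwatch list-metrics --namespace AWS/EC2 --output table
aws cloudwatch list-metrics --namespace AWS/AutoScaling --output table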

To collect EC2 metrics, Logz.io uses its Docker Metrics Collector, the same collector it uses for other AWS services such as Amazon S3.

Configuring the Docker Collector to Ship EC2 Metrics

If you’re not already running Docker Metrics Collector, follow these steps.
Otherwise, stop the container, add aws to the LOGZIO_MODULES environment variable, and then restart it.

You can find the run command and all parameters in this procedure.
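
If the collector is already running, the stop-and-restart step might look like this sketch; it assumes the container was started with the --name used in the run command later in this procedure:

docker stop docker-collector-metrics
docker rm docker-collector-metrics
# Re-run the docker run command shown below, with "aws" included in LOGZIO_MODULES.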

Set up an IAM user with the following permissions (a CLI sketch for creating the user and its policy follows the list):

  • cloudwatch:GetMetricData
  • cloudwatch:ListMetrics
  • ec2:DescribeRegions
  • iam:ListAccountAliases
  • sts:GetCallerIdentity
  • ec2:DescribeInstances
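
You can grant these permissions in the IAM console, or with the AWS CLI as in the sketch below. The user name and policy name are hypothetical; the actions match the list above:

aws iam create-user --user-name logzio-metrics-shipper

aws iam put-user-policy \
  --user-name logzio-metrics-shipper \
  --policy-name logzio-metrics-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "ec2:DescribeRegions",
        "ec2:DescribeInstances",
        "iam:ListAccountAliases",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }]
  }'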

Next, you’ll need a few details for configuring Metricbeat for EC2 Auto Scaling metrics.

  • An access key ID for the IAM user
  • The secret access key for the IAM user
  • Your metrics region

Paste all these details into a text editor to make configuring the data request easier later on.
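
One alternative to a text editor is to generate the access key with the CLI and keep the details in shell variables for the docker run command later. The values below are placeholders, and the user name is the hypothetical one from the IAM step:

aws iam create-access-key --user-name logzio-metrics-shipper

# Copy the AccessKeyId and SecretAccessKey from the output above.
export AWS_ACCESS_KEY="<access key ID>"
export AWS_SECRET_KEY="<secret access key>"
export AWS_REGION="us-east-1"   # your metrics region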

Enable EC2 Auto-Scaling Metrics

In the EC2 console left menu, click AUTO SCALING > Auto Scaling Groups.

Select the Auto Scaling group you want to monitor, click the Monitoring tab, and then click Enable Group Metrics Collection.

Then save your configuration. AWS advises it might take up to 15 minutes for metrics to start flowing here.
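
If you prefer the CLI to the console, the same setting can be enabled with a command like this sketch; the group name is hypothetical, and omitting --metrics enables all group metrics:

aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name my-auto-scaling-group \
  --granularity "1Minute"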

Pull and Run Docker

Download the Logz.io Docker Metrics Collector image here:

docker pull logzio/docker-collector-metrics

Finally, run the container with the following variables in a docker run command. For every value marked with “<<…>>”, replace the placeholder between the quotation marks with the relevant information:

docker run --name docker-collector-metrics \
--env LOGZIO_TOKEN="<<SHIPPING-TOKEN>>" \
--env LOGZIO_MODULES="aws" \
--env AWS_ACCESS_KEY="<<ACCESS-KEY>>" \
--env AWS_SECRET_KEY="<<SECRET-KEY>>" \
--env AWS_REGION="<<AWS-REGION>>" \
--env AWS_NAMESPACES="<<AWS-NAMESPACES>>" \
logzio/docker-collector-metrics
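
To confirm the container came up and the aws module loaded, you can tail its logs; the container name comes from the --name flag above:

docker logs -f docker-collector-metrics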

Logz.io and AWS Parameters

There are a number of parameters to set between Logz.io and AWS, some of them optional. Of the following Logz.io parameters, the first two are required (a sketch showing how the optional ones fit into the run command follows the list):

  1. LOGZIO_TOKEN—This is your Logz.io account token. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
  2. LOGZIO_URL—This is the Logz.io listener host to ship the metrics to. Replace <<LISTENER-HOST>> with your region’s listener host (for example, listener.logz.io). For more information on finding your account’s region, see Account region.
  3. LOGZIO_MODULES—This is a comma-separated list of Metricbeat modules to be enabled on this container (formatted as module1,module2,module3). To use a custom module configuration file, mount its folder to /logzio/logzio_modules.
  4. LOGZIO_TYPE—The log type you’ll use with this Docker container; in this case, something like docker-collector-metrics. It is shown under the type field in Kibana, and Logz.io applies parsing based on the type.
  5. LOGZIO_LOG_LEVEL—This is the log level the module startup scripts will generate.
  6. LOGZIO_EXTRA_DIMENSIONS—A semicolon-separated list of additional fields to be included with each message sent (formatted as fieldName1=value1;fieldName2=value2). To use an environment variable as a value, format as fieldName=$ENV_VAR_NAME. Environment variables must be the only value in the field. Where an environment variable can’t be resolved, the field is omitted.
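
If you want to set the optional parameters such as LOGZIO_TYPE, LOGZIO_LOG_LEVEL, and LOGZIO_EXTRA_DIMENSIONS, they slot into the same docker run command as extra --env flags; the values in this sketch are illustrative only:

--env LOGZIO_TYPE="docker-collector-metrics" \
--env LOGZIO_LOG_LEVEL="INFO" \
--env LOGZIO_EXTRA_DIMENSIONS="environment=production;team=platform" \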

AWS module parameters include the following; all four are required (a fully filled-in example run follows the list):

  1. AWS_ACCESS_KEY—Your IAM user’s access key ID.
  2. AWS_SECRET_KEY—Your IAM user’s secret key.
  3. AWS_NAMESPACES—Comma-separated list of namespaces of the metrics you want to collect. For EC2, this will be AWS/EC2 (plus AWS/AutoScaling if you enabled the Auto Scaling group metrics earlier).
  4. AWS_REGION—Your region’s slug. You can find this in the AWS region menu (in the top menu, to the right).
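
Putting it all together, a fully filled-in run for EC2 might look like the sketch below. It reuses the shell variables exported earlier, keeps the shipping token as a placeholder you must replace, and includes AWS/AutoScaling only because group metrics were enabled above:

docker run --name docker-collector-metrics \
--env LOGZIO_TOKEN="<<SHIPPING-TOKEN>>" \
--env LOGZIO_MODULES="aws" \
--env AWS_ACCESS_KEY="$AWS_ACCESS_KEY" \
--env AWS_SECRET_KEY="$AWS_SECRET_KEY" \
--env AWS_REGION="$AWS_REGION" \
--env AWS_NAMESPACES="AWS/EC2,AWS/AutoScaling" \
logzio/docker-collector-metrics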

Go to Logz.io

Wait a few minutes and you should start seeing AWS CloudWatch metrics for your EC2 instances flowing into Logz.io.

For more on shipping cloud metrics and logs to Logz.io, subscribe to the blog for updates on AWS, Azure, and other in-demand services.
