What are AWS EC2 Instances? A Tutorial for EC2 Metrics Shipping with Logz.io

An Intro to AWS EC2 Metrics

Amazon Elastic Compute Cloud (a.k.a. EC2) is, no doubt, the core of today’s cloud computing infrastructure. It sits at the heart of AWS as the main home for virtual machines and containers in development and operations. Applying observability standards to EC2 logs and, of course, EC2 metrics (or any kind of AWS metrics, for that matter) will tell you whether you have the right sorts of instances in place, and whether those instances are appropriately sized.

Here, we’ll give you a brief overview of the EC2 instance types that AWS offers and the kinds of metrics you should be importing from them. After that, we’ll walk through a quick tutorial on shipping EC2 metrics to Logz.io.

Different Kinds of AWS EC2 Instances

Besides being built for different scales of data, EC2 instances come in several varieties. The main kinds include A1, M5, M5a, T3, and T3a, which are differentiated by their EBS optimization and number of compute units, and priced accordingly. T instances, for…well…instance…are burstable performance instances that provide a sustained baseline of CPU performance while accommodating ‘bursting’ above that baseline.

Here are the other varieties of EC2 instances, specialized for different purposes (a CLI example for comparing them follows the list):

  1. Computing (C4, C5, C5n) — This group is optimized for batch processing, web servers and HPC.
  2. Memory (R4, R5, R5a, X1, X1e, z1d, and High Memory) — These are optimized for memory-intensive workloads such as high-performance databases and in-memory analytics.
  3. Accelerated computing (F1, G3, P2, P3) — These use hardware accelerators to perform complex functions like graphics processing, data pattern matching, and floating point calculations.
  4. Storage (D2, H1, I3) — These are designed for workloads with high sequential read & write access.
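If you want to compare instance families before choosing one, the AWS CLI can describe them for you. Here’s a minimal sketch, assuming configured AWS credentials (the c5.* filter value is just an illustration):

# List vCPUs and memory for every C5 size
aws ec2 describe-instance-types \
  --filters "Name=instance-type,Values=c5.*" \
  --query "InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, MemoryInfo.SizeInMiB]" \
  --output table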

EC2 Auto Scaling Metrics

Recently, AWS added predictive scaling to its Auto Scaling feature for EC2 instances. It also exposes metrics that show when automatic adjustments are made according to those forecasted needs, as well as the resources that are ultimately provisioned. This tutorial will cover standard EC2 and Auto Scaling metrics, as well as how to ship them to Logz.io.
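As a rough sketch of what a predictive scaling policy looks like with the current AWS CLI (the group name my-asg and the 50% CPU target are placeholders):

# Attach a forecast-only predictive scaling policy to an Auto Scaling group
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-predictive \
  --policy-type PredictiveScaling \
  --predictive-scaling-configuration '{
    "MetricSpecifications": [{
      "TargetValue": 50,
      "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"}
    }],
    "Mode": "ForecastOnly"
  }'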

New Types: EC2 Graviton & Beyond

Recently, Amazon expanded the list of available instance types based on which processors back each instance. Options have included various Intel and AMD processors in the past. A fast-emerging option is the AWS Graviton processor, which purports to deliver 7x the performance, 4x the compute cores, 5x the memory speed, and twice the caching capacity. EC2 A1 instances were the first Arm-based instances on AWS.

Types of Amazon EC2 Metrics

There are various kinds of EC2 metrics that CloudWatch tracks, going beyond just per-instance metrics (a CLI example for querying one of them follows the list):

  • Instance Metrics
    • CPUUtilization
    • DiskReadBytes
    • DiskWriteBytes
    • DiskReadOps
    • DiskWriteOps
    • NetworkIn
    • NetworkOut
    • NetworkPacketsIn
    • NetworkPacketsOut
  • CPU credit metrics
    • CPUSurplusCreditsCharged
    • CPUSurplusCreditBalance
    • CPUCreditUsage
    • CPUCreditBalance
  • Dedicated Host metrics
    • DedicatedHostCPUUtilization
  • Amazon EBS metrics for Nitro-based instances
    • EBSByteBalance%
    • EBSIOBalance%
    • EBSReadOps
    • EBSWriteOps
    • EBSReadBytes
    • EBSWriteBytes
  • Status check metrics
    • StatusCheckFailed
    • StatusCheckFailed_Instance
    • StatusCheckFailed_System
  • Traffic mirroring metrics
  • Amazon EC2 metric dimensions
    • AutoScalingGroupName
    • ImageId
    • InstanceId
    • InstanceType
  • Amazon EC2 usage metrics
    • ResourceCount
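To spot-check any of these outside of Logz.io, you can query CloudWatch directly. A minimal sketch, assuming a configured AWS CLI and GNU date (the instance ID is a placeholder):

# Average CPUUtilization for one instance over the last hour, in 5-minute periods
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average \
  --period 300 \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"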

Auto Scaling Metrics

  • Auto Scaling group metrics
    • GroupMinSize
    • GroupMaxSize
    • GroupDesiredCapacity
    • GroupInServiceInstances
    • GroupPendingInstances
    • GroupStandbyInstances
  • Instance weighting metrics
    • GroupInServiceCapacity
    • GroupPendingCapacity
    • GroupStandbyCapacity
    • GroupTerminatingCapacity
    • GroupTotalCapacity
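Note that Auto Scaling group metrics aren’t published to CloudWatch by default; you enable them per group. For example (my-asg is a placeholder group name):

# Enable all Auto Scaling group metrics at 1-minute granularity
aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name my-asg \
  --granularity "1Minute"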

Logz.io and AWS Parameters

There are a number of parameters to set between Logz.io and AWS; some are required and some are optional, as listed below:

Required Parameters:

  1. LOGZIO_TOKEN—This is your Logz.io account token for shipping Prometheus metrics. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
  2. LOGZIO_REGION—Your Logz.io region code. For example, if your region is US, then your region code is us. Look up your region code.
  3. AWS_ACCESS_KEY—Your IAM user’s access key ID.
  4. AWS_SECRET_KEY—Your IAM user’s secret key.
  5. AWS_NAMESPACES—Comma-separated list of namespaces of the metrics you want to collect. For EC2, this will be AWS/EC2. Note: This environment variable is required unless you define the CUSTOM_CONFIG_PATH environment variable.
  6. AWS_DEFAULT_REGION—Your AWS region’s code (for example, us-east-1). You can find it in the AWS region menu (in the top menu, to the right).

Optional Parameters:

  1. SCRAPE_INTERVAL—The time interval (in seconds) at which the CloudWatch exporter retrieves metrics from CloudWatch and the OpenTelemetry collector scrapes and sends them to Logz.io. The default is 300. Note: This value must be a multiple of 60. This isn’t required, but recommended.
  2. P8S_LOGZIO_NAME—The value of the p8s_logzio_name external label. This variable identifies which Prometheus environment the metrics arriving at Logz.io came from. This isn’t required, but recommended.
  3. CUSTOM_CONFIG_PATH—Path to your CloudWatch exporter configuration file. For more information, refer to the documentation. Note: Set the period_seconds parameter according to your SCRAPE_INTERVAL.
  4. CUSTOM_LISTENER—Set a custom URL to ship metrics to (for example, http://localhost:9200). This overrides the LOGZIO_REGION environment variable.
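To put these together, here’s what a typical environment might look like before you launch the container. All values below are placeholders; adjust them to your own accounts:

# Required
export LOGZIO_TOKEN="<<SHIPPING-TOKEN>>"
export LOGZIO_REGION="us"
export AWS_ACCESS_KEY="<<ACCESS-KEY-ID>>"
export AWS_SECRET_KEY="<<SECRET-ACCESS-KEY>>"
export AWS_NAMESPACES="AWS/EC2,AWS/AutoScaling"
export AWS_DEFAULT_REGION="us-east-1"
# Optional but recommended
export SCRAPE_INTERVAL=300               # must be a multiple of 60
export P8S_LOGZIO_NAME="my-prometheus-env"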

Configuring the Docker Collector to Ship EC2 Metrics

If you’re not already running the Docker Metrics Collector, follow these steps.

If you are already running it, stop the container, add aws to the LOGZIO_MODULES environment variable, then restart.

You can find the run command and all parameters in this procedure.

Set up an IAM user with the following permissions:

  • cloudwatch:GetMetricStatistics
  • cloudwatch:ListMetrics
  • tag:GetResources
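As a sketch, you could grant these permissions with a standalone IAM policy like the following (LogzioCloudWatchRead is a placeholder policy name):

# Create a minimal read-only policy covering the three permissions above
aws iam create-policy \
  --policy-name LogzioCloudWatchRead \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "tag:GetResources"
      ],
      "Resource": "*"
    }]
  }'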

You’ll need the following details to fully connect with AWS:

  • The access key ID you created for the IAM user
  • The secret access key for the IAM user
  • The AWS region your metrics are in

Paste all these details into a text editor for later.

Spin up your instance and install Docker.
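On Amazon Linux 2, for example, that installation might look like this (other distributions will differ):

# Install and start Docker, then allow ec2-user to run it without sudo
sudo yum update -y
sudo amazon-linux-extras install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user   # log out and back in for this to take effect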

Finally, take those previous details and apply them to the following command:

docker run --name cloudwatch-metrics \
-e TOKEN=<<METRICS-TOKEN>> \
-e LOGZIO_REGION=<<LOGZIO_REGION>> \
-e AWS_REGION=<<AWS_REGION>> \
-e AWS_ACCESS_KEY_ID=<<AWS_ACCESS_KEY_ID>> \
-e AWS_SECRET_ACCESS_KEY=<<AWS_SECRET_ACCESS_KEY>> \
-e AWS_NAMESPACES=<<AWS_NAMESPACES>> \
logzio/cloudwatch-metrics
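Once the container is up, it’s worth tailing its logs to confirm it’s scraping CloudWatch without credential or namespace errors:

docker logs -f cloudwatch-metrics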

Customize Your Docker Solution Configuration

If you don’t want to use our default values and would rather use your own, here’s how to do it:

Custom Configuration: config.yml File

This is a much simpler YAML configuration file than you’d often find with something like docker-compose. In this case, all relevant parameters for OpenTelemetry and CloudWatch are included together. YAML files are often cumbersome for advanced configurations, so it’s an industry best practice to provide users with a template. Even then, beyond knowing when to add and remove comment markers, getting the settings right isn’t always intuitive.

Unlike in the quick start option, you can customize configurations for scrape_interval, scrape_timeout, and remote_timeout, among others. Custom CloudWatch configs and custom namespaces can also only be added here.

Let’s take a look at snippets from the .yml file. You can configure the file to export EC2 and EC2 Auto Scaling metrics. 

#config.yml sample file for Logz.io cloudwatch-metrics exporter

otel:
  # your logz.io region
  logzio_region: "us"
  # custom listener address
  custom_listener: ""
  # environment tag that will be attached to all samples
  p8s_logzio_name: "cloudwatch-metrics"
  # your logz.io metrics token
  token: ""
  # the time to wait between scrape requests
  scrape_interval: 300
  # the time to wait before throttling remote write post request to logz.io
  remote_timeout: 120
  # the time to wait before throttling a scrape request to cloudwatch exporter
  scrape_timeout: 120
  # opentelemetry log level
  log_level: "debug"
  # python script log level
  logzio_log_level: "info"
  # aws credentials
  AWS_ACCESS_KEY_ID: ""
  AWS_SECRET_ACCESS_KEY: ""
cloudwatch:
  # set to true if you are loading a custom configuration file for cloudwatch exporter
  custom_config: "false"
  # your cloudwatch aws region
  region: "us-east-1"
  # role arn to assume
  role_arn: ""
  # list of aws cloudwatch namespaces to monitor
  aws_namespaces: [AWS/EC2, AWS/AutoScaling]
  # The newest data to request. Used to avoid collecting data that has not fully converged
  delay_seconds: 300
  # how far back to request data for. Useful for cases such as Billing metrics that are only set every few hours
  range_seconds: 300
  # period to request the metric for. Only the most recent data point is used
  period_seconds: 300
  # boolean for whether to set the Prometheus metric timestamp as the original Cloudwatch timestamp
  set_timestamp: "false"

Finally, mount config.yml to the container:

docker run --name cloudwatch-metrics \
-v <<path_to_config_file>>:/config_files/config.yml \
logzio/cloudwatch-metrics

You can check the individual configurations of the exporter and OTel collector at the following addresses:

  • CloudWatch exporter: http://localhost:5001/config/cloudwatch
  • OpenTelemetry collector: http://localhost:5001/config/otel
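For example, assuming you published port 5001 when you started the container (e.g., with -p 5001:5001), you can fetch both from the host:

curl http://localhost:5001/config/cloudwatch
curl http://localhost:5001/config/otel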

We at Logz.io believe in using open source and contributing back to it, which is why we decided to contribute all of our prebuilt AWS configurations back to the CloudWatch exporter.

If you want to use your own CloudWatch configuration file, just run the following command:

docker run --name cloudwatch-metrics \
-e TOKEN=<<TOKEN>> \
-e LOGZIO_REGION=<<LOGZIO_REGION>> \
-e AWS_ACCESS_KEY_ID=<<AWS_ACCESS_KEY_ID>> \
-e CUSTOM_CONFIG=true \
-e AWS_SECRET_ACCESS_KEY=<<AWS_SECRET_ACCESS_KEY>> \
-v <<path_to_cloudwatch_config_file>>:/config_files/cloudwatch.yml \
logzio/cloudwatch-metrics
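As a rough sketch, a custom cloudwatch.yml for the CloudWatch exporter that collects a single EC2 metric might look like the following; see the exporter’s documentation for the full schema:

# Minimal cloudwatch.yml sketch: one metric from one namespace
region: us-east-1
metrics:
  - aws_namespace: AWS/EC2
    aws_metric_name: CPUUtilization
    aws_dimensions: [InstanceId]
    aws_statistics: [Average]
    period_seconds: 300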

Go to Logz.io

Just wait a few minutes and you should see your EC2 and/or EC2 Auto Scaling metrics appear in your Logz.io Infrastructure Monitoring account.

AWS CloudWatch and EC2 Dashboard for Logz.io

To customize that display further, use our prebuilt dashboards for EC2 metrics or Auto Scaling metrics.

For more information on shipping cloud metrics and logs to Logz.io, subscribe to the blog, where we cover AWS, Azure, and other in-demand services.

Get started for free

Completely free for 14 days, no strings attached.