AWS S3 buckets are an indisputably powerful and well-organized DevOps tool. Short for Simple Storage Service, S3 is the entry-level tier of AWS storage, but it is also the most indispensable. S3 buckets store data for immediate recall, making them the most active components in Amazon’s arsenal of storage options. A bucket can hold a virtually unlimited amount of data for a variety of developer applications, with individual objects up to five terabytes each. In turn, you can monitor the buckets themselves, gleaning AWS S3 CloudWatch metrics and logs from AWS.

Beyond S3 storage containers, there are progressively colder storage options (whose names, Glacier, Snow, and Snowmobile, make that clear). S3 is known for its high availability because it stores replicas of your data to cover against downtime from network outages or hardware problems. In kind, AWS backs its storage with a very strong SLA (99.9% uptime). S3 also offers REST and SOAP interfaces.

S3 server access logs record the requests made to each bucket. Beyond this, AWS records three types of S3 CloudWatch metrics: 1) request metrics (reported at 60-second intervals once enabled), 2) replication metrics, and 3) daily storage metrics (reported once daily).
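If you have the AWS CLI installed and configured, you can preview which of these S3 metrics CloudWatch currently exposes for your account. This is a sketch that requires valid credentials; it skips gracefully if the CLI is absent:

```shell
# Preview the S3 metrics available in CloudWatch for this account.
# Daily storage metrics appear as BucketSizeBytes and NumberOfObjects;
# request metrics (AllRequests, GetRequests, 4xxErrors, ...) show up
# only after you enable them per bucket.
NS="AWS/S3"
if command -v aws >/dev/null 2>&1; then
  aws cloudwatch list-metrics --namespace "$NS" --output table \
    || echo "could not reach CloudWatch (check your credentials)"
else
  echo "aws CLI not found; install and configure it to preview $NS metrics"
fi
```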

Replication metrics can be very interesting, monitoring 1) the number of objects pending replication, 2) the size of objects pending replication, and 3) maximum replication time. Logz.io utilizes its Docker Metrics Collector for Amazon S3 metrics as well.

Configuring the Docker Collector for S3 Metrics

If you’re already running Docker Metrics Collector, stop the container, add aws to the LOGZIO_MODULES environment variable, then restart. If you’re not running it yet, follow the steps below.
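The stop-and-restart path can be sketched as follows. This assumes the container name used in this guide; it exits gracefully if the container (or Docker itself) isn’t running:

```shell
# Restart an already-running collector with the aws module enabled.
NAME="docker-collector-metrics"
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${NAME}\$"; then
  docker stop "$NAME" && docker rm "$NAME"
  echo "now re-run your docker run command with aws added to LOGZIO_MODULES"
else
  echo "collector not running; use the full docker run command in this guide"
fi
```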

You can find the run command and all parameters in this procedure.

Set up an IAM user with the following permissions:

  • cloudwatch:GetMetricData
  • cloudwatch:ListMetrics
  • ec2:DescribeRegions
  • iam:ListAccountAliases
  • sts:GetCallerIdentity
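For reference, the permissions above can be expressed as a single IAM policy document to attach to the user. This is a sketch: the file name is arbitrary, and Resource is left broad for simplicity; scope it down to suit your environment.

```shell
# Write the IAM policy granting exactly the permissions listed above,
# then sanity-check that the JSON is well-formed.
POLICY_FILE="s3-metrics-policy.json"
cat > "$POLICY_FILE" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "ec2:DescribeRegions",
        "iam:ListAccountAliases",
        "sts:GetCallerIdentity"
      ],
      "Resource": "*"
    }
  ]
}
EOF
python3 -m json.tool "$POLICY_FILE" > /dev/null && echo "policy OK"
```

Attach it in the IAM console, or with `aws iam put-user-policy` if you prefer the CLI.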

Next, you’ll need a few details for configuring Metricbeat for S3 request metrics:

  • An access key ID for the IAM user
  • The secret access key for the IAM user
  • Your metrics region

Paste all these details into a text editor to make configuring the data request easier later on.

Enable S3 Request Metrics

Log into your AWS Management Console and open the Amazon S3 console. Then go to your bucket list (as it were) and select the bucket whose metrics you want to enable.

Under the Management tab, select Metrics, choose Requests, then click the edit icon and opt into request metrics (and, optionally, either or both of storage metrics and data transfer metrics if you haven’t selected them already).

Then save your configuration. AWS advises it might take up to 15 minutes for metrics to start flowing here.
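The console steps above can also be done from the AWS CLI. In this sketch, the bucket name is hypothetical and “EntireBucket” is just a metrics-configuration ID of your choosing; the block skips gracefully if the CLI is absent:

```shell
# Enable request metrics on a bucket via the AWS CLI (s3api).
BUCKET="my-example-bucket"
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-metrics-configuration \
    --bucket "$BUCKET" \
    --id EntireBucket \
    --metrics-configuration '{"Id": "EntireBucket"}' \
    || echo "request failed (check credentials and bucket name)"
else
  echo "aws CLI not found; enable request metrics in the console instead"
fi
```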

Pull and Run Docker

Download the Docker Metrics Collector image here:

docker pull logzio/docker-collector-metrics

Finally, run the container with the following variables in a docker run command. For everything marked with “<<…>>”, replace all content between the quotation marks with the relevant information:

docker run --name docker-collector-metrics \
--env LOGZIO_TOKEN="<<SHIPPING-TOKEN>>" \
--env LOGZIO_MODULES="aws" \
--env AWS_ACCESS_KEY="<<ACCESS-KEY>>" \
--env AWS_SECRET_KEY="<<SECRET-KEY>>" \
--env AWS_REGION="<<AWS-REGION>>" \
--env AWS_NAMESPACES="<<AWS-NAMESPACES>>" \
logzio/docker-collector-metrics

Logz.io and AWS Parameters

There are a number of parameters to set between Logz.io and AWS; some are optional. Of the following Logz.io parameters, the first two are required:

  1. LOGZIO_TOKEN—This is your Logz.io account token. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
  2. LOGZIO_URL—This is the listener host to ship the metrics to. Replace <<LISTENER-HOST>> with your region’s listener host. For more information on finding your account’s region, see Account region.
  3. LOGZIO_MODULES—This is a comma-separated list of Metricbeat modules to be enabled on this container (formatted as module1,module2,module3). To use a custom module configuration file, mount its folder to /logzio/logzio_modules.
  4. LOGZIO_TYPE—The log type you’ll use with this Docker container; in this instance, it will be something like docker-collector-metrics. This is shown under the type field in Kibana, and Logz.io applies parsing based on type.
  5. LOGZIO_LOG_LEVEL—This is the log level the module startup scripts will generate.
  6. LOGZIO_EXTRA_DIMENSIONS—A semicolon-separated list of additional fields to be included with each message sent (formatted as fieldName1=value1;fieldName2=value2). To use an environment variable as a value, format as fieldName=$ENV_VAR_NAME. Environment variables must be the only value in the field. Where an environment variable can’t be resolved, the field is omitted.
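The LOGZIO_EXTRA_DIMENSIONS rules can be demonstrated with a few lines of shell. This is a minimal sketch, with hypothetical field names, of the documented behavior: an environment variable must be the entire value, and a field whose variable can’t be resolved is omitted.

```shell
# Sketch of the fieldName1=value1;fieldName2=value2 format.
export REGION_NAME="us-east-1"
dims='service=s3;region=$REGION_NAME;missing=$NO_SUCH_VAR'

OLD_IFS=$IFS
IFS=';'
for pair in $dims; do
  key=${pair%%=*}
  val=${pair#*=}
  case $val in
    \$*) val=$(eval "printf '%s' \"${val}\"") ;;  # resolve $ENV_VAR_NAME
  esac
  [ -n "$val" ] && echo "$key=$val"   # unresolved fields drop out
done
IFS=$OLD_IFS
# prints:
#   service=s3
#   region=us-east-1
```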

The AWS module parameters are the following, and all four are required:

  1. AWS_ACCESS_KEY—Your IAM user’s access key ID.
  2. AWS_SECRET_KEY—Your IAM user’s secret key.
  3. AWS_NAMESPACES—Comma-separated list of namespaces of the metrics you want to collect. For S3, this will be AWS/S3.
  4. AWS_REGION—Your region’s slug. You can find this in the AWS region menu (in the top menu, to the right).

Go to Logz.io

Wait a few minutes and you should start seeing AWS CloudWatch metrics for your S3 buckets flowing into Logz.io.
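If nothing shows up, a quick first check is the collector’s own logs. The container name below comes from the docker run command above; the sketch exits gracefully if Docker isn’t available on this machine:

```shell
# Tail the collector's logs to spot authentication or module errors.
CONTAINER="docker-collector-metrics"
if command -v docker >/dev/null 2>&1; then
  docker logs --tail 50 "$CONTAINER" 2>&1 \
    || echo "container not found; re-check the docker run step"
else
  echo "docker not found on this machine"
fi
```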

For more on shipping cloud metrics and logs to Logz.io, subscribe to the blog for info on AWS, Azure, and other in-demand services.
