AWS S3 buckets are a powerful and well-organized DevOps tool. Standing for Simple Storage Service, S3 is the entry-level tier of AWS storage, but it is also the most indispensable. S3 buckets store data for immediate recall, making them the most active components in Amazon's arsenal of storage options. They can hold a wide variety of developer assets, with individual objects of up to five terabytes each. In turn, you can monitor the buckets themselves, gleaning S3 CloudWatch metrics and logs from AWS.
Beyond standard S3 storage containers, there are progressively colder storage options (as the names Glacier, Snow, and Snowmobile make clear). S3 is known for its high availability because it stores data replicas to cover against downtime from network outages or hardware failures. In kind, AWS backs its storage with a very strong SLA (99.9% uptime). S3 also exposes both REST and SOAP interfaces.
S3 server access logs record requests made to each bucket. Beyond this, AWS records three types of S3 CloudWatch metrics: 1) request metrics (reported at 60-second intervals), 2) replication metrics, and 3) daily storage metrics (reported once per day).
Replication metrics are particularly useful, tracking 1) the number of objects pending replication, 2) the total size of objects pending replication, and 3) the maximum replication time.
Logz.io utilizes its Docker Metrics Collector for Amazon S3 metrics as well.
Configuring the Docker Collector for S3 Metrics
If you’re not already running Docker Metrics Collector, follow the steps below. If you are, stop the container, add aws to the LOGZIO_MODULES environment variable, then restart it. You can find the run command and all parameters in this procedure.
Set up an IAM user with the following permissions:
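The exact permission list isn't reproduced here, but a minimal IAM policy for CloudWatch-based metric collection might look like the following sketch. The action list is an assumption based on what Metricbeat's aws module typically needs, not an official list; check the Logz.io and Elastic documentation for your version.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricData",
        "cloudwatch:ListMetrics",
        "sts:GetCallerIdentity",
        "iam:ListAccountAliases"
      ],
      "Resource": "*"
    }
  ]
}
```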
Next, you’ll need a few details for configuring Metricbeat for S3 request metrics:
- The access key ID for the IAM user
- The secret access key for the IAM user
- Your metrics region
Paste these details into a text editor to make configuring the data request easier later on.
Enable S3 Request Metrics
Log into your AWS Management Console and open the Amazon S3 console. Then go to your bucket list (as it were) and select the bucket you want to enable metrics for.
Beneath the Management tab, select Requests, then click the edit icon and opt into request metrics (and, optionally, storage metrics and data transfer metrics if you haven’t already selected them). Then save your configuration. AWS advises it can take up to 15 minutes for metrics to start flowing.
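The console steps above can also be scripted. Here is a hedged sketch using the AWS CLI; the bucket name and the EntireBucket configuration ID are example values, and the filter-free (whole-bucket) configuration is an assumption:

```shell
# Enable request metrics for the whole bucket (no prefix or tag filter).
# "my-bucket" and the "EntireBucket" id are placeholder values.
aws s3api put-bucket-metrics-configuration \
  --bucket my-bucket \
  --id EntireBucket \
  --metrics-configuration '{"Id": "EntireBucket"}'

# Confirm the configuration was stored.
aws s3api get-bucket-metrics-configuration \
  --bucket my-bucket \
  --id EntireBucket
```

These commands require AWS credentials with s3:PutMetricsConfiguration and s3:GetMetricsConfiguration permissions on the bucket.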
Pull and Run Docker
Download the Logz.io Docker Metrics Collector image:
docker pull logzio/docker-collector-metrics
Finally, run the container with the following variables in a docker run command. For every value marked “<<…>>”, replace the placeholder (including the angle brackets) with the relevant information, keeping the surrounding quotation marks:

docker run --name docker-collector-metrics \
--env LOGZIO_TOKEN="<<SHIPPING-TOKEN>>" \
--env LOGZIO_MODULES="aws" \
--env AWS_ACCESS_KEY="<<ACCESS-KEY>>" \
--env AWS_SECRET_KEY="<<SECRET-KEY>>" \
--env AWS_REGION="<<AWS-REGION>>" \
--env AWS_NAMESPACES="<<NAMESPACES>>" \
logzio/docker-collector-metrics
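Once the container starts, you can sanity-check it with standard Docker commands (the exact log lines you'll see depend on the collector version, so none are shown here):

```shell
# Confirm the container is running.
docker ps --filter name=docker-collector-metrics

# Inspect the most recent startup output for module-load errors.
docker logs --tail 50 docker-collector-metrics
```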
Logz.io and AWS Parameters
There are a number of parameters to set between Logz.io and AWS; some are optional. Of the following Logz.io parameters, the first two are required:
LOGZIO_TOKEN—This is your Logz.io account token. Replace <<SHIPPING-TOKEN>> with the token of the account you want to ship to.
LOGZIO_URL—This is the Logz.io listener host to ship the metrics to. Replace <<LISTENER-HOST>> with your region’s listener host (for example, listener.logz.io). For more information on finding your account’s region, see Account region.
LOGZIO_MODULES—This is a comma-separated list of Metricbeat modules to be enabled on this container (formatted as module1,module2,module3). To use a custom module configuration file, mount its folder into the container.
LOGZIO_TYPE—The log type you’ll use with this Docker container, shown under the type field in Kibana; Logz.io applies parsing based on this type. In this instance, it will be something like docker-collector-metrics.
LOGZIO_LOG_LEVEL—This is the log level at which the module startup scripts will log.
LOGZIO_EXTRA_DIMENSIONS—A semicolon-separated list of additional fields to be included with each message sent (formatted as fieldName1=value1;fieldName2=value2). To use an environment variable as a value, format it as fieldName=$ENV_VAR_NAME. An environment variable must be the only value in its field; where an environment variable can’t be resolved, the field is omitted.
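To make those resolution rules concrete, here is a small bash sketch (my own illustration, not Logz.io code) that mimics the behavior described: literal values pass through, $ENV_VAR values are substituted, and unresolvable fields are dropped.

```shell
#!/usr/bin/env bash
# Illustration only: mimics LOGZIO_EXTRA_DIMENSIONS resolution as described above.
resolve_dims() {
  local input="$1" out="" pair name val ref
  IFS=';' read -r -a pairs <<< "$input"
  for pair in "${pairs[@]}"; do
    name="${pair%%=*}"
    val="${pair#*=}"
    if [[ "$val" == \$* ]]; then
      ref="${val#\$}"
      # Drop the field entirely when the variable is unset or empty.
      [[ -n "${!ref}" ]] && out+="${name}=${!ref};"
    else
      out+="${name}=${val};"
    fi
  done
  printf '%s' "${out%;}"
}

export MY_REGION="us-east-1"
resolve_dims 'env=prod;region=$MY_REGION;gone=$UNSET_VAR'
# → env=prod;region=us-east-1
```

Note that the "gone" field disappears from the output because $UNSET_VAR cannot be resolved, matching the omission behavior described above.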
The AWS module parameters are as follows; all four are required:
AWS_ACCESS_KEY—Your IAM user’s access key ID.
AWS_SECRET_KEY—Your IAM user’s secret key.
AWS_NAMESPACES—Comma-separated list of namespaces of the metrics you want to collect. For S3, this will be AWS/S3.
AWS_REGION—Your region’s slug. You can find this in the AWS region menu (in the top menu, to the right).
Go to Logz.io
Wait a few minutes and you should start seeing AWS CloudWatch metrics for your S3 buckets flowing into Logz.io.
For more information on shipping cloud metrics and logs to Logz.io, subscribe to the blog for updates on AWS, Azure, and other in-demand services.