AWS Route 53 Logging with Logz.io and the ELK Stack


Route 53 is Amazon's Domain Name System (DNS) service (the name, of course, is a reference to TCP/UDP port 53, which DNS requests are addressed to). Route 53 allows users not only to route traffic to application resources or AWS services, but also to register domain names and perform health checks.

The ability to log DNS queries routed by Route 53 was introduced by AWS in September 2017. Once enabled, this feature forwards Route 53 query logs to CloudWatch, where users can search, export or archive the data. This is useful for a number of use cases, primarily troubleshooting, but also security and business intelligence.

Once in CloudWatch, what next? For aggregation, analysis and visualization, Route 53 query logs can be exported to an AWS storage or streaming service such as S3 or Kinesis. Another option is to use a third-party platform, and this article will explore exporting the logs into the ELK Stack.

[Image: Route 53 query logs dashboard]

Route 53 Query Logs 101

As mentioned above, all queries running through Route 53 can be logged and forwarded to CloudWatch for further inspection. This capability is not turned on by default and needs to be enabled.

One CloudWatch log stream is created by Route 53 for each edge location responding to a DNS query, and query logs are sent to the relevant log stream.

The query logs themselves contain the following fields:

  • Log format – the version number of this query log.
  • Timestamp – the date and time that Route 53 responded to the request (in ISO 8601 format).
  • Hosted zone ID – the ID of the hosted zone associated with all DNS queries in the log.
  • Query name – the domain or subdomain specified in the request.
  • Query type – the DNS record type specified in the request, or ANY.
  • Response code – the DNS response code returned by Route 53.
  • Protocol – the protocol used to submit the query (TCP or UDP).
  • Route 53 edge location – the edge location that responded to the query.
  • Resolver IP address – the IP address of the DNS resolver that submitted the request to Route 53.
  • EDNS client subnet – a partial IP address for the client that the request originated from, if available from the DNS resolver.

Here’s an example log:

1.0 2018-03-08T15:19:21Z Z22T9V0GPE9FH3 www.danieldemosite.com A NOERROR UDP ATL52 52.23.175.171 -
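
To make the format concrete, here's a minimal parsing sketch in Python (purely illustrative; the field names are my own shorthand for the list above) that splits such a line into its ten fields:

# A minimal sketch: split a Route 53 query log line into its
# documented fields. The EDNS client subnet is "-" when the
# resolver does not supply it.
line = ("1.0 2018-03-08T15:19:21Z Z22T9V0GPE9FH3 "
        "www.danieldemosite.com A NOERROR UDP ATL52 52.23.175.171 -")

fields = [
    "log_format", "timestamp", "hosted_zone_id", "query_name",
    "query_type", "response_code", "protocol", "edge_location",
    "resolver_ip", "edns_client_subnet",
]
record = dict(zip(fields, line.split()))
print(record["query_name"], record["response_code"])
# www.danieldemosite.com NOERROR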

By default, CloudWatch stores Route 53 query logs forever, but you can configure retention as with any log data shipped to CloudWatch. Query logging for Route 53 can also be turned off just as easily as it was turned on.
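
For example, here is a minimal sketch using boto3 (the AWS SDK for Python) that caps retention at 30 days. The log group name matches the one created later in this walkthrough; put_retention_policy accepts any of CloudWatch's supported retention values:

import boto3

# Sketch: limit retention of the Route 53 query log group to 30 days.
# Route 53 query logs are always delivered to us-east-1.
logs = boto3.client("logs", region_name="us-east-1")
logs.put_retention_policy(
    logGroupName="/aws/route53/danieldemosite",
    retentionInDays=30,
)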

Enabling Route 53 Query Logging

Your first step is to enable query logging.

Assuming you already have a Hosted Zone and a registered domain in Route 53 (read more about how to set this up here), open the Route 53 console and select your Hosted Zone and your domain.

[Image: selecting the Hosted Zone and domain]

At the bottom of the panel that slides open on the right, click Configure query logging.

You will now be asked to configure a CloudWatch log group to send the logs to. You can select an existing log group or create a new one.

[Image: setting up the CloudWatch log group]

As shown in the image above, I'm going to create a new log group and enter its name: /aws/route53/danieldemosite.

Once you click Create log group, a success message is displayed and you are then asked to configure permissions. You can select existing resource policies or create new ones.

[Image: success message and permissions configuration]

I’m going to create a new resource policy, specifying the policy name and the log group it applies to. When done, just click Create policy and test permissions.

The policy is created and the permissions tested. If all goes well, you will see another green success message, meaning it’s now time to check whether query logs are being sent into CloudWatch.

[Image: the new log group in CloudWatch]

You should see a new log group with the name you chose, containing some test messages.
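
If you prefer scripting the setup over clicking through the console, the whole flow can be sketched with boto3 as well. This mirrors the console steps above (create the log group, attach a resource policy allowing Route 53 to write to it, then enable query logging); the account ID, policy name and hosted zone ID below are placeholders:

import json
import boto3

# Route 53 query logging requires the log group to live in us-east-1.
logs = boto3.client("logs", region_name="us-east-1")
route53 = boto3.client("route53")

log_group = "/aws/route53/danieldemosite"
logs.create_log_group(logGroupName=log_group)

# Resource policy letting the Route 53 service write query logs.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/route53/*",
    }],
}
logs.put_resource_policy(
    policyName="route53-query-logging",
    policyDocument=json.dumps(policy),
)

# Turn query logging on for the hosted zone (placeholder zone ID).
route53.create_query_logging_config(
    HostedZoneId="ZXXXXXXXXXXXXX",
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:123456789012:log-group:" + log_group
    ),
)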

Extracting data from CloudWatch

Great, Route 53 query logs are being sent into CloudWatch. Now what?

CloudWatch can be a useful tool for analyzing data, but only up to a point. Properly querying and visualizing the data is not possible, not to mention processing it for easier analysis. That's why many CloudWatch users export the data to another service or a third-party platform such as the ELK Stack.

If you're using your own ELK Stack, one way to go about it is to export the logs to S3.

You can perform a manual batch export or configure a Kinesis Firehose stream as a subscription target for the CloudWatch log group in question. From S3, you can then ship the query logs into the ELK Stack using the Logstash S3 input plugin. Your Logstash configuration will need to collect the data from the S3 bucket, process the query logs and forward them to Elasticsearch for indexing.
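
For the manual batch route, a rough boto3 sketch might look like the following. The bucket and prefix are placeholders, and the bucket needs a policy granting the CloudWatch Logs service write access:

import time
import boto3

# Sketch: batch-export the last 24 hours of Route 53 query logs to S3.
logs = boto3.client("logs", region_name="us-east-1")

now_ms = int(time.time() * 1000)
logs.create_export_task(
    logGroupName="/aws/route53/danieldemosite",
    fromTime=now_ms - 24 * 60 * 60 * 1000,  # timestamps in ms since epoch
    to=now_ms,
    destination="bucketName",               # S3 bucket, placeholder
    destinationPrefix="route53-exports",
)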

Here is an example of what that Logstash configuration might look like:

input {
  s3 {
    type => "route_53"
    bucket => "bucketName"
    region => "us-east-1"
    access_key_id => "aws_access_key"
    secret_access_key => "aws_secret_key"
  }
}

filter {
  # Parse the space-delimited Route 53 query log fields
  grok {
    match => { "message" => "%{NUMBER:log_format} %{TIMESTAMP_ISO8601:timestamp} %{DATA:hosted_zone} %{DATA:query_name} %{WORD:query_type} %{WORD:response_code} %{WORD:protocol} %{WORD:edge_location} %{IP:resolver_ip} (%{IP:edns_client}/%{NUMBER:bit}|-)" }
  }
  # Use the query's own timestamp as the event timestamp
  date {
    match => ["timestamp", "ISO8601"]
  }
  # Geo-enrich the EDNS client and resolver IPs
  geoip {
    source => "edns_client"
    target => "edns_client_geoip"
  }
  geoip {
    source => "resolver_ip"
    target => "resolver_ip_geoip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

Don’t forget to update the Logstash index template to map the edns_client_geoip and resolver_ip_geoip fields as geo_point.
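
For reference, here is a hedged sketch of such a template update, sent to Elasticsearch with Python's requests library. It assumes Elasticsearch 6.x (where Logstash indexes documents under the doc mapping type), the default logstash-* index pattern, and a template name of my own choosing; the exact mappings nesting differs between versions:

import requests

# Merge geo_point mappings on top of the default Logstash template.
template = {
    "index_patterns": ["logstash-*"],
    "order": 1,  # higher order so it merges over the default template
    "mappings": {
        "doc": {
            "properties": {
                "edns_client_geoip": {
                    "properties": {"location": {"type": "geo_point"}}
                },
                "resolver_ip_geoip": {
                    "properties": {"location": {"type": "geo_point"}}
                },
            }
        }
    },
}

resp = requests.put("http://localhost:9200/_template/route53_geoip", json=template)
print(resp.json())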

Shipping into Logz.io

A few months ago we introduced a Lambda function that ships data from a specified CloudWatch log group into Logz.io's hosted ELK. The function is available in the new AWS Serverless Application Repository.

I won't go through the entire process of creating the Lambda function here; full instructions are available on GitHub, in the AWS repo, and in this article we published.

Two points are worth highlighting:

  • After adding the function code, you need to define environment variables. Be sure to set the value for the TYPE variable to ‘route_53’. This will ensure Logz.io applies automatic parsing for these logs.

[Image: Lambda environment variables]

  • As the trigger, be sure to select your Route 53 log group.

[Image: selecting the Route 53 log group as the trigger]

I recommend performing a test to make sure the pipeline is solid. Be sure to select CloudWatch logs as the test event.

[Image: Lambda execution results]

Within a few minutes, you should see Route 53 query logs appearing in Logz.io. Logz.io applies parsing to these logs automatically, so you can begin analysis right away.

[Image: Route 53 query logs in Logz.io]

One last step before you can begin to analyze the data is to configure field mapping for the two geographical fields – resolver_geoip.location and client_geoip.location. This is easily done on the Field Mapping page, under Settings.

[Image: field mapping settings]

Analyzing Route 53 query logs

Once in Kibana, you can begin to analyze the data. As a first step, familiarize yourself with the different available fields (see the descriptions above or the AWS docs for reference). Adding fields to the main display area will give you better visibility into the data and a better understanding of what each field represents.

In the screenshot below, I’ve added the query_name and response_code fields.

[Image: the query_name and response_code fields in Kibana]

Querying the data, you can pinpoint specific queries handled by Route 53. For example, here is a field-level search for TXT queries originating from the United States:

edns_client_geoip.country_name:"United States" AND query_type:TXT

You can use the Filter dialog to build filters and queries more easily.

[Image: filtering on the protocol field]

Visualizing Route 53 query logs

Once you’ve got a clearer picture of the data, you can begin to plan how you want to visualize it. Kibana has some rich visualization capabilities, so you can slice and dice the data in any way you want.

Here are some basic examples of how to visualize the data.

Query types over time

DNS record types indicate the format of the data in a query and can therefore be useful for understanding intended usage. Using the query_type field, we can build a bar chart visualization that depicts the different query types over time.

[Image: query types over time]

Coordinate map for query origins

Route 53 query logs can contain two IPs that are geo-enriched by Logz.io’s parsing — that of the DNS resolver and that of the EDNS client. The latter is only available if supplied by the DNS resolver according to the protocol in place.

The EDNS client provides a good indication of where queries are originating from, but the resolver IP can also give a general indication of the approximate geographical location. Both can be used for coordinate map visualizations.

Resolver IP

[Image: coordinate map of resolver IPs]

EDNS client

[Image: coordinate map of EDNS clients]

Query name breakdown

It's always useful to see a breakdown of the actual queries being made over time. As seen below, I'm using an area chart visualization based on the query_name field to monitor queries to my pretty basic demo site.

[Image: query name breakdown]

Once you've got your visualizations lined up and ready, add them to a dashboard. The result is a nice general overview of the queries handled by Route 53.

[Image: Route 53 query dashboard]

This dashboard is available in ELK Apps, Logz.io's library of pre-made dashboards and visualizations for different log types. Simply open ELK Apps, search for Route 53, and install the dashboard.

Endnotes

Monitoring traffic between client devices and DNS resolvers can reveal a wealth of information useful for forensic analysis and security. Abnormal numbers of queries going through specific resolvers, or originating from specific clients, could indicate malicious behavior such as bots, malware or even DDoS attacks.

Not being able to analyze and visualize this data greatly inhibits your ability to make use of these logs, and that is precisely where the ELK Stack comes into the picture, providing the tools to efficiently dissect the logs.

To seal the deal and be more proactive, you also need to implement alerting, which does not exist in the ELK Stack out of the box. To create alerting rules and get notified when something out of the ordinary is taking place, you will need to either install X-Pack or use an open source plugin. Logz.io provides a built-in, powerful alerting engine that will notify you, in real time, when specific conditions that you define are met. Read more about Logz.io alerts here.

See alerts and other great Logz.io features in action!