A Kibana Tutorial: Getting Started

Daniel Berman
What is Kibana?

Kibana is the visualization layer of the ELK Stack, the world's most popular log analysis platform, which consists of Elasticsearch, Logstash, and Kibana.


This tutorial will guide you through some of the basic steps for getting started with Kibana — installing Kibana, defining your first index pattern, and running searches. Examples are provided throughout, as well as tips and best practices.

Installing Kibana

Presuming you already have Elasticsearch installed and configured, we will start with installing Kibana. If you want to find out more about installing Elasticsearch, check out this Elasticsearch tutorial.
Depending on your operating system and your environment, there are various ways of installing Kibana. We will be installing Kibana on an Ubuntu 16.04 machine running on AWS EC2 on which Elasticsearch and Logstash are already installed.
Start by downloading and installing the Elastic public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the repository definition:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

It's worth noting that there is another package that contains only the features available under the Apache 2.0 license. To install it, use:

echo "deb https://artifacts.elastic.co/packages/oss-7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

All that’s left to do is to update your repositories and install Kibana:

sudo apt-get update
sudo apt-get install kibana

Open the Kibana configuration file at /etc/kibana/kibana.yml and make sure the following settings are defined:

server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]

These settings tell Kibana which port to serve on and which Elasticsearch instance to connect to.
Now, start Kibana with:

sudo service kibana start

Open up Kibana in your browser with: http://<yourServerIP>:5601. You will be presented with the Kibana home page.
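Before moving on, you can also verify from the command line that Kibana is up by querying its status API. This is a quick sanity check, assuming Kibana is reachable on localhost:

```shell
# Query Kibana's status endpoint; a running instance returns a JSON
# document describing the overall state and the state of its plugins
curl -s http://localhost:5601/api/status
```

If the command hangs or returns a connection error, check that the kibana service started successfully and that port 5601 is open.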

Defining an index pattern

Your next step is to define a new index pattern, or in other words, tell Kibana what Elasticsearch index to analyze. To do that you will of course need to have data indexed.
For the purpose of this tutorial, we’ve prepared some sample data containing Apache access logs that is refreshed daily. You can download the data here: https://logz.io/sample-data
Next, we will use Logstash to collect, parse and ship this data into Elasticsearch. If you haven’t installed Logstash yet, or are not familiar with how to use it, check out this Logstash tutorial.
Create a new Logstash configuration file at: /etc/logstash/conf.d/apache-01.conf:

sudo vim /etc/logstash/conf.d/apache-01.conf

Enter the following Logstash configuration (change the path to the file you downloaded accordingly):

input {
  file {
    # Change the path to wherever you saved the sample data
    path => "/home/ubuntu/apache-daily-access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  grok {
    # Parse each line as a combined Apache access log entry
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # Use the log's own timestamp as the event timestamp
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    # Enrich events with geographic data based on the client IP
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

Start Logstash with:

sudo service logstash start

If all goes well, a new index will be created in Elasticsearch, the pattern of which can now be defined in Kibana.
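To confirm that the index was indeed created, you can query Elasticsearch's cat API directly (this assumes Elasticsearch is listening on localhost:9200):

```shell
# List all indices in the cluster, with headers;
# a new logstash-* index should appear in the output
curl -s 'http://localhost:9200/_cat/indices?v'
```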
In Kibana, go to Management → Kibana Index Patterns, and Kibana will automatically identify the new “logstash-*” index pattern.  
Define it as "logstash-*", and in the next step select @timestamp as your Time Filter field.
Hit Create index pattern, and you are ready to analyze the data. Go to the Discover tab in Kibana to take a look at the data (set the time range to today instead of the default last 15 minutes).

Using Kibana in Logz.io

If you're using Logz.io, simply use this cURL command to upload the sample log data. Logz.io listeners will parse the data automatically, so there is no need to configure Logstash (the token can be found on the Settings page in the Logz.io UI, and the type of the file is apache_access):

curl -T <Full path to file> http://listener.logz.io:8021/file_upload/<Token>/apache_access

Kibana Searching

Kibana querying is an art unto itself, and there are various methods for performing searches on your data. This section describes some of the most common search methods, along with tips and best practices worth keeping in mind for an optimized search experience.

KQL and Lucene

Up until version 6.2, the only way to query in Kibana was using the Lucene syntax. Version 6.2 introduced a new query language called Kuery, now known as KQL (Kibana Query Language), to improve the search experience.
Since version 7.0, KQL is the default language for querying in Kibana but you can revert to Lucene if you like. For the basic example below, there will be little difference in the search results.

Free-Text Search

Free text search works within all fields — including the _source field, which includes all the other fields. If no specific field is indicated in the search, the search will be done on all of the fields that are being analyzed.
In the search field at the top of the Discover page, run these searches and examine the results (set the time parameter at the top right of the dashboard to the past month to capture more data):

  • category
  • Category
  • categ
  • cat*
  • categ?ry
  • “category”
  • category\/health
  • “category/health”
  • Chrome
  • chorm*
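The Kibana search bar is not the only way to run these queries; the same Lucene syntax is accepted by Elasticsearch's URI search API, which can be handy for checking a query outside of Kibana. A minimal sketch, assuming Elasticsearch on localhost:9200 and the logstash-* index created earlier:

```shell
# Run the free-text query "category" against the logstash-* index.
# -G turns the --data-urlencode/-d parameters into URL query parameters.
curl -s -G 'http://localhost:9200/logstash-*/_search' \
  --data-urlencode 'q=category' \
  -d 'size=1' -d 'pretty'
```

The response includes a total hit count, so you can compare how the different queries above behave.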

Tips and gotchas

  1. Text searches are not case sensitive, which means that category and CaTeGory will return the same results. When you put the text within double quotes (“”), you are looking for an exact match, meaning the exact string must match what is inside the double quotes. This is why [category\/health] and [“category/health”] return different results.
  2. Kibana wildcard searches – you can use the wildcard symbols [*] or [?] in searches. [*] means any number of characters, and [?] means only one character

Field-Level Searches

Another common search in Kibana is the field-level query, used for searching for data inside specific fields. To run this type of search, use the following format:

<fieldname>:search

As before, run the following searches to see what you get (some will purposely return no results):

  • name:chrome
  • name:Chrome
  • name:Chr*
  • response:200
  • bytes:65
  • bytes:[65 TO *]
  • bytes:[65 TO 99]
  • bytes:{65 TO 99}
  • _exists_:name
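Field-level and range queries can likewise be sent straight to Elasticsearch's URI search API. A sketch under the same assumptions as before (local Elasticsearch, logstash-* index):

```shell
# Documents with a response of 200 and a bytes value between 65 and 99,
# inclusive; --data-urlencode handles the spaces and brackets in the query
curl -s -G 'http://localhost:9200/logstash-*/_search' \
  --data-urlencode 'q=response:200 AND bytes:[65 TO 99]' \
  -d 'size=1' -d 'pretty'
```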

Tips and gotchas

  1. Field-level searches depend on the type of field. (Logz.io users – fields are not analyzed by default, which means that searches are case-sensitive and cannot use wildcards. We save fields as “not analyzed” to save space in the index, since the data is also duplicated in an analyzed field called _source)
  2. You can search a range within a field. If you use [], this means that the results are inclusive. If you use {}, this means that the results are exclusive.
  3. Using the _exists_ prefix for a field will search the documents to see if the field exists
  4. When using a range, you need to follow a very strict format and use capital letters TO to specify the range

Logical Statements

You can use logical statements in searches in these ways:

  • USA AND Firefox
  • USA OR Firefox
  • (USA AND Firefox) OR Windows
  • -USA
  • !USA
  • +USA
  • NOT USA

Tips and gotchas

  1. Logical operators such as AND, OR, and NOT must be written in capital letters
  2. You can use parentheses to define complex, logical statements
  3. You can use -, ! and NOT to define negative terms

Kibana special characters

All special characters need to be properly escaped. The following is a list of all available special characters:

+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /


Proximity searches

Proximity searches are an advanced feature of Kibana that takes advantage of the Lucene query language.
A search for [categovi~2] returns all terms within two single-character changes (insertions, deletions, or substitutions) of [categovi], which means that the term category will be matched. Strictly speaking, this single-term form is Lucene fuzzy matching; applying the same ~ operator to a quoted phrase, such as ["kibana tutorial"~5], performs a proximity search.

Tips and gotchas

Proximity searches use a lot of system resources and often trigger internal circuit breakers in Elasticsearch. Also note that Elasticsearch only supports fuzzy edit distances of up to 2, so a search such as [catefujt~10] is likely to be rejected or return no results.

Kibana Autocomplete

To help improve the search experience in Kibana, the autocomplete feature suggests search syntax as you enter your query. As you type, relevant fields are displayed and you can complete the query with just a few clicks. This speeds up the whole process and makes Kibana querying a whole lot simpler.

Kibana Filtering

To assist users in searches, Kibana includes a filtering dialog that allows easier filtering of the data displayed in the main view.
To use the dialog, simply click the Add a filter + button under the search box and begin experimenting with the conditionals. Filters can be pinned to the Discover page, named using custom labels, enabled/disabled and inverted.
Power users can also enter Elasticsearch queries using the Query DSL.
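As a sketch of what such a filter might look like, the following Query DSL snippet matches documents whose country is United States and whose bytes value falls within an inclusive range. The field names assume the Apache sample data enriched by the geoip filter above:

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "geoip.country_name": "United States" } },
        { "range": { "bytes": { "gte": 65, "lte": 99 } } }
      ]
    }
  }
}
```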

Summary

This tutorial covered the basics of setting up and using Kibana and provided the steps for setting up a test environment only. Tracking Apache access logs in production, for example, is better done using Filebeat and the supplied Apache module. The sample data provided can, of course, be replaced with other types of data, as you see fit. Kibana also provides sets of sample data to play around with, including flight data and web logs.
If you have any suggestions on what else should be included in the first part of this Kibana tutorial, please let me know in the comments below. The next Kibana tutorial will cover visualizations and dashboards.
Enjoy!
