DevOps Is Changing from Solving Problems to Driving Business

DevOps has traditionally played important roles in development and IT operations, but the practice is quickly becoming core to other business functions such as customer success, business intelligence, and marketing analytics.

Today's marketers are driven by data and rely on many different analytics tools. To do their jobs well, they need DevOps engineers in general and server log data in particular. Here's why: server log files contain the only completely accurate record of how search engines such as Google crawl a website.

If a search engine spider encounters an error and fails to load a page, the webmaster never knows, because traditional traffic analytics tools such as Google Analytics do not track those issues. Log file data, on the other hand, reveals exactly what problems bots are encountering on a website – and many of those issues can hurt a site’s appearance and rankings in Google.
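To make that concrete, here is a small Python sketch showing what a single access-log entry in the common “combined” format reveals – the exact URL a bot requested, the status code it received, and the bot that asked for it – none of which shows up in Google Analytics. The log line itself is a hypothetical example:

```python
import re

# A hypothetical access-log line in the Apache/Nginx "combined" format.
sample_line = (
    '66.249.66.1 - - [10/Mar/2015:06:25:24 +0000] '
    '"GET /old-landing-page HTTP/1.1" 404 1534 "-" '
    '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"'
)

# Pull out the requested path, the status code, and the user agent.
LOG_PATTERN = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

match = LOG_PATTERN.search(sample_line)
if match and "Googlebot" in match.group("agent"):
    print(f'Googlebot requested {match.group("path")} and got a {match.group("status")}')
```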

Too many response code errors can lead Google to cut the rate at which it crawls your company’s website. You want to monitor and confirm that search engines are crawling everything you want to appear in public search results (everything else should block search engine bots). And when pages are assigned new URLs, it’s important that the redirects pass incoming links to the right destinations.

What Is SEO?

Contrary to what too many charlatans still proclaim (and unfortunately too many people still believe), “SEO” is not a bag of tricks to rank first in Google. This is 2015, not 2000. As I explain in a personal essay and whenever I speak at digital marketing conferences, here is my definition of “SEO”:

“SEO is helping search engines to crawl, parse, index, and then display your website in organic search results for desired, relevant keywords and search queries.”

Server log files – along with XML sitemaps, schema markup, website hierarchy, internal linking practices, meta tags, mobile-responsive design, and site speed – must be examined and addressed as needed to do exactly that.

How to Examine Server Log Files

DevOps engineers have traditionally used proprietary software to analyze the logs of their systems, networks, servers, and applications. However, the open-source ELK Stack – Elasticsearch, Logstash, and Kibana – has become extremely popular and is now used by companies including Netflix, LinkedIn, Facebook, Microsoft, and Cisco. (We use the ELK Stack to monitor our own environment, and to help the DevOps community, our CEO, Tomer Levy, has written a guide to deploying the platform.)
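For example, once access logs are indexed in Elasticsearch, a quick query can break down bot traffic by status code. This is only a rough sketch using the official Python client; the index pattern (“logstash-*”) and the field names (“agent”, “response”) are assumptions that depend entirely on how your Logstash filters are configured:

```python
from elasticsearch import Elasticsearch

# A rough sketch: break down Googlebot requests by HTTP status code from logs
# already indexed by Logstash. The index pattern and field names are assumptions.
es = Elasticsearch("http://localhost:9200")

body = {
    "size": 0,
    "query": {"match": {"agent": "Googlebot"}},
    "aggs": {"status_codes": {"terms": {"field": "response"}}},
}

result = es.search(index="logstash-*", body=body)
for bucket in result["aggregations"]["status_codes"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```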

Regardless of how you choose to analyze your server log files, marketers and DevOps engineers will need to work together to use the data to solve certain problems. Here is a partial list of those problems (with examples from our own web server, using one of our analytics dashboards).

What DevOps Engineers Need to See

Server bot crawl volume

The number of requests made by search engine crawlers is important to know. If the marketing and sales teams want website content to be included in Yandex search results in Russia but the search engine is not crawling your website, that is a significant issue. (In response, you’d want to consult the Yandex Webmaster documentation and this reference article on Search Engine Land.)
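A quick way to check is to tally requests by crawler user agent straight from the raw log. Here’s a minimal Python sketch; the log path is a hypothetical example, and matching bots by substring is a simplification:

```python
from collections import Counter

# A rough sketch: count how many requests each major search-engine crawler made,
# based on the user-agent string in each log line.
BOTS = ["Googlebot", "bingbot", "YandexBot", "Baiduspider"]

crawl_volume = Counter()
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        for bot in BOTS:
            if bot in line:
                crawl_volume[bot] += 1
                break

for bot, hits in crawl_volume.most_common():
    print(f"{bot}: {hits} requests")
```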

Response code errors

For those who might need a refresher, the popular SEO software company Moz has a great guide to the meanings behind different status codes. I have a Logz.io alert set up to tell me when 4XX and 5XX errors are found, because those are significant in both marketing and IT contexts.
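If you prefer to script the check rather than rely only on an alert, here is a small sketch that pulls out every Googlebot request that ended in a 4XX or 5XX status code, assuming the combined log format and a hypothetical log path:

```python
import re

# A minimal sketch: print every Googlebot request that ended in a 4XX or 5XX
# status code, along with the URL that caused it.
LOG_PATTERN = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) [^"]*" (?P<status>[45]\d{2}) ')

with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" not in line:
            continue
        match = LOG_PATTERN.search(line)
        if match:
            print(match.group("status"), match.group("path"))
```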

Temporary redirects

302 redirects, which are used when a URL is redirected only temporarily, do not pass what SEOs call “link juice” to the new URL. (The more and better the links that point to a given web page, the better its chance of ranking highly in search engines.) It’s better to use 301 (permanent) redirects instead.
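The log files make temporary redirects easy to find. Here’s a short sketch that lists the URLs answering Googlebot with a 302, which are candidates for conversion to 301s (the log path and combined format are assumptions):

```python
import re

# A short sketch: collect the URLs that return a temporary 302 redirect to Googlebot.
LOG_PATTERN = re.compile(r'"\S+ (?P<path>\S+) [^"]*" (?P<status>30[12]) ')

temporary = set()
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" not in line:
            continue
        match = LOG_PATTERN.search(line)
        if match and match.group("status") == "302":
            temporary.add(match.group("path"))

for path in sorted(temporary):
    print("302 (temporary):", path)
```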

Crawl budget waste & duplicate crawling

Google assigns a crawl budget to every website based on many different factors. If a website’s budget is, say, 1 GB of page data per day, then it is crucial to ensure that the 1 GB consists only of pages that the company wants to appear in public search results.

Even though technical SEOs and DevOps engineers can block some or all search engines with robots.txt files and meta-robots tags, Google might still be crawling advertising landing pages, internal scripts, web pages with sensitive information, and more. Log files list every URL that search engines actually crawl – regardless of what you may have instructed them not to access.
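One way to catch this is to compare the URLs that were actually crawled (taken from the logs) against your robots.txt rules. A minimal sketch, with a hypothetical site and hypothetical crawled paths:

```python
from urllib.robotparser import RobotFileParser

# A minimal sketch: flag URLs that Googlebot crawled even though robots.txt
# says they should be disallowed. Site URL and paths are hypothetical examples.
robots = RobotFileParser("https://www.example.com/robots.txt")
robots.read()

crawled_paths = ["/landing/ad-campaign", "/scripts/internal.js", "/blog/"]
for path in crawled_paths:
    if not robots.can_fetch("Googlebot", "https://www.example.com" + path):
        print("Crawled despite robots.txt disallow:", path)
```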

If you hit your crawl limit but still have new material on your website that you want to be found in search results, Google might leave your website before crawling and indexing it. Duplicate URL crawling – often caused by URL parameters added to track marketing campaigns – is one of the most common causes of crawl waste.
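Duplicate crawling is easy to surface from the logs by grouping crawled URLs with their query strings stripped. A rough sketch, again assuming the combined log format and a hypothetical log path:

```python
import re
from collections import Counter
from urllib.parse import urlsplit

# A rough sketch: group Googlebot-crawled URLs by path (ignoring query
# parameters) to spot the same page being crawled repeatedly under different
# tracking parameters.
LOG_PATTERN = re.compile(r'"\S+ (?P<url>\S+) [^"]*"')

crawled = Counter()
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" not in line:
            continue
        match = LOG_PATTERN.search(line)
        if match:
            path = urlsplit(match.group("url")).path  # drops ?utm_source=... etc.
            crawled[path] += 1

# Pages crawled many times are the likeliest sources of crawl-budget waste.
for path, hits in crawled.most_common(20):
    print(f"{hits:5d}  {path}")
```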

To fix this issue, I would consult the guides on Google and Search Engine Land here, here, here, and here.

Crawl priority

Google might have deemed an important part of your website not worthy of being crawled very often. The log files will show which individual URLs and which subdirectories as a whole are crawled most and least often. If this is the case, you can change the crawl-priority settings in your XML sitemaps to tell Google that a given part of your site is updated often enough to deserve more frequent crawling.
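Here’s a small sketch that counts Googlebot requests per top-level subdirectory, so you can see which sections get the most and least attention (the combined log format and the log path are assumptions):

```python
import re
from collections import Counter

# A minimal sketch: count Googlebot requests per top-level subdirectory.
LOG_PATTERN = re.compile(r'"\S+ /(?P<section>[^/ ?]*)')

sections = Counter()
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" in line:
            match = LOG_PATTERN.search(line)
            if match:
                sections["/" + match.group("section")] += 1

for section, hits in sections.most_common():
    print(f"{hits:5d}  {section}")
```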

Last crawl date

Have you added something to your website that you need Google to index as soon as possible? The log files contain data telling you when a URL was last crawled by a given search engine.
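For example, this sketch records the most recent Googlebot request for each URL; the log path and the /new-product-page URL are hypothetical:

```python
import re

# A small sketch: track the most recent time Googlebot requested each URL.
LOG_PATTERN = re.compile(r'\[(?P<time>[^\]]+)\] "\S+ (?P<path>\S+) ')

last_crawled = {}
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" not in line:
            continue
        match = LOG_PATTERN.search(line)
        if match:
            # Later lines overwrite earlier ones, so each path ends up mapped
            # to its most recent crawl timestamp.
            last_crawled[match.group("path")] = match.group("time")

print(last_crawled.get("/new-product-page", "not crawled yet"))  # hypothetical URL
```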

Crawl budget

How often Google crawls our website is one thing that I personally like to check, because overall crawl volume is a rough proxy for how much the search engine “likes” your site. After all, Google does not want to waste resources on poor websites.
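Tallying Googlebot requests per day gives a rough trend line for that. A quick sketch, with the same assumptions about the combined log format and the log path as above:

```python
import re
from collections import Counter

# A quick sketch: count Googlebot requests per day as a rough proxy for how
# much crawl attention the site is getting over time.
DATE_PATTERN = re.compile(r'\[(?P<day>\d{2}/\w{3}/\d{4})')

daily = Counter()
with open("/var/log/nginx/access.log") as log:  # hypothetical path
    for line in log:
        if "Googlebot" in line:
            match = DATE_PATTERN.search(line)
            if match:
                daily[match.group("day")] += 1

for day, hits in sorted(daily.items()):
    print(day, hits)
```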

From Negative to Positive

DevOps used to be all about problems – engineers, after all, have always monitored platform performance, fixed cluster disconnects, and taken care of similar issues. If an employee at a company knew the operations person, it meant that the employee had a lot of problems.

Today, DevOps has the opportunity to be more visible and help in a positive way by becoming the information driver within an organization. DevOps engineers now provide data that supports countless business decisions – and marketing is just another area in which they can help.

Note: This essay originally appeared on the blog of the DevOps Summit.
