How Log Analytics Improves Your Zero Trust Security Model

Observability and Zero Trust Security are two sides of the same coin

Over the past few years, cloud computing has passed through its hype and early-adopter phases, and we are now at the peak of migration from on-premises to cloud-based infrastructure. This transition has dramatically changed the way we think about security: the paradigm has shifted toward a Zero Trust Security Model. Now more than ever, it is important to bring clarity to this topic and to establish its not-so-obvious relationship with observability.

What Is the Zero Trust Security Model?

While the Zero Trust Security Model does not apply exclusively to cloud security, the two are closely linked. The term “Zero Trust Security” was coined by Forrester Research back in 2010 and introduced as a “data-centric” model: one that seeks to understand how data flows rather than blindly treating private network traffic as trusted. Since then, Zero Trust has evolved toward a “people-centric perimeters” approach, which takes the concept further by requiring that people and devices be verified regardless of where they are located, even if they are already within the network perimeter.

The Zero Trust Security model is established by following these five steps:

  1. Identifying your sensitive data by classifying it as public, internal, or confidential. This enables you to create subsets of data, each forming its own microperimeter.
  2. Mapping the flows of your sensitive data by looking at how data moves across your network.
  3. Architecting your zero trust microperimeters by creating micronetworks around those data subsets, all the while evaluating different security controls to enforce the microperimeters.
  4. Continuously monitoring your zero trust ecosystem with security analytics that leverage your logs and data analytics to identify anomalous and/or malicious behavior (a minimal sketch of this step follows the list).
  5. Embracing security automation and orchestration using modern tools that can reduce your operational overhead, enforce policies, and ease the process of security management.
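
To make step 4 concrete, here is a minimal Python sketch of log-driven security analytics. The log format, field names, and failure threshold are all illustrative assumptions; a production system would feed the same logic into a dedicated log analytics platform rather than a script.

    import json
    from collections import Counter

    # Hypothetical JSON-lines auth log; the field names are assumptions.
    SAMPLE_LOG = (
        '{"user": "alice", "event": "login", "outcome": "failure"}\n'
        '{"user": "alice", "event": "login", "outcome": "failure"}\n'
        '{"user": "alice", "event": "login", "outcome": "failure"}\n'
        '{"user": "bob", "event": "login", "outcome": "success"}\n'
    )

    FAILURE_THRESHOLD = 3  # Assumed tolerance before a user is flagged.

    def flag_anomalous_users(log_text):
        """Count failed logins per user and flag those at or above the threshold."""
        failures = Counter()
        for line in log_text.splitlines():
            event = json.loads(line)
            if event["event"] == "login" and event["outcome"] == "failure":
                failures[event["user"]] += 1
        return [user for user, count in failures.items() if count >= FAILURE_THRESHOLD]

    print(flag_anomalous_users(SAMPLE_LOG))  # ['alice']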

Zero Trust is a radical shift from traditional security models, which are largely built on “network perimeter”-based security. In this classic security doctrine, resources are essentially isolated from the outside by the establishment of different network perimeters (e.g., private subnets). This structure prevents external attacks, but maintains implicit trust in everyone within the network.

Understanding Observability

So what is observability anyway? Observability is usually associated with logging and metrics, but it encompasses many other processes as well, including monitoring, tracing, analytics, and alerting. 

Observability is an old term rooted in control theory, a mathematical concept from the engineering world that describes how a system’s internal mechanics change in response to feedback. In that context, observability holds that the internal states of a system can be inferred from knowledge of the system’s external outputs.

When you compile the information you get using all of the different dimensions of observability, you gain not only a cohesive story about your systems, but also a holistic view of them, which you can use as actionable intelligence. 

First among these processes is logging, which plays a fundamental role in auditability, compliance, behavioral analytics, and incident response.

Using Log Analytics to Improve the Zero Trust Security Model

The Zero Trust philosophy, which holds that no resource can safely be assumed to pose no threat, has challenged the way engineering teams think about security over the past several years. It requires the mitigation of internal threats in addition to external ones by mandating the following processes:

  • Micro-segmentation, which is accomplished by building identity-based network contexts and introducing the concept of microperimeters created based on user identity.
  • Continuous user identification at every access request, rather than only at first entry. A solid authentication and authorization process, combined with methods such as temporary tokens, is important to have in place so that user identity is continuously verified (see the token sketch after this list).
  • Strictly controlling access by applying the principle of least privilege (PoLP, a.k.a. minimal privilege) to the access rights of both users and devices. This ensures that every resource has access only to what it really needs in order to function properly.
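
As a sketch of the temporary-token idea from the second bullet, the following Python example issues short-lived, HMAC-signed tokens and re-verifies them on every request. The signing scheme, secret handling, and TTL are simplified assumptions; a real deployment would use an established standard such as OAuth 2.0 access tokens or JWTs.

    import hashlib
    import hmac
    import time

    SECRET = b"shared-signing-key"  # Assumed secret; in practice, fetched from a vault.
    TOKEN_TTL_SECONDS = 300         # Short-lived tokens force frequent re-verification.

    def issue_token(user):
        """Issue a token bound to a user and an expiry timestamp."""
        expires = str(int(time.time()) + TOKEN_TTL_SECONDS)
        payload = f"{user}:{expires}"
        signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}:{signature}"

    def verify_token(token):
        """Re-verify identity on every request: check the signature, then the expiry."""
        user, expires, signature = token.rsplit(":", 2)
        expected = hmac.new(SECRET, f"{user}:{expires}".encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return False
        return int(expires) > time.time()

    token = issue_token("alice")
    assert verify_token(token)  # Performed at every access request, not just at entry.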

Does the Technology to Implement Zero Trust Actually Exist?

Technology evolves at an incredibly fast pace. A decade ago, the idea of building a Zero Trust architecture would have been incredibly daunting.

With the rise of cloud computing and the modern way of designing software architectures, however, implementing the Zero Trust model is within reach. In fact, a lot of the technologies needed to implement a Zero Trust Security policy are already widely used. 

A few examples of these technology building blocks are MFA (multi-factor authentication), VPCs (Virtual Private Clouds), Network Security Groups, and IAM (Identity and Access Management). When combined, these and many other powerful technologies provide the brickwork for Zero Trust architectures.
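
As one illustration of how such blocks combine, the Python sketch below (using the AWS boto3 SDK) creates a security group inside a VPC that admits only HTTPS traffic from a single internal subnet, approximating a microperimeter. The VPC ID and CIDR range are placeholder assumptions.

    import boto3  # AWS SDK for Python

    ec2 = boto3.client("ec2")

    # Hypothetical identifiers; substitute values from your own environment.
    VPC_ID = "vpc-0123456789abcdef0"
    ALLOWED_CIDR = "10.0.1.0/24"

    # A security group acting as a microperimeter: deny by default,
    # with a single explicit ingress rule.
    group = ec2.create_security_group(
        GroupName="zt-microperimeter",
        Description="Zero Trust microperimeter: HTTPS from one subnet only",
        VpcId=VPC_ID,
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": ALLOWED_CIDR}],
        }],
    )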

That said, the pillars of observability (including logging) are not often leveraged properly, much less connected to the Zero Trust model. This is despite the fact that one of Forrester’s five steps toward Zero Trust is to “continuously monitor your zero trust ecosystem with security analytics.”

Therefore, when selecting technologies to implement a Zero Trust model, make sure to include log analytics. Identifying the right logs to analyze for security and continuously monitoring them for potential breaches using the correct tools is the only way to ensure that a Zero Trust wall remains intact, rather than being a mere facade.
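
To illustrate what continuous monitoring can look like at its simplest, the Python sketch below tails a log file and raises an alert whenever a suspicious line appears. The log path and keyword list are assumptions, and a real deployment would rely on a dedicated log analytics platform rather than a hand-rolled loop.

    import io
    import time

    ALERT_KEYWORDS = ("denied", "failure", "unauthorized")  # Assumed signals of interest.

    def follow(path):
        """Yield new lines appended to a log file, similar to `tail -f`."""
        with open(path) as handle:
            handle.seek(0, io.SEEK_END)  # Start at the end of the file.
            while True:
                line = handle.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    def monitor(path):
        """Print an alert whenever a line matches one of the watched keywords."""
        for line in follow(path):
            if any(keyword in line.lower() for keyword in ALERT_KEYWORDS):
                print(f"ALERT: {line.strip()}")

    # monitor("/var/log/auth.log")  # Path is an assumption; it varies per system.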

Challenges of Zero Trust

Despite its popularity, the Zero Trust Security Model comes with its challenges. One is the incompatibility of the model with legacy applications where microperimeters and minimal privilege cannot be applied so easily. Likewise, applying the multiple dimensions of observability might prove to be a hard task, since it is rarely easy to adapt legacy applications.

Many of these legacy systems lack even a basic centralized logging facility. Yet enabling a centralized logging solution might be the best starting point. Enabling one of the core aspects of observability—logging—is a great way to unlock the potential of older technology. It often does not require changes to the business logic, and, with the proper logging solution, you can create visibility and gather intelligence using log analytics that are key to establishing a Zero Trust architecture.
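
As a sketch of how little code this can take, the snippet below forwards a legacy Python application’s logs to a central syslog collector using only the standard library. The collector hostname and message format are assumptions; the business logic itself is untouched.

    import logging
    import logging.handlers

    # Address of the assumed central syslog collector; adjust for your environment.
    COLLECTOR = ("logs.internal.example.com", 514)

    handler = logging.handlers.SysLogHandler(address=COLLECTOR)
    handler.setFormatter(logging.Formatter("legacy-app: %(levelname)s %(message)s"))

    logger = logging.getLogger("legacy-app")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    # Existing business logic stays untouched; only the logging sink changes.
    logger.info("user=alice action=export-report outcome=success")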

Having proper log analytics—capable of detecting the malicious behavior of both people and devices—in place helps you make great strides towards improving the Zero Trust Security Model and is critical to the model’s success.

Conclusion

A strong relationship exists between the Zero Trust Security Model and the concept of observability. Both encompass more than just logging, but logs and the intelligence derived from them play important roles in enhancing a Zero Trust architecture.

As you move forward establishing your Zero Trust Security Model—whether in modern cloud systems or in legacy ones—make sure to explore log analytics as a starting point, and keep in mind that selecting the right tools from the outset can save you time and money.
