Adopting Zero Trust and Layered Security – Introduction

I’m a firm believer in standards and in using standards to drive security policies and procedures. However, the notion of “zero trust” as a single approach should be quashed. The standard defines it as follows: “Zero trust architecture (ZTA) is an enterprise cybersecurity architecture that is based on zero trust principles and designed to prevent data breaches and limit internal lateral movement.” This is a great goal, and everyone in the cybersecurity space should be working together to achieve it. My mission is to help spark conversation around network security, layered security, security by design and security first.

We can look at “zero trust” from a standards point of view as essentially a group of controls applied based on the sensitivity or risk of the:

  • Device
  • User
  • Data
  • Location

Going back to the standard again, we see the following quote in support: “ZT is not a single architecture but a set of guiding principles for workflow, system design and operations that can be used to improve the security posture of any classification or sensitivity level [FIPS199].” In my opinion, we should use asset classification and data classification, plus end-user roles, to determine the appropriate location where an asset resides and, more importantly, the security controls needed to minimize risk. This isn’t a new approach or standard, but it is often overlooked when implementing standards to properly handle risk at an enterprise level.
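As a rough illustration of that idea, classification plus role can drive control selection mechanically. Everything in this sketch — the classification names, control names and baseline table — is a hypothetical example, not taken from any standard:

```python
# Hypothetical control baselines keyed on (data classification, asset
# classification). Real programs would derive these from policy documents.
CONTROL_BASELINES = {
    ("restricted", "server"): {"mfa", "edr", "network-segmentation", "ips"},
    ("internal", "workstation"): {"edr", "dns-filtering", "host-firewall"},
    ("public", "workstation"): {"edr", "host-firewall"},
}

def required_controls(data_class: str, asset_class: str, role: str) -> set[str]:
    """Look up the baseline for the asset, then tighten it for privileged roles."""
    controls = set(CONTROL_BASELINES.get((data_class, asset_class), {"edr"}))
    if role in {"it-admin", "developer"}:
        # Privileged users attract extra controls regardless of the asset.
        controls.add("privileged-access-workstation")
    return controls

print(sorted(required_controls("restricted", "server", "it-admin")))
```

The point is only that the decision is a deterministic function of classification and role, so it can be audited — not that these particular control names are the right ones.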

I want to raise a few topics from the NIST SP 800-207 introduction that could be debated. First, complexity is cited as a motivator for zero trust: “This complex enterprise has led to the development of a new model for cybersecurity known as zero trust.” I agree, and as an industry we should simplify networks by using proper network architecture, documentation and threat modelling to understand attack paths. Complexity should not be an excuse for a lack of knowledge or deep understanding when designing a network with proper security controls.

Over the last 20 years, I have designed many application and enterprise networks using access control lists, where each subnet/VLAN was only permitted outbound traffic to specific services based on port and address. This forced network administrators to classify all network assets into categories and enforce the least-privilege concept when controlling network traffic. Here are some classic examples (further diagrams will be provided):

  • DMZ Zone
  • Common Services
  • Application Services
  • Web Servers
  • Database Servers
  • Accountants
  • IT Staff
  • IT Management
  • Developers
  • General Staff
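The least-privilege pattern behind those categories can be sketched in a few lines. The zone names, ports and rules below are illustrative placeholders, not a real policy:

```python
# Minimal sketch of a least-privilege outbound ACL: each zone may only
# reach specific services, and everything else is denied by default.
ACL = {
    # (source zone, destination zone, destination port) tuples that are allowed
    ("web-servers", "application-services", 8443),
    ("application-services", "database-servers", 5432),
    ("general-staff", "common-services", 443),
}

def is_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return (src_zone, dst_zone, dst_port) in ACL

# Web servers may talk to the app tier, but never straight to the database.
print(is_permitted("web-servers", "application-services", 8443))
print(is_permitted("web-servers", "database-servers", 5432))
```

Notice that writing the rule set forces you to name every zone and every permitted flow — which is exactly the asset-classification exercise described above.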

This process can also be leveraged to understand what is on your network and essentially drive asset management. Asset management is a pillar and the backbone of every threat risk assessment, threat vulnerability assessment, threat modelling exercise and vulnerability management program.

The second statement is “This complexity has outstripped legacy methods of perimeter-based network security as there is no single, easily identified perimeter for the enterprise” alongside “Perimeter-based network security has also been shown to be insufficient since once attackers breach the perimeter, further lateral movement is unhindered,” which isn’t entirely true in my experience. Let’s start to unpack this into sections we can all agree upon. Working from home and Covid have changed the enterprise perimeter, so no single perimeter can be established. This statement is fair in my opinion, but has this really changed? As IT professionals, we have always taken our laptops home, as have various other staff members.

Looking at this from a different angle, you can also observe that the number of critical CVEs and zero-days has increased roughly 10x over the last 24 months. Using this rationale, the security of our assets and business data should be our top priority. You can relate this to building a house: you only need to build the house once, then maintain and/or improve it over time. There is a lot of effort initially, followed by a fairly quantifiable amount of effort after the fact. The same holds true for a cybersecurity program, where the effort to establish an initial baseline will be high but maintenance over time will be strategic and calculated for the most part.

I would also disagree with the statement “Perimeter-based network security has also been shown to be insufficient since once attackers breach the perimeter, further lateral movement is unhindered.” If we dig into this statement, we can raise multiple counter-arguments. First, let’s look at IPS, DNS filtering, and IP-address and URL controls from leading vendors like Palo Alto or Fortinet. I know for a fact these controls have stopped many breaches involving known command-and-control traffic, based on signatures, known reputation and hashes. They can also act as a stop-gap for vulnerabilities and provide an appropriate level of audit logging for compliance and assurance. It’s also fair to assume that under a zero-trust model we can apply equivalent controls with a leading EDR vendor like Microsoft or CrowdStrike. Essentially, indicators of compromise, URL filtering and DNS filtering are only as good as the vendor’s threat intelligence and the configuration policy applied. To summarize, network security controls are effective; however, they have a physical dependency whose footprint has decreased due to remote workforces. I don’t feel we should decrease the number of security layers and would suggest we actually increase them by applying DNS filtering and IPS as a minimum baseline.
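To make the DNS-filtering layer concrete, here is a toy sketch of the core check: outbound lookups are compared against a blocklist before being resolved. The domains below are stand-ins; real products use continuously updated vendor reputation feeds, not a static set:

```python
# Illustrative blocklist of known command-and-control domains.
BLOCKED_DOMAINS = {"known-c2.example", "malware-drop.example"}

def resolve_allowed(query: str) -> bool:
    """Block the lookup if the name, or any parent domain, is blocklisted."""
    labels = query.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_DOMAINS:
            return False
    return True

print(resolve_allowed("beacon.known-c2.example"))  # blocked via parent domain
print(resolve_allowed("updates.vendor.example"))
```

Checking parent domains matters because malware routinely beacons from rotating subdomains of a single bad zone — which is also why, as noted above, the control is only as good as the intelligence feeding the blocklist.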

Lateral movement, however, will favour zero trust and endpoint protection solutions, since some EDR solutions are effective against cyber-attacks, although not all anti-virus solutions are built equally. With that being said, it’s very important that your endpoint solution effectively blocks and mitigates these attacks. So far, the only solution on the market I would trust is CrowdStrike; I won’t go into detail, and I don’t work for them. Lateral movement detection and controls often require significant network design changes for internal firewalls to inspect traffic. This is why I would suggest an approach where a security-first network design is considered and we follow best practices around least privilege.

Let me raise two other important controls I have seen fall by the wayside.

  • Hardened image (CIS Images)
  • OS-level firewall

In my opinion, a hardened image should include a default inbound firewall policy set to DENY all traffic. We can all rationalize why inbound traffic should be permitted for workstations, but those are all edge cases where convenience is chosen over security. I would agree the server landscape is a very different conversation, but end-user workstations should always deny all inbound traffic as a general rule of thumb.
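As a sketch, the default-deny rule can be baked into the image at provisioning time. The two firewall commands below are standard OS invocations, but the surrounding provisioning flow and its function name are illustrative, not a product recommendation:

```python
# Per-OS commands a hardened-image build might run to enforce a
# default-deny inbound posture. Treat the overall flow as illustrative.
HARDENING_COMMANDS = {
    # Windows: block all inbound, allow outbound, on every firewall profile.
    "windows": "netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound",
    # Linux: drop inbound by default, but keep established sessions working.
    "linux": "iptables -P INPUT DROP && "
             "iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT",
}

def provisioning_step(os_name: str) -> str:
    """Return the firewall-hardening command for the given OS image."""
    return HARDENING_COMMANDS[os_name]

print(provisioning_step("linux"))
```

Keeping the ESTABLISHED/RELATED exception on Linux is what lets outbound-initiated traffic still receive replies while everything unsolicited is dropped.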

For example, you have a developer wanting port 443 open for testing applications, which is a very reasonable ask. We can apply a default deny to all users but permit 443 based on a group or class of users, or we could instead have them promote changes into a proper development environment where they can be tested in a controlled fashion with known security controls like a web application firewall.
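The group-scoped exception can be sketched in a few lines; the group name and port here are illustrative placeholders for whatever your directory actually defines:

```python
# Default deny inbound, with explicit, group-scoped exceptions.
# The (port, group) pairs are hypothetical examples.
ALLOWED_EXCEPTIONS = frozenset({(443, "developers")})

def inbound_permitted(port: int, group: str) -> bool:
    """Permit inbound traffic only when an explicit exception matches."""
    return (port, group) in ALLOWED_EXCEPTIONS

print(inbound_permitted(443, "developers"))   # the reasonable ask
print(inbound_permitted(443, "accountants"))  # everyone else stays denied
```

The design point is that the exception is attached to a group, not to individual machines, so it can be granted and revoked through normal directory membership.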

Using a hardened image, we can ensure best practices are followed, services are disabled, and risk is reduced, though not removed entirely. This is a very good audit metric and can be used to establish a minimum secure baseline for all assets in your organization.

I hope everyone enjoyed this introduction to the topic of zero trust. I plan to expand each topic and build out a series of blog posts around enterprise security and how we adapt standards, including zero trust, to strengthen our security posture.

Here are a few areas I plan to write about in the near future.

  • Perimeter Security
  • Layered Security
  • Active Directory Security
  • Network Traffic Controls
  • Outbound Traffic Controls
  • Endpoint Security