Zero Trust has become something of a buzzword over the past couple of years and has been hailed as the new gold standard of security models as technology has changed. So what exactly does “Zero Trust” mean, and should your organization consider adopting this model? In this blog, we’ll discuss the Zero Trust security model at a high level so you can determine whether this journey is worth taking. Then, in subsequent blogs, we’ll cover each of the core components of Zero Trust in more detail so you can learn how to start implementing them within your organization!
What is Zero Trust?
Before data resided in the cloud, organizations structured their security model around implicit trust, assuming that anything behind the corporate firewall was safe. The Zero Trust model flips this old model on its head: it assumes breach and explicitly verifies each and every request as though it originated from an uncontrolled, untrusted network. This newer model follows the “never trust, always verify” mentality, which means that regardless of where the request is coming from or what resources are being accessed, we must verify before access is granted. With that said, we can break Zero Trust down into three core principles:
- Verify explicitly
- Use least privilege access
- Assume breach
Verify explicitly
This first core principle transforms the security trust model into one that verifies requests explicitly based on all available data points, including credentials/identity, location, device health, risk level, service or workload, data classification, and other anomalies. If we look at how attackers actually compromise environments, most breaches can be attributed to three main vectors:
- Compromised user accounts
  - Attackers use techniques like password spray, phishing, or malware to take over user accounts.
  - On-premises identity systems are especially vulnerable since they lack cloud-powered protections like password protection, password spray detection, and AI-driven account compromise prevention.
- Compromised vendor accounts
  - Vendor accounts that lack protections such as multi-factor authentication (MFA), IP range restrictions, device compliance, and access reviews are prime targets for attackers.
- Compromised vendor software
  - User accounts used with vendor software that lacks MFA or other policy restrictions can also open holes in your security posture for attackers to take advantage of. By treating vendor accounts the same way we manage our regular end-user accounts, many of these attacks could be stopped in their tracks.
All three of the cases above represent major gaps in explicit verification. By extending verification to all access requests, even those from vendors and especially those from on-premises environments, you are one step closer to a more secure environment.
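To make this concrete, here is a minimal sketch of what “verify explicitly” could look like as a deny-by-default policy check. The signal names and rules below are illustrative assumptions, not any product’s actual API; real platforms evaluate far richer signals.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    """A hypothetical bundle of signals evaluated for every request."""
    user: str
    mfa_passed: bool
    device_compliant: bool
    risk_level: str        # "low", "medium", or "high" (assumed scale)
    location_trusted: bool

def verify_explicitly(req: AccessRequest) -> bool:
    """Grant access only when every signal checks out; deny by default."""
    if not req.mfa_passed:
        return False           # strong authentication is non-negotiable
    if not req.device_compliant:
        return False           # unhealthy devices never get in
    if req.risk_level == "high":
        return False           # high-risk sessions are blocked outright
    if not req.location_trusted and req.risk_level != "low":
        return False           # untrusted location + elevated risk: deny
    return True
```

Note that every request is evaluated the same way regardless of where it originates, which is exactly the point of the principle.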
Use least privilege access
For this second core principle, we use least privilege access to ensure that we grant only the permissions a user needs to meet a specific goal and nothing beyond. This can be accomplished by limiting user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection. Granting least privilege access significantly minimizes an attacker’s opportunity to move laterally throughout your environment if a breach were to occur. The overall goal of least privilege access is to contain attacks by limiting how much of a resource (user, device, or network) the attacker can access.
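As a rough illustration of JIT/JEA, consider this toy grant store: access is scoped to a single role (just enough) and expires automatically (just in time). The `grant_jit` and `is_authorized` helpers are hypothetical names invented for this sketch; a real implementation adds approval workflows and auditing.

```python
import time

# Hypothetical in-memory grant store: (user, resource) -> (role, expiry epoch).
_grants = {}

def grant_jit(user, resource, role, duration_s):
    """Grant a specific role on a resource for a limited time only."""
    _grants[(user, resource)] = (role, time.time() + duration_s)

def is_authorized(user, resource, action, role_permissions):
    """Check that a grant exists, has not expired, and covers the action."""
    entry = _grants.get((user, resource))
    if entry is None:
        return False
    role, expiry = entry
    if time.time() > expiry:
        del _grants[(user, resource)]   # lapsed grants are revoked on sight
        return False
    return action in role_permissions.get(role, set())

# Example: a reader role cannot delete, and access lapses on its own.
perms = {"reader": {"read"}, "contributor": {"read", "write", "delete"}}
grant_jit("alice", "vm-01", "reader", duration_s=3600)
```

Because the grant carries both a narrow role and an expiry, an attacker who compromises the account later, or tries an action outside the role, gets nothing.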
Assume breach
Have you ever heard the term “security through obscurity”? If so, throw that methodology out the door, because Microsoft doesn’t want anything to do with it! If you’re not familiar with the term, security through obscurity (STO) revolves around the idea that an organization will be less open to attacks if it hides important information and enforces secrecy as its main security technique. This is equivalent to hiding your front door key under the welcome mat, thinking no one would be smart enough to look under it and find the “keys to the castle”. Unfortunately, this is far too common, and as soon as that key is found, you and your entire house become vulnerable! In the security world, this could involve hiding passwords inside binary code or a script, or changing a daemon’s port to reduce brute-force attacks. The main issue arises when STO is treated as the primary method of security within an organization, because throwing all your eggs into one basket is a very bad idea.

Instead, one of the best ways to protect your environment is to assume an attacker has already breached your network. This last core principle revolves around minimizing the blast radius and segmenting access. Building your systems around the idea that a breach has already happened, or soon will, gives you confidence that mitigations are already in place if/when an intrusion occurs. So what does this entail? It involves collecting system data and telemetry, using it to detect anomalies, and then using that insight to automate prevention so you can ideally stop attacks altogether. When that is not possible, you will still be able to quickly detect, respond, and remediate in near real time. Microsoft 365 Defender allows you to quickly assess an attacker’s behavior and immediately begin remediating the issue.
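A toy example of the telemetry-driven detection described above: flag a metric that drifts far from its historical baseline. The `flag_anomalies` helper and its three-sigma threshold are simplifying assumptions for illustration; production detection relies on much richer models.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Return True when `current` deviates more than `threshold` standard
    deviations from the historical baseline, a crude stand-in for the
    ML-driven detection a real XDR/SIEM product performs."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Failed sign-ins per hour: a quiet baseline, then a password-spray spike
# that should trigger an automated response.
failed_signins = [2, 3, 1, 4, 2, 3, 2, 3]
```

The “assume breach” mindset is what motivates collecting this telemetry in the first place: you only catch the spike because you were already watching.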
By putting these three key Zero Trust principles into practice, you’ll be implementing an end-to-end strategy that spans your entire digital estate! Now that we know the concept of Zero Trust, let’s talk about the approach to implementing it through its seven main pillars:
- Secure Identity
- Secure Endpoints
- Secure Applications
- Secure Data
- Secure Infrastructure
- Secure Networks
- Visibility, Automation, and Orchestration
Secure identity
This pillar involves verifying that only the people, devices, and processes that have been granted access to your resources can actually access them. When an identity tries to access a resource, we verify it with strong authentication and also make sure the request is compliant and typical for that identity. For example, an identity that consistently accesses a resource from the USA is behaving typically; that same identity suddenly attempting to access the resource from Russia on the same day is not. When securing identity, you should follow the least privilege access principles mentioned earlier.
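The typical-versus-atypical check above can be sketched as a simple heuristic over sign-in history. The `is_atypical` function and its six-hour travel window are illustrative assumptions, not how any identity provider actually scores risk.

```python
from datetime import datetime, timedelta

def is_atypical(sign_in_history, new_sign_in, min_gap=timedelta(hours=6)):
    """Flag a sign-in from a country the user has never signed in from, or
    one that follows a recent sign-in from a different country too quickly
    to be plausible travel (a simplified 'impossible travel' heuristic).

    `sign_in_history` is a chronological list of (country, datetime) pairs.
    """
    seen_countries = {country for country, _ in sign_in_history}
    new_country, new_time = new_sign_in
    if new_country not in seen_countries:
        return True                      # never-before-seen country
    last_country, last_time = sign_in_history[-1]
    return new_country != last_country and new_time - last_time < min_gap

# A user who has only ever signed in from the USA.
history = [("US", datetime(2024, 5, 1, 9, 0))]
```

An atypical result would then feed back into the verify-explicitly decision, for example by demanding step-up authentication or blocking the request.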
Secure endpoints
Now that the identity has been granted access to the resource, data could be flowing through a variety of different endpoints (e.g. BYOD devices, company-issued devices, on-prem workloads, cloud-hosted servers, IoT devices, etc.). All of these devices out in the wild create a massive attack surface. Luckily, we can enforce things like device compliance and device health to secure our access.
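Device compliance gating can be pictured as a checklist evaluated against policy, with unknown values failing closed. The field names and thresholds below are invented for illustration; a real MDM compliance policy is far more granular.

```python
def device_meets_policy(device, policy):
    """Return True only if the device satisfies every requirement in the
    compliance policy; missing or unknown values fail closed."""
    return all([
        device.get("os_version", (0,)) >= policy["min_os_version"],
        device.get("disk_encrypted", False) or not policy["require_encryption"],
        not device.get("jailbroken", True),   # unknown status counts as risky
    ])

# Illustrative policy and device states.
policy = {"min_os_version": (14, 0), "require_encryption": True}
compliant = {"os_version": (14, 2), "disk_encrypted": True, "jailbroken": False}
outdated = {"os_version": (12, 1), "disk_encrypted": True, "jailbroken": False}
```

The compliance verdict is then one more signal feeding the explicit verification of every request.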
Secure applications
Another massive attack surface involves your applications. This includes both on-premises legacy applications and cloud-based applications. Applications are the software entry points to your information, so securing them should be top of mind! We can do this by applying controls and technologies to discover shadow IT, allowing you to ensure people are not using applications they shouldn’t be. We can also apply controls for in-app permissions, monitor for abnormal behavior, control specific user actions, and much more!
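At its core, shadow IT discovery compares observed traffic against a sanctioned-app list. This sketch, with a made-up log format, shows the basic idea behind the cloud app discovery features in a CASB product.

```python
def discover_shadow_it(traffic_logs, sanctioned_apps):
    """Tally accesses to applications outside the sanctioned list.

    `traffic_logs` is assumed to be a list of dicts with an "app" key;
    real discovery parses firewall or proxy logs with far more detail."""
    unsanctioned = {}
    for entry in traffic_logs:
        app = entry["app"]
        if app not in sanctioned_apps:
            unsanctioned[app] = unsanctioned.get(app, 0) + 1
    return unsanctioned

# Illustrative log entries: two hits on an unapproved file-sharing app.
logs = [{"app": "SharePoint"}, {"app": "RandomFileShare"},
        {"app": "RandomFileShare"}, {"app": "Teams"}]
```

Once an unsanctioned app surfaces, you can block it, sanction it with controls applied, or follow up with the users involved.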
Secure networks
It’s safe to say that almost all of the data your organization uses will be accessed over the network. This means proper network controls should be put in place to enhance visibility into that data and help prevent attackers from moving laterally if they were to compromise the network. The biggest areas to focus on include network segmentation and in-network micro-segmentation, real-time threat protection, end-to-end encryption, monitoring, and analytics.
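Micro-segmentation can be pictured as default-deny rules between segments: traffic flows only where a rule explicitly allows it, so a compromised segment cannot freely reach the others. The segment names and ports below are invented for this sketch.

```python
def is_allowed(src_segment, dst_segment, port, rules):
    """Default-deny: traffic passes only if an explicit rule permits it."""
    return (src_segment, dst_segment, port) in rules

# Illustrative three-tier segmentation; anything not listed is denied.
rules = {
    ("web", "app", 443),   # web tier may call the app tier over HTTPS
    ("app", "db", 5432),   # app tier may reach the database
}
```

Notice there is no rule from `web` straight to `db`: even if the web tier is breached, the attacker still has no direct path to the data.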
Secure infrastructure
This pillar includes on-prem servers, cloud-based VMs, containers, microservices, and the underlying operating systems and firmware, all of which can present a large attack vector. However, by assessing versions and configurations you can significantly reduce risk and harden your defenses. In addition, use telemetry to detect attacks and anomalies, and stop them in their tracks by automatically blocking or flagging the behavior as risky and taking protective action accordingly.
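Version and configuration assessment can be sketched as scanning an inventory against minimum versions and required hardening settings. The inventory schema, package versions, and setting names here are invented for illustration.

```python
def assess_hosts(inventory, min_versions, required_settings):
    """Report hosts running outdated packages or missing hardening settings."""
    findings = []
    for host in inventory:
        for pkg, ver in host["packages"].items():
            if ver < min_versions.get(pkg, ver):
                findings.append((host["name"], f"outdated {pkg}"))
        for setting, expected in required_settings.items():
            if host["config"].get(setting) != expected:
                findings.append((host["name"], f"misconfigured {setting}"))
    return findings

# Illustrative inventory: one hardened host, one that needs attention.
inventory = [
    {"name": "web-01", "packages": {"openssl": (3, 0, 2)},
     "config": {"password_auth": "disabled"}},
    {"name": "db-01", "packages": {"openssl": (1, 1, 1)},
     "config": {"password_auth": "enabled"}},
]
```

Each finding can then feed an automated remediation pipeline rather than a manual ticket queue.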
Secure data
Data is everywhere! It resides across all of your files and content and includes both structured and unstructured data. Regardless of where the data resides, you will want to ensure it remains safe, especially once it leaves your devices, apps, infrastructure, or network. Luckily, data can be secured through classification, labeling, and encryption, and access can be restricted accordingly.
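Classification-based access restriction can be sketched as a simple label-ordering check: a reader’s clearance must meet or exceed the document’s sensitivity label. The label names below are illustrative, not any product’s built-in taxonomy.

```python
# Sensitivity labels ordered from least to most restricted (assumed names).
CLEARANCE = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_read(user_clearance, doc_label):
    """Allow access only when the user's clearance meets or exceeds the
    document's sensitivity label."""
    return CLEARANCE[user_clearance] >= CLEARANCE[doc_label]
```

Because the label travels with the data, the check still applies after a file leaves your devices, apps, or network.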
Visibility, automation, and orchestration
Although this isn’t technically a core pillar of Zero Trust, it has become an important aspect of how you manage your data and ultimately helps you make better, more trusted decisions, which in turn hardens your security even further. With each of the pillars highlighted above, you will see various alerts generated along the way, which will likely leave your Security Operations Center (SOC) analysts busier than ever and may result in missed alerts. Luckily, Microsoft gives you the proper tools to manage those threats through proactive and reactive detection so your SOC can focus on the real threats that matter most and let the tooling handle the rest!
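The idea of letting tooling handle routine alerts so analysts see only what matters can be sketched as a triage filter. The severity scale, threshold, and alert fields here are assumptions for illustration, a toy version of what an automated SOAR pipeline does.

```python
def triage(alerts, severity_threshold=7):
    """Surface only high-severity, non-benign alerts for human analysts and
    auto-handle the rest."""
    surfaced, auto_handled = [], []
    for alert in alerts:
        if alert["severity"] >= severity_threshold and not alert.get("known_benign"):
            surfaced.append(alert)
        else:
            auto_handled.append(alert)
    return surfaced, auto_handled

# Illustrative alert queue: one real threat, some noise, one known false positive.
alerts = [
    {"id": 1, "severity": 9},
    {"id": 2, "severity": 3},
    {"id": 3, "severity": 8, "known_benign": True},
]
```

Only the first alert reaches an analyst; the rest are logged and handled automatically.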
That wraps up our first blog on adopting a Zero Trust strategy! I hope you now understand at a high level what exactly Zero Trust means and have a sense of each pillar in the Zero Trust strategy. In subsequent blogs, we’ll dive into each of these layers in our end-to-end journey of Zero Trust! I hope you have found this blog helpful, and I encourage you to check back shortly when we cover our first pillar: securing identity.