Writing Better Security Exclusions With EER

Last week at Wild West Hackin’ Fest, I released a model to help teams prioritize security exclusions called the Equilateral of Exclusion Risk (EER, pronounced “ear”). The cost of getting security exclusions wrong is simply too high for teams to prioritize exclusions on an ad hoc basis. The EER model provides security teams with a list of security exclusions prioritized based on the risk caused by implementing the exclusion.

I suspect many detection engineers saw the talk and wondered, “so what?” A year ago, it’s possible I would have been among them. As I mentioned in the accompanying conference talk, there are security haves and have-nots. The “haves” possess the resources for dedicated detection engineers, many of whom are FTEs. They have the time and resources to work with offensive security teams to test their exclusions and understand the risk of bypass. Over time, that testing leads them to write the best possible security rules (and exclusions) right out of the gate.

But that situation is not the norm. In fact, it’s far from it. Most security teams I talk to don’t have dedicated detection engineers; in many organizations, detection engineering falls to the most senior SOC analysts (if it’s happening at all). The EER is written primarily for these teams.

Figure 1 - The Equilateral of Exclusion Risk

The EER (pictured above) is relatively simple: when you need to write an exclusion to a security rule, pick the highest-priority exclusion type that your tooling supports and that meets your use case.
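To make that selection step concrete, here is a minimal Python sketch of the logic. The priority ordering below is only a placeholder inferred from this post (hash-based exclusions are described as the highest priority, path-based as relatively low); consult the whitepaper for the actual ordering and categories.

```python
# Illustrative only: the real ordering and categories come from the EER
# whitepaper. This placeholder ordering is inferred from the post (hash
# highest priority, path relatively low).
EER_PRIORITY = ["hash", "digital_signature", "yara_signature", "metadata", "path"]

def pick_exclusion_type(supported_by_tooling, meets_use_case):
    """Return the highest-priority exclusion type that the tooling supports
    and that fits the use case, or None if nothing qualifies."""
    for exclusion_type in EER_PRIORITY:
        if exclusion_type in supported_by_tooling and meets_use_case(exclusion_type):
            return exclusion_type
    return None

# Example: tooling only supports hashes and paths, but the target binary's
# hash differs per endpoint, so a hash-based exclusion doesn't fit.
chosen = pick_exclusion_type(
    supported_by_tooling={"hash", "path"},
    meets_use_case=lambda t: t != "hash",
)
print(chosen)  # -> "path" under this illustrative ordering
```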

Real World Use Cases for Exclusions

In the conference presentation introducing the EER, we offered three use cases where security exclusions were necessary. For ease of understanding, we’ll repeat them here.

In the first example, a custom-developed application that is critical to business functionality uses a heavily obfuscated licensing routine. The EDR identifies this obfuscation as malicious, so the analyst must create an exclusion for the custom software. Because the application is custom developed, it is unlikely to receive frequent updates. The analyst therefore requests a hash-based exclusion, since this is the highest priority on the EER and fits the mission parameters.
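As a quick illustration, the hash the analyst submits can be computed with a few lines of Python; the file path below is hypothetical, and most EDR consoles accept a SHA-256 value for this kind of exclusion.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large binaries need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical path to the custom licensing component flagged by the EDR.
print(sha256_of(r"C:\Program Files\AcmeApp\license_check.exe"))
```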

In the second example, a major sporting event venue sells commemorative screen savers that are packed using commercial tools. For copy protection, the machine ID is encoded in the screen saver binary during installation, so the hash is different for each endpoint where the screen saver is installed. Because the hash differs on every endpoint, a hash-based exclusion is not usable. The binary is not digitally signed, so a signature-based exclusion is not an option either. Instead, a custom YARA signature is applied to exclude the binary.
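As a sketch (assuming the yara-python bindings and entirely hypothetical strings), the exclusion rule can be keyed to content unique to the screen saver rather than to the commercial packer, then tested against an installed copy before deployment:

```python
import yara  # pip install yara-python

# Hypothetical rule: match strings unique to the commemorative screen saver,
# not artifacts of the packer, so other binaries packed with the same tool
# are not swept into the exclusion.
EXCLUSION_RULE = r"""
rule exclude_commemorative_screensaver
{
    strings:
        $vendor  = "Example Venue Commemorative Screen Saver" wide ascii
        $support = "support@venue.example" ascii
    condition:
        all of them
}
"""

rules = yara.compile(source=EXCLUSION_RULE)

def is_excluded(path: str) -> bool:
    return bool(rules.match(filepath=path))

print(is_excluded(r"C:\Windows\VenueScreenSaver.scr"))  # hypothetical install path
```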

In the third example, a business-critical, commercially developed application that updates frequently creates a read-write-execute (RWX) section of memory and unpacks itself there as a copy-protection feature. Unfortunately, the EDR detects this as malware because the behavior resembles malware. The org could write a YARA rule for the unpacking stub used by the commercial packer, but that would allow any malware packed with the same tool to bypass the detection as well. After inspecting the last 12 months of updates to the application, the security team discovers that the application consistently uses the same Product Name in its executable metadata. The team therefore adds a metadata exclusion in addition to the YARA rule, combining elements of the EER where a single exclusion might otherwise be bypassed.
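For illustration, a combined rule of this kind could be expressed in YARA and tested with the yara-python bindings; the stub bytes and Product Name below are placeholders, not the actual values from the scenario.

```python
import yara  # pip install yara-python; the "pe" module ships with libyara

# Hypothetical combined exclusion: the unpacking stub alone is too broad, so
# the condition also requires the Product Name observed across 12 months of
# releases. Both the stub bytes and the product name are placeholders.
COMBINED_EXCLUSION = r"""
import "pe"

rule exclude_acme_business_app
{
    strings:
        $stub = { 60 E8 00 00 00 00 5D 81 ED }  // placeholder unpacking-stub bytes
    condition:
        $stub and pe.version_info["ProductName"] == "Acme Business Suite"
}
"""

rules = yara.compile(source=COMBINED_EXCLUSION)
print(bool(rules.match(filepath=r"C:\Program Files\Acme\acme.exe")))  # hypothetical path
```

Requiring both elements means malware would need to use the same packer and spoof the same Product Name before it slipped past the exclusion, which is the point of combining EER elements.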

A Word on Security Exclusions

Not all security rule exclusions are bad. Even if perfect detection rules could always be created, doing so is rarely the best use of time. Far more often, good-enough rules can be created quickly by adding exclusions, and I’ll argue that this is usually the ideal situation for most teams. Writing a larger number of good-enough detection rules provides much greater security coverage than a smaller number of (allegedly) perfect detection rules.

When writing an exclusion, validate that it does not render the detection useless. In other words, make sure the detection still works as originally intended after the exclusion is added. Ideally, security teams will work with offensive security professionals to ensure the exclusion cannot be easily bypassed. Because not everyone has access to offensive security professionals, using the highest-priority exclusions in the EER will help teams write optimal exclusions.
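A minimal regression check along those lines might look like the sketch below, which assumes the detection logic can be exercised offline (modeled here with yara-python) and uses hypothetical file and rule names.

```python
import yara  # pip install yara-python

# Hypothetical: the detection rule, with its exclusion folded in, compiled
# from a file maintained alongside the rest of the team's detections.
rules = yara.compile(filepath="detections/suspicious_unpacker_with_exclusion.yar")

def triggers(path: str) -> bool:
    return bool(rules.match(filepath=path))

# The exclusion must not neuter the detection...
assert triggers("samples/known_bad_packed_malware.bin"), "detection no longer fires"
# ...while the business-justified binary must no longer alert.
assert not triggers("samples/acme_business_app.exe"), "exclusion not effective"
print("exclusion validated: detection intact, false positive suppressed")
```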

EER Core Principles

The following are core principles of the EER:

  • There exist some activities that cannot be detected reliably without some exclusions built into the detection logic
  • Every exclusion introduces some risk of a bypass
  • Not all categories of exclusions introduce the same bypass risk
  • Optimal exclusions may not be supported by the security controls deployed by the org
  • Detection engineers should select the exclusion or exclusions with the lowest risk of bypass
  • As controls are updated, exclusions should be reviewed

The last point is particularly important. I have worked multiple incidents where, when presented with our findings, security teams swear our investigation results must be wrong. They insist they have tested their detections and that the actions we describe the threat actor taking would have raised alarms. Assuming these teams tested their detections at all, they certainly haven’t retested the rules regularly over time.

In these situations, the detection failure is most often due to a rule change the team assumed would be harmless. In other cases, detection failures are caused by changes in security control configurations. For example, consider a rule that relies on share access auditing, which is not enabled by default on any version of Windows. At the time the rule is written and tested, all is well; later, a change in audit posture silently breaks the detection. In still other cases, the security controls themselves change. Sometimes this is a product failure (such as a log forwarding outage), sometimes a product update changes how a feature operates, and sometimes an update outright breaks a feature the detection depends on.
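A lightweight guard against that particular failure mode is to periodically verify the audit policy the detection depends on. The sketch below assumes a Windows host, sufficient privileges to run auditpol, and deliberately simple output parsing.

```python
import subprocess

def file_share_auditing_enabled() -> bool:
    """Check whether the 'File Share' audit subcategory is generating events."""
    out = subprocess.run(
        ["auditpol", "/get", "/subcategory:File Share"],
        capture_output=True, text=True, check=True,
    ).stdout
    # "No Auditing" in the output means dependent detections are blind.
    return "Success" in out

if not file_share_auditing_enabled():
    print("ALERT: share access auditing is disabled; dependent detections are broken")
```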

TL;DR: you should not rely on a detection that is not routinely validated. Unfortunately, offensive security teams don’t scale well for regression testing of security controls. Blue teams should consider using red team automation and control validation frameworks to scale their testing operations.

EER Use Cases

The EER can be used by teams in a number of scenarios. First and foremost, the EER can be included in any policies or procedures detailing the creation or editing of endpoint detection rules. This will ensure that teams have a common understanding of the risks of various security exclusions.

Many vendors request that you exempt entire paths from inspection by endpoint security controls. Since path-based exclusions are relatively low on the EER, you can use the model to push back on these vendors, requesting they provide you with better (read: more secure) exclusion options if their software would otherwise be blocked by endpoint security controls.

The same applies to requests from IT or users for overly broad, insecure exclusions. Instead of arguing from scratch that better exclusions are needed, you can point to the model as evidence that the requested exclusions are suboptimal for security.

Finally, the EER can be used to incentivize security tool vendors to provide better options for exclusions. Most security tools don’t offer every exclusion option listed in the EER, and you can’t build exclusions with options your security tools don’t provide.

Conclusion

We hope you are able to use the EER to write better security exclusions. The need for security exclusions is real, and they range widely in the risk they introduce. By using the EER, you can write the security exclusions that introduce the least risk. When you write security exclusions, make sure to test them regularly, preferably using a control validation framework for scalability. With anything less, you don’t really know where you stand.

References

The EER whitepaper can be found here:

https://github.com/malwarejake-public/conference-presentations/blob/main/Equilateral%20of%20Exclusion%20Risk%20Whitepaper.pdf

The EER conference slides presented at WWHF can be found here:

https://github.com/malwarejake-public/conference-presentations/blob/main/Williams%20WWHF%202022%20-%20Equilateral%20of%20Exclusion%20Risk%20Slides.pdf