Using Effectiveness Assessments to Identify Quick Wins
An organization's overall security posture can be viewed from multiple angles, such as technical assessments, program assessments, controls assessments, and risk assessments. A number of frameworks exist for each of these assessment types, intended to help both technical teams and leadership organize security program building activities. Some of these include:
- Penetration Testing Execution Standard (PTES)
- NIST Cybersecurity Framework (CSF)
- Center for Internet Security (CIS) Critical Security Controls (CSC)
- Factor Analysis of Information Risk (FAIR)
What most of these frameworks are missing is a gauge of how well the existing security inventory stack is performing, based on factors like how a tool has been deployed, how quickly a new rule can be pushed into production, the knowledge and experience of the team managing a tool, or even the maturity of a Security Information and Event Management (SIEM) team's rule correlation capabilities. This concept of effectiveness isn't new, but it tends to get overlooked in favor of the more common assessments mentioned above, which for many industries are mandated requirements.
The MITRE ATT&CK™ framework looks at a security program from yet another angle. One of the primary functions of this framework is to organize the currently known attack techniques by Tactics, such as Persistence or Lateral Movement. As of this writing, there are 266 known attack techniques mapped to a dozen different Tactics in the enterprise ATT&CK™ framework. Each technique has associated variables, including the relevant platforms that could be targeted by the technique, the permissions required to execute the technique, and data sources.
Data sources are essentially a means of detecting a technique indicator, such as process monitoring or packet capture. There are currently 59 data sources mapped in the ATT&CK™ framework. So, logically, if you have tools in your security inventory stack that utilize or have visibility into a certain data source, you should be able to detect all of the techniques that are mapped to that data source. However, just because you have a tool monitoring the associated data source doesn't mean that you will detect those techniques. This is often due to challenges with other aspects of the security program, such as staffing or patching, hence the concept of effectiveness. This methodology applied to the ATT&CK™ framework was first introduced in two threat hunting blogs by Roberto Rodriguez1,2.
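This data-source-to-technique mapping lends itself to a simple coverage calculation. The sketch below illustrates the idea in Python; the technique IDs, data source names, and tool inventory are hypothetical examples, not an extract of the actual ATT&CK™ mappings.

```python
# Hypothetical mapping of data sources to the technique IDs they can
# reveal (illustrative subset only -- not real ATT&CK coverage data).
DATA_SOURCE_TECHNIQUES = {
    "Process monitoring": {"T1059", "T1086", "T1053"},
    "Packet capture": {"T1071", "T1048"},
    "File monitoring": {"T1005", "T1053"},
}

# Data sources a hypothetical security stack has visibility into.
TOOL_DATA_SOURCES = {
    "EDR": ["Process monitoring", "File monitoring"],
    "IDS": ["Packet capture"],
}


def theoretically_detectable(tool_sources):
    """Union of techniques mapped to any data source the stack covers."""
    covered = set()
    for sources in tool_sources.values():
        for source in sources:
            covered |= DATA_SOURCE_TECHNIQUES.get(source, set())
    return covered


print(sorted(theoretically_detectable(TOOL_DATA_SOURCES)))
```

The key word is "theoretically": this union only tells you which techniques the stack *could* detect if every tool were fully deployed, tuned, and monitored.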
A solid example of this in practice is to look at an endpoint detection and response (EDR) solution. These tools typically utilize or have visibility into anywhere from 20 to 30 data sources, often making them one of the most comprehensive tools in a security stack. This can translate into one tool being able to detect upwards of 200 attack techniques.
But what about effectiveness? Perhaps the EDR tool is only deployed in the restricted PCI network segment, or its ability to detect memory-based attacks has been disabled because it was crashing multiple servers. What if the EDR logs aren't being sent to a SIEM and exist in their own silo? What if the managed security service provider (MSSP) currently managing the SIEM isn't technically familiar enough with those specific logs, resulting in ineffective log correlation that misses certain attacks?
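One way to reason about these gaps is to discount a tool's theoretical coverage by effectiveness factors. The sketch below is purely illustrative: the factors, weights, and numbers are hypothetical placeholders for whatever rubric an actual maturity exercise would define.

```python
def effectiveness_score(deployment_coverage, feature_enabled_ratio,
                        logs_in_siem, analyst_familiarity):
    """Combine hypothetical effectiveness factors into a 0.0-1.0 multiplier."""
    # Siloed logs halve the value of whatever the tool does detect.
    siem_factor = 1.0 if logs_in_siem else 0.5
    return (deployment_coverage * feature_enabled_ratio
            * siem_factor * analyst_familiarity)


# EDR deployed only to the PCI segment (15% of hosts), memory-based
# detection disabled (say 80% of features active), logs not forwarded
# to the SIEM, and moderate analyst familiarity (0.7).
theoretical_techniques = 200
score = effectiveness_score(0.15, 0.8, False, 0.7)
print(round(theoretical_techniques * score))
```

Under these made-up numbers, a tool that could theoretically detect 200 techniques ends up effectively covering fewer than ten, which is exactly the kind of gap an effectiveness exercise is meant to surface.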
By walking through an effectiveness maturity exercise, teams can quickly identify effectiveness gaps and communicate the associated remediation next steps to IT and Security leadership. These exercises provide additional value from a security posture perspective by illuminating aspects of a security program that are not normally covered by many of the other assessment types. Visibility into this angle of a security program allows for the quick identification of typically overlooked inefficiencies and allows leadership to make more precise decisions on how to quickly improve the organization's security posture.