April 20, 2023

Incident Response Rapid Triage: A DFIR Warrior's Guide (Part 2 – Incident Assessment and Windows Artifact Processing)

Written by Justin Vaicaro
Incident Response, Incident Response & Forensics, Threat Hunting

In Part 1 of this series, we identified that there are three (3) key parts to successful incident preparation: ensuring that a solid incident triage process is in place, creating centralized analysis documentation, and solidifying incident communication.

In Part 2 of this series, I will delve into the process of thoroughly evaluating the incident, explore a non-exhaustive list of system analysis goals, and lead into the rapid processing of the acquired critical Windows endpoint artifacts. If you missed Part 1, you can catch up on it here.

Critical Incident Objectives

Before jumping right into an Incident Response scenario, it is important to do three (3) quick things:

  1. Take a deep breath.
  2. Take a step back.
  3. Remain calm and assess the given situation.

OK, game on!

Now, when beginning an Incident Response scenario, there are five (5) main questions to take into consideration. These are the questions that will drive your investigational efforts! You may have partial answers to some prior to beginning an investigation; however, this varies from case to case and environment to environment.

  1. What was the initial threat vector used to breach the environment?

This question is important to figure out, since it will most likely point you in the direction of an exposed system or application of concern that may have been compromised or a user of interest who clicked on that phishing email link, etc. It’s important to keep in mind that the initial alert that kicked off the investigation may not be the same as the initial threat vector that gave the attackers access into the environment. Finding this ‘patient zero’ or root cause is vital, as it will define the restoration and recovery points.

2. What is the scope of the impact?

This is necessary to address in order to prevent hyperfocus on the 'known' compromised systems and to avoid premature containment efforts. To properly contain the incident, analysts must understand how much of the organization has been compromised.

3. Was there any data stolen?

This question is important to address, but in many cases can be difficult to answer. Unless you land on the system where the data exfiltration took place or have the necessary network telemetry, this question may be extremely difficult to answer with high confidence.

4. Are the attackers still in the environment?

Before closing out any incident, this will be the question that will most likely stop every analyst in their tracks. Organizations sometimes look for that definitive response: “Yes, we high-speed incident responders have saved the world and have eradicated all threat actors from the organization!” In reality, this can often be an impossible question to answer for a multitude of reasons. This is where incident remediation efforts, critical post-incident monitoring, and ongoing environment threat hunting come into play.

Another difficult question that often comes up:

5. Who did it?

As many incident responders know, attribution is very difficult to ascertain and should not be a critical focus during an investigation. As IOCs are aggregated and OSINT research is performed, there may be opportunities to tie the malicious activity back to a particular crime group or advanced persistent threat (APT).

Key Security Solution Access

Often, the necessity arises to gain access to various deployed security solutions within the client’s environment. It is better to request this access as early as possible to prevent any access delays and—more importantly—prevent potentially losing critical logged data visibility.

  • SIEM
  • EDR
  • Cloud environment (Azure, AWS, etc.)

Note: This is not a complete list of security solutions or additional environments to access, but I have found they are typically the most useful to request at first to gain additional perspective on the incident activity.

Windows System Artifact Processing Steps

Below are the steps an analyst should follow to create the output files needed for rapid triage of the Windows system live forensic tool output. By running these steps in parallel, analysts can keep the investigation moving forward instead of stalling while each step completes.
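The parallel-processing idea above can be sketched in Python: launch each artifact-processing tool in its own subprocess and collect results as they finish. The commands below are placeholder `echo` stand-ins; in practice you would substitute the real tool invocations (EvtxECmd, MFTECmd, RECmd, etc.) and output paths for your engagement.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Placeholder commands -- swap in the real artifact-processing tool
# invocations and paths for your case.
jobs = {
    "eventlogs": ["echo", "event log processing"],
    "mft": ["echo", "$MFT / $UsnJrnl processing"],
    "registry": ["echo", "registry hive processing"],
}

def run_job(name, cmd):
    # Each tool runs in its own subprocess, so the steps execute in parallel
    # while the analyst reviews whichever output completes first.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return name, result.returncode

with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    futures = [pool.submit(run_job, n, c) for n, c in jobs.items()]
    status = {name: rc for name, rc in (f.result() for f in as_completed(futures))}

print(status)
```

As each job finishes, its output file is ready for analysis while the remaining jobs continue running.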

During this system processing, the goal should be to identify suspicious activity surrounding the following areas:

  • User account usage
  • Lateral movement
  • Process activity
  • Persistence mechanisms
    • Scheduled tasks
    • Autoruns, including registry keys
  • Attacker tool use
  • Attacker backdoors

Note: This is not an exhaustive list, but it provides a foundation to build from when defining Windows system analysis goals and objectives.

Windows Event Log processing

Windows event log analysis is crucial in any incident, which is why this is usually the first step in the artifact processing. The intent (and the hope) is to derive key IOCs that can be used to pivot off to assist with tactical analysis of the remaining forensic artifacts.

Generally, an analyst will use Event Log Explorer (ELE), or the like, to manually analyze individual Windows event logs. This methodology is tedious, takes a significant amount of time to accomplish, and inherently leaves some key event logs out of the analysis process due to time constraints. By aggregating all available Windows event logs together and then analyzing the output, an analyst can cover the entire Windows event log landscape.

This can be achieved using some very effective log analysis tools.

  • EvtxECmd
  • Hayabusa
  • Zircolite
  • Chainsaw

These tools will create aggregated CSV file output that can be imported into Timeline Explorer, a tool that can be used to tactically carve the event log output based upon the analyst's needs: searching for key EID activity or strings of interest, or filtering the output down to a particular time frame. From this quick triage log analysis, an analyst can find crucial pivot points that allow for further targeted log analysis, identify key time frames of interest, and quickly uncover critical IOCs that will aid targeted incident threat hunting activities.
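The same kind of EID and time-frame carving an analyst performs in Timeline Explorer can be scripted against the aggregated CSV. The sketch below runs on synthetic EvtxECmd-style rows; the column names (`TimeCreated`, `EventId`) are assumptions for illustration and may differ from your tool's actual export.

```python
import csv
import io

# Synthetic EvtxECmd-style CSV output; column names are assumptions.
raw = """TimeCreated,EventId,Computer,MapDescription
2023-03-01 09:15:02,4624,WS01,Successful logon
2023-03-01 09:16:40,4688,WS01,Process creation
2023-03-01 11:02:11,4624,WS02,Successful logon
2023-03-02 01:30:00,7045,WS01,Service installed
"""

# Triage filter: key EIDs of interest within a time frame of interest,
# mirroring the carving an analyst would do in Timeline Explorer.
eids_of_interest = {"4624", "7045"}
start, end = "2023-03-01 00:00:00", "2023-03-01 23:59:59"

hits = [
    row for row in csv.DictReader(io.StringIO(raw))
    if row["EventId"] in eids_of_interest and start <= row["TimeCreated"] <= end
]

for row in hits:
    print(row["TimeCreated"], row["EventId"], row["Computer"])
```

The string comparison works here because the timestamps are already in sortable `YYYY-MM-DD HH:MM:SS` form; parse them with `datetime` if the export format differs.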

$MFT and $UsnJrnl Processing

The $MFT can be considered one of the most important forensic files to analyze. It keeps records of all files in a volume, the file's location in a directory, the physical location of the file on a drive, and file metadata.

The $UsnJrnl (specifically, the ADS $J) is another critical artifact, and it will provide evidence pointing towards file creation, deletion, renaming, and more. A typical scenario experienced on an incident is attackers deleting or renaming files. This artifact can provide evidence of files that may have existed and identify what happened. This file activity can then be correlated against activity contained within the $MFT file. One thing to note about the $UsnJrnl, especially on particularly busy volumes, is that the data may only go back about a day.

Note: Due to the size of these files, it is very helpful to have a time frame of interest to start the analysis from.

MFTECmd can be used to provide quick processing of both the $MFT and $UsnJrnl artifacts. This tool will create an aggregated CSV file output that can be imported into Timeline Explorer.

From within Timeline Explorer, an analyst can filter the $MFT data by time frame of interest, filename, file extension, etc., and the $UsnJrnl data can be filtered by entry number, update reason (OPCODE), etc.
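The entry-number correlation between the $UsnJrnl and the $MFT can be illustrated with a short script over synthetic MFTECmd $J rows. The column names (`EntryNumber`, `UpdateReasons`, `UpdateTimestamp`) are assumptions based on typical CSV exports, not a guaranteed schema.

```python
import csv
import io

# Synthetic MFTECmd $J output; column names are assumptions.
raw = """UpdateTimestamp,EntryNumber,Name,UpdateReasons
2023-03-01 09:20:00,1001,mimikatz.exe,FileCreate
2023-03-01 09:25:00,1001,svch0st.exe,RenameNewName
2023-03-01 09:40:00,1001,svch0st.exe,FileDelete|Close
2023-03-01 10:00:00,2002,report.docx,DataExtend
"""

# Pull every journal record tied to one MFT entry number -- this is the
# pivot that correlates $UsnJrnl activity back into the $MFT data.
entry_of_interest = "1001"
history = [
    row for row in csv.DictReader(io.StringIO(raw))
    if row["EntryNumber"] == entry_of_interest
]

# The record sequence shows a file created, renamed, then deleted --
# exactly the attacker cleanup pattern described above.
for row in history:
    print(row["UpdateTimestamp"], row["Name"], row["UpdateReasons"])
```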

To provide a graphical view of these two files, an analyst could use MFTExplorer.

Key Registry Hive Processing

The registry holds important information about the software, hardware, and even the users of the system in question. This includes data about recently used programs or files, devices that may have been or are connected to the system, systems pivoted to, etc.

The key registry hives that should be analyzed are the following:

  • SAM – Stores credentials and account information for local users
  • SYSTEM – Stores all system configuration information
  • SECURITY – Stores security policy information
  • SOFTWARE – Stores all information regarding installed software

Note: During the initial point in the investigation, an analyst may not know what specific user accounts were compromised, but as the investigation uncovers this information, the analyst will want to include the applicable NTUser.dat and UsrClass.dat hive files for analysis.

To provide automated processing of the registry hives outlined above, the following tools can be used.

  • RegRipper utilizes preconfigured plugins to identify potential suspicious activity.
  • RECmd can be used to create an aggregated CSV file of all processed registry hives, which can be imported into Timeline Explorer for further analysis.

Registry Explorer, on the other hand, can be used to provide a graphical view of these registry hives.
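A quick persistence sweep over the aggregated registry CSV can be scripted as well. The example below flags any value living under a Run/RunOnce key in synthetic RECmd-style rows; the column names (`KeyPath`, `ValueName`, `ValueData`) and the sample paths are assumptions for illustration.

```python
import csv
import io

# Synthetic RECmd-style batch output; column names are assumptions.
raw = """KeyPath,ValueName,ValueData
Microsoft\\Windows\\CurrentVersion\\Run,Updater,C:\\Users\\Public\\update.exe
Microsoft\\Windows\\CurrentVersion\\Explorer,ShellState,binarydata
Microsoft\\Windows\\CurrentVersion\\RunOnce,Cleanup,C:\\Temp\\c.bat
"""

# Quick persistence sweep: flag any value under a Run or RunOnce key,
# a common autorun persistence location.
suspects = [
    row for row in csv.DictReader(io.StringIO(raw))
    if "\\Run" in row["KeyPath"]
]

for row in suspects:
    print(row["KeyPath"], "->", row["ValueData"])
```

The same filtering can of course be done interactively in Timeline Explorer; scripting it is useful when the same sweep must be repeated across many hosts.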

Memory Image Processing

There is not always time (or available data) to do thorough memory image analysis while working a fast-paced Incident Response engagement, but there is a wealth of forensic data usually waiting to be uncovered within a memory image. This step will provide the analyst the ability to at least do a quick overview of the activities taking place within the captured memory image to find any potential pivot points of interest.

To provide automated processing of a captured memory image, the following tools can be used.

  • Autotimeliner will run the timeliner, mftparser, and shellbags plugins and create a timeline in CSV file format, which can be imported into Timeline Explorer for further analysis.
  • AutoVolatility will run over 40 plugins (by default) and create a text file output for each plugin.

Bulk Extractor is a great multipurpose forensic tool that can also be used to extract various artifacts from memory images. One specific use case is carving network packets and streams directly from a memory image.

Volatility is the de facto command line memory analysis tool. There may be situations where an analyst needs to switch between versions 2 and 3, so it is helpful to become familiar with both.

  • Volatility
  • Volatility Workbench (Windows Volatility v3 GUI)

Some of the recommended quick win plugins to run are listed below:

Volatility v2

  • Cmdline
  • Cmdscan
  • Dlllist
  • Iehistory
  • Ldrmodules
  • Malfind
  • Modules
  • Netscan
  • Notepad
  • Pslist
  • Pstree

Volatility v3

  • Windows.cmdline
  • Windows.dlllist
  • Windows.drivermodule
  • Windows.ldrmodules
  • Windows.malfind
  • Windows.netscan
  • Windows.netstat
  • Windows.pslist
  • Windows.pstree
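Running the quick-win v3 plugins one after another is easy to script. The sketch below only builds the command lines; the image filename (`memory.img`) and the `vol.py` location are placeholders, and each command can be handed to `subprocess.run()` with its output redirected to a per-plugin file.

```python
import shlex

# Quick-win Volatility v3 plugins from the list above; the image path
# and vol.py location are placeholders for your engagement.
image = "memory.img"
plugins = [
    "windows.cmdline", "windows.dlllist", "windows.drivermodule",
    "windows.ldrmodules", "windows.malfind", "windows.netscan",
    "windows.netstat", "windows.pslist", "windows.pstree",
]

# One command line per plugin, quoted so unusual image paths stay safe.
commands = [f"python3 vol.py -f {shlex.quote(image)} {p}" for p in plugins]

for cmd in commands:
    print(cmd)
```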

Tool References

Forensic Analysis VMs

General-Purpose Tools

Live Forensic Tool

System Analysis Tools

Quickly identifying and prioritizing threats allows an organization to respond rapidly and effectively to an incident while minimizing damage and downtime.

The key to successful triage is having the right tools and processes in place. As we have seen in Part 2 of this series, an analyst doesn’t need to be slowed down by the volume of acquired Windows artifact processing. A well-oiled rapid triage processing plan allows analysts to keep the artifact processing train speeding right along, while seamlessly analyzing completed artifacts in parallel.

Part 3 of this series will pivot into the processing of acquired incident network data and explore the various tools that are available to aid in the investigation.