May 02, 2024

XZ Utils Made Me Paranoid

Written by Kevin Haubris
Research Security Testing & Analysis

On March 28, 2024, the news about the XZ Utils backdoor came out. Since then, I’ve been thinking about how we could identify these backdoors before packages are released or, at the very least, how to identify them after upgrades. After a week or so, I decided to try to write up a basic scanner to at least identify hooks in memory, which quickly turned into a much larger project than I expected. In this post, we’ll go through what the initial idea was, what needed to be built, and what we ended up with.

The Groundwork

If you look at my past blog posts (ELFLoader and COFFLoader), you will notice that I write a lot of in-memory loaders. To test out my idea, I started with one of my old in-memory loaders because I thought I could easily reuse and modify it to fit my needs. This turned out not to be the case, and we’ll cover why shortly. 

The full goal of this project was to do a few things. The first part was to identify which libraries a specific binary needs. That requires parsing the binary on disk, identifying the libraries it imports, and then identifying all the libraries that those libraries load. Then I had to relate those items to each other and compare what was in memory to what was on disk. The bulk of the work is pretty much the same as what would be needed for building an in-memory loader, so if you want more details on those, check out my ELFLoader or COFFLoader blog posts. All the differences between a standard loader and what I had to do will be covered below.

If you just want to look at the code for an ELF in-memory loader, I recommend the project at since it's fairly simple and handles most of the relocations that are necessary.

Standard Function Hooking

The first thing I tried was identifying a basic function hook. The main reason I chose to do this first was that I have a simple accept-backdoor proof of concept that already used hooking. This allowed me to test without running an actual backdoor on my system.

Parse the ELF

The first thing I had to do was parse the actual binary file, which is what I would do for an in-memory loader, but with a slight difference. Instead of just parsing to load, I had to identify all the offsets for the sections, parse the relocations/offsets, find a way to get the remote process’s memory, and then actually compare them. To do the comparison, I initially redid the relocations for the binary for every section I wanted to compare and nulled those ranges before comparison. After some back and forth with the team, we settled on a solution: parse the relocations once, store them all in a linked list, and apply each relocation to a section only if it falls within that section’s memory range.

Validate All Sections

Once all the details were parsed out of the entire binary and every imported library, we focused on comparing them. To do this, we needed to get the bytes of each section from the remote process; for this I chose to use “ptrace” and require running as root. If we can attach, we pull all the sections for every directly imported library in use. If we can’t attach to the remote process, we skip it.

Once those section bytes are copied over to our process, we patch out the relocations in that range alongside the parsed section, and then compare them. Since this is just a proof of concept, I made up a quick hash algorithm; if a section differs between the on-disk copy and the remote process, the hashes won’t match, and we can just print on differences. Once a difference is identified, we can compare byte by byte, find the offset where the sections diverge, and print that out if we want.

Because of the way I’m doing this comparison (nulling out the range of bytes at each relocation in both sections and then comparing the modified sections), we ran into the issue where the Global Offset Table (GOT) always showed as clean. One possible fix was to parse every binary in the remote process and load them up manually, but at that point I already had the ability to identify function hooks in memory and hollowed-out shared objects, so I decided to call that scanner good and approach the GOT a different way as a second scanner.

Noted Problems

One thing I didn’t think of when writing this was “dlopen”/“dlsym” calls. If these functions are used to load plugins or modules, or to add functionality on demand, you end up with libraries loaded into the remote process that aren’t required by the process but also aren’t clear indicators of malicious behavior. On the other hand, if you see such libraries loaded into a process and no library in its dependency tree requires “libdl”, that could be an indicator of malicious behavior. This changes with GLIBC 2.34, where “libdl” and “libpthread” are removed and folded into “libc”, so now we need to identify whether any library resolves “dlopen” as an indicator instead.

GOT Hooks

For the GOT hooks, I decided to limit the scanner to a specific PID: read and parse that base executable, resolve all the symbols it imports, apply all relocations to it, and compare just the GOT. This introduced a ton of problems because now we had to resolve each symbol to the exact address the remote process uses. In the first scanner, we had already resolved the base address of every library, so we could reuse that and then dlopen the exact files. This turns out to be relatively difficult on newer systems using snap packages and various containers. For now, I just made it skip scanning binaries with “/snap/” in the path. For processes that run inside containers, until I can find a way to map the remote process’s mount points into our process and resolve from those paths, I am going to leave them as false positives.

The downside of restricting the scan to just that process is that if the process uses a library that in turn calls a function someone hooked, the hook would be missed. Luckily for us, the XZ Utils backdoor hooked “RSA_public_decrypt,” which is called directly from the process. To address this limitation, we would need to recursively load and compare the GOTs of every library loaded in the remote process, which would drastically increase run time. I decided this is a safe tradeoff because if a backdoor is being installed and is targeting a specific process, odds are the attacker will want to hijack a function that is called directly.

Do They Work?

Now the big question is, with all this code written up, does it identify any malicious behavior and find those backdoors? I set up a Debian testing system using an ISO I downloaded a week after the backdoor was found, thinking that it would be easy to backdoor it again with a copy of the shared object. But nope, it was already backdoored with the XZ Utils one.

Figure 1: XZ Backdoor Scan Results

Since I don’t have the source code for the XZ Utils backdoor, I wrote up a generic GOT hook install function so I could hijack the accept function, install my proof-of-concept accept backdoor, and validate that these scanners worked the way I expected. They did.

Figure 2: Accept Backdoor Results

For the “validate all sections” scanner, I hooked myself with the same accept backdoor and ran the scanner over the memory of all processes. It turns out that a few Linux processes have “wtext” sections, which I assume stands for “writeable text,” and those get modified by things that seem to be related to Nvidia and OpenGL. Because of that, I check whether there is a “wtext” section, and if there is, I print out a “WTEXT” identifier to label the finding as a possible false positive.

Another common finding is when there are multiple copies of a library installed, and a binary uses a RUNPATH/RPATH to specify the library path of that copy. Yet another possible false positive is when there are multiple versions of a symbol defined by a library, and “dlsym” ends up resolving the newer version, but the binary uses the older version. I’m less sure of how to deal with that one.

Figure 3: Accept Backdoor as Function Hook Results

To interpret this, we look up “FileOffset” 127410 with nm -D /usr/lib/x86_64-linux-gnu/ | grep -i 127410, and the result will be the name of the function that was hooked, like below.

Figure 4: Identifying Results as the “accept” Function

For anything that doesn’t have differences, you will get output like this or the false positive output below:

Figure 5: Legitimate and False Positives

The false positives here happen to be from a second version of libnghttp2 that “NetworkManager” is loading on my system. You can help narrow down whether a finding is a true positive or a false positive by looking at the 'objdump -x' or 'nm -D' output and identifying whether the offset lands in a function or in the headers. If the “FileOffset” value falls within the headers, then the scanner is most likely parsing the wrong file, and the finding is likely a false positive. Another example of false positives includes the following, which are the “wtext” modifications I mentioned earlier.

Figure 6: WTEXT False Positives

To mess with the code, visit:

The repo will have a full walkthrough of building it, running it, and notes on how to interpret the results.

Future Work

Because this mostly benefits defense and incident response, I would like to see this sort of validation being done proactively on services that are considered “critical”. I would also like to see it used to identify indicators of compromise (IOCs) for things like “ArcaneDoor” and its Authentication, Authorization, and Accounting (AAA) function hook: the actual modifications and where they are. Even better would be expanding on this to identify the libraries that normal processes use, any libraries that use “RWX”/“R-X” memory, and sections inside processes with those permissions that aren’t known to have them.