I recently tweeted that, as far as I'm aware, Nuix's Workstation/Investigator product is the only commercial product that incorporates RegRipper, or RegRipper-like functionality.
Brian responded to the tweet, noting that both OSForensics and OpenText include the capability as well. The OSForensics page states that the guide was written using RegRipper version 2.02, from May 2011, making it really quite old. For OpenText, the page states that the EnScript for launching RegRipper had been downloaded 4955 times, but is no longer supported. Also, the reference to RegRipper, at the old Wordpress site, tells us that it's pretty dated.
As such, that leaves us right where we started...Nuix Workstation is the only commercial product that incorporates the use of RegRipper, or some similar functionality. Here's the fact sheet that talks about the extension, and here's where you can get the extension itself, for free. Finally, here's a Nuix blog post that describes not only the RegRipper extension, but the Yara extension (also free), as well.
Okay, full disclosure time...while I was employed at Nuix, I helped direct the creation of the RegRipper extension, as well as updating the Yara extension. I didn't do the programming...that was something provided by Daniel Berry's team. The "two Jasons" did a fantastic job of taking my guidance and turning it into reality. Shoutz and mad props to the Juicy Dragon and his partner in crime, and their boss (Dan).
The RegRipper extension looks at each Windows evidence item, and "knows" where to go to get the Registry hives. This includes knowing, based on the version of Windows, where the NTUSER.DAT and USRCLASS.DAT files are located. It will also get the AmCache.hve file (if it's in that version of Windows), the NTUSER.DAT hives in the Service Profiles subfolders, as well as the Default hive file within the system32\config folder. Automatically.
And it will run the appropriate RegRipper profiles against the hives. Automatically.
And it will incorporate the output from RegRipper, based on the evidence item and file parsed, right back into the Nuix case as a separate evidence item. Automatically.
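For those who've only ever run RegRipper "as is", the command-line version gives a sense of what the extension is automating for each hive. A minimal example (the hive paths and report names here are hypothetical, not output from the extension):

rip.exe -r D:\case\Administrator\NTUSER.DAT -f ntuser > admin_ntuser.txt
rip.exe -r D:\case\Windows\System32\config\SOFTWARE -f software > software.txt

The extension does the equivalent of this for every hive it locates, in every Windows evidence item in the case, and folds the output back in as searchable items.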
Let's say you have multiple images of Windows systems, all different versions. Run the RegRipper extension against the case, and then run your keyword searches. This can be critical, as the Registry contains data that is ROT-13 encoded, as well as ASCII strings encoded as hexadecimal character streams; various RegRipper plugins decode this data, making it available for inclusion in your searches.
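To illustrate why the decoding matters for keyword searches, here's a minimal Python sketch of the two transforms mentioned above; the value name and hex stream are made-up examples, not data pulled from any particular hive.

import codecs

# ROT-13 decoding, of the sort applied to UserAssist value names
value_name = "HRZR_EHACNGU:P:\\Gbbyf\\abgrcnq.rkr"      # hypothetical example value name
print(codecs.decode(value_name, "rot_13"))               # -> UEME_RUNPATH:C:\Tools\notepad.exe

# ASCII string stored as a stream of hexadecimal characters
hex_stream = "433a5c546f6f6c735c726561646d652e747874"    # hypothetical example
print(bytes.fromhex(hex_stream).decode("ascii"))         # -> C:\Tools\readme.txt

Until data like this is decoded, a keyword search for "readme.txt" or "Tools" would sail right past it.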
The Yara extension is equally powerful. You select the path you want scanned, whether you want descendants (yes, please!), and the rule file(s) you want included, and let it go. All of the findings are included as tagged items directly back into your case.
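The Nuix extension handles the scanning and tagging inside the case, but the underlying idea can be sketched with the yara-python module; the rule file and scan root below are hypothetical examples, not part of the extension itself.

import os
import yara

rules = yara.compile(filepath="webshells.yar")       # hypothetical rule file

scan_root = r"D:\mounted_image\webroot"              # hypothetical path; walk all descendants
for root, _dirs, files in os.walk(scan_root):
    for name in files:
        path = os.path.join(root, name)
        matches = rules.match(path)
        if matches:
            print(path, [m.rule for m in matches])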
Want to give it a shot? Go here, and download the image from challenge #1, the web server case. Add the image file as an evidence item to a Nuix case, and then run the RegRipper extension. Then, use the Yara extension to run Yara rule files that look for web shells across the web server folder and all descendants. In this case, the web server folder is not C:\inetpub.
RegRipper was released almost 11 years ago. I've been told time and time again that a lot of people "use" it. I have no stats, nor any real insight into this, but what it means to me is that a lot of people download it and run it "as is", without getting the full benefit from the tool.
I guess I'm really amazed that in all this time, there hasn't been a concerted effort to incorporate RegRipper, or a RegRipper-like capability, directly into a commercial tool.
Tuesday, January 15, 2019
Just For Fun...
OS/2 Warp running in VirtualBox
After I arrived in CA, I was shown PPP/SLIP dial-up for Windows systems, and was fascinated with the configuration and use, mostly because now, I was one of the cool kids. For a while, I dabbled with the idea of a dual degree (EE/CS, never committed to it) and came in pretty close contact with folks from the CS department. As it turned out, there were more than a few folks in the CS department who were interested in OS/2, including one of the professors who had a program that would sort and optimize the boot-up sequence for OS/2 so that it would load and run faster.
Also, it didn't hurt that the grad school I attended was not far from Silicon Valley, the center of the universe for tech coolness at the time. In fact, the EE curriculum had a class each summer called "hot chips", in which we studied the latest in microprocessor technology, and which coincided with the "Hot Chips" conference in San Jose. The class was taught by Prof. F. Terman, the son of the other Fred Terman.
While I was in grad school, I was eventually introduced to OS/2. I purchased a copy of OS/2 2.1 at Fry's Electronics (in Sunnyvale, next to Weird Stuff), specifically because the box came with a sticker/coupon for $15 off of OS/2 Warp 3.0. I enjoyed working with OS/2 at the time; I could dial into my local ISP (Garlique.com at the time), connect to the school systems, and run multiple instances of MatLab programs I'd written as part of a course. I could then either connect later and collect the output, or simply have it available when I arrived at the school.
I had no idea at the time, but for the first four months or so that I was at NPS, I walked by Gary Kildall's office almost every day. I never did get to see CP/M in action, but anyone who's ever used MS-DOS has interacted with a shadow of what CP/M once was and was intended to be.
While I was in grad school, I had two courses in assembly language programming, both based on the Motorola 68000, the microprocessor used in Amiga systems. I never owned an Amiga as a kid, but while I was in grad school, there was someone in a nearby town who ran a BBS based on Amiga, and after grad school, I did see an Amiga system at a church yard sale.
While I was in the Monterey/Carmel area of California, I also became peripherally aware of such oddities of the tech world as BeOS and NeXT.
Looking back on all this, I was active duty military at the time, and could not have afforded to pull together the various bits and pieces to lay the foundation for a future museum. Also, for anyone familiar with the system the military uses to move service members between duty stations, this likely would not have survived (ah, the stories...). However, thanks to virtualization systems such as VirtualBox, you can now pull together a museum-on-a-hard-drive, by either downloading the virtual images, or collecting the necessary materials and installing them yourself. As you can see from the image in this post, I found an OS/2 Warp VB image available online. I don't do much with it, but I do remember that the web browser was the first one I encountered that would let you select an image in a web page and drag it to your desktop. At the time, that was some pretty cool stuff!
Resources (links to instructions, not all include images or ISO files)
VirtualBox BeOS R5
VirtualBox Haiku (inspired by BeOS) images (there's also a Plan 9 image...just sayin'...)
NeXT in VirtualBox: NeXTSTEP, stuffjasondoes
Virtually Fun - CP/M-86
VirtualBox Amiga
Wednesday, January 09, 2019
A Tale of Two Analyses
I recently shared my findings from an analysis challenge that Ali posted, and after publishing my post, found out that Adam had also shared his findings. Looking at both posts, it's clear that there are two different approaches, and as I read through Adam's findings, it occurred to me that this is an excellent example of what I've seen time and time again in the industry.
Adam and I have never met, and I know nothing at all about his background. I only know that Adam's blog has (at the time of this writing) a single post, from 2 Jan 2019. On the other hand, my blogging goes back to 2004, and I've been in the industry for over 21 years, with several years of service prior to that. All of this is simply to point out that Adam and I each come to the table with a different perspective and, given the same data, will likely approach it differently.
A good deal of my experience is in the consulting arena, meaning that engagements usually start with a phone call, and in most cases, it's very likely that the folks I work with weren't the first to be called, and won't be the last. Yes, I'm saying that in some cases, consumers of digital analysis services shop around. This isn't a bad thing; it simply means that they're looking for the "best deal". As such, cases are "spec'd out" based on a number of hours...not necessarily the number of hours it will take to complete the work, but more so the number of hours a customer is willing to purchase for the work they want done. The natural outcome of this is that once the customer's questions have been established, the analyst assigned the work needs to focus on answering those questions. Otherwise, time spent pursuing off-topic issues or "rabbit holes" increases the time it takes to complete the work, time which isn't billed to the customer. As such, the effective hourly rate drops, and it can drop to the point where the company loses money on the work.
All of this is meant to say that without an almost pedantic focus on the questions at hand, an analyst is going to find themselves not making friends; reports won't be delivered on time, additional analysts will need to be assigned to actually complete the work, and someone may have their approved vacation rescinded (I've actually seen this happen...) in order to get the work done.
When I read the challenge, my focus was on the text that the admin saw and reported. As such, my analysis goal, before even downloading the challenge image, was to determine the location of the file on the system, and then determine how it got there.
However, I approached Ali's first question of how the system was hacked a little differently; I've dealt with a lot of customers in two decades who've asked how something was "hacked", and I've had to keep in mind that their use of the term is different from mine. For me, "hacked" refers to exploiting a vulnerability to gain access to a system, escalate privileges, and/or essentially take actions for which the system and data were never intended. I didn't go into the analysis assuming that the system was "hacked"...I approached my analysis from the perspective that the focus of main effort for my analysis was the message that the admin had reported, and any "hacking" of the system would be within some modicum of temporal proximity to that file being created and/or modified. As such, a timeline was in order, and this approach helped me with question #2, regarding "evidence". In fact, creating micro-timelines or "overlays" allowed me to target my analysis. At one point, I created a timeline of just logon events from the Security Event Log. In order to prove that the Administrator account was used to create the target readme.txt file, I created micro-timelines for both user profiles, using just web browser and Registry (specifically, shellbags and RecentDocs) data.
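As a rough illustration of what I mean by a micro-timeline or "overlay", here's a minimal Python sketch that pulls just the successful logon events out of a full TLN-format events file (time|source|system|user|description). The file names are hypothetical, and the check on the description field assumes it contains the log name and event ID; this isn't the actual tooling I used, just the idea.

# filter a full events file down to Security Event Log logon events (ID 4624)
with open("events.txt") as full, open("logon_events.txt", "w") as overlay:   # hypothetical file names
    for line in full:
        fields = line.rstrip("\n").split("|")
        if len(fields) == 5 and fields[1] == "EVTX" and "Security/4624" in fields[4]:
            overlay.write(line)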
From my overall timeline, I found when the "C:\Tools\readme.txt" file had been created, and I then used that as a pivot point for my analysis. This is how I arrived at the finding that the Administrator account had been used to create the file.
From my perspective, the existence of additional text files (i.e., within a folder on the "master" profile desktop), who created them and when, the modification to magnify.exe, and the system time change all fell into the same bucket for question #4 (i.e., "anything you would like to add"). All of these things fell out of the timeline rather easily, but I purposely did not pursue further analysis of things such as the execution of magnify.exe, net.exe, and net1.exe, as I had already achieved my analysis goal.
Again, in my experience as a consultant, completing analysis work within a suitable time frame (one that allows the business to achieve its margins) hinges upon a focus on the stated analysis goals.
I've seen this throughout my career...a number of years ago, I was reviewing a ransomware case as part of an incident tracking effort, and noticed in the data that the same vulnerability used to compromise the system prior to ransomware being deployed had been used previously to access the system and install a bitcoin miner. As the customer's questions specifically focused on the ransomware, analysis of the bitcoin miner incident hadn't been pursued. This wasn't an issue...the analyst hadn't "missed" anything. In fact, the recommendations in the report that applied to the ransomware issue applied equally well to the bitcoin miner incident.
A Different Perspective
It would seem, from my reading (and interpretation) of Adam's findings, that his focus was more on the "hack". Early on in his post, Adam's statement of his analysis goals corresponded very closely to my understanding of the challenge:
We are tasked with performing an IR investigation from a user who reported they've found a suspicious note on their system. We are given only the contents of the message (seen below) without its file path, and also no time at which the note was left.
...and...
Since we can understand that this note was of concern to the user, it is very important to start developing a time frame of before the note was created to understand what led to this point. This will allow the investigator to find the root cause efficiently.
Adam went on to describe analyzing shellbags artifacts, but there was no indication in his write-up that he'd done a side-by-side mapping of the date/time stamps for the readme.txt file against the shellbag artifacts. Shortly after that, the focus shifted to magnify.exe, and away from the text file in question.
Adam continued with a perspective that you don't often see in write-ups or reports; not only did he look up the hash of the file on VT, he also demonstrated, in a virtual machine, his hypothesis regarding how magnify.exe might have been used.
In the end, however, I could not find where the question of how the "C:\Tools\readme.txt" file came to be on the system was clearly and directly addressed. It may be there; I just couldn't find it.
Final Words
I did engage with the challenge author early this morning (9 Jan), and he shared with me the specifics of how different files (the files in a folder on one user's desktop) were created on the system. I think that one of the major comments I shared with Ali was that not only was this challenge representative of what I've seen in the industry, but so are the shared findings. No two analysts come to the table with the exact same experience and perspectives, and left to their own devices, no two analysts will approach and solve the same analysis (or challenge) the same way. But this is where documentation and sharing is most valuable; by working challenges such as these, either separately or together, and then sharing our findings publicly, we can all find a way to improve every aspect of what we do.
Monday, January 07, 2019
Mystery Hacked System
Ali was gracious enough to make several of the challenges that he put together for his students available online, and to allow me to include the first challenge in IWS. I didn't include challenge #3, the "mystery hacked system", in the book, but I did recently revisit the challenge. Ali said that it would be fine for me to post my findings.
As you can see, the challenge is pretty straightforward...an admin found a message "written on their system", and reported it. The questions that Ali posed for the challenge were:
- How was the system hacked?
- What evidence did you find that proved your hypothesis?
- How did you approach and solve the case?
- Anything you would like to add?
Question 1
Based on what I observed in the data, I would say that the system was not, in fact, hacked. Rather, it appears that the Administrator user logged in from the console, accessed the C:\Tools folder and created the readme.txt file containing the message.
Question 2
I began with a visual inspection of the image, in order to verify that it could be opened, per SOP. An initial view of the image indicated two user profiles (Administrator, master), and that there was a folder named "C:\Tools". Within that folder was a single file named "readme.txt", which contained the text in question.
From there, I created a timeline of system activity, and started my analysis by locating the file 'C:\Tools\readme.txt' within the timeline, and I then pivoted from there.
The readme.txt file was created on 12 Dec 2015 at approx. 03:24:04 UTC. Approx. 4 seconds later, the Automatic JumpList for Notepad within the Administrator profile was modified; at the same time, UserAssist artifacts indicated that the Administrator user launched Notepad.
At the time that the file was created, the Administrator user was logged into the system via the console. Shellbag artifacts for the Administrator account indicated that the account was used to navigate to the 'C:\Tools' folder via Windows Explorer.
Further, there were web browser artifacts indicating that the Administrator account was used to view the file at 03:24:09 UTC on 12 Dec 2015, and that the 'master' account was used to view the file at 03:27:23 UTC on the same day.
Question 3
I created a timeline of system activity from several sources extracted from the image; file system metadata, Windows Event Log metadata, and Registry metadata. In a few instances, I created micro-timelines of specific data sources (i.e., login events from the Security Event Log, activity related to specific users) to use as "overlays" and make analysis easier.
Question 4
Not related to the analysis goal provided were indications that the Administrator account had been used to access the Desktop\Docs folder for the 'master' user, and to create the 'readme.txt' file in that folder.
In addition, there was a pretty significant change in the system time, as indicated by the Windows Event Log:
Fri Dec 11 17:30:37 2015 Z
EVTX sensei - [Time change] Microsoft-Windows-Kernel-General/1;
2015-12-11T17:30:37.456000000Z,
2015-12-12T03:30:35.244496800Z,1
*Time was changed TO 2015-12-11T17:30:37 FROM 2015-12-12T03:30:35
Finally, there was some suspicious activity on 12 Dec, at 03:26:13 UTC, in that magnify.exe was executed, as indicated by the creation and last modification of an application prefetch file; this suggests that it may have been the first and only time that magnify.exe had been executed.
Several seconds before that, it appeared that utilman.exe had been executed, and shortly afterward, net.exe and net1.exe were executed, as well.
Concerned with "Image File Execution Option" or accessibility hijacks, I searched the rest of the timeline, and found an indication that on 11 Dec, at approx. 19:18:54 UTC, cmd.exe had been copied to magnify.exe. This was verified by checking the file version information within magnify.exe. Utilman.exe does not appear to have been modified, nor replaced, and the same appears to be true for osk.exe and sethc.exe.
Checking the Software hive, there do not appear to be any further "Image File Execution Option" hijacks.
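For anyone who wants to script a similar check, here's a minimal sketch using the python-registry module to look for Debugger values under the Image File Execution Options key in a Software hive; the hive path is a hypothetical example, and this is just one way to approach it.

from Registry import Registry    # python-registry

reg = Registry.Registry(r"D:\case\Windows\System32\config\SOFTWARE")   # hypothetical hive path
ifeo = reg.open("Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options")

for subkey in ifeo.subkeys():                      # one subkey per executable name
    for value in subkey.values():
        if value.name().lower() == "debugger":     # a Debugger value indicates a potential hijack
            print(subkey.name(), "->", value.value())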
I should note that per the timeline, the execution of magnify.exe occurred approx. two minutes after the message file was created.
Addendum:
After I posted this article and Ali commented, I found this blog post by Adam, which also provided a solution to the challenge. Adam had posted his solution on 2 Jan, so last week. I have to say, it's great to see someone else working these challenges and posting their solution. Great job, Adam, and thanks for sharing!
Addendum, 9 Jan:
On a whim, I took a look at the USN change journal, and it proved to be pretty fascinating...more so, it showed that following the use of net.exe/net1.exe, no files were apparently written to the system (i.e., no command output was redirected to a file). Very cool.
Tuesday, January 01, 2019
LNK Toolmarks, Revisited
There was a recent blog post and Twitter thread regarding the parsing and use of weaponized LNK file metadata. Based on some of what was shared during the exchange, I got to thinking about toolmarks again, and how such files might be created. My thinking was that perhaps some insight could be gained from the format and structure of the file itself. For example, the weaponized LNK files from the two campaigns appear to have valid structures, and there appear to have been no modifications to the LNK file structure itself. Also, the files contained specific structures, including a PropertyStoreDataBlock and a TrackerDataBlock. While this may not seem particularly unusual, I have seen some LNK files that do not contain these structures. I parsed an LNK file not long ago that did not contain a TrackerDataBlock. As such, I thought I'd look at some ways, or at least one way, to create LNK files and see what the resulting files would "look like".
Several years ago, Adam had an excellent blog post regarding LNK hotkeys; I borrowed the VBS code (other examples and code snippets can be found here and here), and made some minor tweaks. For example, I pointed the target path to calc.exe, ran the code, and then moved the file from the Desktop to a testing folder, changing the file extension in the process. Running my own parser against the resulting LNK file, I found that it had a valid PropertyStoreDataBlock (containing the user SID), as well as a properly structured TrackerDataBlock.
These findings coincide with what I saw when running the parser against LNK files from both the 2016 and 2018 APT29/Cozy Bear campaigns.
You can also use PowerShell to create shortcuts, using the same API as the VBScript:
- Create shortcuts in PowerShell
- PowerShell script to modify an LNK file (target path)
- Editing shortcuts in PowerShell
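The same WScript.Shell COM interface used by the VBScript (and the PowerShell examples above) can also be driven from Python via the pywin32 package; a minimal sketch, with hypothetical output and target paths:

import win32com.client    # pywin32

shell = win32com.client.Dispatch("WScript.Shell")
lnk = shell.CreateShortCut(r"C:\Users\testing\Desktop\foo.lnk")   # hypothetical output path
lnk.TargetPath = r"C:\Windows\System32\calc.exe"
lnk.save()

Shortcuts created this way (on Windows) should carry the same sorts of populated structures, such as the PropertyStoreDataBlock and TrackerDataBlock, as those created interactively.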
We also know that the LNK files used in those campaigns were relatively large, much larger than a 'normal' LNK file, because the logical file was extended beyond the LNK file structure to include additional payloads. These payloads were extracted by the PowerShell commands embedded within the LNK file itself.
Felix Weyne published a blog post that discussed how to create a booby-trapped LNK file (much like those from the APT29 campaigns) via PowerShell.
So, we have a start in understanding how the LNK files may have been created. Let's go back and take another look at the LNK files themselves.
The FireEye blog post regarding the recent phishing campaign included an operational timeline in table 1. From that table, the "LNK weaponized" time stamp was developed (at least in part) from the last modification DOSDate time stamp for the 'system32' folder shell item within the LNK file. What this means is that the FireEye team stripped out and used as much of the LNK file metadata as they had available. From my perspective, this is unusual because most write-ups I've seen regarding the use of weaponized LNK files (or really, any weaponized files) tend to give only a cursory overview of or reference to the file before moving on to the payloads. Even write-ups involving weaponized Word documents make mention of the file extension, maybe give a graphic illustrating the content, and then move on.
As described by the folks at JPCERT/CC, parsing of weaponized LNK files can give us a view into the attacker's development environment. The FireEye team illustrated some of that by digging into the individual shell items and incorporating the embedded DOSDate time stamps in their analysis, and subsequently the operational timeline they shared in their post.
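For reference, the DOSDate values embedded in shell items are 16-bit date and time words with two-second resolution, so decoding a pair is straightforward. A minimal sketch (the input values are made up for illustration):

from datetime import datetime

def dosdate_to_datetime(dos_date, dos_time):
    # DOS date: bits 0-4 day, 5-8 month, 9-15 years since 1980
    # DOS time: bits 0-4 seconds/2, 5-10 minutes, 11-15 hours
    return datetime(
        1980 + (dos_date >> 9), (dos_date >> 5) & 0x0F, dos_date & 0x1F,
        dos_time >> 11, (dos_time >> 5) & 0x3F, (dos_time & 0x1F) * 2,
    )

print(dosdate_to_datetime(0x4D8B, 0x7A5C))   # -> 2018-12-11 15:18:56 (made-up values)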
There's a little bit (quite literally, no pun intended...) more that we can tease out of the LNK file. Recall the VBScript I ran to create an LNK file on my Windows 10 test system; figure 1 shows the "version" field (in the green box) for one of the shell items in the itemIDList from the resulting LNK file.
Fig. 1: Excerpt from "foo" LNK file
In figure 1, the version is "0x09", which corresponds to a Windows 10 system (see section 6.5 of Joachim Metz's Windows Shell Item format specification).
Figure 2 illustrates the shell item version value (in the red box) from the approximate corresponding location in the APT29 LNK file from the 2018 campaign.
Fig. 2: Excerpt from APT29 LNK file
In figure 2, the version field is "0x08", which corresponds to Windows 7 (as well as Windows 8.0, and Windows 2008). It may not be a lot, but as the versions of Windows continue to progress, this will have more meaning.
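For anyone who wants to check this field themselves, here's a minimal sketch that locates the 0xbeef0004 extension block signatures within an LNK file and reads the two-byte version value that immediately precedes each one; the file name is a hypothetical example, and this is a quick check, not a full shell item parser.

import struct

SIG = b"\x04\x00\xef\xbe"    # 0xbeef0004 extension block signature (little-endian)

data = open("foo.lnk", "rb").read()     # hypothetical file name
pos = data.find(SIG)
while pos != -1:
    # the 2-byte version field sits immediately before the 4-byte signature
    version = struct.unpack_from("<H", data, pos - 2)[0]
    print(f"extension block at offset {pos - 4}: version 0x{version:02x}")
    pos = data.find(SIG, pos + 1)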
Finally, some things to consider about the weaponized LNK files...
All of the shell items in both LNK files have consistent extension blocks, which may indicate that they were created through some automated means, such as via an API, and not modified (by hand or by script) as a means of defense evasion.
So, what about that? What is the effect on individual shell items, and on the entire LNK file, if modifications are made? From figure 1, I modified the version value to read 0x07 instead of 0x09, and the shortcut performed as expected. Modifications to weaponized LNK files such as these will very likely affect parsers, but for those within the community who wrote the parsers and understand the LNK file structure (along with associated shell item structures), such things are relatively straightforward to overcome (see my previous "toolmarks" blog post).
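To see the effect of such a modification firsthand, the same signature scan can be used to patch the version field in a copy of a test LNK file; the file names are hypothetical, and this is purely a test of how shortcuts and parsers behave, not something observed in the campaign files.

SIG = b"\x04\x00\xef\xbe"    # 0xbeef0004 extension block signature

data = bytearray(open("foo.lnk", "rb").read())        # hypothetical file names
pos = data.find(SIG)
while pos != -1:
    data[pos - 2:pos] = b"\x07\x00"                   # overwrite the version field with 0x0007
    pos = data.find(SIG, pos + 1)

open("foo_modified.lnk", "wb").write(bytes(data))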
Final Words
Using "all the parts of the buffalo", and fully exploiting the data that you have available, is the only way that you're going to develop a full intelligence picture. Some of what was discussed in this post may not be immediately useful for, say, threat hunting within an infrastructure, but it can be used to develop a better understanding of the progression of techniques employed by actors, and provide a better understanding of what to hunt for.
Additional Reference Material
Joachim Metz's Windows Shell Item format specification
Link to reported bash and C versions of a tool to create LNK files on Linux