Monday, July 15, 2013

HowTo: Detecting Persistence Mechanisms

This post is about actually detecting persistence mechanisms...not querying them, but detecting them.  There's a difference between querying known persistence mechanisms, and detecting previously unknown persistence mechanisms used by malware; the former we can do with tools such as AutoRuns and RegRipper, but the latter requires a bit more work.

Detecting the persistence mechanism used by malware can be a critical component of an investigation; for one, it helps us determine the window of compromise, or how long it's been since the system was infected (or compromised).  For PCI exams in particular, this is important because many organizations know approximately how many credit card transactions they process on a daily or weekly basis, and combining this information with the window of compromise can help them estimate their exposure.  If malware infects a system in a user context but does not escalate its privileges, then it will most likely start back up after a reboot only after that user logs back into the system.  If the system is rebooted and another user logs in (or in the case of a server, no user logs in...), then the malware will remain dormant.
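
Back-of-the-envelope, that exposure estimate is simple arithmetic; the figures below are entirely hypothetical:

```python
# Hypothetical exposure estimate: daily transaction volume multiplied by
# the window of compromise. Both figures are made up for illustration.
daily_transactions = 5000   # card transactions processed per day
window_days = 45            # days between infection and containment
exposure = daily_transactions * window_days
print(exposure)   # 225000 transactions potentially exposed
```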

Detecting Persistence Mechanisms
Most often, we can determine a malware persistence mechanism by querying the system with tools such as those mentioned previously in this post.  However, neither of those tools is comprehensive enough to cover every possible persistence mechanism, and as such, we need to seek out other processes or methods of analysis and detection.

One process that I've found to be very useful is timeline analysis.  Timelines provide us with context and an increased relative confidence in our data, and depending upon which data we include in our timeline, an unparalleled level of granularity.

Several years ago, I determined the existence of a variant of W32/Crimea on a system (used in online banking fraud) by creating a timeline of system activity.  I had started by reviewing the AV logs from the installed application, and then moved on to scanning the image (mounted as a volume) with several licensed commercial AV scanners, none of which located any malware.  I finally used an AV scanner called "a-squared" (now apparently owned by Emsisoft), and it found a malicious DLL.  Using that DLL name as a pivot point within my timeline, I saw that relatively close to the creation date of the malicious DLL, the file C:\Windows\system32\imm32.dll was modified; parsing the file with a PE analysis tool, I could see that the PE Import Table had been modified to point to the malicious DLL.  The persistence mechanism employed by the malware was to 'attach' to a DLL that is loaded by user processes that interact with the keyboard, in particular web browsers.  It appeared to be a keystroke logger that was only interested in information entered into form fields in web pages for online banking sites.
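
That detection step amounts to a baseline comparison; as a sketch, the import list would come from a PE parser (the pefile module is one option), and the baseline below is a hypothetical example, not an authoritative list of imm32.dll's legitimate imports:

```python
# Sketch: flag Import Table DLL names that fall outside a known-good
# baseline. In practice the import list would be extracted with a PE
# parser such as the pefile module; the baseline here is a hypothetical
# example, NOT an authoritative list of legitimate imports.

KNOWN_GOOD = {"kernel32.dll", "ntdll.dll", "user32.dll", "advapi32.dll"}

def unexpected_imports(imported_dlls, baseline=KNOWN_GOOD):
    """Return the import-table entries not present in the baseline."""
    return sorted(name for name in (d.lower() for d in imported_dlls)
                  if name not in baseline)

# A modified Import Table pointing at a malicious DLL stands out:
print(unexpected_imports(["KERNEL32.dll", "USER32.dll", "evil.dll"]))
# ['evil.dll']
```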

Interestingly enough, this particular malware was very well designed, in that it did not write the information it collected to a file on the system.  Instead, it immediately sent the information off of the system to a waiting server, and the only artifacts that we could find of that communication were web server responses embedded in the pagefile.

Something else to consider is the DLL Search Order "issue", often referred to as hijacking. This has been discussed at length, and likely still remains an issue because it's not so much a specific vulnerability that can be patched or fixed, but more a matter of functionality provided by the architecture of the operating system.

In the case of ntshrui.dll (discussed here by Nick Harbour, while he was still with Mandiant), this is how it worked...ntshrui.dll is listed in the Windows Registry as an approved shell extension for Windows Explorer.  In the Registry, many of the approved shell extensions have explicit paths listed...that is, the value is C:\Windows\system32\some_dll.dll, and Windows knows to go load that file.  Other shell extensions, however, are listed with implicit paths; that is, only the name of the DLL is provided, and when the executable (explorer.exe) loads, it has to go search for that DLL.  In the case of ntshrui.dll, the legitimate copy of the DLL is located in the system32 folder, but another file of the same name had been created in the C:\Windows folder, right next to the explorer.exe file.  As explorer.exe started searching for the DLL in its own directory, it happily loaded the malicious DLL without any sort of checking, and therefore, no errors were thrown.
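
A minimal sketch of checking for this condition, assuming we already know which directories the loader searches and in what order; the temporary directories below stand in for C:\Windows and C:\Windows\system32:

```python
import os
import tempfile

def dll_hits_in_search_order(dll_name, search_dirs):
    """Return every location of dll_name along the search path, in order.
    More than one hit -- particularly one ahead of the expected system
    directory -- is worth a look as possible search-order hijacking."""
    return [os.path.join(d, dll_name) for d in search_dirs
            if os.path.isfile(os.path.join(d, dll_name))]

# Hypothetical layout: a second ntshrui.dll planted next to explorer.exe
# shadows the legitimate copy in system32.
app_dir = tempfile.mkdtemp()   # stands in for C:\Windows
sys_dir = tempfile.mkdtemp()   # stands in for C:\Windows\system32
for d in (app_dir, sys_dir):
    open(os.path.join(d, "ntshrui.dll"), "w").close()

hits = dll_hits_in_search_order("ntshrui.dll", [app_dir, sys_dir])
print(len(hits))   # 2 -- and the first hit is the one the loader would use
```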

Around the time that Nick was writing up his blog post, I'd run across a Windows 2003 system that had been compromised, and fortunately for me, the sysadmins had a policy for a bit more extensive logging enabled on systems.  As I was examining the timeline, starting from the most recent events to occur, I marveled at how the additional logging really added a great deal of granularity to thing such as a user logging in; I could see where the system assigned a token to the user, and then transferred the security context of the login to that user.  I then saw a number of DLLs being accessed (that is, their last accessed times were modified) from the system32 folder...and then I saw one (ntshrui.dll) from the C:\Windows folder.  This stood out to me as strange, particularly when I ran a search across the timeline for that file name, and found another file of the same name in the system32 folder.  I began researching the issue, and was able to determine that the persistence mechanism of the malware was indeed the use of the DLL search order "vulnerability".

Creating Timelines
Several years ago, I was asked to write a Perl script that would list all Registry keys within a hive file, along with their LastWrite times, in bodyfile format.  Seeing the utility of this information, I also wrote a version that would output to TLN format, for inclusion in the timelines I create and use for analysis.  This allows for significant information that I might not otherwise see to be included in the timeline; once suspicious activity has been found, or a pivot point located, finding unusual Registry keys (such as those beneath the CLSID subkey) can lead to identification of a persistence mechanism.
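
As a rough sketch, a TLN event is just a pipe-delimited line of time (Unix epoch), source, host, user, and description; the key path, hostname, and timestamp below are hypothetical:

```python
# Minimal sketch of a five-field TLN event: time (Unix epoch, UTC),
# source, host, user, description. All values here are hypothetical.

def tln_line(epoch, source, host, user, description):
    return "%d|%s|%s|%s|%s" % (epoch, source, host, user, description)

line = tln_line(1373846400, "REG", "WORKSTATION1", "",
                "M... HKLM\\Software\\Classes\\CLSID key LastWrite")
print(line)
```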

Additional levels of granularity can be achieved in timelines through the incorporation of intelligence into the tools used to create timelines, something that I started adding to RegRipper with the release of version 2.8. One of the drawbacks to timelines is that they will show the creation, last accessed, and last modification times of files, but not incorporate any sort of information regarding the contents of that file into the timeline.  For example, a timeline will show a file with a ".tmp" extension in the user's Temp folder, but little beyond that; incorporating additional functionality for accessing such files would allow us to include intelligence from previous analyses into our parsing routines, and hence, into our timelines.  As such, we may want to generate an alert for that ".tmp" file, specifically if the binary contents indicate that it is an executable file, or a PDF, or some other form of document.
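
A minimal sketch of that kind of content-based alerting, using standard magic numbers; the alerting logic is illustrative only, not RegRipper's actual implementation:

```python
# Sketch: classify a file by its leading magic bytes so that a ".tmp"
# file that is really an executable or a document can generate an alert.
# These are standard file signatures; the logic is illustrative only.

SIGNATURES = {
    b"MZ":                "Windows executable",
    b"%PDF":              "PDF document",
    b"\xd0\xcf\x11\xe0":  "OLE2/Office document",
}

def classify(first_bytes):
    for magic, label in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return label
    return None   # nothing recognizable; no alert

print(classify(b"MZ\x90\x00"))   # Windows executable
```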

Another example of how this functionality can be incorporated into timelines and assist us in detecting persistence mechanisms might be to add grep() statements to RegRipper plugins that parse file paths from values.  For example, your timeline would include the LastWrite time for a user's Run key as an event, but because the values for this key are not maintained in any MRU order, there's really nothing else to add.  However, if your experience were to show that file paths that include "AppData", "Application Data", or "Temp" might be suspicious, why not add checks for these to the RegRipper plugin, and generate an alert if one is found?  Would you normally expect to see a program being automatically launched from the user's "Temporary Internet Files" folder, or is that something that you'd like to be alerted on?  The same sort of thing applies to values listed in the InProcServer keys beneath the CLSID key in the Software hive.
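
A sketch of the kind of path check such a plugin could apply to Run key value data; the folder names come from the examples above, and the value names and paths are hypothetical:

```python
import re

# Sketch of a check a RegRipper plugin could apply to Run key value
# data: alert when the launched program sits under a folder commonly
# abused by malware. The value names and paths are hypothetical.

SUSPICIOUS = re.compile(
    r"AppData|Application Data|Temp|Temporary Internet Files",
    re.IGNORECASE)

def alerts(run_values):
    """run_values: dict of Run key value name -> command line."""
    return {name: data for name, data in run_values.items()
            if SUSPICIOUS.search(data)}

hits = alerts({
    "GoogleUpdate": r"C:\Program Files\Google\Update\GoogleUpdate.exe",
    "svch0st":      r"C:\Users\bob\AppData\Local\Temp\svch0st.exe",
})
print(sorted(hits))   # ['svch0st']
```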

Adding this alerting functionality to tools that parse data into timeline formats can significantly increase the level of granularity in our timelines, and help us to detect previously unknown persistence mechanisms.

Resources
Mandiant: Malware Persistence without the Windows Registry
Mandiant: What the fxsst?
jIIR: Finding Malware like Iron Man
jIIR: Tracking down persistence mechanisms


12 comments:

  1. Thanks for the article; it was really helpful to me.

    I was wondering: if the AV hadn't hit upon the DLL, how would you have proceeded?

  2. @Tim,

    I assume that by your question, you're referring to the section on imm32.dll...would that be correct?

    If it is, I like to think that I might have found the modification to imm32.dll (appeared in the timeline as "M...") as unusual.

    HTH

  3. @Tim,

    Also, I think that something important to take away from this is that I've rolled that finding back into my analysis; I not only look specifically at imm32.dll (particularly in online banking fraud cases), but I also include checking section names as part of my PE file analysis, when necessary.

    I think what makes for a better analyst is taking things that you've learned from one case and incorporating them into your overall analysis. It's unfortunate that the DFIR community isn't more into sharing these kinds of things...I find it valuable to learn from the findings of others, as learning from just my own is somewhat limited.

  4. All we really have right now to analyze timelines is Excel (which you don't use much) or a text file. The timeline with context could be nice if something like Forensic Scanner was used to process an image and put various suspicious (packed, AV hits, autoruns, invalid signature, etc.) files like you mentioned in an SQLite database. Then an HTML timeline could be generated, and files that show up in the database could be marked in the timeline and clickable for more context.

    Here's some more ideas for HowTo topics you requested:

    How to do a root cause analysis
    How to work with clients who are stressed, want answers now, point fingers, or want heads to roll.
    How to hit the ground running when you arrive at a client with little information.
    How to communicate during an incident with respect to security and synergy with other IRT members.
    How to detect and deal with timestomping, data wiping, or some other antiforensic technique.
    How to get a DFIR job, and keep it.
    How to make sure management understands and applies your recommendations after an incident, when they're most likely to listen.
    How to find hidden data; in registry, outside of the partition, ADS, or if you've seen data hidden in the MFT, slackspace, steganography, etc.
    How to contain an incident.

    I'm enjoying these HowTo's and WFA 2e, thanks.

  5. Joe,

    Thanks for the comment.

    The timeline with context could be nice if something like Forensic Scanner was used to process an image and put various suspicious (packed, AV hits, autoruns, invalid signature, etc.) files like you mentioned in an SQLite database.

    Current timeline creation tools allow for context, and available tools also allow for a greater level of granularity. Incorporating this information into a database is then straightforward.

    Here's some more ideas for HowTo topics you requested:

    This is a very interesting list...do you have any insight that you can share on these topics?

    Several of the topics have been covered previously, by others...such as the questions of time stomping and data wiping...is there something specific that you're interested in?

    Thanks.

  6. Harlan,

    Thanks for the response. That was the dll to which I was referring.
    I’m working on the procedures I’ll be following to ensure I don’t miss the clues when searching for malware. I won’t be able to rely on any AV product for detection.

    I have started tinkering around with the plaso - log2timeline program. It looks and works great.

  7. @Tim,

    Great, be sure to let Kristinn know...

  8. This is a very interesting list...do you have any insight that you can share on these topics?

    Not from any personal experience.

    I've heard stories of a manager standing in the doorway while an analyst is working and continuing to update him with how much money they are losing.
    Analysts being pressured for information before they have it.
    Finger pointing, and people trying to protect their job, before they even know what happened.
    I've heard from analysts that it can be tough getting up to speed on the incident and the client's network.

    Do you sign up with a secure email provider to communicate during an incident? Use a wiki? IM? I've heard someone recommend analysts should frequently have a meeting to touch base.

    Several of the topics have been covered previously, by others...such as the questions of time stomping and data wiping...is there something specific that you're interested in?

    Correlating $STANDARD_INFORMATION timestamps with $FILE_NAME and registry timestamps was recommended in the past, and still useful, but now those timestamps can be modified as well. So what do you do now? The nanoseconds will be all zeros when timestomp is used, but I believe that's tool specific. I think there might also be a way to detect timestomping through MFT entry number analysis. If so, I'm not very familiar with it, how reliable it is, or any other method to use.
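
The zero-subsecond heuristic mentioned here can be sketched directly against a raw FILETIME value (a count of 100-nanosecond intervals since 1601); as noted, it's tool-specific, and the sample values below are hypothetical:

```python
# A Windows FILETIME counts 100-nanosecond intervals since 1601-01-01.
# A $STANDARD_INFORMATION timestamp landing exactly on a whole second is
# suspicious when a second-granularity timestomping tool is a concern;
# legitimate timestamps almost always carry sub-second jitter. This is a
# heuristic only, and the sample values are hypothetical.

TICKS_PER_SECOND = 10_000_000   # 100ns FILETIME ticks per second

def whole_second(filetime_ticks):
    """True if the FILETIME has no sub-second component."""
    return filetime_ticks % TICKS_PER_SECOND == 0

print(whole_second(130198464000000000))   # True  -- exactly on a second
print(whole_second(130198464000001234))   # False
```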

  9. Joe,

    I've heard stories...

    I'm sure we all have, and I'm sure that we've all had some of those experiences. I would submit that that's exactly why this is a job, and it's not called an "amusement park" (because if it was, they would charge us admission rather than paying us a salary). ;-)


    I've heard from analysts that it can be tough getting caught up to speed on the incident and client's network.

    I would agree with that, but it can depend in large part on your knowledge and breadth of experience.

    Communications during an incident can be very dependent upon the incident, the infrastructure, as well as a number of other factors. If you assume that your email server is compromised, do you then assume that the intruder has the ability to install sniffers? If you do, none of the other means of communications that use the network are suitable. Many others aren't suitable if you do not want any of the information on the Internet. So you may need to move to cell phone communications, or even to regularly scheduled meetings.

    Also, if your internal phone service is based on an Internet-connected PBX, your use of conference calls may be impacted.

    As far as looking for the use of time modification and anti-forensics techniques, timelines provide a very valuable analysis technique. In your post, you mention several specific items that are targets of attack, as well as of analysis. The idea is to incorporate a sufficient breadth of data sources in your timeline to identify such things...the more complex the technique used, the more artifacts are going to be left.

    However, it is important to point out that experience plays a significant role in identifying such things. If you've never created a timeline before, or if you're using automated tools that do not collect a suitable breadth of data for you, you're likely to miss critical artifacts. That's hard to convey in a single blog post.

  10. Harlan,

    If you assume that your email server is compromised, do you then assume that the intruder has the ability to install sniffers? If you do, none of the other means of communications that use the network are suitable.

    Did you mean keyloggers instead of sniffers? If the communication is encrypted, as it of course should be, then packet sniffers shouldn't be much of a problem... But I see your point.

    ...the more complex the technique used, the more artifacts are going to be left.

    I never thought about it like that. I'll search around then. Hopefully you or someone else covered those timestomping artifacts before as well.

  11. Joe,

    Did you mean keyloggers instead of sniffers?

    No, I meant sniffers...I made no assumptions about encryption being in use. Keyloggers would apply, as well.

    Hopefully you or someone else covered those timestomping artifacts before as well.

    Like I said, I have, and so have others, including (but not limited to) Corey Harrell.

    My point, however, was that for someone to employ both time stomping with 64-bit granularity, as well as modification of Registry key LastWrite times, they'll need to copy over the appropriate tools. The more anti-forensics techniques that are employed, the more tools and activity are required, and the more artifacts may be left.

    It's like anything else in the real (vs digital) world...the more things you do to try to cover your tracks, the more likely you will be to miss something.

    Also, have you ever seen "Battle: Los Angeles"? In the movie, the Marines are being evac'd via helicopter, and as they observe the cityscape at night, they locate the aliens' command and control center due to the absence of emanations from the area...the RF emissions are powerful enough to knock out power, leaving a huge blank spot in the cityscape at night. The absence of lights was the indication...just as the use of anti-forensics techniques will leave similar gaps in activity.

    HTH

  12. Harlan,

    Apparently I missed some good stuff. I'll do some googling tomorrow, thanks.
