
Wednesday, January 20, 2016

Resources, Link Mashup

Monitoring
MS's Sysmon was recently updated to version 3.2, adding an event that captures opens for raw read access to disks and volumes.  If you're interested in monitoring your infrastructure and performing threat hunting at all, I'd highly recommend installing something like this on your systems.  While Sysmon is not nearly as fully-featured as something like Carbon Black, employing Sysmon along with centralized log collection and filtering will provide a level of visibility that you likely hadn't imagined was possible before.
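If you want a quick look at those new events on a system, here's a minimal sketch (assuming Sysmon is installed and writing to its default operational log) that queries for Event ID 9, the RawAccessRead event:

# List recent Sysmon Event ID 9 (RawAccessRead) records from the default Sysmon log
Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Sysmon/Operational'; Id=9} -MaxEvents 50 |
    Select-Object TimeCreated, Message |
    Format-List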

This page talks about using Sysmon and NXLog.

The fine analysts of the Dell SecureWorks CTU-SO recently posted an article that describes what the bad guys like to do with Windows Event Logs, and both of the case studies could have been "caught" with the right instrumentation in place.  You can also use process creation monitoring (via Sysmon, or some other means) to detect when an intruder is living off the land within your environment.
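As a rough sketch of what that hunting might look like against Sysmon process creation events (Event ID 1); the list of binaries here is purely illustrative, not a complete or authoritative list:

# Pull Sysmon process creation events where the image is a commonly-abused native tool
Get-WinEvent -FilterHashtable @{LogName='Microsoft-Windows-Sysmon/Operational'; Id=1} |
    Where-Object { $_.Message -match 'Image: .+\\(wmic|schtasks|bitsadmin|net)\.exe' } |
    Select-Object TimeCreated, Message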

The key to effective monitoring and subsequent threat hunting is visibility, which is achieved through telemetry and instrumentation.  How are bad guys able to persist within an infrastructure for a year or more without being detected?  It's not that they aren't doing stuff, it's that they're doing stuff that isn't detected due to a lack of visibility.

MS KB article 3004375 outlines how to improve Windows command-line auditing, and this post from LogRhythm discusses how to enable PowerShell command line logging (another post discussing the same thing is here).  The MS KB article gets you basic process creation information, while Sysmon provides much more insight.  Whichever option you choose, however, it's of little use unless you're doing some sort of centralized log collection and filtering, so be sure to incorporate the necessary and appropriate logs into your SIEM, and get those filters written.
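For reference, the registry value described in KB3004375 can be set as follows (a sketch; in a domain you'd push this via GPO rather than setting it by hand, and Audit Process Creation needs to be enabled for the 4688 records to show up at all):

# Enable inclusion of the command line in Security event ID 4688 (per KB3004375)
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit' -Name 'ProcessCreationIncludeCmdLine_Enabled' -PropertyType DWord -Value 1 -Force | Out-Null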

Windows Event Logs
Speaking of Windows Event Logs, sometimes it can be very difficult to find information regarding various event source/ID pairs.  Microsoft has a great deal of information available regarding Windows Event Log records, and I can often find the relevant pages with a quick Google search.  For example, I recently found this page on Firewall Rule Processing events, based on a question I saw in an online forum.

From Deus Ex Machina, you can look up a wide range of Windows Event Log records here or here.  I've found both to be very useful, and have used them more than once to get information about *.evtx records that I couldn't find anyplace else.

Another source of information about Windows Event Log records and how they can be used is often one of the TechNet blogs.  For example, here's a really good blog post from Jessica Payne regarding tracking lateral movement...

With respect to the Windows Event Logs, I've been looking at ways to increase instrumentation on Windows systems, and something I would recommend is putting triggers in place for various activities, and writing a record to the Windows Event Log.  I found this blog post recently that discusses using PowerShell to write to the Windows Event Log, so whatever you trap or trigger on a system can launch the appropriate command or run a batch file that contains the command.  Of course, in a networked environment, I'd highly recommend that a SIEM be set up, as well.
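As a minimal sketch of what that looks like (the source name and event ID below are made up for illustration; use whatever makes sense for your environment):

# One-time setup: register a custom event source (requires admin rights)
New-EventLog -LogName Application -Source 'IR-Trigger'

# Called by whatever trap or trigger you've put in place
Write-EventLog -LogName Application -Source 'IR-Trigger' -EventId 9001 -EntryType Warning -Message "Trigger fired on $env:COMPUTERNAME at $(Get-Date -Format o)"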

One thought regarding filtering and analyzing Windows Event Log records sent to a SIEM...when looking at various Windows Event Log records, we have to look at them in the context of the system, rather than in isolation, as what they actually refer to can be very different.  A suspicious record related to WMI, for example, when viewed in isolation may end up being part of known and documented activity when viewed in the context of the system.

Analysis
PoorBillionaire recently released a Windows Prefetch Parser, which is reportedly capable of handling *.pf files from XP systems all the way up through Windows 10 systems.  On 19 Jan, Eric Zimmerman did the same, making his own Prefetch parser available.
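Neither tool needs this, but as a quick sanity check on a file you've pulled (a sketch based on the documented *.pf header; Windows 10 Prefetch files are compressed, so this won't work on them without decompressing first, and the file name below is just an example):

# Read the first 8 bytes of a .pf file: a DWORD version number, then the 'SCCA' signature
$pf = 'C:\Windows\Prefetch\NOTEPAD.EXE-D8414F97.pf'   # example path only
$bytes = [System.IO.File]::ReadAllBytes($pf)
$version = [BitConverter]::ToUInt32($bytes, 0)          # 17 = XP/2003, 23 = Vista/7, 26 = Win8, 30 = Win10
$sig = [System.Text.Encoding]::ASCII.GetString($bytes, 4, 4)   # should be 'SCCA'
"{0}, format version {1}" -f $sig, $version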

Having tools available is great, but what we really need to do is talk about how those tools can be used most effectively as part of our analysis.  There's no single correct way to use these tools; the issue becomes, how do you correctly interpret the data once you have it?

I recently encountered a "tale of two analysts", where both had access to the same data.  One analyst did not parse the ShimCache data at all as part of their analysis, while the other did, but misinterpreted the information that the tool (whichever one it was) displayed.

So, my point is that having tools to parse data is great, but if the focus is on tools and parsing data, rather than on analyzing and correctly interpreting that data, what have the tools really gotten us?

Creating a Timeline
I was browsing around recently and ran across an older blog post (yeah, I know, it's like 18 months old...), and at the very beginning of that post, a couple of quotes caught my eye:

...my reasons for carrying this out after the filesystem timeline is purely down to the time it takes to process.

...and...

The problem with it though is the sheer amount of information it can contain! It is very important when working with a super timeline to have a pivot point to allow you to narrow down the time frame you are interested in.

The post also states that timeline analysis is an extremely powerful tool, and I agree, 100%.  What I would offer to analysts is a more deliberate approach to timeline analysis, based on what Chris Pogue coined as Sniper Forensics.

Speaking of analysis, the folks at RSA released a really good look at analyzing carrier files used during a phish.  The post provides a pretty thorough walk-through of the tool and techniques used to parse through an old (or should I say, "OLE") style MS Word document to identify and analyze embedded macros.
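Along those lines, one quick check before digging into macros (just a sketch; the path is hypothetical) is confirming that a suspect document really is the old OLE compound format rather than the newer zip-based OOXML format:

# Old-style OLE documents begin with D0 CF 11 E0 A1 B1 1A E1; OOXML (.docx) files are
# zip archives and begin with 50 4B ('PK')
$doc = 'C:\cases\suspect.doc'   # hypothetical path
$header = [System.IO.File]::ReadAllBytes($doc)[0..7]
($header | ForEach-Object { $_.ToString('X2') }) -join ' '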

PowerShell
Not long ago, I ran across an interesting artifact...a folder with the following name:

C:\Users\user\AppData\Local\Microsoft\Windows\PowerShell\CommandAnalysis\

The folder contained an index file, and a bunch of files with names that follow the format "PowerShell_AnalysisCacheEntry_GUID".  Doing some research into this, I ran across this BoyWonder blog post, which seems to indicate that this is a cache (yeah, okay, that's in the name, I get it...), and possibly used for functionality similar to auto-complete.  It doesn't appear to illustrate what was run, though.  For that, you might want to see the LogRhythm link earlier in this post.
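If you run across this artifact yourself, a quick way to see what's in the cache and when the entries were last written (a sketch; the profile path will obviously vary) is:

# List the CommandAnalysis cache entries; the file times can be lined up against other
# activity in a timeline
Get-ChildItem 'C:\Users\user\AppData\Local\Microsoft\Windows\PowerShell\CommandAnalysis' |
    Sort-Object LastWriteTime |
    Select-Object Name, Length, CreationTime, LastWriteTime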

As it turned out, the folder path I listed above was part of legitimate activity performed by an administrator.


Tuesday, July 06, 2010

Links

Malware in PDFs
As a responder and forensic analyst, one of the things I'm usually very interested in (in part, because customers want to know...) is determining how some malware (or someone) was first able to get on a system, or into an infrastructure...what was the Initial Infection Vector (IIV)? I've posted about this before, and the SANS ISC had an interesting post yesterday, as well, regarding malware in PDF files. This is but one IIV.

Does this really matter beyond simply determining the IIV for malware or an intrusion? I'd say...yes, it does. But why is that? Well, consider this...intrusions like these likely start small, with someone getting into the infrastructure, and then progress from there.

PDF files are one way in...Brian Krebs pointed out another IIV recently, which apparently uses the hcp:// protocol to take advantage of an issue in the HexToNum function and allow an attacker to run arbitrary commands. MS's solution/workaround for the time being is to simply delete a Registry key. More information on exploiting the vulnerability can be seen here (the fact that the vulnerability is actively being exploited is mentioned here)...this is a very interesting read, and I would be interested to see what artifacts there may be from the use of an exploit as described in the post. Don has mentioned other artifacts associated with exploiting compiled HTML Help (CHM) files, in particular how CHM functionality can be turned into a malware dropper. But this is a bit different, so I'd be interested to see what analysts may be able to find out.
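For what it's worth, the workaround amounts to unregistering the hcp:// protocol handler; a rough sketch of that is below. I'm assuming the HKCR\HCP key here, so verify the key name against the MS advisory before touching anything, and export it first so you can put it back.

# Back up and then remove the HCP protocol handler key (key name assumed; check the advisory)
reg export HKCR\HCP C:\temp\HCP-backup.reg
Remove-Item -Path 'Registry::HKEY_CLASSES_ROOT\HCP' -Recurse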

Also, if anyone knows of a tool or process for parsing hh.dat files, please let me know.

Free Tools
For those interested, here's a list of free forensic tools at ForensicControl.com. I've seen where folks have looked for this sort of thing, and the disadvantage of having lists like this out there is that...well...they're out there, and not in one centralized location. I know some folks have really liked the list of network security tools posted at InSecure.org, and it doesn't take much to create something like that at other sites. For example, consider posting something on the ForensicsWiki.

Speaking of tools, Claus has a great post from the 4th that mentions some updates to various tools, including ImDisk, Network Monitor, and some nice remote control utilities. If you're analyzing Windows 2008 or Windows 7 systems, you might want to take a look at AppCrashView from Nirsoft...I've been able to find a good deal of corroborating data in Dr. Watson logs on Windows XP/2003 systems, and this looks like it might be just as useful, if not more so.

Shadow Analyzer
There's been some press lately about a tool called "Shadow Analyzer", developed by Lee Whitfield and Mark McKinnon, which is to be used to access files in Volume Shadow Copies. I also see that this has been talked about on the CyberCrime101 podcast...should be a good listen!
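While we wait on that tool, it's worth remembering that on a live Vista or Windows 7 system you can at least see which shadow copies are available natively (a sketch; requires admin rights):

# Enumerate the Volume Shadow Copies on a live system
Get-WmiObject Win32_ShadowCopy | Select-Object DeviceObject, InstallDate, VolumeName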

On that note, ShadowExplorer is at version 0.7.

Parsing NTFS Journal Files
Seth recently posted a Python script for parsing NTFS USN change journal files (i.e., $USNJRNL:$J files). I don't know about others, but I've been doing a lot of parsing of NTFS-related files, whether it's the MFT itself, or running $LogFile through BinText.
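Seth's script is aimed at $J data extracted from an image; on a live system, you can at least confirm that the change journal is active on a volume and see the USN range it currently covers (a sketch):

# Check the change journal metadata for a volume (first USN, next USN, maximum size, etc.)
fsutil usn queryjournal C: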

I'm sure that one of the things that would help folks adopt tools like this, particularly those that require Python or Perl to be installed, is an explanation or examples of how the information can be useful to an examiner/analyst. So, if you do find these tools useful, post something that lets others know why/how you used them, and what you found that supported your examination goals.