Thursday, January 13, 2011

More Stuff

More on Malware
No pun intended. ;-)

The MMPC has another post up about malware, this one called Kelihos. Apparently, there are some similarities between Kelihos and Waledac, enough that the folks at the MMPC stated that there was likely code reuse. However, there's quite a bit more written about Waledac...and that's what concerns me. The write-up on Kelihos states that the malware "allows unauthorized access and control of an affected computer", but there's no indication as to how that occurs. The only artifact that's listed in the write-up is a file name and the persistence mechanism (i.e., the Run key). So how does this control occur? Might it be helpful to IT and network admins to know a little bit more about this?

Also, take a close look at the write-up itself: the Kelihos description mentions a file that's dropped into the "All Users" profile and an entry in the HKLM\...\Run key...but that Run key entry apparently doesn't point to the file that's listed.
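That kind of inconsistency is something a responder can check for directly. Here's a minimal sketch of the idea: given the values under a Run key and the file name a write-up says is dropped, see whether any Run entry actually references that file. All of the data and names below (the value names, paths, and "svcmgr.exe") are hypothetical illustrations, not details from the actual Kelihos write-up.

```python
# Hypothetical sketch: compare Run key entries against the dropped file
# named in an AV write-up. If nothing references the file, you've got the
# same inconsistency noted above, and more digging is warranted.

def entries_referencing(run_values, dropped_file):
    """Return the Run key entries whose command line mentions dropped_file.

    run_values: dict mapping Run value name -> command line it stores
    dropped_file: file name taken from the malware write-up
    """
    target = dropped_file.lower()
    return {
        name: command
        for name, command in run_values.items()
        if target in command.lower()
    }

if __name__ == "__main__":
    # Illustrative data only -- not from the real write-up
    run_values = {
        "ctfmon.exe": r"C:\WINDOWS\system32\ctfmon.exe",
        "Updater": r"C:\Documents and Settings\All Users\svcmgr.exe",
    }
    print(entries_referencing(run_values, "svcmgr.exe"))
```

An empty result for the file the write-up names would reproduce exactly the mismatch described above: a dropped file with no visible persistence entry pointing at it.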

I understand that the MMPC specifically and AV companies in general aren't in the business of providing more comprehensive information, but what would be the harm, really? They have the information...and I'm not talking about complete reverse engineering of the malware, so there's no need to do a ton of extra work and then post it for free. Given that this affects Microsoft operating systems, I would hope that some organization within MS could provide information that would assist organizations that use those OSs in detecting and reacting to infections in a timely manner.

Eric Huber posted a very illuminating interview with Hal Pomeranz over on the AFoD blog. Throughout the interview, Hal addresses several questions (from his perspective) that you see a lot on lists; in particular, there are a lot of "how I got started in the business" responses. I see this sort of question all the time, and it's good to see someone like Hal not only discussing what he did to "break into the business", as it were, but also what he looks for with respect to new employees. If you have the time, take a read through the questions and answers, and see what Hal has to say...it will definitely be worth your time.

Personally, having received an iTouch for Christmas, I think that a podcast would be a great forum for this sort of thing. I'm just sayin'... ;-)

Corey Harrell posted the results of some research on his Journey into Incident Response blog; he's performed some analysis regarding locating AutoPlay and Autorun artifacts. He's done some pretty thorough research regarding this topic, and done a great job of documenting what he did.

Results aside, the most important and valuable thing about what Corey did was share what he found. Have you ever had a conversation with someone where maybe you showed them something that you'd run across, or just asked them a question, and their response was, "yeah, I've been doing that for years"? How disappointing is that? I mean, to know someone in the industry, and to have a problem (or even just be curious about something) and know someone who's known the answer but never actually said anything? And not just not said anything at that moment...but ever.

I think that's where we could really improve as a community. There are folks like Corey who find something, and share it. And there are others in the community who have things that they do all the time, but no one else knows until the topic comes up and that person says, "yeah, I do that all the time."

Process Improvement
I think that one of the best shows on TV now is Undercover Boss. Part of the reason I like it is because rather than showing people treating themselves and each other in a questionable manner, the show has CEOs going out and engaging with front line employees. At the end of the show, the employees generally get recognized in some way for their hard work and dedication.

One topic jumped out in particular from the UniFirst episode...that the front line employees actually doing the job were the best qualified to suggest and make changes that improve the efficiency of the task. After all, who is better qualified than that person to come up with a way to save time and money at a task?

When I was in the military, I was given training in Total Quality Management (TQM) and certified by the Dept of the Navy to teach it to others. Being a Marine, I had other Marines tell me that TQM (we tried to call it "Total Quality Leadership" to get Marines to accept it) would never be accepted or used. I completely agree now, just as I did then...there are some tasks for which process improvement won't provide a great deal of benefit, but others for which it will. More than anything else, the one aspect of TQM/TQL that I found Marines could use everywhere was the practice of engaging with the front line person performing the task in order to seek improvement.

A great example of this was my radio operators, who had to assemble RC-292 antennas all the time; one of my Marines had used wire, some epoxy, and the bottom of a soda can to create "cobra heads", or field-expedient antenna kits that could be elevated (and the radios operational) before other Marines could go to the back of the Hummer, pull out an antenna kit, and start putting the mast together. This improved the process of getting communications up and available, and it was a process developed by those on the "front lines" who actually do the work.

So what does that have to do with forensics or incident response? Well, one of the things I like to do now and again is look at my last engagement, or look back over a couple of engagements, and see what I can improve upon. What can I do better going forward, or what can I do if there's a slight change in one of the aspects of the examination?

While on the IBM team and performing data breach investigations, I tried to optimize what I was doing. Sometimes taking a little more time up front, such as making a second working copy of the image, would allow me to perform parallel operations...I could use one working copy for analysis, and the other would be subject to scans. Or, I could extract specific files and data from one working copy, start my analysis, and start scanning the two working images. Chris Pogue, a SANS Thought Leader who was on our team at the time, got really good at running parallel analysis operations, by setting up multiple VMs to do just that.
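The parallel-working-copies idea translates directly to any scripted workflow: start the long-running scan of one copy in the background while a second task runs against the other copy. The sketch below uses Python's concurrent.futures to illustrate the shape of it; the scan and extraction functions are placeholders, not real tools.

```python
# Minimal sketch of parallel operations across two working copies of an
# image: a scan runs on one copy while extraction proceeds on the other.
# Both task bodies here are stand-ins for actual tooling.
from concurrent.futures import ThreadPoolExecutor

def av_scan(image_path):
    # Placeholder for an AV or keyword scan over one working copy
    return f"scan complete: {image_path}"

def extract_artifacts(image_path):
    # Placeholder for targeted file/metadata extraction from the other copy
    return f"extraction complete: {image_path}"

with ThreadPoolExecutor(max_workers=2) as pool:
    scan_future = pool.submit(av_scan, "working_copy_1.dd")
    extract_future = pool.submit(extract_artifacts, "working_copy_2.dd")
    results = [scan_future.result(), extract_future.result()]
```

The same effect can be had with multiple VMs (as described above) or by simply backgrounding shell jobs; the point is that the two copies let the tasks overlap instead of serializing.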

The point is that we were the ones tasked with performing the work; we looked at the requirements of the job and found ways to do a better, more comprehensive job in fewer ticks of the clock. One thing that really benefited us was collaborating and sharing what we knew. For example, Chris was really good at running multiple VMs to complete tasks in parallel, and he shared that with the other members of the team. I wrote Perl scripts that would take the results of scans for potential credit card numbers, remove duplicate entries, and then separate the resulting list into separate card brands for archiving and shipping (based on the required process). We shared those with the team, and Chris and I worked together to teach others to use them.
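The dedupe-and-split step can be sketched in a few lines. The original scripts were Perl and their exact logic wasn't published, so this Python version is an approximation built on the well-known issuer prefixes (Visa starts with 4; MasterCard 51-55; Amex 34/37; Discover 6011/65):

```python
# Hedged sketch of the post-processing described above: deduplicate
# candidate card numbers from a scan and bucket them by brand using
# common issuer-prefix rules. This is an approximation, not the
# original Perl script.

def brand_of(number):
    """Classify a digit string by common issuer prefixes (approximate)."""
    if number.startswith("4") and len(number) in (13, 16):
        return "visa"
    if number[:2] in {"51", "52", "53", "54", "55"} and len(number) == 16:
        return "mastercard"
    if number[:2] in {"34", "37"} and len(number) == 15:
        return "amex"
    if (number.startswith("6011") or number.startswith("65")) and len(number) == 16:
        return "discover"
    return "unknown"

def split_by_brand(candidates):
    """Dedupe scan hits and group them by brand for archiving/shipping."""
    buckets = {}
    for num in sorted(set(candidates)):  # set() removes duplicate entries
        buckets.setdefault(brand_of(num), []).append(num)
    return buckets
```

Feeding it a list of raw scan hits (duplicates and all) yields one sorted, deduplicated list per brand, ready to be archived or shipped per whatever process the engagement requires.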

So why does any of this matter? When I was taking the TQM training, we were told that Deming originally shared his thoughts on process improvement with his fellow Americans, who laughed him out of the country, but others (the Japanese) absorbed what he had to say because it made sense. In manufacturing processes, errors in the process can lead to increased cost, delays in delivery, and ultimately a poor reputation. The same is true for what we do. Through continual process improvement, we can move beyond where we are now, and provide a better, more comprehensive service in a timely manner.

In closing, use this as a starting point...a customer comes to you with an image, and says that they think that there's malware on the system, and that's it. Think about what you can provide them, in a report, at the end of 40 hours...5 days, 8 hrs a day of work. Based on what you do right now, and more specifically, the last malware engagement you did, how complete, thorough, and accurate will your report be?

1 comment:

Corey Harrell said...

I've used multiple methods in order to image numerous systems simultaneously but it never occurred to me to use parallel processing during the examinations. Reflecting back on what I do, I can see how this approach can speed up a few activities. I work in a small shop with little manpower so I've been looking into ways to process cases faster. This approach should work and I can even implement it using an older workstation or VMware.

Thanks for the tip.