Saturday, January 06, 2024

Human Behavior In Digital Forensics, pt II

On the heels of my first post on this topic, I wanted to follow up with some additional case studies that demonstrate how digital forensics can provide insight into human activity and behavior as part of an investigation.

Targeted Threat Actor
I was working a targeted threat actor response, and while we were continuing to collect information for scoping, so we could move to containment, we found that on one day, from one endpoint, the threat actor pushed their RAT installer to 8 endpoints and launched the installer via a Scheduled Task. Then, about a week later, we saw that the threat actor had pushed another version of their RAT to a completely separate endpoint by dropping the installer into the StartUp folder for an admin account.
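
Neither of those persistence mechanisms requires anything exotic to surface. As a hedged illustration (not the tooling or process actually used on the engagement), the following Python sketch checks the two locations described above on a live Windows endpoint: the StartUp folders, and the commands behind Scheduled Tasks.

```python
# Hedged triage sketch (not the tooling used on this engagement): check the two
# persistence locations described above on a live Windows endpoint - the StartUp
# folders and the commands behind Scheduled Tasks. Paths are the Windows defaults.
import csv
import io
import os
import subprocess

def list_startup_entries():
    """List files in the per-user and all-users StartUp folders."""
    folders = [
        os.path.expandvars(r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"),
        os.path.expandvars(r"%ProgramData%\Microsoft\Windows\Start Menu\Programs\StartUp"),
    ]
    for folder in folders:
        if os.path.isdir(folder):
            for name in os.listdir(folder):
                print(f"[StartUp] {os.path.join(folder, name)}")

def list_scheduled_tasks():
    """Dump scheduled tasks and the commands they run, via the built-in schtasks tool."""
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "csv", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(out)):
        task = row.get("TaskName", "")
        action = row.get("Task To Run", "")
        # Skip repeated header rows and tasks with no command to run
        if action and action.lower() not in ("n/a", "task to run"):
            print(f"[Task] {task}: {action}")

if __name__ == "__main__":
    list_startup_entries()
    list_scheduled_tasks()
```

On a dead-box exam, the same questions get answered from the file system and the task XML files under \Windows\System32\Tasks rather than from a live query.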

Now, when I showed up on-site for this engagement, I walked into a meeting that served as the "war room", and before I got a chance to introduce myself, or find out what was going on, one of the admins came up to me and blurted out, "we don't use communal admin accounts." Yes, I know...very odd. No, "hi, I'm Steve", nothing like that. Just this comment about accounts. So, I filed it away.

The first thing we did once we got started was roll out our EDR tech, and begin getting insight into what was going on...which accounts had been compromised, which were the nexus systems the threat actor was operating from, how they were getting in, etc. After all, we couldn't establish a perimeter and move to containment until we determined scope, etc.

So we found this RAT installer in the StartUp folder for an admin account...a communal admin account. We found it because, in the course of rolling out our EDR tech, the admins used this account to push out their software management platform, as well as our agent...and the initial login to install the software management platform activated the installer. By the time our agent was installed, the RAT was already running, and we got an immediate alert. It had a different configuration and C2 from what we'd seen in previous RAT installations, which appeared to be intentional. We grabbed a full image of that endpoint, so we were able to get information from VSCs, including a copy of the original installer file.
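
As an aside, pulling an earlier copy of a file out of a Volume Shadow Copy is straightforward once you have the shadow copy device paths. Below is a minimal, hypothetical sketch (the installer path and file name are made up, and this is not the process used on the engagement) that uses the built-in vssadmin tool; it needs to run from an elevated prompt.

```python
# A minimal, hypothetical sketch of recovering an earlier copy of a file (such as
# the original installer) from Volume Shadow Copies on a live system. The target
# path below is made up for illustration; run from an elevated prompt.
import re
import shutil
import subprocess

# Hypothetical path, relative to the volume root
TARGET = r"Users\Admin\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\installer.exe"

def shadow_device_paths():
    """Parse 'vssadmin list shadows' output for shadow copy device paths."""
    out = subprocess.run(
        ["vssadmin", "list", "shadows"],
        capture_output=True, text=True, check=True,
    ).stdout
    return re.findall(r"Shadow Copy Volume: (\S+)", out)

for idx, device in enumerate(shadow_device_paths()):
    # Shadow copies are addressable as \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopyN\<path>
    src = device + "\\" + TARGET
    try:
        shutil.copy2(src, f"recovered_{idx}_installer.exe")
        print(f"Recovered a copy from {device}")
    except OSError:
        pass  # the file isn't present in this shadow copy
```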

Just because an admin told me that they didn't use communal admin accounts doesn't mean that I believed him; I tend to follow the data. In this case, however, the threat actor clearly already knew the truth, regardless of what the admins stated. On top of that, they had planned far enough in advance to have multiple means of access, including leaving behind "booby traps" that would be tripped by admin activity, each with its own configuration. That way, if the admins had blocked the first C2 IP address at the firewall, or were monitoring for that specific IP address via some other means, the new, second C2 IP address would let the threat actor go unnoticed, at least for a while.

What I took away from the totality of what we saw, largely through historical data on a few endpoints, was that the threat actor seemed to have something of a plan in place regarding their goals. We never saw any indication of search terms, wandering around looking for files, etc., and as such, it seemed that they were intent on establishing persistence at that point. The customer didn't have EDR in place prior to our arrival, so there's a lot we likely missed, but from what we were able to assemble from host-based historical data, it seemed that the threat actor's plan, at the point we were brought in, was to establish a beachhead.

Pro Bono Legal Case
A number of years ago, I did some work on a legal case. The background was that someone had taken a job at a company, and on their first day, they were given an account and password on a system to use, but they couldn't change the password. The reason they were given was that the company had one licensed copy of an application installed on that system, and multiple people needed access to it.

Jump forward about a year, and the guy who'd been hired grew disillusioned, and went in one Friday morning, logged into the computer, and wrote out a Word document resigning, effective immediately. He sent the document to the printer, then signed it, handed it in, and apparently walked out.

So, as it turns out, several files on the system were encrypted with ransomware, and this guy's now-former employer claimed that he'd done it, basically "salting the earth" on his way out the door. There were suits and countersuits, and I was asked to examine the image of the system, after exams had already been performed by law enforcement and an expert from SANS.

What I found was that on Thursday evening, the day before the guy resigned, someone had logged into the system locally (at the console) at 9pm and surfed the web for about 6 minutes. During that time, the browser landed on a specific web site that caused the ransomware executable to be downloaded to the system, with persistence written to the user account's Run key. Then, when the guy returned the following morning and logged into the account, the ransomware launched, albeit without his knowledge. Using a variety of data sources, including the Registry, Event Log, file system metadata, etc., I was able to demonstrate when the infection activity actually took place; in this instance, I had to leave it to others to establish who had actually been sitting at the keyboard, but I was able to articulate a clear story of human activity and what led to the files being encrypted. As part of the legal battle, the guy had witness statements and receipts from the bar he'd been at the evening prior to resigning, where he'd been out with friends celebrating. Further, the employer had testified that they'd sat at the computer the evening prior, but that all they'd done was a short web browsing session before logging out.

As far as the ransomware itself was concerned, it was purely opportunistic. "Damage" was limited to files on the endpoint, and no attempt was made to spread to other endpoints within the infrastructure. On the surface, what happened was exactly what the former employer described: the former employee came in, typed and printed their resignation, and launched the ransomware executable on their way out the door. However, file system metadata, Registry key LastWrite times, and browser history painted a different story altogether. The interesting thing about this case was that all of the activity occurred within the same user account, and as such, the technical findings needed to be (and were) supported by external data sources.
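
For context on the Run key piece: on an image, this kind of check is done with offline Registry tools, but the equivalent live-system check is simple enough to sketch. The snippet below is a minimal illustration, not the actual analysis process; it lists the values under the user's Run key along with the key's LastWrite time, the kind of timestamp that helped anchor the timeline in this case.

```python
# A minimal sketch, assuming a live Windows system: list the values under the
# user's Run key along with the key's LastWrite time. On the image itself, the
# same data came from offline Registry analysis rather than winreg.
import datetime
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    _, num_values, last_write = winreg.QueryInfoKey(key)
    # QueryInfoKey() reports LastWrite as 100-nanosecond intervals since 1601-01-01 UTC
    last_write_dt = datetime.datetime(1601, 1, 1) + datetime.timedelta(microseconds=last_write // 10)
    print(f"Run key LastWrite (UTC): {last_write_dt}")
    for i in range(num_values):
        name, data, _type = winreg.EnumValue(key, i)
        print(f"  {name} -> {data}")
```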

RAT Removal
During another targeted threat actor response engagement, I worked with a customer that had sales offices in China, and was seeing sporadic traffic associated with a specific variant of a well-known RAT come across the VPN from China. As part of the engagement, we worked out a plan to have the laptop in question sent back to the states; when we received the laptop, the first thing I did was remove and image the hard drive.

The laptop had run Windows 7, which ended up being very beneficial for our analysis. We found that, yes, the RAT had been installed on the system at one point, and our analysis of the available data painted a much clearer picture. 

Apparently, the employee/user of the endpoint had been coerced into installing the RAT. Using all the parts of the buffalo (file system, WEVTX, Registry, VSCs, hibernation file, etc.), we were able to determine that, at one point, the user had logged into the console, attached a USB device, and run the RAT installer. Then, after the user had been contacted to turn the system over to their employer, we could clearly see where they'd attempted to remove and "clean up" the RAT. As with the installation, the account performing the clean-up logged in locally, and the steps taken were very clearly manual removal attempts by someone who didn't fully understand what they were doing.
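
The USB device piece is a good example of how much of this is simply reading what Windows already records. The sketch below is a hedged illustration that queries the USBSTOR key on a live system; in the actual exam, this information came from the offline SYSTEM hive and was correlated with other sources, so treat the snippet as showing the idea, not the process.

```python
# Hedged illustration: enumerate USB storage device history from the USBSTOR key
# on a live Windows system. In the actual exam this came from the offline SYSTEM
# hive (correlated with other sources); this only shows the idea.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
    for i in range(winreg.QueryInfoKey(root)[0]):
        device = winreg.EnumKey(root, i)            # e.g., Disk&Ven_...&Prod_...
        with winreg.OpenKey(root, device) as dev_key:
            for j in range(winreg.QueryInfoKey(dev_key)[0]):
                instance = winreg.EnumKey(dev_key, j)   # unique instance ID / serial number
                try:
                    with winreg.OpenKey(dev_key, instance) as inst_key:
                        friendly, _ = winreg.QueryValueEx(inst_key, "FriendlyName")
                except OSError:
                    friendly = "(no FriendlyName)"
                print(f"{device} | {instance} | {friendly}")
```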

Non-PCI Breach
I was investigating a breach into corporate infrastructure at a company that was part of the healthcare industry. It turned out that an employee with remote access had somehow ended up with a keystroke logger installed on their home system, which they used to remote into the corporate infrastructure via RDP. This was about 2 weeks before the organization was scheduled to implement MFA.

The threat actor was moving around the infrastructure via RDP, using an account that hadn't previously accessed the internal systems, because there had been no need for the employee to do so. This meant that on all of these systems, the login initiated the creation of the user profile, so we had a really good view of the timeline across the infrastructure, and we could 'see' a lot of their activity. This was before EDR tools were in use, but that was okay, because the threat actor stuck to the GUI-based access they had via RDP. We could see documents they accessed, shares and drives they opened, and even searches they ran. This was a healthcare organization, which the threat actor was apparently unaware of, because they were running searches for "password", as well as various misspellings of the word "banking" (e.g., "bangking", etc.).
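
The "profile creation as a first-logon marker" idea is easy to demonstrate. The sketch below is illustrative only (the actual work was done against forensic images, not live hosts); it orders the profile folders on an endpoint by creation time, which gives a coarse per-system view of when each account first logged on interactively.

```python
# Illustrative sketch of the "profile creation as first-logon marker" idea: order
# the profile folders on an endpoint by creation time. The actual work was done
# against forensic images, not live hosts; the path below is the Windows default.
import datetime
import os

PROFILE_ROOT = r"C:\Users"

entries = []
for name in os.listdir(PROFILE_ROOT):
    path = os.path.join(PROFILE_ROOT, name)
    if os.path.isdir(path):
        # On Windows, st_ctime is the folder's creation time
        created = datetime.datetime.fromtimestamp(os.stat(path).st_ctime)
        entries.append((created, name))

# A coarse, per-endpoint ordering of when each account's profile was first created
for created, name in sorted(entries):
    print(f"{created:%Y-%m-%d %H:%M:%S}  {name}")
```

Stitching those per-endpoint views together is what gave us the timeline across the infrastructure.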

The organization was fully aware that they had two spreadsheets on a share that contained unencrypted PCI data. They'd been trying to get the data owner to remove them, but at the time of the incident, the files were still accessible. As such, this incident had to be reported to the PCI Council, but we did so with as complete a picture as possible, one which showed that the threat actor was both unaware of the files and apparently not interested in credit card or billing data.

Based on the totality of the data, we had a picture of an opportunistic breach, one that clearly wasn't planned, and I might even go so far as to describe the threat actor as "caught off guard" that they'd actually gained access to an organization. There was apparently no research conducted, the intrusion wasn't targeted, and it had all the hallmarks of someone wandering around the systems, in shock that they'd actually accessed them. Presenting this data to the PCI Council in a clear, concise manner led to a greatly reduced fine for the customer - yes, the data should not have been there, but no, it hadn't been accessed or exposed by the intruder.

2 comments:

Brett Shavers said...

Every strange unsolicited statement from a client (or suspect or computer user) is a clue where to start looking.

“Nothing in my trunk, Officer. No need to look there...”

H. Carvey said...

No kidding, right?

For me, it was, "...file this one away, we'll see what the data says...".

Funny thing is, the threat actor proved the admin wrong before I even had a chance to look...