Windows Incident Response: The Windows Incident Response Blog is dedicated to the myriad information surrounding and inherent to the topics of IR and digital analysis of Windows systems. This blog provides information in support of my books; "Windows Forensic Analysis" (1st thru 4th editions), "Windows Registry Forensics",
as well as the book I co-authored with Cory Altheide, "Digital Forensics with Open Source Tools".Unknownnoreply@blogger.comBlogger1379125tag:blogger.com,1999:blog-9518042.post-41296541015804239092024-03-15T08:48:00.003-05:002024-03-15T08:48:40.156-05:00Uptycs Cybersecurity Standup<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwRYRvGK0fv7m98YajFK1-9creDqIfODlGbO-uxA9k-dWh0FqggQhG-z7VH781wy_s7RnuYSVlwql2hs2CLjt0DdmUigTTugsAaIofd5nbIYXF7snW5BevytrLfJLHzChjXyBqY1rV02oKSOoPH48jC3IaMe_w1W_8ZaRXDKY-pBIygG2GbA/s688/uptycs.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="294" data-original-width="688" height="137" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwRYRvGK0fv7m98YajFK1-9creDqIfODlGbO-uxA9k-dWh0FqggQhG-z7VH781wy_s7RnuYSVlwql2hs2CLjt0DdmUigTTugsAaIofd5nbIYXF7snW5BevytrLfJLHzChjXyBqY1rV02oKSOoPH48jC3IaMe_w1W_8ZaRXDKY-pBIygG2GbA/s320/uptycs.png" width="320" /></a></div>I was listening to a couple of fascinating interviews on the <a href="https://www.uptycs.com/cybersecurity-standup">Uptycs Cybersecurity Standup</a> podcast recently, and I have to tell you, there were some pretty insightful comments from the speakers.<br /><br /><div>The first one I listened to was <a href="https://www.linkedin.com/in/beckygaylord/">Becky Gaylord</a>, talking about her career transition from an investigative journalist into cybersecurity.<p>Check out <a href="https://www.uptycs.com/cybersecurity-standup?wchannelid=ujmj5v7mne&wmediaid=45nyqe3sz0">Becky's interview</a>, and be sure to check out the show notes, as well.</p><p>I also listened to <a href="https://www.linkedin.com/in/quinnlanvarcoe/">Quinn Varcoe</a>'s interview, talking about Quinn's journey from zero experience in cybersecurity to owning and running her own consulting firm, <a href="https://blueberrysecurity.net/">Blueberry 
Security</a>.</p><p>Check out <a href="https://www.uptycs.com/cybersecurity-standup?wchannelid=ujmj5v7mne&wmediaid=2otuyktrnu">Quinn's interview</a>, and the show notes.</p><p>More recently, I listened to <a href="https://www.uptycs.com/cybersecurity-standup?wchannelid=ujmj5v7mne&wmediaid=vrc0dbhc5c">Olivia Rose's interview</a>. <a href="https://www.linkedin.com/in/oliviarosecybersecurity/">Olivia</a> and I crossed paths years ago at ISS, and she has now hung out her own shingle as a virtual CISO (vCISO). I joined ISS in Feb 2006, about 6 months before their purchase by IBM, which was announced in August 2006. Olivia and I met at the IBM ISS sales kick-off in Atlanta early in 2007.</p><p>All of these interviews are extremely insightful; each speaker brings something unique with them from their background and experiences, and every single one of them has a very different "up-bringing" in the industry.</p><p>There's no one interview that stands out as more valuable than the others. Instead, my recommendation is to listen to them all; in fact, do so several times. Take notes. 
Take note of what they say.</p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-3631522299655560652024-03-14T06:48:00.000-05:002024-03-14T06:48:33.294-05:00Investigative Scenario, 2024-03-12<p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR69VT7TRCoLZxEcZDd5xe9ahGzW6O8FFdOVv8Bo6NI-90YpwI0Hrg9o7RgmFFLkjMtYvcBcDq_bidmhL2mauYE1GEygG32dwLfpODwHWxR1Vyvh0bfDL5cmqgAm0Qf5gmdfRFFLDCJOrK2U6vWiELZydJbQSl1bbAM_XdyxapLS1tbxmEeg/s554/scen.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="262" data-original-width="554" height="189" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgR69VT7TRCoLZxEcZDd5xe9ahGzW6O8FFdOVv8Bo6NI-90YpwI0Hrg9o7RgmFFLkjMtYvcBcDq_bidmhL2mauYE1GEygG32dwLfpODwHWxR1Vyvh0bfDL5cmqgAm0Qf5gmdfRFFLDCJOrK2U6vWiELZydJbQSl1bbAM_XdyxapLS1tbxmEeg/w400-h189/scen.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Investigative Scenario</i></td></tr></tbody></table>Chris Sanders posted another investigative scenario on Tues, 12 Mar, and this one, I thought, was interesting (see the image to the right).<br /><br />First off, you can find the scenario <a href="https://twitter.com/chrissanders88/status/1767552377304093044">posted on X/Twitter</a>, and <a href="https://www.linkedin.com/feed/update/urn:li:activity:7173318068335546368?updateEntityUrn=urn%3Ali%3Afs_feedUpdate%3A%28V2%2Curn%3Ali%3Aactivity%3A7173318068335546368%29">here on LinkedIn</a>.<br /><br />Now, let's go ahead and kick this off. 
In this scenario, a threat actor remotely wiped a laptop, and the <i><b>sole </b></i>source of evidence we have available is a backup of "the Windows Registry", made just prior to the system being wiped.<p><b>Goals</b><br />I try to make sure I have the investigative goals written out where I can see them and quickly refer back to them. </p><p>Per the scenario, our goals are to determine:<br />1. How did the threat actor access the system?<br />2. What were their actions on objectives prior to wiping the system?</p><p><b>Investigation</b><br />The first thing I'd do is create a timeline from the Software and System hive files, in order to establish a pivot point. Per the scenario, the Registry was backed up "just before the attacker wiped the system". Therefore, by creating a timeline, we can assume that the last entry in the timeline was from just prior to the system being wiped. This would give us a starting point to work backward from, and provide an "aiming stake" for our investigation.</p><p>The next thing I'd do is examine the NTUSER.DAT files for any indication of "proof of life" up to that point. What I'm looking for here is to determine the <i>how</i> of the access; specifically, was the laptop accessed via a means that provided shell- or GUI-based access? <br /></p><p>If I did find "proof of life", I'd definitely check the SAM hive to see if the account is local (not a domain account), and if so, try to see if I could get last login time info, as well as any indication that the account password was changed, etc. 
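Many of the timestamps involved here (key LastWrite times, SAM last-login values) are 64-bit FILETIMEs. A minimal sketch of converting them to Unix epoch, emitting five-field TLN lines, and pulling the "last entry" pivot point described above; the sample events are made up for illustration:

```python
# Sketch: convert Windows FILETIME values to Unix epoch, emit TLN lines,
# and treat the most recent timeline entry as the investigative pivot point.
EPOCH_DELTA = 11644473600  # seconds between 1601-01-01 and 1970-01-01

def filetime_to_unix(ft: int) -> int:
    """FILETIME is 100-nanosecond intervals since 1601-01-01 UTC."""
    return ft // 10_000_000 - EPOCH_DELTA

def tln(epoch: int, source: str, desc: str, host: str = "", user: str = "") -> str:
    """Five-field TLN line: time|source|host|user|description."""
    return f"{epoch}|{source}|{host}|{user}|{desc}"

# Hypothetical key LastWrite times from the System/Software hives
events = [
    (133542041130000000, "REG", "HKLM/System/... key updated"),
    (133542041250000000, "REG", "HKLM/Software/... key updated"),
]
timeline = sorted(tln(filetime_to_unix(ft), src, desc) for ft, src, desc in events)
pivot = timeline[-1]  # last entry ~= the moment just before the wipe
```

The pivot line gives the "aiming stake"; everything else in the timeline is then read working backward from it.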
However, keep in mind that the SAM hive is limited to local accounts only, and does not provide information about domain accounts.</p><p>Depending upon the version/build of Windows (that info was not available in the scenario), I might check the contents of the BAM subkeys, for some indication of process execution or "proof of life" during the time frame of interest.</p><p>If there are indications of "proof of life" from a user profile, and it's corroborated with the contents of the BAM subkeys, I'd definitely take a look at the profile, and create a timeline of activity.</p><p>What we're looking for at this point is:<br />1. Shell-/GUI-based access, via RDP or an RMM<br />2. Network-/CLI-based access, such as via ssh, Meterpreter, user creds/PsExec/some variant, or a RAT</p><p>Shell-based access tends to provide us with a slew of artifacts to examine, such as RecentApps, RecentDocs, UserAssist, shellbags, WordWheelQuery, etc., all of which we can use to develop insight into the threat actor, via not just their activity, but the timing thereof, as well. </p><p>If there are indications of shell-based access, we'd check the Registry to determine if RDP was enabled, or if there were RMM tools installed, but without Windows Event Logs and other logs, we won't know definitively which means was used to access the laptop. Contrary to what some analysts seem to believe, the TSClients subkeys within the NTUSER.DAT hive do <i>not</i> show systems that have connected to the endpoint, but rather which systems were connected to from the endpoint.<br /><br />Something else to consider is if the threat actor had shell-based access, and chose to perform their actions via a command prompt, or via Powershell, rather than navigating the system via the Explorer shell and double-clicking files and applications. 
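As a concrete example of working with one of the shell artifacts listed above: UserAssist value names are ROT13-encoded, so even eyeballing the raw value names requires a decode step. A minimal sketch (the value name shown is the well-known session counter entry):

```python
import codecs

def decode_userassist(value_name: str) -> str:
    """UserAssist value names are ROT13-"encrypted"; decode for readability."""
    return codecs.decode(value_name, "rot_13")

# "HRZR_"-prefixed entries are UserAssist's own counters;
# GUID-keyed entries decode to executable/shortcut paths.
print(decode_userassist("HRZR_PGYFRFFVBA"))  # -> UEME_CTLSESSION
```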
As we have only the backed up Registry, we wouldn't be able to examine the user's console history, nor the Powershell Event Logs.</p><p>However, if there are no indications of shell-based access, and since we only have the Registry and no access to any other log files from the endpoint, it's likely going to be impossible to determine the exact means of access. Further, if all of the threat actor's activity was via network-based/type 3 logins to the laptop, such as via Meterpreter or PsExec, then few, if any, of the shell-based artifacts described above would have been populated in the first place. </p><p>It doesn't do any good to parse the Security hive for the Security Event Log audit policy, because we don't have access to the Windows Event Logs. We could attempt to recover them via record parsing of the image, <i>if </i>we had a copy of the image. </p><p>I would not put a priority on persistence; after all, if a threat actor is going to wipe a system, any persistence they create is not going to survive, unless the persistence they added was included in a system-wide or incremental backup, from which the system is restored. While this is possible, it's not something I'd prioritize at this point. I would definitely check autostart locations within the Registry for any indication of something that might look suspicious; for example, something that may be a RAT, etc. However, without more information, we wouldn't be able to definitively determine (a) if the entry was malicious, and (b) if it was used by the threat actor to access the endpoint. For example, without logs, we have no way of knowing if an item in an autostart location started successfully, or generated an error and crashed each time it was launched. Even with logs, we would have no way of knowing if the threat actor accessed the laptop via an installed RAT.</p><p>Something else I would look for would be indications of third-party applications added to the laptop. 
For example, LANDesk used to have a Software Monitoring module, and it would record information about programs executed on the system, along with how many times each was launched, the last time it was launched, and the user name associated with the last launch. </p><p><b>Findings</b><br />So, where do we stand with our goals? I'd say that at the moment, we're at "inconclusive" because we simply do not have enough information to go on. There is no memory dump, no other files collected, no logs, etc., <i>just</i> the backed up Registry. While we won't know definitively <i>how</i> the threat actor was able to access the endpoint, we do know that if access was achieved via some means that allowed for shell-based access, we might have a chance at determining what actions the threat actor took while they were on the system. Of course, the extent to which we'd be able to do that also depends upon other factors, including the version of Windows, the software "load" (i.e., installed applications), and actions taken by the threat actor (navigating/running apps via the Explorer shell vs. command prompt/Powershell). It's entirely possible that the threat actor accessed the endpoint via the network, through a means such as Meterpreter, or there was a RAT installed that they used to access the system.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-50023757117235015362024-02-26T20:40:00.001-05:002024-02-27T09:59:51.191-05:00PCAParse<p>I was doing some research recently regarding what's new to Windows 11, and ran across an interesting artifact, which seems to be referred to as "PCA". I found a couple of interesting references regarding this artifact, such as <a href="https://www.sygnia.co/blog/new-windows-11-pca-artifact/">this one from Sygnia</a>, and <a href="https://aboutdfir.com/new-windows-11-pro-22h2-evidence-of-execution-artifact/">this one from AboutDFIR</a>. 
Taking a look at the samples of <a href="https://github.com/AndrewRathbun/DFIRArtifactMuseum/tree/main/Windows/Amcache/Win11/RathbunVM">files available from the DFIRArtifactMuseum</a>, I wrote a parser for two of the files from the <i>C:\Windows\appcompat\pca</i> folder, converting the time stamps to Unix epoch format and sending the output to STDOUT, in TLN format so that it can be redirected to an events file.</p><p>An excerpt from the output from the PcaAppLaunchDic.txt file:</p><p><span style="font-family: courier;">1654524437|PCA|||C:\ProgramData\ProtonVPN\Updates\ProtonVPN_win_v2.0.0.exe<br />1661428304|PCA|||C:\Windows\SysWOW64\msiexec.exe<br />1671064714|PCA|||C:\Program Files (x86)\Proton Technologies\ProtonVPN\ProtonVPN.exe<br />1654780550|PCA|||C:\Program Files\Microsoft OneDrive\22.116.0529.0002\Microsoft.SharePoint.exe</span></p><p>An excerpt from the output from the PcaGeneralDb0.txt file:</p><p><span style="font-family: courier;">1652387261|PCA|||%programfiles%\freefilesync\bin\freefilesync_x64.exe - Abnormal process exit with code 0x2<br />1652387261|PCA|||%programfiles%\freefilesync\freefilesync.exe - Abnormal process exit with code 0x2<br />1652391162|PCA|||%USERPROFILE%\appdata\local\githubdesktop\app-2.9.9\resources\app\git\cmd\git.exe - Abnormal process exit with code 0x80<br />1652391162|PCA|||%USERPROFILE%\appdata\local\githubdesktop\app-2.9.9\resources\app\git\mingw64\bin\git.exe - Abnormal process exit with code 0x80</span></p><p>This output can be redirected to an events file, and included in a timeline, so that we can validate that the artifact does, in fact, illustrate evidence of execution. 
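A minimal sketch of the sort of conversion the parser performs; I'm assuming here that each PcaAppLaunchDic.txt line is an executable path and a timestamp separated by a pipe (the sample line is made up, and the field layout is an assumption based on published descriptions of the file, not a definitive spec):

```python
from datetime import datetime, timezone

def pca_to_tln(line: str) -> str:
    """Convert an assumed 'path|YYYY-MM-DD HH:MM:SS.fff' PCA line to TLN."""
    # rpartition keeps any (unlikely) pipes in the path intact
    path, _, stamp = line.strip().rpartition("|")
    dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return f"{int(dt.timestamp())}|PCA|||{path}"

# Hypothetical input line
print(pca_to_tln(r"C:\Windows\SysWOW64\msiexec.exe|2022-08-25 12:31:44.123"))
```

Output in this shape can be redirected straight into an events file for timeline inclusion.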
Incorporating file system information, Prefetch and Windows Event Log data (and any other on-disk resources), as well as EDR telemetry (if available) will provide the necessary data to validate program execution.</p><p><b>Addendum, 2024-02-27</b>: Okay, so I've been actively seeking out opportunities to use this parser in my role at my day job, and while I've been doing so, some things have occurred to me. First, there's nothing in either file that points to a specific user, so incorporating this data into an overall timeline that includes WEVTX data and EDR telemetry is going to help not only validate the information from the files themselves, but provide the necessary insight around process execution, depending of course on the availability of information. <b>Fossilization on Windows systems</b> is a wonderful thing, but not everyone takes advantage of it, nor really understands where it's simply not going to be available.</p><p>Not only is there no user information, there's also no information regarding process lineage. 
Still, I firmly believe that once we begin using this information in a consolidated timeline, and begin validating the information, we'll see that it adds yet another clarifying overlay to our timeline, as well as possible pivot points.</p>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-9518042.post-22783923528550330092024-02-24T10:02:00.000-05:002024-02-24T10:02:01.809-05:00A Look At Threat Intel, Through The Lens Of The r77 Rootkit<div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjazEhlN6rOHAY98mctn89LNavNUWNbJME-rD31-Lwzvwm7yawVmYRZ22T-k4C1HRuHQ0e3RFyHl_vzHAc4tw66cXxgObXLVqCsslZCr4vGmIwH4d2aAE3fEXNUB024MQHhG_m1bHuToZu7IxRODoRTXeLKzbeoJSce5QLXW2i4tLP_ifhshA/s510/smile.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="476" data-original-width="510" height="187" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjazEhlN6rOHAY98mctn89LNavNUWNbJME-rD31-Lwzvwm7yawVmYRZ22T-k4C1HRuHQ0e3RFyHl_vzHAc4tw66cXxgObXLVqCsslZCr4vGmIwH4d2aAE3fEXNUB024MQHhG_m1bHuToZu7IxRODoRTXeLKzbeoJSce5QLXW2i4tLP_ifhshA/w200-h187/smile.jpg" width="200" /></a></div></div><div>It's been almost a year, but this <a href="https://www.elastic.co/security-labs/elastic-security-labs-steps-through-the-r77-rootkit">Elastic Security write-up on the r77 rootkit </a>popped up on my radar recently, so I thought it would be useful to do a walk-through of how someone with my background would mine open reporting such as this for actionable intel. </div><div><br /></div><div><div>In this case, the r77 rootkit is described as an "open source userland rootkit used to deploy the XMRig crypto miner". 
I've <a href="https://www.huntress.com/blog/threat-advisory-xmrig-crypto-mining-by-way-of-teamviewer">seen XMRig before</a> (several times), but not deployed alongside a rootkit.</div></div><div><br /></div><div>The purpose of a rootkit is to hide stuff. Anyone who was around in the late '90s and early 2000s is familiar with the term "rootkit" and what it means. From the article, "<i>r77’s primary purpose is to hide the presence of other software on a system by hooking important Windows APIs, making it an ideal tool for cybercriminals looking to carry out stealthy attacks. By leveraging the r77 rootkit, the authors of the malicious crypto miner were able to evade detection and continue their campaign undetected.</i>"</div><div><br /></div><div>My point in sharing this definition/explanation is that many of us will see this, or generally accept that a rootkit is involved, and then not think critically about what we're seeing, but more importantly, what we're <i>not</i> seeing. For example, in this case, take a closer look at what the Elastic Security write-up actually describes.</div><div><br /></div><div>The installer module is described as being written to the Registry, which is a commonly observed technique, especially when it comes to "fileless malware". The article states that the installer "<i>creates a new registry key called $77stager in the HKEY_LOCAL_MACHINE\SOFTWARE hive and writes the stager module to the key.</i>" However, the code in the image immediately following that statement (images are not numbered in the article) shows the <i><b>RegSetValueExW</b></i> function being called. As such, it's not a Registry key that's created, but a value. </div><div><br /></div><div>This may seem pedantic to many, but the distinction is important. Clearly, a different API function is used to create a value than a key; this is because keys and values are completely different structures altogether. 
You cannot write data to a key (i.e., "<i>writes the stager module to the key</i>"); that data has to be associated with a value. Many EDR frameworks, when monitoring or querying Registry keys vs values, use different API or function calls themselves. As such, monitoring for the creation of or simply searching for the <i>$77stager</i> <b>key</b> will miss this rootkit. </div><div><br /></div><div>Every. </div><div><br /></div><div>Single. </div><div><br /></div><div>Time. </div><div><br /></div><div>What's interesting is that the article later states:<br /><i>It then stores the current process ID running the service module as a value in a registry key named either “svc32” or “svc64” under the key HKEY_LOCAL_MACHINE\SOFTWARE\$77config\pid. The svc32/64 key name is based on the system architecture.</i></div><div><br /></div><div>Here, it looks as if the correct nomenclature is used.</div><div><br /></div><div>And then there's threat hunting; that is, if you're going to write PowerShell code to sweep across your infrastructure and look for malware similar to this, the code to look for a key is different than that to look for a value. The same is true for triage or 'dead box' analysis via tools such as <a href="https://github.com/keydet89/RegRipper4.0">RegRipper</a>. Threat hunting with PowerShell across live systems for <b>direct artifacts</b> of this rootkit likely won't get you very far, because...well...it's a rootkit, and the key is hidden through the use of userland API hooking. Elastic's article even points out that data is filtered when using tools such as RegEdit that rely on the hooked API functions. 
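Since the hooked APIs filter what live tools see, one crude triage option, once a hive has been copied off the endpoint, is a raw byte scan for the "$77" name prefix, completely independent of any Registry API. This is a sketch, not a hive parser; the synthetic blob stands in for a real hive, and any hits would still need to be validated against actual key/value cell structures:

```python
def find_prefix_offsets(data: bytes, prefix: bytes = b"$77") -> list[int]:
    """Return every offset where the rootkit's name prefix appears in raw bytes."""
    hits, start = [], data.find(prefix)
    while start != -1:
        hits.append(start)
        start = data.find(prefix, start + 1)
    return hits

# Synthetic bytes standing in for a Software hive copied off the endpoint
blob = b"\x00" * 16 + b"$77stager" + b"\x00" * 8 + b"$77config" + b"\x00" * 4
print(find_prefix_offsets(blob))  # -> [16, 33]
```

Note that value names in a hive are often stored as ASCII, which is what makes even this naive scan useful as a first pass; a proper offline parse (RegRipper, etc.) would follow.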
As such, verifying that the rootkit is actually there may require the use of <i>reg.exe</i> or something like FTK Imager to copy the Software hive off of the endpoint, and then parsing that hive file.</div><div><br /></div><div>Searching for <b>indirect artifacts</b> related to this rootkit, however, is an entirely different matter, and <i>is</i> the reason why <b>indirect artifacts</b> are so valuable. The PowerShell code that is launched is captured in the Windows PowerShell Event Log, in PowerShell/600 event records, as well as in the Microsoft-Windows-PowerShell/Operational Event Log, in Microsoft-Windows-PowerShell/4104 records. This activity/these artifacts allow us to validate that the activity actually occurred, while providing for additional detection opportunities.</div><div><br /></div><div>Some aspects of the malware not covered in the article include initial access, or how the whole kit is deployed. The technical depth of the article is impressive but not entirely actionable. For example, what aspects (direct artifacts) of the infection are hidden by the rootkit, and what indirect artifacts are 'visible'?</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-22263976088516880922024-01-22T20:38:00.000-05:002024-01-22T20:38:09.411-05:00Lists of Images<div>There're a lot of discussions out there on social media regarding how to get started or improve yourself or set yourself apart in cybersecurity, and a lot of the advice centers around doing things yourself; setting up a home lab, using various tools, etc. A lot of this advice is also centered around pen testing and red teaming; while it's not discussed as much, there is <i>a lot </i>you can do if you're interested in digital forensics, and the cool thing is that you don't have to "set up a home lab" to fully engage in most of it. 
All you need is a way to download the images and any tools you want, to a system to do the work on.</div><div><br /></div><div>Fortunately, there are a number of sites where you can find these images, to practice doing analysis, or to engage in tool testing. Also, many of these sites are on lists...I've developed a list of my own, for example. Amongst the various available lists, there's most assuredly going to be duplication, so just be aware of that going in. That being said, let's take a look at some of the lists...</div><div><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0I_I1DKiVtMJGbPpqSRolVeO4ULx-hacCACwEcBDzebfuTttgB-BbiTKdMFzYrvPULcdIuoD8HhUP2nOXZDZREtgyk1O0YnTw39ntieY8hS1m9IhpWR5to6m4bKgcEPZfu0HeQpHesoHjoYYiRkRabSk2192-SwCkfzJ3uKcSfging9EaGw/s1360/iws.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1360" data-original-width="907" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0I_I1DKiVtMJGbPpqSRolVeO4ULx-hacCACwEcBDzebfuTttgB-BbiTKdMFzYrvPULcdIuoD8HhUP2nOXZDZREtgyk1O0YnTw39ntieY8hS1m9IhpWR5to6m4bKgcEPZfu0HeQpHesoHjoYYiRkRabSk2192-SwCkfzJ3uKcSfging9EaGw/s320/iws.jpg" width="213" /></a></div>The folks at ArsenalRecon <a href="https://arsenalrecon.com/insights/publicly-accessible-disk-images-grid-for-dfir">posted a list of publicly available images</a>, and Brett Shavers followed up by sharing a <a href="https://www.dfir.training/downloads/test-images">DFIR Training link of "test" images</a>.</div><div><br /></div><div>Dr. 
Ali Hadi has a <a href="https://www.ashemery.com/dfir.html">list of challenge images</a> (he graciously allowed me to use one of them in <i><a href="https://www.amazon.com/Investigating-Windows-Systems-Harlan-Carvey/dp/0128114150">Investigating Windows Systems</a></i>), as well as a <a href="https://www.binary-zone.com/">blog with some very valuable posts</a>.</div><div><br /></div><div>While "test" and CTF images are a great way to practice using various tools, and even developing new techniques, they lack the <i>fossilization</i> of user and system activity seen in real-world images. There's not a great deal that can be done about that; suffice to say that this is just something that folks need to be aware of when working with the images. It's also possible, within the limited scope of the "incident", to develop not just threat intel, but to also discern insights into the threat actor; that is, to observe human behavior rendered from digital forensics.</div><div><br /></div><div>Many of the CTF images will be accompanied by a list of questions that need to be answered (i.e., the flags), few of which are ever <i>actually</i> asked for by customers, IRL. I've seen CTFs with 37 or even 51 questions, and across 25 yrs of DFIR experience, I've never had customers ask more than 5 questions, with one or two of them being duplicates. </div><div><br /></div><div>The point is that CTF images are a great place to start, particularly if you take a more "real world" approach to the situation and define your own goals. "Is this system infected with malware? If so, how did this happen, what did the malware do, and was any data stolen as a result?"</div><div><br /></div><div>It's also a great idea to do more than just answer the questions, and go beyond them. For example, in the write up of your findings, did you consider control efficacy? 
What controls were in place, did they work or not, and what controls would you recommend?</div><div><br /></div><div>I once worked a case where the endpoint was infected due to a phishing email, and the customer responded that this couldn't be the case, because they had a package specifically designed to address such things on their email gateway. However, the phishing email had gotten on the system because the user accessed their personal email via a browser, bypassing the email gateway altogether.</div><div><br /></div><div>Can you recommend controls or <a href="https://www.huntress.com/blog/addressing-initial-access">system configuration changes</a> that may have inhibited or even obviated the attack/infection? What controls, either on the network or on the endpoint itself, may have had an impact on the attack?</div><div><br /></div><div>What about detections? How would you detect this malware or activity in future cases? Can you write a Yara or Sigma rule that would address the attack at any point? Is there one data source that proved to be more valuable than others, something you can clearly delineate as, "...if you see <i>this</i>, then the attack succeeded..."?</div><div><br /></div><div>What can you tell about the "attacker", as a person? Was this a human-operated attack, and if so, what insights can you develop about the attacker from your DF analysis? Hours of operation, capabilities, and situational awareness are all aspects you can look at. Were there failed attempts to log in, run commands, or install applications, or did the attacker seem to be prepared and good to go when they got on the box? What insights can be rendered from your analysis, and are there any gaps that would shed more light on what was happening?</div><div><br /></div><div>Finally, set up a Github site or blog, and share your experience and findings. 
Write up a blog post, a series of blog posts, or upload a document to a Github repo, and invite others to review, ask questions, make comments, etc.</div>Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-9518042.post-3354296516509533982024-01-15T12:47:00.000-05:002024-01-15T12:47:27.725-05:00EDRSilencer<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfBhtHSyZ_u4TLMl9VC14voMwZkzoxh51gQj8hPkJCnXbxsSb1Em-No_75J8wNAGavpRNPAimbsYZjlwjIeHN-drmCtbkvLqifNGJwJ0eR0F41zb6H7fMkLmg2aXVZ3E_rWDjUR5k8iYaG0vtcWeJGexRaGXsQaH4i8sKDsDHQ7Dv9SyyEyg/s1920/cartoon-sneaking-thief-free-vector.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1920" data-original-width="1317" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfBhtHSyZ_u4TLMl9VC14voMwZkzoxh51gQj8hPkJCnXbxsSb1Em-No_75J8wNAGavpRNPAimbsYZjlwjIeHN-drmCtbkvLqifNGJwJ0eR0F41zb6H7fMkLmg2aXVZ3E_rWDjUR5k8iYaG0vtcWeJGexRaGXsQaH4i8sKDsDHQ7Dv9SyyEyg/w138-h200/cartoon-sneaking-thief-free-vector.jpg" width="138" /></a></div>There's been a good bit of discussion in the cybersecurity community regarding "EDR bypasses", and most of these discussions have been centered around technical means a threat actor can use to "bypass" EDR. Many of these discussions do not seem to take the logistics of such a thing into account; that is, you can't suddenly "bypass EDR" on an endpoint without first accessing the endpoint, setting up a beachhead and then bringing your tools over. Even then, where is the guarantee that it will actually work? I've seen ransomware threat actors fail to get their file encryption software to run on some endpoints.<p></p><p>Going unnoticed on an endpoint when we believe or feel that EDR is prevalent can be a challenge, and this could be the reason why these discussions have taken hold. 
However, the fact of the matter is that the "feeling" that EDR is prevalent is just that...a feeling, not supported by data, nor situational awareness. If you look at other aspects of EDR and SOC operations, there are plenty of opportunities, using minimal/native tools, to achieve the same effect; to have your actions not generate alerts that a SOC analyst investigates.</p><p><i>Situational Awareness</i><br />Not all threat actors have the same level of situational awareness. I've seen threat actors where EDR has blocked their process from executing, and they responded by attempting to uninstall AV that isn't installed on the endpoint. Yep, that's right...this was not preceded by a query attempting to determine which AV product was installed; rather, the threat actor went right to uninstalling ESET. In another instance, the threat actor attempted to uninstall Carbon Black; the monitored endpoint was running <EDR>. Again, no attempt was made to determine what was installed.</p><p>However, I did see one instance where the threat actor, before doing anything else or being blocked/inhibited, ran queries looking for <EDR> running on 15 other endpoints. From our dashboard, we knew that only 4 of those endpoints had <EDR> running; the threat actor moved to one of the 11 that didn't.</p><p>The take-away from this is that even beyond "shadow IT", there are likely endpoints within an infrastructure that don't have EDR installed; 100% coverage, while preferred, is not guaranteed. I remember an organization several years ago that was impacted by a breach, and after discovering the breach, installed EDR on only about 200 endpoints, out of almost 15,000. They also installed the EDR in "learning mode", and several of the installed endpoints were heavily used by the threat actors. 
As such, the EDR "learned" that the threat actor's activity was "normal".</p><p><i>EDRSilencer</i><br />Another aspect of EDR is that for the tool to be effective, most need to communicate to "the cloud"; that is, send data off of the endpoint and outside of the network, where it will be processed. Yes, I know that Carbon Black started out with an on-prem approach, and that Sysmon writes to a local Windows Event Log file, but most EDR frameworks send data to "the cloud", in part so that users with laptops will still have coverage. </p><p><a href="https://github.com/netero1010/EDRSilencer">EDRSilencer</a> takes advantage of this, not by stopping, altering or "blinding" EDR, but by preventing it from communicating off of the endpoint. See <a href="https://blog.p1k4chu.com/security-research/adversarial-tradecraft-research-and-detection/edr-silencer-embracing-the-silence">p1k4chu's write up here</a>; EDRSilencer works by creating a WFP rule to block the EDR EXE from communicating off of the host, which, to be honest, is a great idea. </p><p>Why a "great idea"? For one, it's neither easy nor productive to create a rule to alert when the EDR is no longer communicating. Some organizations will have hundreds or thousands of endpoints with EDR installed, and there's no real "heartbeat" function in many of them. Employees will disconnect laptops, offices (including WFH) may have power interruptions, etc., so there are a <i>LOT</i> of reasons why an EDR agent may cease communicating. </p><p>In 2000, I worked for an organization that had a rule that would detect significant time changes (more than a few minutes) on all of their Windows endpoints. The senior sysadmin and IT director would not do anything about the rules, and simply accepted that twice a year, we'd be inundated with these alerts for <i>every</i> endpoint. 
My point is that when you're talking about global/international infrastructures, or MDRs, having a means of detecting when an agent is not communicating is a tough nut to crack; do it wrong and don't plan well for edge cases, and you're going to crush your SOC. </p><p>If you read the EDRSilencer Github page and p1k4chu's write-up closely, you'll see that EDRSilencer uses a hard-coded list of EDR executables, which doesn't include all possible EDR tools.</p><p>Fortunately, <a href="https://blog.p1k4chu.com/security-research/adversarial-tradecraft-research-and-detection/edr-silencer-embracing-the-silence">p1k4chu's write-up</a> provides some excellent insights as to how to detect the use of EDRSilencer, even pointing out specific audit configuration changes to ensure that the appropriate events are written to the Security Event Log.<br /><br />As a bit of a side note, <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/auditpol">auditpol.exe</a> <i>is</i>, in fact, natively available on Windows platforms.</p><p>Once the change is made, the two main events of interest are <a href="https://www.ultimatewindowssecurity.com/securitylog/encyclopedia/event.aspx?eventid=5441">Security-Auditing/5441</a> and <a href="https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5157">Security-Auditing/5157</a>. P1k4chu's write-up also includes a Yara rule to detect the EDRSilencer executable, which is based in part on the hard-coded list of EDR tools.</p><p><a href="https://github.com/amjcyber/EDRNoiseMaker/">EDRNoiseMaker</a> detects the use of EDRSilencer by looking for filters blocking those communications.</p><p><i>Other "Opportunities"</i><br />There's another, perhaps more subtle way to inhibit communications off of an endpoint: modify the <a href="https://support.microsoft.com/en-us/topic/microsoft-tcp-ip-host-name-resolution-order-dae00cc9-7e9c-c0cc-8360-477b99cb978a">hosts file</a>. 
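To make the hosts file trick concrete: the redirect amounts to a single added line mapping the EDR tenant's cloud hostname to loopback, and a defender can sweep for exactly that. Below is a minimal sketch (the hostnames are entirely hypothetical), showing both what such an entry looks like and a simple check that flags loopback-pinned names:

```python
# Sketch: flag hosts-file entries that pin a hostname to loopback.
# "edr-tenant.example.com" and "telemetry.example.net" are hypothetical;
# a real sweep would also whitelist known-good names beyond "localhost".
def find_loopback_pins(hosts_text, allow=("localhost",)):
    hits = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        if ip in ("127.0.0.1", "0.0.0.0", "::1"):
            hits.extend(n for n in names if n not in allow)
    return hits

sample = """127.0.0.1  localhost
# entry added "for support"
127.0.0.1  edr-tenant.example.com
0.0.0.0    telemetry.example.net
"""
print(find_loopback_pins(sample))  # ['edr-tenant.example.com', 'telemetry.example.net']
```

The point of the sketch is how little the attacker has to change, and how trivially auditable that change is...if anyone thinks to look at the file.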
Credit goes to Dray (<a href="https://www.linkedin.com/in/drayagha/">LinkedIn</a>, <a href="https://twitter.com/Purp1eW0lf">X</a>) for reminding me of this sneaky way of inhibiting off-system communications. The difference is that rather than blocking by executable, you need to know where the communications are going, and add an entry so that the returned IP address is localhost.</p><p>I thought Dray's suggestion was both funny and timely; I used to do this for/to my daughter's computer when she was younger...I'd modify her hosts file right around 10pm, so that her favorite sites (MySpace, Facebook, whatever) resolved to localhost, but other sites, like Google, were still accessible. </p><p>One of the side effects would likely be the difficulty in investigating an issue like this; how many current or relatively new SOC/DFIR analysts are familiar with the hosts file? How many understand or know <a href="https://support.microsoft.com/en-us/topic/microsoft-tcp-ip-host-name-resolution-order-dae00cc9-7e9c-c0cc-8360-477b99cb978a">the host name resolution process</a> followed by Windows? I think the first time I became aware of MS's documentation of the host name resolution process was 1995, when I was attempting to troubleshoot an issue; how often is this taught in networking classes these days?</p><p><i>Conclusion</i><br />Many of us have seen the use of offensive security tools (OSTs) by pen testers and threat actors alike, so how long do you think it will be before EDRSilencer, or something like it, makes its way into either toolkit? The question becomes, how capable is your team of detecting and responding to the use of such tools, particularly when used in combination with other techniques ("silence" EDR, then clear all Windows Event Logs)? Tools and techniques like this (EDRSilencer, or the technique it uses) shed a whole new light on initial recon (process/service listing, querying the Registry for installed applications, etc.) 
activities, particularly when they're intentionally and purposefully used to create situational awareness.</p>

Human Behavior In Digital Forensics, pt III (2024-01-10)

<div>So far, parts <a href="https://windowsir.blogspot.com/2024/01/human-behavior-in-digital-forensics.html">I</a> and <a href="https://windowsir.blogspot.com/2024/01/human-behavior-in-digital-forensics-pt.html">II</a> of this series have been published, and at this point, there's something that we really haven't talked about.</div><div><br /></div><div>That is, the "So, what?". Who cares? What are the benefits of understanding human behavior rendered via digital forensics? Why does it even matter?</div><div><br /></div><div>Digital forensics can provide us insight into a threat actor's sophistication and situational awareness, which can, in turn, help us understand their intent. Are they new to the environment, and trying to get the "lay of the land", or are their actions extremely efficient, and do they appear to be going directly to the data they're looking for, as if they have been there before or had detailed prior knowledge?</div><div><br /></div><div>Observing the threat actor's actions (or the impacts thereof) helps us understand not just their intent, but what else we should be looking for. For example, observing the <a href="https://www.secureworks.com/blog/ransomware-deployed-by-adversary">Samas ransomware threat actors in 2016</a> revealed no apparent interest in data collection or theft; there was no searching or discovery, no data staging, etc. 
This is in contrast to the <i>Non-PCI Case</i> from <a href="https://windowsir.blogspot.com/2024/01/human-behavior-in-digital-forensics-pt.html">my previous blog post</a>; the threat actor was apparently interested in data, but did not appear to have an understanding of the infrastructure they'd accessed (searching for "banking" in a healthcare environment).</div><div><br /></div><div>Carrying this forward, we can then use what we learn about the threat actor, by observing their actions and impacts, to better understand our own control efficacy; what worked, what didn't, and what can work better at preventing, or detecting and responding to, the threat actor?</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2e9oxD_tJrZPnP-L2J_8VLWLhUim7y1hQNq0WfzaUKcVig2IJ5nDPUExso8r6aOZRdt4pNrylB0Ai60UY5yfZ8yEm-uAjdvTC3MYNMYyQbzFGzmwqmBnM72lLbEHcFtvvQjX0NVbneyRDR0TQ2dkPbQ6xUIl494u41Rt2ifBVZA0SuaOazg/s558/how.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="558" data-original-width="429" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2e9oxD_tJrZPnP-L2J_8VLWLhUim7y1hQNq0WfzaUKcVig2IJ5nDPUExso8r6aOZRdt4pNrylB0Ai60UY5yfZ8yEm-uAjdvTC3MYNMYyQbzFGzmwqmBnM72lLbEHcFtvvQjX0NVbneyRDR0TQ2dkPbQ6xUIl494u41Rt2ifBVZA0SuaOazg/s320/how.png" width="246" /></a></div>Per the graphic to the left, understanding human behavior rendered via digital forensics is thought to provide insight into future attacks...but can it really? And if this <i>is</i> the case, how so?<div><br /></div><div>Well, we've known for some time that there's really no single actor or group that focuses solely on one type of target. Consider <a href="https://www.secureworks.com/blog/vertical-hopscotch">this blog post</a> from 2015, making it almost 9 yrs old at the time of this writing. 
The findings presented in the blog post remain true, and are repeated, even today. <br /><div><br /><div>So, "profiling" a threat actor may not allow you to anticipate who (what target infrastructure) they're going to attack next, but within a limited window, it will provide a great deal of insight into how you can expect them to conduct the follow-on stages of an attack. The target may not be known, but the actions taken, particularly in the near term, will be illuminated by what was observed in a previous attack.</div><div><br /></div><div>In 2016, the team I was with responded to about half a dozen <a href="https://www.secureworks.com/blog/ransomware-deployed-by-adversary">Samas ransomware attacks</a>, across a wide range of verticals; they were targeting vulnerable JBoss CMS systems, regardless of the underlying business. What we learned by looking across those multiple attacks allowed us to identify other potential targets, as well as respond to and shut down some attacks that were underway; we saw that the threat actors took an average of 4 months to go from initial access to deploying the ransomware. During this time, there was no apparent interest in data staging or theft; the intent appeared to be to identify "critical" systems within the infrastructure, and obtain the necessary privileges to deploy ransomware to those systems.</div><div><br /></div><div><i>Reacting to Stimulus</i></div><div>Additional insight can be found by observing how a threat actor reacts to "stimulus". There may be times when a threat actor's activities are unfettered; they proceed about their actions without being inhibited or blocked in any way. They aren't blocked by EDR tools, nor AV. From these incidents, we can learn a good deal about the threat actor's playbook, and we may see how it evolves over time. 
However, there may be times where the threat actor encounters issues, either with security tooling blocking their efforts, or tools they bring in from the outside crashing and not executing on the endpoint. It's during these incidents that we get a more expansive view of the threat actor, as we observe their actions in response to stimulus.</div><div><br /></div><div>While I was with Crowdstrike, we'd regularly "see", via the EDR telemetry, the actions taken by various threat actors when the Crowdstrike product blocked their processes from executing. In one instance, the Crowdstrike agent stopped the threat actor's process, and their reaction was to attempt to disable and remove Windows Defender. They then moved to another endpoint, and when they encountered the same issue, they attempted to remove an AV product that was not installed anywhere within the infrastructure. They finally moved to a third endpoint, and when their attempts continued to be blocked, they ran a batch file intended to remove several AV products, none of which were installed on the endpoint. Interestingly, they left the infrastructure without ever running a command to see what processes were running, nor what applications were installed.</div><div><br /></div><div>We saw threat actors on endpoints monitored by the Crowdstrike agent doing queries to see if Carbon Black was installed. To be clear, the commands were not general, "...give me a list of processes..." commands, but were specific to identifying Carbon Black.</div><div><br /></div><div>In another instance, we observed the threat actor land on a monitored endpoint, and begin querying other endpoints within the infrastructure to see if they were running the Falcon agent. They reached out to 15 endpoints, and while we could not see the responses, we knew from our dashboard that the agent was only on 4 of the queried endpoints. The threat actor then moved to one of the endpoints that did not have an agent installed. 
The interesting thing about this was that when they landed on the monitored endpoint, we saw no commands run nor any other indication of the threat actor checking that endpoint for the agent; it was as if they already knew. </div><div><br /></div><div>Even without EDR or AV blocking the threat actor's attempts, we may still be able to observe how the threat actor responds to stimulus. I've seen more than a few times where a threat actor will attempt to run something, and Windows Error Reporting kicks off because their EXE crashes. What do they do? I've seen ransomware threat actors unable to encrypt files on an endpoint, and running their tool with the "--debug" command switch, multiple times. They may also attempt to download newer or different copies of their tools, and try running them again. </div><div><br /></div><div>In other instances, I've seen commands fail, and the threat actor try something else. I've also seen tools crash, and the threat actor take no action. Seeing how a threat actor responds to the issues they encounter, watching their behavior and whether they encounter any issues, provides significant insight into their intent.</div><div><br /></div><div><i>Other Aspects of the Attack</i></div><div>There are other aspects of an attack that we can look to in order to better understand the threat actor. For example, when the threat actor initially accesses an endpoint, how do they do so? RDP? MSSQL? Some other application, like <a href="https://www.huntress.com/blog/threat-advisory-xmrig-crypto-mining-by-way-of-teamviewer">TeamViewer</a>?</div><div><br /></div><div>Is the access preceded by failed login attempts, or does the source IP address for the threat actor's successful access to the system not appear on the list of IP addresses for failed login attempts?</div><div><br /></div><div>Once they have access, what do they do, how soon/fast do they do it, and how do they go about their activities? 
If they access the endpoint via RDP, do they use all GUI tools, do they go to PowerShell, do they use cmd.exe, etc.? Do they use WSL, if it's installed? Do they use native utilities/LOLBins? Do they use batch files? <br /><br />Did they create any additional persistence? If so, what do they do? Create user accounts? Add services or Scheduled Tasks? Do they lay any "booby traps", akin to the <i>Targeted Threat Actor</i> from<a href="https://windowsir.blogspot.com/2024/01/human-behavior-in-digital-forensics-pt.html"> my previous blog post</a>?<br /><br />During their time on the endpoint, do they seem prepared, or do they "muck about", as if they're wandering around a dark room, getting the lay of the land? Do they make mistakes, and if so, how do they overcome them? </div><div><br /></div><div>Do they use LOLBins? Do they bring tools with them, and if so, are the tools readily available? When the <a href="https://www.secureworks.com/blog/ransomware-deployed-by-adversary">Samas ransomware actors were attacking JBoss CMS systems in 2016</a>, they used <a href="https://github.com/joaomatosf/jexboss">the JexBoss exploit</a>, which was readily available. </div><div><br /></div><div>When they disconnect their access, how do they go about it? Do they simply break the connection and log out, or do they "salt the earth", clearing Windows Event Logs, deleting files, etc.?</div><div><br /></div><div>An important caveat to these aspects is we have to be very careful about how we view and understand the actions we observe. There have been more than a few times where I've worked with analysts with red team experience, and have heard them say, "...if I were the attacker, I would have...". This sort of bias can be detrimental to understanding what's actually going on, and can lead to resources being deployed in the wrong direction. 
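One of the questions above, whether the source IP of a successful login ever appeared among the failed attempts, lends itself to a quick mechanical check once the relevant Security Event Log records (4624 for successful logins, 4625 for failures) have been parsed. A minimal sketch, assuming the records have already been reduced to (event_id, source_ip) pairs in time order:

```python
# Sketch: does the source IP of a successful login (4624) appear among
# prior failed attempts (4625)? A success with no preceding failures can
# suggest credentials that were already known, e.g., harvested elsewhere.
# Records are assumed to be (event_id, source_ip) tuples, in time order.
def successes_without_failures(records):
    failed = set()
    flagged = []
    for event_id, ip in records:
        if event_id == 4625:
            failed.add(ip)
        elif event_id == 4624 and ip not in failed:
            flagged.append(ip)
    return flagged

records = [
    (4625, "203.0.113.7"),
    (4624, "203.0.113.7"),    # success preceded by failures: guessing/spraying
    (4624, "198.51.100.23"),  # success with no prior failures: worth a look
]
print(successes_without_failures(records))  # ['198.51.100.23']
```

The logic is trivial; the value, as with the questions above, is in asking it consistently across every access the threat actor made.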
</div><div><br /></div><div><i>Conclusion</i><br />As Blade stated during the first movie (<a href="https://www.youtube.com/watch?v=HvB_76nL6jI">quote 3</a>), "...when you understand the nature of a thing, you know what it's capable of." Understanding a threat actor's nature provides insight into what they're capable of, and what we should be looking for on endpoints and within the infrastructure.</div></div><div><br /></div><div>This also helps us understand control efficacy; what controls did we have in place for prevention, detection, and response? Did they work, or did they fail? How could those controls be improved, or better implemented? </div>

Human Behavior In Digital Forensics, pt II (2024-01-06)

<p>On the <a href="https://windowsir.blogspot.com/2024/01/human-behavior-in-digital-forensics.html">heels of my first post on this topic</a>, I wanted to follow up with some additional case studies that might demonstrate how digital forensics can provide insight into human activity and behavior, as part of an investigation.</p><p><i>Targeted Threat Actor</i><br />I was working a targeted threat actor response, and while we were continuing to collect information for scoping, so we could move to containment, we found that on one day, from one endpoint, the threat actor pushed their RAT installer to 8 endpoints, and had the installer launched via a Scheduled Task. Then, about a week later, we saw that the threat actor had pushed out another version of their RAT to a completely separate endpoint, by dropping the installer into the StartUp folder for an admin account.</p><p>Now, when I showed up on-site for this engagement, I walked into a meeting that served as the "war room", and before I got a chance to introduce myself, or find out what was going on, one of the admins came up to me and blurted out, "we don't use communal admin accounts." 
Yes, I know...very odd. No, "hi, I'm Steve", nothing like that. Just this comment about accounts. So, I filed it away.</p><p>The first thing we did once we got started was roll out our EDR tech, and begin getting insight into what was going on...which accounts had been compromised, which were the nexus systems the threat actor was operating from, how they were getting in, etc. After all, we couldn't establish a perimeter and move to containment until we determined scope, etc.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXOS8g_oE62PgZAwViCihYEBqWVfdUYTaf8yELh1ALKWLJq6vWoWGJBD4oz9j0HVNUP54ikTsA4esqi9qszJJ8n7eNNHPQK_wUsm4vm92Gu2OIDXPCTtyRtCe9SbJ7JVCqmvUxQjo5cAYnHgAuaTjSTdW9w00XDXJvqBHQGq7YhEkTOQOTug/s100/start_startup_link.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="100" data-original-width="100" height="100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXOS8g_oE62PgZAwViCihYEBqWVfdUYTaf8yELh1ALKWLJq6vWoWGJBD4oz9j0HVNUP54ikTsA4esqi9qszJJ8n7eNNHPQK_wUsm4vm92Gu2OIDXPCTtyRtCe9SbJ7JVCqmvUxQjo5cAYnHgAuaTjSTdW9w00XDXJvqBHQGq7YhEkTOQOTug/s1600/start_startup_link.jpg" width="100" /></a></div>So we found this RAT installer in the StartUp folder for an admin account...a communal admin account. We found it because in the course of rolling out our EDR tech, the admins used this account to push out their software management platform, as well as our agent...and the initial login to install the software management platform activated the installer. When our tech was installed, it immediately alerted on the RAT, which had been installed by that point. It had a different configuration and C2 from what we'd seen from previous RAT installations, which appeared to be intentional. 
We grabbed a full image of that endpoint, so we were able to get information from VSCs, including a copy of the original installer file. <p></p><p>Just because an admin told me that they didn't use communal admin accounts doesn't mean that I believed him. I tend to follow the data. However, in this case, the threat actor clearly already knew the truth, regardless of what the admins stated. On top of that, they planned out far enough in advance to have multiple means of access, including leaving behind "booby traps" that would be tripped through admin activity, but would not have the same configuration. That way, if admins had blocked access to their first C2 IP address at the firewall, or were monitoring for that specific IP address via some other means, having the new, second C2 IP address would mean that they would go unnoticed, at least for a while. </p><p>What I took away from the totality of what we saw, largely through historical data on a few endpoints, was that the threat actor seemed to have something of a plan in place regarding their goals. We never saw any indication of search terms, wandering around looking for files, etc., and as such, it seemed that they were intent upon establishing persistence at that point. The customer didn't have EDR in place prior to our arrival, so there's a lot we likely missed out on, but from what we were able to assemble from host-based historical data, it seemed that the threat actor's plan, at the point we were brought in, was to establish a beachhead.</p><p><i>Pro Bono Legal Case</i><br />A number of years ago, I did some work on a legal case. The background was that someone had taken a job at a company, and on their first day, they were given an account and password on a system for them to use, but they couldn't change the password. 
The reason they were given was that this company had one licensed copy of an application, and it was installed on that system, and multiple people needed access.</p><p>Jump forward about a year, and the guy who got hired grew disillusioned, and went in one Friday morning, logged into the computer, wrote out a Word document where they resigned, effective immediately. They sent the document to the printer, then signed it, handed it in, and apparently walked out. </p><p>So, as it turns out, several files on the system were encrypted with ransomware, and this guy's now-former employer claimed that he'd done it, basically "salting the earth" on his way out the door. There were suits and countersuits, and I was asked to examine the image of the system, after exams had already been performed by law enforcement and an expert from SANS.</p><p>What I found was that on Thursday evening, the day before the guy resigned, at 9pm, someone had logged into the system locally (at the console) and surfed the web for about 6 minutes. During that time, the browser landing on a specific web site caused the ransomware executable to be downloaded to the system, with persistence written to the user account's Run key. Then, when the guy returned the following morning and logged into the account, the ransomware launched, albeit without his knowledge. Using a variety of data sources, to include the Registry, Event Log, file system metadata, etc., I was able to demonstrate when the infection activity <i>actually</i> took place, and in this instance, I had to leave it up to others to establish who had actually been sitting at the keyboard. I was able to articulate a clear story of human activity and what led to the files being encrypted. As part of the legal battle, the guy had witness statements and receipts from the bar he had been at the evening prior to resigning, where he'd been out with friends celebrating. 
Further, the employer had testified that they'd sat at the computer the evening prior, but all they'd done was a short web browser session before logging out.</p><p>As far as the ransomware itself was concerned, it was purely opportunistic. "Damage" was limited to files on the endpoint, and no attempt was made to spread to other endpoints within the infrastructure. On the surface, what happened was clearly what the former employer described; the former employee came in, typed and printed their resignation, and launched the ransomware executable on their way out the door. However, file system metadata, Registry key LastWrite times, and browser history painted a different story altogether. The interesting thing about this case was that <i>all</i> of the activity occurred within the same user account, and as such, the technical findings needed to be (and were) supported by external data sources.</p><p><i>RAT Removal</i><br />During another targeted threat actor response engagement, I worked with a customer that had sales offices in China, and was seeing sporadic traffic associated with a specific variant of a well-known RAT come across the VPN from China. As part of the engagement, we worked out a plan to have the laptop in question sent back to the states; when we received the laptop, the first thing I did was remove and image the hard drive.</p><p>The laptop had run Windows 7, which ended up being very beneficial for our analysis. We found that, yes, the RAT <i>had been</i> installed on the system at one point, and our analysis of the available data painted a much clearer picture. </p><p>Apparently, the employee/user of the endpoint had been coerced to install the RAT. 
<a href="https://www.semanticscholar.org/paper/Using-every-part-of-the-buffalo-in-Windows-memory-Kornblum/3311ed0c63d4ca707c49256655e401f37f25ec50">Using all the parts of the buffalo</a> (file system, WEVTX, Registry, VSCs, hibernation file, etc.), we were able to determine that, at one point, the user had logged into the console, attached a USB device, and run the RAT installer. Then, after the user had been contacted to turn the system over to their employer, we could clearly see where they made attempts to remove and "clean up" the RAT. Again, as with the RAT installation, the user account that performed the various "clean up" attempts logged in locally, and performed steps that were very clearly manual attempts to remove and "clean up" the RAT by someone who didn't fully understand what they were doing. </p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfIVLaGQqMHNhk2rimtxpTmRt9ei2zSf6eVz7N1k_0KnnyXMOxBll_uPWhmLaBJj7h87N0_DdjFN5xCXKYiUqkeXfq8Pl_nxOQVq4fTdmPfHFIrBlX3VPe9Pl7SbvBoCGZ-Y-4V8FnS-IX_aHRaCR5B1BGnqtH0IT3umPUMOLtMAsSS-BWGg/s400/ccn.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="225" data-original-width="400" height="113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfIVLaGQqMHNhk2rimtxpTmRt9ei2zSf6eVz7N1k_0KnnyXMOxBll_uPWhmLaBJj7h87N0_DdjFN5xCXKYiUqkeXfq8Pl_nxOQVq4fTdmPfHFIrBlX3VPe9Pl7SbvBoCGZ-Y-4V8FnS-IX_aHRaCR5B1BGnqtH0IT3umPUMOLtMAsSS-BWGg/w200-h113/ccn.jpg" width="200" /></a></div><i>Non-PCI Breach</i><br />I was investigating a breach into corporate infrastructure at a company that was part of the healthcare industry. It turned out that an employee with remote access had somehow ended up with a keystroke logger installed on their home system, which they used to remote into the corporate infrastructure via RDP. 
This was about 2 weeks before they were scheduled to implement MFA.<p></p><p>The threat actor was moving around the infrastructure via RDP, using an account that hadn't accessed the internal systems, because there was no need for the employee to do so. This meant that on all of these systems, the login initiated the creation of the user profile, so we had a really good view of the timeline across the infrastructure, and we could 'see' a lot of their activity. This was before EDR tools were in use, but that was okay, because the threat actor stuck to the GUI-based access they had via RDP. We could see documents they accessed, shares and drives they opened, and even searches they ran. This was a healthcare organization, which the threat actor was apparently unaware of, because they were running searches for "password", as well as various misspellings of the word "banking" (e.g., "bangking"). </p><p>The organization was fully aware that they had two spreadsheets on a share that contained unencrypted PCI data. They'd been trying to get the data owner to remove them, but at the time of the incident, the files were still accessible. As such, this incident had to be reported to the PCI Council, but we did so with as complete a picture as possible, which showed that the threat actor was both unaware of the files, as well as apparently not interested in credit card, nor billing, data. </p><p>Based on the totality of the data, we had a picture of an opportunistic breach, one that clearly wasn't planned, and I might even go so far as to describe the threat actor as "caught off guard" that they'd actually gained access to an organization. There was apparently no research conducted, the breach wasn't targeted, and it had all the hallmarks of someone wandering around the systems, in shock that they'd actually accessed them. 
Presenting this data to the PCI Council in a clear, concise manner led to a greatly reduced fine for the customer - yes, the data should not have been there, but no, it hadn't been accessed or exposed by the intruder. </p>

Human Behavior In Digital Forensics (2024-01-03)

<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYgLrQ2GNOd0hjiYR4nEn3QOTe0E8ZVFdAk6Y-sUgLhFlkHZcgpI-oUS9SAhkkYO81BWHwIvcGL2dgItB6dFgeHWER70R1Bw3qRR0JoQhc1unaUtWoHOT4U3DE0TzawSvkR_B3hFDti16Bgjtso9F9h4jiyV-IVcCiEIiAuuWv3NIw6JsxQg/s2048/fbi.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1152" data-original-width="2048" height="113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYgLrQ2GNOd0hjiYR4nEn3QOTe0E8ZVFdAk6Y-sUgLhFlkHZcgpI-oUS9SAhkkYO81BWHwIvcGL2dgItB6dFgeHWER70R1Bw3qRR0JoQhc1unaUtWoHOT4U3DE0TzawSvkR_B3hFDti16Bgjtso9F9h4jiyV-IVcCiEIiAuuWv3NIw6JsxQg/w200-h113/fbi.jpg" width="200" /></a></div><p></p><p>I've always been a fan of books or shows where someone follows clues and develops an overall picture to lead them to their end goal. I've always liked the "hot on the trail" mysteries, particularly when the clues are assembled in a way to understand what the antagonist was going to do next, what their next likely move would be. Interestingly enough, a lot of the shows I've watched have been centered around the FBI, shows like "The X-Files" and "Criminal Minds". I know intellectually that these shows are contrived, but assembling a trail of technical bread crumbs to develop a profile of human behavior is a fascinating idea, and something I've tried to bring to my work in DFIR. 
</p><p>Former FBI Supervisory Special Agent and Behavioral Profiler <a href="https://www.linkedin.com/in/cameron-malin-jd-cissp-07688320/">Cameron Malin</a> recently shared that his newest endeavor, <a href="https://moduscyberandi.com/">Modus Cyberandi</a>, has gone live! The main focus of his effort, cyber behavior profiling, is right there at the top of the main web page. In fact, the main web page even includes a brief history of behavioral profiling.</p><p>This seems to be similar to <a href="https://www.linkedin.com/in/len-opanashuk-bb5b6197/">Len Opanashuk</a>'s endeavor, <a href="https://motivesunlocked.com/">Motives Unlocked</a>, which leads me to wonder, is this a <i>thing</i>? </p><p>Is this something folks are interested in?</p><p>Apparently, it is, as there's research to suggest that this is, in fact, the case. Consider <a href="https://www.researchgate.net/publication/354283905_Behavioural_Evidence_Analysis_A_Paradigm_Shift_in_Digital_Forensics">this research paper</a> describing behavioral evidence analysis as a "paradigm shift", or <a href="https://commons.erau.edu/cgi/viewcontent.cgi?article=1160&context=jdfsl">this paper on idiographic digital profiling</a> from the Journal of Digital Forensics, Security, and Law, to name but a few. 
Further, Google lists a number of (mostly academic) resources <a href="https://scholar.google.com/scholar?q=cyber+behavioral+profiling&hl=en&as_sdt=0&as_vis=1&oi=scholart">dedicated to cyber behavioral profiling</a>.</p><p>This topic seems to be talked about here and there, so maybe there is an interest in this sort of analysis, but the question is, is the interest more academic, is the focus more niche (law enforcement), or is this something that can be effectively leveraged in the private sector, particularly where digital forensics and intrusion intelligence intersect?</p><p>I ask the question, as this is something I've looked at for some time now, in order to not only develop a better understanding of targeted threat actors who are still active during incident response, but to also determine the difference between a threat actor's actions during the response, and those of others involved (IT staff, responders, legitimate users of endpoints, etc.). </p><p>In a recent comment on social media, Cameron used the phrase, "...<i>adversary analysis and how human behavior renders in digital forensics</i>...", and it occurred to me that this really does a great job of describing going beyond just individual data points and malware analysis in DFIR, particularly when it comes to hands-on targeted threat actors. By going beyond just individual data points and looking at the multifaceted, nuanced nature of those artifacts, we can begin to discern patterns that inform us about the intent, sophistication, and situational awareness of the threat actor.</p><p>To that end, <a href="https://www.linkedin.com/in/joe-slowik/">Joe Slowik</a> has correctly stated that there's a need in CTI (and DFIR, SOC, etc.) to <a href="https://www.domaintools.com/resources/blog/analyzing-network-infrastructure-as-composite-objects/">view indicators as composite objects</a>; things like hashes and IP addresses have greater value when other aspects of their nature are understood. 
Many times we tend to view IP addresses (and other indicators) one-dimensionally; however, there's so much more about those indicators that can provide insight into the threat actor behind them, such as <i>when</i>, <i>how</i>, and <i>in what context</i> that IP address was used. Was it the source of a login, and if so, what type? Was it a C2 IP address, or the source of a download or upload? If so, how...via HTTP, curl, msiexec, BITS, etc.?</p><p><a href="https://www.huntress.com/blog/cant-touch-this-data-exfiltration-via-finger">Here's an example</a> of an IP address; in this case, 185.56.83.82. We can <a href="https://www.virustotal.com/gui/ip-address/185.56.83.82">get some insight</a> on this IP address from VirusTotal, enough to know that we should probably pay attention. However, if you read the blog post, you'll see that this IP address was used as the target for data exfiltration. </p><p>Via <i>finger.exe</i>.</p><p>Add to that the fact that the use of the LOLBin is identical to what was <a href="https://hyp3rlinx.altervista.org/advisories/Windows_TCPIP_Finger_Command_C2_Channel_and_Bypassing_Security_Software.txt">described in this 2020 advisory</a>, and it should be easy to see that we've gone well beyond <i>just</i> an IP address, by this point, as we've started to unlock and reveal the composite nature of that indicator. </p><p>The point of all this is that there's more to the data we have available than just the one-dimensional perspective through which we've been viewing it. Now, if we begin to incorporate other data sources that are available to us (EDR telemetry, endpoint data and configurations, etc.), we'll begin to see exactly how, as Cameron stated, <i>human behavior renders in digital forensics</i>. 
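As a rough illustration of that "composite object" idea, an indicator can be modeled as more than a bare string. The class and field names below are hypothetical, purely to show the shape of the enrichment, with the 185.56.83.82 example filled in from the Huntress post:

```python
from dataclasses import dataclass, field

@dataclass
class CompositeIndicator:
    """An IP address plus the context that gives it meaning (hypothetical model)."""
    value: str
    roles: list = field(default_factory=list)       # e.g., "C2", "exfil target"
    transports: list = field(default_factory=list)  # e.g., "finger.exe", "curl"
    first_seen: str = ""

    def summarize(self) -> str:
        roles = ", ".join(self.roles) or "unknown role"
        via = ", ".join(self.transports) or "unknown transport"
        return f"{self.value}: {roles} (via {via})"

# The IP address from the Huntress write-up, modeled as a composite object:
ioc = CompositeIndicator(
    value="185.56.83.82",
    roles=["data exfiltration target"],
    transports=["finger.exe"],
    first_seen="2023-10",
)
print(ioc.summarize())  # 185.56.83.82: data exfiltration target (via finger.exe)
```

The point of the structure is that a match on the bare `value` is far weaker than a match on the value *plus* its observed role and transport.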
Some of the things I've pursued and successfully demonstrated during previous engagements include hours of operation and preferred TTPs and approaches, enough so as to separate the actions of two different threat actors on a single endpoint. </p><p>I've also gained insight into the situational awareness of a threat actor by observing how they reacted to stimulus; during one incident, the installed EDR framework was blocking the threat actor's tools from executing on different endpoints. The threat actor never bothered to query any of the three endpoints to determine what was blocking their attempts; rather, on one endpoint, they attempted to disable Windows Defender. On the second endpoint, they attempted to delete a specific AV product, without ever first determining if it was installed on the endpoint; the batch file they ran to delete all aspects and variations of the product was not preceded by any query commands. Finally, on the third endpoint, the threat actor ran a "spray-and-pray" batch file that attempted to disable or delete a variety of products, none of which were actually installed on the endpoint. When none of these succeeded in allowing them to pursue their goals, they left.</p><p>So, yes, viewed through the right lens, with the right perspective, human behavior can be discerned through digital forensics. But the question remains...is this useful? 
Is the insight that this approach provides valuable to anyone?</p>Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-9518042.post-13727684194543320822023-12-31T08:02:00.001-05:002023-12-31T08:02:24.427-05:002023 Wrap-up<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSIG6cCnYaEYJTfeBpE889UMLSe5ujJDGCIY7uL1AlLHJiaQDT1iBQfEOOEGt0e73hcVfvY1H5k6BCGsDU-uLqhu7ZcVUPLI0tYFWPc1YWcRcbZwhcr6-M8mhA8gDck5ONbCk9qR9QQtIyEbTIlX45J5tFDO_jnveABbN2LKQRfHEMtmZ98w/s600/2023.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="375" data-original-width="600" height="125" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSIG6cCnYaEYJTfeBpE889UMLSe5ujJDGCIY7uL1AlLHJiaQDT1iBQfEOOEGt0e73hcVfvY1H5k6BCGsDU-uLqhu7ZcVUPLI0tYFWPc1YWcRcbZwhcr6-M8mhA8gDck5ONbCk9qR9QQtIyEbTIlX45J5tFDO_jnveABbN2LKQRfHEMtmZ98w/w200-h125/2023.jpg" width="200" /></a></div>Another trip around the sun is in the books. Looking back over the year, I thought I'd tie a bow on some of the things I'd done, and share a bit about what to expect in the coming year.<p>In August, I released<a href="https://github.com/keydet89/RegRipper4.0"> RegRipper 4.0</a>. Among the updates are <a href="https://windowsir.blogspot.com/2023/08/the-next-step-expanding-regripper.html">some plugins with JSON output</a>, and I found a way to<a href="https://windowsir.blogspot.com/2023/08/the-next-step-integrating-yara-with.html"> integrate Yara into RegRipper</a>.</p><p>I also continued <a href="https://windowsir.blogspot.com/2023/08/events-ripper-updates.html">updating Events Ripper</a>, which I've got to say, has proven (for me) time and again to be well worth the effort, and extremely valuable. 
As a matter of fact, within the last week or so, I've used Events Ripper to great effect, <a href="https://windowsir.blogspot.com/2023/12/round-up.html">specifically with respect to MSSQLServer</a>, not to "save my bacon", as it were, but to quickly illuminate what was going on on the endpoint being investigated. </p><p>For anyone who's followed me for a while, either via my blog or on LinkedIn or X, you'll know that I'm a fan of (to steal a turn of phrase from Jesse Kornblum) "<a href="https://www.researchgate.net/publication/221947836_Using_every_part_of_the_buffalo_in_Windows_memory_analysis">using all the parts of the buffalo</a>", particularly when it comes to LNK file metadata.</p><p>For next year, I'm working on an LNK parser that will allow you to automatically generate a bare-bones Yara rule for detecting other similar LNK files (if you have a repository from a campaign), or submitting as a retro-hunt to VirusTotal. </p><p>Finally, I'm working on what I hope to be the first of several self-published projects. 
We'll see how the first one goes, as the goal is to provide the foundation of other subsequent projects.</p><p>That being said, I hope everyone had a great 2023, and that you're looking forward to a wonderful 2024...even though for many of us, it's probably going to be April before we realize that we're writing 2023 on checks, etc.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-64243236662429563642023-12-18T08:14:00.001-05:002023-12-18T08:19:48.700-05:00Round Up<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYRftD6fZGKoyntKt1RBouyrwsp1igRD0dyiW9Bq3CcKl8UcHtorzcoiV4bYR6XTIc3FxnLYCS2sQ5tjtoqA8IVBaEgo_LdX1qVN10iXbEAg_0GfR09X5IOdjr4m-gcwaHWBTkDAXU8UJDl_pS_mVQyx3RdjxMU3deF3QE871UWk417qFNOA/s250/mssql.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="201" data-original-width="250" height="161" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYRftD6fZGKoyntKt1RBouyrwsp1igRD0dyiW9Bq3CcKl8UcHtorzcoiV4bYR6XTIc3FxnLYCS2sQ5tjtoqA8IVBaEgo_LdX1qVN10iXbEAg_0GfR09X5IOdjr4m-gcwaHWBTkDAXU8UJDl_pS_mVQyx3RdjxMU3deF3QE871UWk417qFNOA/w200-h161/mssql.jpg" width="200" /></a></div><i>MSSQL is still a thing</i><br />TheDFIRReport recently <a href="https://thedfirreport.com/2023/12/04/sql-brute-force-leads-to-bluesky-ransomware/">posted an article regarding BlueSky ransomware</a> being deployed following MSSQL being brute forced. I'm always interested in things like this because it's possible that the author will provide clear observables so that folks can consider the information in light of their infrastructure, and write EDR detections, or create filter rules for DFIR work, etc. In this case, I was interested to see how they'd gone about determining that MSSQL had been brute forced.<p></p><p>You'll have to bear with me...this is one of those write-ups where images and figures aren't numbered. 
However, in the section marked "Initial Access", there's some really good information shared, specifically where it says, "SQL Server event ID 18456 Failure Audit Events in the Windows application logs:"...specifically, what they're looking at is MSSQLServer/18456 events in the Application Event Log, indicating a failed login attempt to the server (as opposed to the OS). This is why I wrote the <a href="https://github.com/keydet89/Events-Ripper">Events Ripper</a><a href="https://github.com/keydet89/Events-Ripper/blob/main/plugins/mssql.pl"> mssql.pl</a> plugin. I'd seen a number of systems running Veeam and MSSQL, and needed a straightforward, consistent, repeatable means to determine if a compromise of Veeam was the culprit, or if something else had occurred.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtwIHzDlR3k8GKzUTHApXa-myY6KgOOlLd43uElNX4hpOuGX3PlZhMgKP42Td1G_TXorW5CH2NCEREqFx6q-PvSdOXcIgsfBdPWzIOLnkY5yiqF4ETlIlv85s-r4vAqWv9YxZyEm0zUhA1C7yZYUPaSPX8R4mfP0sdYpJDU3et-JM91t-hCA/s225/lnk.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="225" data-original-width="225" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtwIHzDlR3k8GKzUTHApXa-myY6KgOOlLd43uElNX4hpOuGX3PlZhMgKP42Td1G_TXorW5CH2NCEREqFx6q-PvSdOXcIgsfBdPWzIOLnkY5yiqF4ETlIlv85s-r4vAqWv9YxZyEm0zUhA1C7yZYUPaSPX8R4mfP0sdYpJDU3et-JM91t-hCA/w200-h200/lnk.png" width="200" /></a></div><i><br />LNK Files</i><br />TheDFIRSpot <a href="https://www.thedfirspot.com/post/a-lnk-to-the-past-utilizing-lnk-files-for-your-investigations">had an interesting write-up</a> on using LNK files in your investigations, largely from the perspective of determining what a user or threat actor may have done or accessed while logged in via the Windows Explorer shell. 
Lining up creation and last modification times of shortcuts/LNK files in the account's Recent folder can provide insight into what might have occurred. Again, keep in mind that for this to work, for the LNK files to be present, access was obtained via the shell (Windows Explorer). If that's the case, then you're also going to want to look at the automatic JumpLists, as they provide similar information; together, the JumpLists, the LNK files in the Recent folder, and the RecentDocs and shellbags keys for the account can provide a great deal of insight into, and validation of, activity. Note that automatic JumpLists are OLE/structured storage format files, with the individual streams consisting of data that follows the LNK format.<p></p><p>While I do agree that blog posts like this are extremely valuable in reminding us of the value/importance of certain artifacts, we need to take an additional step to normalize a more comprehensive approach; that is, we need to consistently drive home the point that <a href="https://windowsir.blogspot.com/2023/12/and-question-is.html">we shouldn't just be looking at a single artifact</a>. We need to normalize and reinforce the understanding that there is no <i>go-to</i> artifact for any evidence category; rather, we should be considering artifact constellations, and that constellation will depend upon the base OS version and software load of the endpoint. Understanding default constellations, as part of a base software load (OS, minimal applications), is imperative, as is having a process to build out that constellation based on additional installed software (Sysmon, LANDesk Software Monitoring, etc.).</p><p>Something to keep in mind is that access via the shell has some advantages for the threat actor, one being that using GUI tools means that EDR is blind to most activity. 
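The "lining up times" idea can be sketched as a micro-timeline. The records below are hypothetical stand-ins for stat() times pulled from an account's Recent folder, where (on a default NTFS setup) the LNK file's creation time approximates "first opened" and its last modification time approximates "most recently opened":

```python
from datetime import datetime

# Hypothetical (name, created, modified) records for LNK files in
# %UserProfile%\AppData\Roaming\Microsoft\Windows\Recent.
recent = [
    ("passwords.xlsx.lnk", datetime(2023, 10, 2, 14, 5),  datetime(2023, 10, 2, 14, 5)),
    ("payroll.docx.lnk",   datetime(2023, 10, 2, 13, 58), datetime(2023, 10, 4, 9, 12)),
]

def micro_timeline(entries):
    """Flatten first-/last-opened times into one time-ordered event list."""
    events = []
    for name, created, modified in entries:
        events.append((created, name, "first opened"))
        if modified != created:
            events.append((modified, name, "last opened"))
    return sorted(events)

for ts, name, label in micro_timeline(recent):
    print(ts.isoformat(), name, label)
```

In this toy data, payroll.docx was opened before passwords.xlsx but touched again two days later, which is exactly the kind of sequencing question the Recent folder can help answer.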
EDR tools are great at recording process creation events, for example, but when the process (explorer.exe) already exists, what happens via the process that does not involve cmd.exe, PowerShell, WSL, or WSA (Windows Subsystem for Android) may not be visible to EDR. Yes, some EDR frameworks also monitor network connections, as well as Registry and file system modifications, but by necessity, those are often filtered. When a GUI tool is opened, EDR based on process creation events is largely blind to activity that occurs via drop-down boxes, check boxes, text fields, and buttons being pushed.</p><p>For example, check out <a href="https://www.huntress.com/blog/curling-for-data-a-dive-into-a-threat-actors-malicious-ttps">this recent Huntress blog</a> where <i>curl.exe</i> was observed being used for data exfil (on the heels of <a href="https://www.huntress.com/blog/cant-touch-this-data-exfiltration-via-finger">this Huntress blog</a> showing <i>finger.exe</i> being used for data exfil). In the curl blog, there's a description of <a href="https://github.com/ufrisk/MemProcFS">MemProcFS</a> being used for memory dumping; using a GUI tool essentially "blinds" EDR, because you (the analyst) can't see which buttons the threat actor pushes. We can assume that the 4-digit number listed in the minidump file path was the process ID, but the creation of that process was beyond the data retention window (the endpoint had not been recently rebooted...), so we weren't able to verify which process the threat actor targeted for the memory dump.</p><p><i>Malware Write-ups</i><br />Malware and threat actor write-ups need to include clear observables so that analysts can implement them, whether they're doing DFIR work, threat hunting, or working on writing detections. 
<a href="https://detect.fyi/rhysida-ransomware-and-the-detection-opportunities-3599e9a02bb2">Here is Simone Kraus's write-up on the Rhysida ransomware</a>; I've got to tell you, it's chock full of detection and hunting opportunities. Like many write-ups, the images and listings aren't numbered, but about 1/4 of the way down the blog post, there's a listing of <i>reg.exe</i> commands meant to change the wallpaper to the ransom note, many of which are duplicates. What I mean by that is that you'll see a "cmd /c reg add" command, followed by a "reg.exe add" command with the same arguments in the command line. As Simone says, these are commands that the ransomware would execute...these commands are embedded in the executable itself; this is something we see with RaaS offerings, where commands for disabling services and the ability to recover the system are embedded within the EXE itself. In 2020, a sample of the Sodinokibi ransomware contained 156 unique commands, just for shutting off various Windows services. If your EDR tech allows for monitoring the Registry and disabling processes at the endpoint, this may be a good option to enable automated response rules. Otherwise, detecting these processes can lead to isolating endpoints, or the values themselves can be used for threat hunting across the enterprise.</p><p>Something else that's interesting about the listing is that the first two entries are misspelled; since the key path doesn't exist by default, the command will fail. It's likely that Simone simply cut-n-pasted these commands, and since they're embedded within the EXE, they likely will not be corrected without the EXE being recompiled. 
This misspelling provides an opportunity for a high-fidelity threat hunt across EDR telemetry.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-38083538744237613242023-12-11T08:17:00.004-05:002023-12-11T08:17:52.358-05:00...and the question is...I received an interesting question via LinkedIn not long ago, but before we dive into the question and the response...<div><br /></div><div>If you've followed me for any amount of time, particularly recently, you'll know that I've put some effort forth in correcting the assumption that individual artifacts, particularly ShimCache and AmCache, provide "evidence of execution". This is a massive oversimplification of the nature and value of each of these artifacts, in addition to just being an extremely poor analytic process; that is, viewing single artifacts in isolation to establish a finding.</div><div><br /></div><div>Okay, so now, the question I was asked was, what is my "go to" artifact to demonstrate evidence of execution?</div><div><br /></div><div>First, let me say, I get it...I really do. During my time in the industry, I've heard customers ask, "...what is <i>the product</i> I need to purchase to protect my infrastructure?", so an analyst asking, "...what is the artifact that illustrates evidence of execution?" is not entirely unexpected. After all, isn't that the way things work sometimes? What is the <i>one thing</i>, which button do I push, which is <i>the lever</i> I pull, what is the <i>one action</i> I need to take, or <i>one choice</i> I need to make to move forward?</div><div><br /></div><div>So, in a way, the question of the "go to" artifact to demonstrate...well, anything...is a trick question. Because there should not be one. Looking just at "evidence of execution", some might think, "...well, there's Prefetch files...right?", and that's a good option, but what do we know about application prefetching? 
</div><div><br /></div><div>We know that the prefetcher monitors the first 10 seconds of execution, and tracks files that are loaded.</div><div><br /></div><div>We know that beginning with Windows 8, Prefetch files can hold up to 8 "last run" times, embedded within the file itself. </div><div><br /></div><div>We know that application prefetching is enabled by default on workstations, but not servers. </div><div><br /></div><div>Okay, this is great...but what happens <i>after</i> those first 10 seconds? What I mean is, what happens if code within the program throws an error, doesn't work, or the running application is detected by AV? Do we consider that the application "executed" only if it started, or do we consider "evidence of execution" to include the application completing, and impacting the endpoint in some manner?</div><div><br /></div><div>So, again, the answer is that there is no "go to" artifact. Instead, there's a "go to" process, one that includes multiple, disparate data sources (file system, Registry, WEVTX, SRUM, etc.), normalized and correlated based on some common element, such as time. Windows Event Log records include time stamps, as do MFT records, Registry keys and some values.</div><div><br /></div><div>Our analytic process needs to encompass two concepts...artifact constellations, and validation. First off, we don't ever look at single artifacts to establish findings; rather, we need to incorporate multiple, disparate data sources, through a process of parsing, normalization, decoration and enrichment to truly determine the context of an event. Looking at <i>just</i> a log entry, or entry from EDR telemetry by itself does not truly tell us if something executed successfully. If it was launched, did it complete successfully? Did it have the intended impact on the endpoint, leaving traces of its execution?</div><div><br /></div><div>Second, artifact constellations lead to validation. 
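That parse-normalize-correlate process can be pictured with a toy merge of disparate sources; every record below is hypothetical, reduced to a common (timestamp, source, description) form:

```python
# Toy normalization: events from disparate sources reduced to a common
# shape, then merged into one timeline on the shared timestamp element.
wevtx = [("2023-10-05T12:01:03", "WEVTX/Security",    "4624 type 3 login")]
mft   = [("2023-10-05T12:01:45", "MFT",               "file created: C:\\Windows\\Temp\\a.exe")]
wer   = [("2023-10-05T12:02:10", "WEVTX/Application", "WER fault: a.exe")]

# ISO-8601 timestamps sort correctly as strings, so a plain sort suffices.
timeline = sorted(wevtx + mft + wer)

for ts, source, desc in timeline:
    print(f"{ts}  {source:<18} {desc}")
```

The WER fault landing seconds after the file creation is the constellation doing the work: the EXE was launched, but it crashed, which is a very different finding than "evidence of execution" from any one artifact alone.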
By looking at multiple, disparate data sources, we can determine if what we <i>thought</i> was executed, what <i>appeared</i> to have been executed, was able to "survive". For example, I've seen malware launched, visible through EDR telemetry and log sources, that never succeeded. Each time it launched, it generated an error, per Windows Error Reporting. I've seen malicious installation processes (MSI files) fail to install. I've seen threat actors push out their ransomware EXE to multiple endpoints and run each instance, resulting in files on those systems being encrypted, but not be able to get the executable to run on the nexus endpoint; I've seen threat actors run their ransomware EXE multiple times with the "--debug" option, and the files on that endpoint were never encrypted.</div><div><br /></div><div>If you're going to continue to view single artifacts in isolation, then please understand the nature and nuance of the artifacts themselves. Thoroughly review (and understand) <a href="https://cyber.gouv.fr/sites/default/files/2019/01/anssi-coriin_2019-analysis_amcache.pdf">this research regarding AmCache</a>, as well as <a href="https://www.mandiant.com/resources/blog/caching-out-the-val">Mandiant's findings regarding ShimCache</a>. However, over the years, I've found it so much more straightforward to incorporate these artifacts into an overall analysis process, as it continually demonstrates the value of the individual artifacts, as well as provides insights into the intent and capabilities of the threat actor.</div><div><br /></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-31996089426286960192023-11-28T18:06:00.000-05:002023-11-28T18:06:47.803-05:00Roll-up<p>One of the things I love about the industry is that it's like fashion...given enough time, the style that came and went comes back around again. 
Much like the fashion industry, we see things time and again...just wait.</p><p>A good example of this is the <a href="https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/finger">finger</a> application. I first encountered finger toward the end of 1994,</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCznecKxHc8VW5dK3j0QWfw2QMCYZM8ow1AyRiFT5u8woHC-Iid2ystzxKxzHUePAY3976PP0xPMKBDsVh8ACbNM7LOxidGdoF5v88Drq1I26uVHijv50sHhxmH2pi3B7g91_GRrJp3-9rs2OVpR7dyTHVsgNtk8yP8H13qaqJd5nYNK4ong/s894/finger.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="872" data-original-width="894" height="195" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCznecKxHc8VW5dK3j0QWfw2QMCYZM8ow1AyRiFT5u8woHC-Iid2ystzxKxzHUePAY3976PP0xPMKBDsVh8ACbNM7LOxidGdoF5v88Drq1I26uVHijv50sHhxmH2pi3B7g91_GRrJp3-9rs2OVpR7dyTHVsgNtk8yP8H13qaqJd5nYNK4ong/w200-h195/finger.jpg" width="200" /></a></div>during my first 6 months in grad school. I was doing some extracurricular research, and came across a reference to finger as making systems vulnerable, but it wasn't clear why. I asked the senior sysadmin in our department; they looked at me, smiled, and walked away.<p></p><p>Jump forward about 29 years to just recently, and I saw <a href="https://www.huntress.com/blog/cant-touch-this-data-exfiltration-via-finger">finger.exe, on a Windows system, used for data exfiltration</a>. 
<a href="https://hyp3rlinx.altervista.org/advisories/Windows_TCPIP_Finger_Command_C2_Channel_and_Bypassing_Security_Software.txt">John Page/hyp3rlinx wrote an advisory</a> (published 2020-09-11) describing how to do this, and yes, from the client side, what I saw looked like it was taken directly from John's advisory.</p><p>What this means to us is that the things we learn may <i>feel</i> like they fade with time, but wait long enough, and you'll see them, or some variation, again. I've seen this happen with ADSs; more recently, the specific <a href="https://github.com/nmantani/archiver-MOTW-support-comparison/">MotW</a> variations have taken precedence. I've also seen it happen with shell items (i.e., the "building blocks" of LNK files, JumpLists, and shellbags), as well as with the OLE file format. You may think, "...man, I spent all that time learning about that thing, and now it's no longer used..."; wait. It'll come back, like bell bottoms.</p><p><i>Deleted Things</i><br />In DFIR, we often say that just because you delete something, that doesn't mean that it's gone. For files, Registry keys and values, etc., this is all very true.</p><p><b>Scheduled Tasks</b><br />A while back, I <a href="https://windowsir.blogspot.com/2020/01/developing-and-using-lessons-learned.html">blogged about an ops debrief call</a> that I'd joined, and listened to an analyst discuss their findings from their engagement. At the beginning of the call, they'd mentioned something, almost in passing, glossing over it like it was inconsequential; however, some research revealed that it was actually an extremely high-fidelity indicator based on specific threat actor TTPs.</p><p>In many instances, threat actors will create Scheduled Tasks as a means of persisting on endpoints. In fact, not too long ago, I saw a threat actor create two Scheduled Tasks for the same command; one to run based on a time trigger, and the other to run ONSTART. 
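Those two trigger types are visible right in the task's XML definition under System32\Tasks; a minimal sketch of pulling them out (the XML here is a hypothetical, stripped-down example, though the namespace matches the Task Scheduler schema, and ONSTART corresponds to a BootTrigger element):

```python
import xml.etree.ElementTree as ET

NS = "{http://schemas.microsoft.com/windows/2004/02/mit/task}"

# Hypothetical, stripped-down task definition with both trigger types.
task_xml = """<Task xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <Triggers>
    <TimeTrigger><StartBoundary>2023-10-05T12:00:00</StartBoundary></TimeTrigger>
    <BootTrigger/>
  </Triggers>
</Task>"""

def trigger_types(xml_text):
    """Return the trigger element names (TimeTrigger, BootTrigger, etc.)."""
    root = ET.fromstring(xml_text)
    triggers = root.find(f"{NS}Triggers")
    return [child.tag.replace(NS, "") for child in triggers]

print(trigger_types(task_xml))  # ['TimeTrigger', 'BootTrigger']
```

Two tasks for the same command with these two different trigger types is itself a pattern worth hunting for.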
</p><p>In the case this analyst was discussing, the threat actor had created a Scheduled Task on a Windows 7 (like I said, this was a while back) system. The task was for a long-running application; essentially, the application would run until it was specifically stopped, either directly or by the system being turned off. Once the application was launched, the threat actor deleted the Scheduled Task, removing the XML and binary task files; Windows 7 used a combination of the XML-format task files we see today on Windows 10 and 11 endpoints, as well as the binary *.job file format we saw on Windows XP.</p><p><b>Volume Shadow Copies</b><br />About 7 yrs ago or so, I <a href="https://windowsir.blogspot.com/2016/08/links-and-updates.html">published a blog post</a> that included a reference to a presentation from 2016, and to a Carbon Black blog post that had been published in August, 2015. The short version of what was discussed in both was that a threat actor performed the following:</p><p>1. Copied their malware EXE to the root of a file system.<br />2. Created a Volume Shadow Copy (VSC).<br />3. Mounted the VSC they'd created, and launched the malware/Trojan EXE from within the mounted VSC.<br />4. Deleted the VSC they'd created, leaving the malware EXE running in memory.</p><p>I tried replicating this...and it worked. Not a great persistence mechanism...reboot the endpoint and it's no longer infected...but fascinating nonetheless. What's interesting about this approach is that if the endpoint hadn't had an EDR agent installed, all a responder would have available to them by dumping process information from the live endpoint, or by grabbing a memory dump, is a process command line with a file path that didn't actually exist on the endpoint. </p><p><i>WSL</i><br />We've known about the Windows Subsystem for Linux (WSL) for a while. 
</p><p>Not too long ago, an <a href="https://dl.acm.org/doi/fullHtml/10.1145/3538969.3544439">academic paper addressing WSL2 forensics</a> was published illustrating artifacts associated with the installation and use of Linux distributions. The authors reference the use of RegRipper (version 3.0, apparently) in several locations, particularly when examining the System and Software Registry hives; for some reason, they chose to not use RegRipper to parse the AmCache.hve file. </p><p>Now, let's keep our eyes open for a similar paper on the Windows Subsystem for Android...just sayin'...</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-83391466957450199752023-11-10T12:47:00.000-05:002023-11-10T12:47:59.717-05:00Roll-up<p>I don't like checklists in #DFIR. </p><p>Rather, I don't like how checklists are used in #DFIR. Too often, they're used as a replacement for learning and knowledge, and looked at as, "...if I do just this, I'm good...". Nothing could be further from the truth, which is why even in November 2023, we still see analysts retrieving just the Security, Application, and System Event Logs from Windows 10 & 11 endpoints.</p><p>I'm also not a fan of lists in #DFIR. Rather than a long list of links with no context or insight, I'd much rather see just a few links with descriptions of how useful they are (or, they <i>aren't</i>, as the case may be...), and how they were incorporated into an analysis workflow.</p><p><i>SRUM DB</i><br /><a href="https://www.fancy4n6.com/docs/shanna-daly/as-seen-on/">Shanna Daly</a> recently shared some <a href="https://www.fancy4n6.com/posts/2023-11-05-srum_forensics/">excellent content regarding SRUMDB</a>, excellent in the sense that it was not only enjoyable to read, but it was thorough in its content, particularly regarding the fact that the database contents are written on an hourly basis. 
As such, this data source is not a good candidate for being included in a timeline, but it <i>is</i> an excellent pivot point.</p><p>This is where timelines and artifact constellations cross paths, and lay a foundation for validation of findings. Most analysts are familiar with ShimCache and AmCache artifacts, but many still mistakenly believe that these are "evidence of execution"; in fact, the recently published <i><a href="https://www.amazon.com/Windows-Forensics-Analyst-Field-Guide/dp/1803248475/">Windows Forensics Analysts Field Guide</a></i> states this, as well. So, what happens is that analysts will see an entry in either artifact for apparent malware and declare victory, basing their finding on that one artifact, in isolation. All either of these artifacts tells us definitively is that file existed on the endpoint; we need additional information, other elements of the constellation, to confirm execution. So, there's Prefetch files...unless you're examining a server. One place to pivot to for validation is the SRUM DB, which Shanna does a thorough job of addressing and describing. </p><p><i>Dev Drive</i><br />Grzegorz recently <a href="https://twitter.com/0gtweet/status/1720532496847167784">tweeted regarding Windows "dev drive"</a> (<a href="https://www.linkedin.com/posts/grzegorztworek_by-design-av-bypass-with-dev-drive-i-activity-7126825794135334912-pkvf">LinkedIn post here</a>), a capability that allows a developer to optimize an area of their hard drive for storage operations. Apparently, part of this allows the developer to "disallow" AV, which sounds similar to designating exclusions in Windows Defender. However, in this case, it <i>sounds</i> as if it's for all AV, not just Defender. 
</p><p>MS provides information on "dev drive", including describing how to <a href="https://learn.microsoft.com/en-us/windows/dev-drive/group-policy">enable it via GPO</a>.</p><p><i>Finger</i><br />I was doing some research recently for a blog post on the use of <i>finger.exe</i> for both file download, as well as exfil, and ran across a couple of very similar articles and posts, all of which seemed to be derived from<a href="https://hyp3rlinx.altervista.org/advisories/Windows_TCPIP_Finger_Command_C2_Channel_and_Bypassing_Security_Software.txt"> a single resource (from hyp3rlinx)</a>.</p><p>And yes, you read that right...the LOLBin/<a href="https://lolbas-project.github.io/">LOLBAS</a> <i>finger.exe</i> used for data exfil. When I was in graduate school and working on my master's thesis (late '95 through '96), I was teaching myself Java programming in order to facilitate data collection for my thesis. As part of my self-study, I wrote networking code to implement SMTP, finger, etc., clients on Windows (at the time, Windows 3.11 for Workgroups and Windows 95). However, at the time, I wasn't as focused on things like data exfil and digital forensics...rather, I was focused on implementing networking sockets and protocols to replicate various client applications. What's wild about this one is that I don't think I ever expected to see it "in the wild", but in October 2023, I did. </p><p>Actively used, "in the wild". </p><p>And to be quite honest, it's pretty freaking cool! 
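For anyone who has never looked at the protocol itself, a finger client is almost nothing: open TCP 79, send the query followed by CRLF, and read until the server closes the connection. A sketch (the host in the comment is a placeholder, not a real server):

```python
import socket

def finger_query(host: str, query: str = "", port: int = 79, timeout: float = 5.0) -> bytes:
    """Minimal finger (RFC 1288-style) client: send 'query\\r\\n', read until close."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(query.encode("ascii") + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# e.g., finger_query("finger.example.com", "user")  # placeholder host
```

The simplicity is exactly why it still works as a covert channel: there's no authentication, no structure, and the "query" can carry arbitrary data in either direction.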
</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjL7LX3KJL6PPd6fqmiUP-R-_Jm_HE5aRBk4za4yrP3yzfKResGxWTrZ00b4iEPpmuSAJ_ZlSZ73aZDnwGwymskFp2CaKgKk7jMbLige4BYl9eaxbofF90GAKXzNHNVCuE7AEwq3Y0zLCMl2Dwb7kijhmlQ7Xx4fjcW5ZNKGlYVanmbv7ChWg/s1248/apple_iie.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="930" data-original-width="1248" height="149" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjL7LX3KJL6PPd6fqmiUP-R-_Jm_HE5aRBk4za4yrP3yzfKResGxWTrZ00b4iEPpmuSAJ_ZlSZ73aZDnwGwymskFp2CaKgKk7jMbLige4BYl9eaxbofF90GAKXzNHNVCuE7AEwq3Y0zLCMl2Dwb7kijhmlQ7Xx4fjcW5ZNKGlYVanmbv7ChWg/w200-h149/apple_iie.jpg" width="200" /></a></div>Ancillary to this, something I've encountered/been thinking of for some time now is that there are things that have been around for years that have confounded current analysis and led to mistakes via assumptions. For example, about 40 or so years ago, I took a BASIC programming course (on the Apple IIe), and one of the first things we learned was preceding lines to be "commented out" with "REM". Commenting lines was part of the formal instruction; using "REM" as a "poor man's debugger" was part of the informal instruction. Anyway, I've seen "obfuscated" code that contained long strings of what looked like base64-encoded lines, only to see them preceded by "REM" or an apostrophe. And yet, instead of skipping those lines, some analysts have been bogged down trying to decode the apparent base64-encoded strings. <p></p><p>Another example is NTFS alternate data streams (ADSs). This NTFS file system artifact has been around since...well...NTFS, but there are more than a few analysts who haven't experienced them and aren't familiar with them. 
</p><p>The point of this isn't to point out shortcomings in training, education, experience, or knowledge; rather, it's that threat actors can use (and <i>have</i> used) something "old" with great success, because it's not recognized by current analysts. Think about it for a second...think DOS batch files are "lame" when compared to PowerShell or some more "modern" scripting languages? They may be, but they work, and work really well, in fact. There are two Windows Event Logs that PowerShell code can end up in, but batch files don't get "recorded" anywhere. Further, there are some pretty straightforward things you can do with DOS batch files that will not only work, but have the added benefit of confusing the crap out of "modern" analysts. </p><p>So, here's something to think about...there are a lot of different ways to perform data exfiltration as part of recon activities, but one that folks may not be expecting is to do so via <i>finger.exe</i>. Do you employ EDR technology, or have an MDR? If so, how often is <i>finger.exe</i> launched in your infrastructure? 
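Answering that question doesn't require anything fancy; if you can export process-creation telemetry (Sysmon Event ID 1, or your EDR's equivalent) as simple host/image pairs, a frequency count is a one-liner. A sketch, with the field names being my own illustrative assumptions rather than any product's schema:

```python
from collections import Counter

def lolbas_frequency(events, image_name="finger.exe"):
    """Count executions of a rarely-legitimate LOLBin, per host.

    `events` is any iterable of (hostname, image_path) pairs, e.g. exported
    from Sysmon Event ID 1 or EDR process telemetry. In most environments,
    finger.exe should essentially never run, so any hit is worth a look.
    """
    return Counter(
        host
        for host, image in events
        if image.lower().rsplit("\\", 1)[-1] == image_name
    )
```

If the count across your fleet over 30 days is zero, that's your baseline, and an alert rule costs you nothing in noise.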
Would it be a good idea to have a rule that simply monitors for the execution of that LOLBAS?</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-29444922743495506992023-10-09T07:38:00.003-05:002023-10-09T07:38:25.054-05:00Investigating Time Stomping<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFT5uYRTJIPuPJtsI36PYssw_Q1dwF-hhu0dBcY3i0qzGqL2hFEEBXEjb6yyDAIGh2syQ0EuGJewAL3l4E5r2NDIHVt4z29E3rS7pOsQRezhkKjgi9YMOl_2ByhDUtldMgMs0U0AADfqBuRKFR0RLwB2dZ2Qd0S4RuO0O2nBdJTSao8yEePg/s612/stomp.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="612" data-original-width="612" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFT5uYRTJIPuPJtsI36PYssw_Q1dwF-hhu0dBcY3i0qzGqL2hFEEBXEjb6yyDAIGh2syQ0EuGJewAL3l4E5r2NDIHVt4z29E3rS7pOsQRezhkKjgi9YMOl_2ByhDUtldMgMs0U0AADfqBuRKFR0RLwB2dZ2Qd0S4RuO0O2nBdJTSao8yEePg/w200-h200/stomp.jpg" width="200" /></a></div>Some analysts may be familiar with the topic of time stomping, particularly as it applies to the NTFS file system, and is <a href="https://www.inversecos.com/2022/04/defence-evasion-technique-timestomping.html">explained in great detail by Lina Lau in her blog</a>. If you're not familiar with the topic, give Lina's article a very thorough read-thru. This can be important, as threat actors have been observed modifying time stamps on the files they drop on endpoints, performing "<a href="https://attack.mitre.org/techniques/T1070/006/">defense evasion</a>" in order to avoid detection and inhibit root cause analysis (RCA). 
Keep in mind, however, that if your analysis includes a focus on developing artifact constellations rather than single artifacts taken in isolation, then the use of this technique will likely be much more evident during the course of your analysis.<p></p><p>Analysts may be less familiar with time stomping as it applies to Registry keys, also <a href="https://www.inversecos.com/2022/04/malicious-registry-timestamp.html">discussed in great detail by Lina in her blog</a>, <a href="https://twitter.com/errno_fail/status/1511758243000995843">discussed by Maxim on X</a>, as well as discussed by Shane McCulley and <a href="https://www.linkedin.com/in/kimberly-stone/">Kimberly Stone</a> in <a href="https://www.youtube.com/watch?v=HO0TbQHfYwg">their SANS DFIR Summit 2023 presentation</a>. During their presentation, Kimberly discussed (mentioned several times) using the Registry transaction logs to detect Registry key time stomping by examining intermediate states of a hive file, which <a href="https://dfir.ru/2018/11/19/exploring-intermediate-states-of-a-registry-hive-using-transaction-log-files/">Maxim discussed in his blog</a> in November, 2018. In a way, this technique is no different from examining intermediate states of the file system using the USN change journal.</p><p>All of these resources together provide a pretty thorough overview of time stomping activity; what it is, how it's achieved, etc. Lina's blog on time stomping files provides several means by which both $STANDARD_INFORMATION and $FILE_NAME attributes within an MFT record can be modified. 
I saw $STANDARD_INFORMATION attribute time stamps being modified quite often during the time I was engaged in PCI forensic investigations; in most cases, the time stamps were copied from kernel32.dll, so that the file dropped by the threat actor would appear to be part of normal system files.</p><p>Also during the presentation, Kimberly specifically discusses the <a href="https://github.com/strozfriedberg/notatin">StrozFriedberg open source library</a> "notatin", written in Rust, for parsing offline Windows Registry hive files. The library includes Python bindings, and <a href="https://github.com/strozfriedberg/notatin/releases/tag/v1.0.1">two utilities that ship as Windows binaries</a>. Kimberly provides examples of the utilities' use during the presentation, and it's definitely a library worth looking into if you're engaged in some form of Registry parsing and analysis.</p><p><i>Detecting Time Stomping/Detection Methodologies</i><br />During the presentation, Kimberly mentioned several times that determining time stomping of Registry keys should be a more regular part of the analysis process, and to some extent, I agree with her. However, this perhaps requires a level of understanding of the issue, as well as automation that may not necessarily be accessible to all organizations, as they may not be set up in a manner similar to StrozFriedberg. This level of parsing and tagging/decoration of data is not something you see in commercially available forensic suites, and as such, would require a bit of developer "heavy lifting" to get the automation <a href="https://twitter.com/errno_fail/status/1511758243000995843">described by Maxim</a> set up and enabled. 
This is not to say that it cannot be achieved; rather, fully exploiting the Windows Registry is not something we see in regular, normal practice, <a href="https://medium.com/@katyakandratovich/tryhackme-secret-recipe-room-writeup-1c229b2c66ab">either via CTFs</a> or via open reporting, and as such, I wouldn't think that this is something we see being part of regular processing anytime in the near future.</p><p>Another way to address the analysis issue, particularly (albeit not solely) when it comes to detecting this form of defense evasion, would be to subject the DF analysis culture to a dramatic shift, to where artifact constellations are developed and used as part of analysis on a regular basis, rather than viewing single artifacts in isolation, which appears to be more commonplace today.</p><p>In <a href="https://www.inversecos.com/2022/04/malicious-registry-timestamp.html">Lina's blog post</a>, she provides an attack demonstration, the first two steps focusing on the Run key within the Software hive. At approximately 15:37 in the presentation video, Kimberly provides an example of the Run key within the Software hive being "time stomped". When investigating incidents where the Run key, in either the Software or NTUSER.DAT hives, may have been time stomped, analysts should keep the <i>Microsoft-Windows-Shell-Core%4Operational.evtx</i> Windows Event Log file in mind; this log provides a record of when values within the Run and RunOnce keys are processed. If you subject this log to frequency of least occurrence analysis, you can identify those values that are "new" and have been processed only once, or even just a few times; comparing this to the full breadth of the log itself, you can determine when each value was processed for the first time. 
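The frequency of least occurrence analysis described above reduces to a very small amount of code once the Shell-Core Operational log records have been exported. A sketch, assuming you've already pulled the records out as (timestamp, value_name) tuples (the tuple shape is my assumption, not a tool's output format):

```python
from collections import defaultdict

def least_frequent_run_values(records):
    """Frequency-of-least-occurrence analysis over Run/RunOnce processing records.

    `records` is an iterable of (timestamp, value_name) tuples exported from
    Microsoft-Windows-Shell-Core%4Operational.evtx. Returns (value, count,
    first_seen) rows sorted by how rarely each value appears, so the "new"
    entries bubble to the top along with their first processing time.
    """
    seen = defaultdict(list)
    for ts, value in records:
        seen[value].append(ts)
    return sorted(
        ((value, len(times), min(times)) for value, times in seen.items()),
        key=lambda row: row[1],
    )
```

A Run value that legitimately belongs on the endpoint will have been processed at every login across the breadth of the log; one processed exactly once, yesterday, is where you start looking.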
Note that this log file can also assist analysts in tracking <a href="https://www.microsoft.com/en-us/security/blog/2022/10/27/raspberry-robin-worm-part-of-larger-ecosystem-facilitating-pre-ransomware-activity/">Raspberry Robin persistence</a>, as well.</p><p>In step 3 of the attack demonstration in her blog, Lina moved from the Run key to the Uninstall key, as this key has a number of subkeys. Not long after Lina's blog post was published, I wrote a RegRipper plugin in an attempt to implement the detection methodology, part 2; what I found was that there are a LOT of Registry keys with subkeys that have more recent LastWrite times than the key itself.</p><p><i>Conclusion</i><br />During my time in DFIR, I can't say that I've seen time stomping of Registry keys employed as a defense evasion technique, and so far, I haven't seen anything in public, open reporting to suggest that it has been used "in the wild". It's pretty clear from Lina's blog post that it's possible, but when conducting analysis thus far, I haven't seen any issues where the LastWrite time on the Run key, or any other key for that matter, did not align with the other artifacts within the constellation. 
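For reference, the subkey-versus-parent LastWrite comparison behind that RegRipper plugin (detection methodology, part 2) boils down to a simple check; the sketch below uses a plain dict of key paths rather than a real hive parser, purely for illustration:

```python
def flag_anomalous_keys(keys):
    """Flag keys whose newest subkey LastWrite is more recent than the key's own.

    `keys` maps a key path to a (lastwrite, [subkey paths]) tuple; timestamps
    can be anything comparable (e.g. epoch seconds, as a hive parser would
    provide). As noted above, this fires on a LOT of perfectly normal keys,
    so treat the results as leads, not findings.
    """
    flagged = []
    for path, (lastwrite, subkeys) in keys.items():
        newest_child = max((keys[s][0] for s in subkeys if s in keys), default=None)
        if newest_child is not None and newest_child > lastwrite:
            flagged.append(path)
    return flagged
```

Which is exactly why this check alone isn't a detection; it only becomes meaningful when correlated against the rest of the artifact constellation.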
However, to Kimberly's point, creating the necessary automation to detect the possible use of this technique, applied during the data parsing, normalization, and decoration phase, would mean that the possible use of the technique would be raised to the analyst's awareness (for further investigation) without requiring a manual examination of the data.</p><p><i>EndNote</i><br />Another possible means for detecting the use of this technique is to purposely configure your Windows endpoints to support this detection, by enabling the <i><a href="https://learn.microsoft.com/en-us/troubleshoot/windows-client/deployment/system-registry-no-backed-up-regback-folder">EnablePeriodicBackup</a></i> value in the Registry.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-46223851624166248862023-09-19T19:57:00.000-05:002023-09-19T19:57:58.595-05:00The State of Windows Digital Analysis, pt II<p>On the heels of <a href="http://windowsir.blogspot.com/2023/09/the-state-of-windows-digital-analysis.html">my previous blog post on this topic</a>, I read a report that, in a lot of ways, really highlighted some of the issues I mentioned in that earlier post. The <a href="https://www.binalyze.com/blog/new-idc-report-the-state-of-digital-forensics-and-incident-report-2023">recent IDC report from Binalyze</a> is telling, as it highlights a number of issues. One of those issues is that the average time to <i>investigate</i> an issue is over 26 days. As the report points out, this time could be greatly reduced through the use of automation, but I want to point out that not just any automation that is purchased from an external third party vendor is going to provide the necessary solution. 
Something I've seen over the years is that the single best source of intelligence comes from your own incident investigations, and what we know about our infrastructure and learn from our own investigations can help guide our needs and purchases when it comes to automation.</p><p>Further, making use of open reporting and applying indicators from those reports to your own data can be extremely valuable, although sometimes it can take considerable effort to distill actionable intelligence from open reporting. This is due to the fact that many organizations that are publishing open reports regarding incidents and threat actors do not themselves have team members who are highly capable and proficient in DF analysis; this is something we see quite often, actually. </p><p>Before I go on, I'm simply using the post that I mention as an example. This is <i>not</i> a post where I'm intent on bashing anyone, or highlighting any team's structure as a negative. I fully understand that not all teams will or can have a full staffing complement; to be quite honest, it may simply not be part of their business model, nor structure. The point of this post is simply to say that when mining open reporting for valuable indicators and techniques to augment our own, we need to be sure to understand that there's a difference between what we're reading and what we're assuming. We may assume that the author of one of the reports we're reading did not observe any Registry modifications, for example, where others may have. We may reach this conclusion because we make the assumption that the team in question has access to the data, as well as someone on their team to correctly interpret and fully leverage it. However, the simple truth may be that this is not the case at all. </p><p>So, again...this post is not to bash anyone. 
Instead, it's an attempt to bring awareness to where readers may fill gaps in open reporting with assumptions, and to not necessarily view some content as completely authoritative.</p><p>Consider <a href="https://redcanary.com/blog/raspberry-robin/">this blog post</a> originally published on 5 May 2022, and then most recently updated on 23 Mar 2023. I know through experience that many orgs, including ones I've worked for, will publish a blog post and then perhaps not revisit it at a later date, likely because they did not encounter any data on subsequent analyses that would lead to a modification in their findings.</p><p>Within the blog post, the section titled "Initial Access" includes the statement highlighted in figure 1.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfOtfU7nxeBLolODC2NERbZEMXGC-tsyCZYs1V2tG1U5I7SU5doxbSaKVdRKauhzoTNFr1fqgjkyudf6E8ic0cJPiANYfqKAomBb_BIuknruCq6xNb8fh1noxN-NbyznO9HDcGDlfGuaaq3ppbrSbetfy-P1ZjREusva9T5QYNZdNHx0BdoA/s759/rr1.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="94" data-original-width="759" height="50" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfOtfU7nxeBLolODC2NERbZEMXGC-tsyCZYs1V2tG1U5I7SU5doxbSaKVdRKauhzoTNFr1fqgjkyudf6E8ic0cJPiANYfqKAomBb_BIuknruCq6xNb8fh1noxN-NbyznO9HDcGDlfGuaaq3ppbrSbetfy-P1ZjREusva9T5QYNZdNHx0BdoA/w400-h50/rr1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 
1: Initial Access Entry</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p>This statement appears to (incorrectly) indicate that this activity happens automatically; however, simple testing (testing is discussed later in the post) will demonstrate that the value is created when the LNK file on the USB device is double-clicked. Or, you could look to<a href="https://www.wired.com/story/china-usb-sogu-malware/"> this recent Wired.com article</a> that talks about USB-borne malware, and includes the statement highlighted in figure 2.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0TmiJaerPd3JmTXIjPBs27QZuQqle-MLAoy0LPME9mqL4Y-ta5uqpHnQhg7jsb6W0NfTbxu_Iu2JUXUkLwaDzLza8zLAm98wSYM7U5MYVaotr6ZTdDr9a0Zbf1VGz3eRJsPwTkVnU2XU2BvHgr19kdxV-ASszKluWYjECnG05pSZRehDirg/s605/usb.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="192" data-original-width="605" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0TmiJaerPd3JmTXIjPBs27QZuQqle-MLAoy0LPME9mqL4Y-ta5uqpHnQhg7jsb6W0NfTbxu_Iu2JUXUkLwaDzLza8zLAm98wSYM7U5MYVaotr6ZTdDr9a0Zbf1VGz3eRJsPwTkVnU2XU2BvHgr19kdxV-ASszKluWYjECnG05pSZRehDirg/w400-h127/usb.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig 2: Excerpt from Wired article</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>The sections on "Command and Control" and "Execution" mention the use of MSIExec, but neither one mentions that the use of MSIExec results in MsiInstaller records being written to the Application Event Log, as described in <a href="https://www.huntress.com/blog/evolution-of-usb-borne-malware-raspberry-robin">this Huntress blog post</a>.</p><p>Figure 3 illustrates a portion of the section 
of the blog post that addresses "Persistence".</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB3M7pkjBtG0Xee7-2cgPa3SV5-EDIMCs7bEwegNcHD3uK4MRS0RG3mV77wpubbp_YrJtWU0dg1c5rDP_50tC7Tfl6KnZ1xOZKRWsz7tl34ewpamprwlfzAlkyjYI7R5XllI9dcG_cy3AE3exkOYmNFiCjKOY4FYwoCRy8ZqjJdjU77_F7uA/s769/rr2.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="139" data-original-width="769" height="73" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgB3M7pkjBtG0Xee7-2cgPa3SV5-EDIMCs7bEwegNcHD3uK4MRS0RG3mV77wpubbp_YrJtWU0dg1c5rDP_50tC7Tfl6KnZ1xOZKRWsz7tl34ewpamprwlfzAlkyjYI7R5XllI9dcG_cy3AE3exkOYmNFiCjKOY4FYwoCRy8ZqjJdjU77_F7uA/w400-h73/rr2.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 3: Persistence Section</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>As described in <a href="https://www.cybereason.com/blog/threat-alert-raspberry-robin-worm-abuses-windows-installer-and-qnap-devices">this Cybereason blog post</a>, the Raspberry Robin malware persists by writing a value to the RunOnce Registry key. When the value is read and deleted, the malware rewrites the value once it is executed, allowing the malware to persist across reboots and logins. This method of persistence is also described in <a href="https://www.microsoft.com/en-us/security/blog/2022/10/27/raspberry-robin-worm-part-of-larger-ecosystem-facilitating-pre-ransomware-activity/">this Microsoft blog post</a>. Otherwise, the malware would simply exist until the next time the user logged in. 
One should also note that "Persistence" is not mentioned in the MITRE ATT&CK table in the Appendix to the blog post.</p><p>Even though the blog post was originally written and then updated at least once over the course of ten months, there's a dearth of host-based artifacts, including those from MsiExec. Posts and articles published by others, between May 2022 and Mar 2023, on the same topic could have been used to extend the original analysis, and fill in some of the gaps. Further, the blog post describes a good deal of "testing" in a section of the same name, but doesn't illustrate host-based impacts that would have been revealed as a result of the testing. </p><p>Just to be clear, the purpose of my comments here is not to bash anyone's work or efforts, but rather to illustrate that while open reporting can be an invaluable resource for pursuing and even automating your own analysis, the value derived from the open reporting often varies depending upon the skill sets that make up the team conducting the analysis and writing the blog, article, or report. If there is not someone on the team who is familiar with the value and nuances of the Windows Registry, this will be reflected in the report. The same is true if there is not someone on the team with more than a passing familiarity with Windows host-based artifacts (MFT, Windows Event Log, Registry, etc.); there will be gaps, as host-based impacts and persistence mechanisms are misidentified or not even mentioned. We may read these reports and use them as a structure on which to model our own investigations; doing so will simply lead to similar gaps.</p><p>However, this does not diminish the overall value of pursuing additional resources, not just to identify a wider breadth of indicators but also to get different perspectives. 
But this should serve as a warning, bringing awareness to the fact that there will be gaps.</p><p>With respect to host-based impacts, something else I've observed is where analysts will 'see' a lot of failed login attempts in the Security Event Log, and assume a causal relationship to the successful login(s) that are finally observed. However, using tools like <a href="https://github.com/keydet89/Events-Ripper">Events Ripper</a>, something I've observed more than a few times is that the failed login attempts will continue well <i>after</i> the successful login, and the source IP address of the successful login does <i>not</i> appear in the list of source IP addresses for failed login attempts. As such, there is not a flurry of failed login attempts from a specific IP address, attempting to log into a specific account, and then suddenly, a successful login for the account, from that IP address. Essentially, there is no causal relationship that has been observed in those cases.</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-9518042.post-91234667430155533172023-09-12T18:44:00.001-05:002023-09-17T07:37:56.520-05:00The State of Windows Digital Analysis<p>Something that I've seen and been concerned about for some time now is the state of digital analysis, particularly when it comes to Windows systems. From open reporting to corporate blog posts and webinars, it's been pretty clear that there are gaps and cracks in the overall knowledge base when it comes to the incidents and issues being investigated. These "gaps and cracks" range from simple terminology misuse to misinterpreting single data points on which investigation findings are hung.</p><p>Consider <a href="https://www.inceptionsecurity.com/post/shimcache-a-crucial-tool-for-digital-forensics-and-incident-response">this blog post</a>, dated 28 April. There is no year included, but checking archive.org on 11 Sept 2023, there are only two archived instances of the page, from 9 and 15 June 2023. 
As such, we can assume that the blog post was published on 28 April 2023. </p><p>The post describes ShimCache data as being "a crucial tool" for DFIR, and then goes on...twice...to describe ShimCache entries as containing "the time of execution". This is incorrect, as the time stamps within the ShimCache entries are the file system last modification times, retrieved from the $STANDARD_INFORMATION attribute in the MFT record for the file (which is easily modified via "time stomping"). The nature of the time stamp can easily be verified by developing a timeline using just the two data sources (ShimCache, MFT).</p><p>The blog post also contains other incorrect statements, such as:<br /></p><p><i>Several tools are available for analyzing shimcache, including the Microsoft Sysinternals tool, sdelete...</i></p><p>The description of the <a href="https://learn.microsoft.com/en-us/sysinternals/downloads/sdelete"><i>sdelete</i> tool</a>, captured from the SysInternals site on 11 Sept 2023, is illustrated in figure 1.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3AVTvJL0ddEOrFn6ONoRBQazdjP_LkIOUND-rWnwTa_A3MUHnqt7jDfE-WIf-tw8KTIbvhvCOAUwLXkGPsG_nKFKRQpiJ0Tw0vqlfXiLApdyw9m7TXfxAK8S09jgi9PeKg_CL3VSJroo9VbxSFInEYPsDHbpOWWHNFmGt39DE7DAgbe1zfA/s663/sdelete.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="111" data-original-width="663" height="68" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3AVTvJL0ddEOrFn6ONoRBQazdjP_LkIOUND-rWnwTa_A3MUHnqt7jDfE-WIf-tw8KTIbvhvCOAUwLXkGPsG_nKFKRQpiJ0Tw0vqlfXiLApdyw9m7TXfxAK8S09jgi9PeKg_CL3VSJroo9VbxSFInEYPsDHbpOWWHNFmGt39DE7DAgbe1zfA/w400-h68/sdelete.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 
1: Sdelete tool description</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p>As you can see, the <i>sdelete</i> tool has nothing to do with "analyzing" ShimCache.</p><p>Suffice to say, there is a great deal more that is technically incorrect in the blog post, but there are two important items to note here. First, when searching Google for "shimcache", this blog post is the fourth entry on the first page of responses. Second, the blog post is from a company that offers a number of services, including digital forensics and incident response.</p><p>I'd published <a href="https://windowsir.blogspot.com/2023/04/program-execution.html">this blog post</a> the day prior (27 Apr 2023), listing references that describe ShimCache entries, as well as AmCache, and their context. One of the <a href="https://www.mandiant.com/resources/blog/caching-out-the-val">ShimCache references, from Mandiant</a>, from 2015, states (emphasis added):</p><p><i>It is important to understand there may be entries in the Shimcache that were<b> not actually executed.</b></i></p><div>There are a number of other free resources out there that are similarly incorrect, or even more so. For example, <a href="https://www.thedigitalforensics.com/windows-forensics/evidence-of-execution-shimcache">this article</a> was the tenth listing on the first page of results from the Google search for "shimcache". It was apparently published in 2019, and starts off by equating the ShimCache and AmCache artifacts. 
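Again, the timeline-based verification mentioned above (ShimCache plus MFT) is straightforward to express in code. The sketch below uses toy path-to-timestamp mappings as stand-ins for the two parsed data sources:

```python
def verify_shimcache_source(shimcache, mft_si_mtimes):
    """Cross-check ShimCache timestamps against MFT $STANDARD_INFORMATION mtimes.

    Both arguments map file path -> timestamp. On a real image, each ShimCache
    entry's timestamp should equal that file's $SI last-modified time; a match
    confirms the timestamp is a file system modification time, NOT a "time of
    execution". Returns any paths where the two sources disagree.
    """
    mismatches = {}
    for path, ts in shimcache.items():
        si = mft_si_mtimes.get(path)
        if si != ts:
            mismatches[path] = (ts, si)  # (shimcache time, $SI mtime or None)
    return mismatches
```

Run this once against a test image and the "ShimCache = execution time" claim falls apart on its own.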
Further, the title of the blog post incorrectly refers to the ShimCache as providing "evidence of execution", and the browser tab title for the page is illustrated in figure 2.</div><div><br /></div><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4GekHRVeLbRU5VuuPle5UyCK8zKlwCD55gG5-w-6Hn4U61MgD68_mVFMEckNgEmp77hSfYfpq-ETTN9TM4l86IeEaA6rTTAChsynjKD9Fu9RbDs9g_OJZA4ppqs-N-WM-WbGDYs63_OrV1F-urr1mfrvKNTwLC5YOtQhWErJ7E20r-hrgFQ/s214/shim.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="47" data-original-width="214" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4GekHRVeLbRU5VuuPle5UyCK8zKlwCD55gG5-w-6Hn4U61MgD68_mVFMEckNgEmp77hSfYfpq-ETTN9TM4l86IeEaA6rTTAChsynjKD9Fu9RbDs9g_OJZA4ppqs-N-WM-WbGDYs63_OrV1F-urr1mfrvKNTwLC5YOtQhWErJ7E20r-hrgFQ/s16000/shim.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 2: Browser Tab Title</i></td></tr></tbody></table><br /><div><br /></div><p><br /></p><p><br /></p><p>Similarly, artifact misinterpretation applies to AmCache entries. 
For example, <a href="https://www.securityartwork.es/2023/09/07/raspberry-robin-caso-real-de-analisis-forense/">this blog post</a> that discusses Raspberry Robin includes the following statement:</p><p><i>...it is possible to evidence the execution of msiexec with the user's Amcache.hve artifact.</i></p><p>Aside from the fact that there is apparently no such thing (that I'm aware of) as "the user's Amcache.hve artifact", multiple rounds of testing (<a href="https://dfir.ru/2018/12/02/the-cit-database-and-the-syscache-hive/">here</a>, <a href="https://www.ssi.gouv.fr/uploads/2019/01/anssi-coriin_2019-analysis_amcache.pdf">here</a>) have demonstrated that, similar to ShimCache, the AmCache data source can contain references to executables that were<i> not</i> actually executed. This clearly demonstrates the need to cease relying on single artifacts viewed in isolation to support findings, and a need to rely upon <a href="https://windowsir.blogspot.com/2023/04/on-validation-pt-iii.html">validation</a> via multiple data sources and artifact constellations.</p><p>I will say this, though...the blog post correctly identifies the malware infection chain, but leaves out one piece of clarifying, validating information. That is, when the <i>msiexec</i> command line is launched, a great place to look is the Application Event Log, specifically for MsiInstaller records, such as mentioned briefly in <a href="https://www.huntress.com/blog/evolution-of-usb-borne-malware-raspberry-robin">this Huntress blog post</a> regarding the same malware.</p><p>These are just a couple of examples, but remember, these examples were all found on the first page of responses when Googling for "shimcache". So, if someone's attended training, and wants to "level up" and expand their knowledge, they're likely going to start searching for resources, and a good number of the resources available are sharing incorrect information. </p><p>And the issue isn't just with these artifacts, either. 
Anytime we look to single artifacts or indicators in isolation from other artifacts or data sources, we're missing important context and we're failing to validate our findings. For example, while ShimCache or AmCache entries are incorrectly interpreted to illustrate evidence of execution, where are the other artifacts that should also be evident? Are there impacts of the execution on the endpoint, in the file system, Registry, or Windows Event Log? Or does the Windows Event Log indicate that the execution did not succeed at all, either due to an antivirus detection and remediation, or because the execution led to a crash?</p><p><i>So, What?</i><br />Why does any of this matter? Why does it matter <i>what</i> a DFIR blog or report says?</p><p>Well, for one, we know that the findings from DFIR engagements and reports are used to make decisions regarding the allocation (or not) of resources. Do we need more people, do we need to address our processes, or do we need another (or different) tool/product?</p><p>On 26 Aug 2022, the case of <i><a href="https://global.lockton.com/us/en/news-insights/travelers-v-ics-underscores-need-to-respond-carefully-to-cyber-insurance">Travelers Insurance vs ICS</a></i> was dismissed, with judgement in favor of Travelers. ICS had purchased a cyber insurance policy from Travelers, and as part of the process, included an MFA attestation signed by the CEO. Then, ICS was subject to a successful cyber attack, and when they submitted their claim, the DFIR report indicated that the initial means of access was via an RDP server that did<i> not</i> have MFA, counter to the attestation. As a result, Travelers sought, via the courts, to have the policy rescinded. And they succeeded. </p><p>This case was mentioned here to illustrate that, yes, what's in a DFIR report <i>is</i>, in fact, used and relied upon by someone to make a decision. 
Someone looked at the report, compared the findings to the policy documentation, and made the decision to file in court to have the policy rescinded. For Travelers, the cost of filing was clearly less than the cost of paying on the policy claim. <br /><br />What about DFIR report contents, and what we've got to look forward to, out on the horizon? On 21 Aug 2023, JD Work <a href="https://twitter.com/HostileSpectrum/status/1693479556756201915">shared this tweet</a>, which states, in part:</p><p><i>Threat actors monetizing nonpayment negotiations by issuing their own authored breach reporting...</i></p><p>Okay, wow. Yes, "wow", and that does seem like the next logical step in the development and growth of the ransomware economy. I mean, really...first, it was encrypt files and demand a ransom to be able to decrypt them. Then, it was, "oh, yeah, hey...we stole some sensitive data and we'll release it if you don't pay the ransom." During all of this "growth" (for want of a better term), we've seen reports in the media stating, "...<i>sophisticated</i> threat actor...", implying, "...there's nothing we could do in the face of overwhelming odds." So, it makes sense that the next step would be to threaten to release a report (with screen captures) that clearly demonstrated how access was achieved, which could have an effect on attestation documentation as part of the policy process, or impact the responding DFIR firm's findings.</p><p>But is this something that will ever actually happen? 
Well, there's <a href="https://www.linkedin.com/posts/dr-siegfried-rasthofer_now-snatch-published-it-on-their-blog-activity-7099265113529966592-qMK8/">this LinkedIn post</a> that contains the offering illustrated in figure 3.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs1uy3dwEXKjuYhnTm3GyIxOm3ZvOckHB2AgbrTJhsdSp_f3wzhXo1nzNYzWY-zNlMrzph0P5qzmERbzcj3jALNcjZLcEPJP5ktr6m-YzTVGU7gnKL6NXUDu2IB5CgJ9JhvI5qSQpsaf8EPSc6-3v8rnbgxszEM53t0eR3pMQwkvVgPK-FZw/s492/snatch.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="205" data-original-width="492" height="166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs1uy3dwEXKjuYhnTm3GyIxOm3ZvOckHB2AgbrTJhsdSp_f3wzhXo1nzNYzWY-zNlMrzph0P5qzmERbzcj3jALNcjZLcEPJP5ktr6m-YzTVGU7gnKL6NXUDu2IB5CgJ9JhvI5qSQpsaf8EPSc6-3v8rnbgxszEM53t0eR3pMQwkvVgPK-FZw/w400-h166/snatch.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 3: Snatch Ransom Note Offering</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>"We will give you a full access gaining report of the company". Given what Travelers encountered, what impact would such a report have on the policy itself, had the DFIR report not mentioned or described the means by which the threat actor accessed the ICS environment? Or, what impact would it have on the report issued by the DFIR firm recommended by the insurance provider?</p><p>But wait, there's more! In 2007, I was part of the IBM ISS X-Force ERS team, and we became "certified" to conduct PCI forensic investigations. At the time, we were one of 7 teams on the list of certified firms. 
Visa, the organization that initially ran the PCI Council, provided a structure for reporting that included a "dashboard" across the top of the report. This dashboard included several items, including the "window of compromise", or the time between the initial infection (as determined by the forensic investigation) and when the incident was addressed. This value provided a means for the PCI Council to determine fines for merchants; most merchants had an idea of how many credit cards they processed on a regular basis, even adjusted for holidays. As such, the "window of compromise" could be used as an indicator of how many credit card numbers were potentially at risk as a result of the breach, and help guide the Council when assessing a fine against the merchant.</p><p>In 2015, an analyst was speaking at a conference, describing a PCI forensic investigation they'd conducted in early summer, 2013. When determining the "window of compromise", they stated that they'd relied solely on the ShimCache entry for the malware, which they'd (mis)interpreted to mean, "time of execution". What they hadn't done was parse the MFT, and see if there was an indication that the file had been "time stomped" ($STANDARD_INFORMATION attribute time stamps modified) when it was placed on the system, which was something we were seeing pretty regularly at the time. 
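</p><p>The MFT check described above reduces to a simple comparison. The following is a minimal sketch, assuming the $STANDARD_INFORMATION and $FILE_NAME creation times have already been extracted by a full MFT parser (the sample values are hypothetical):</p>

```python
from datetime import datetime, timedelta

def possibly_timestomped(si_created, fn_created, tolerance=timedelta(seconds=1)):
    # $STANDARD_INFORMATION time stamps are easily modified via API calls;
    # $FILE_NAME time stamps are much harder to alter. An $SI creation time
    # well before the $FN creation time is a classic time stomping indicator
    # (an indicator, not proof -- corroborate with other artifacts).
    return si_created + tolerance < fn_created

# Hypothetical values parsed from a single MFT record:
si_create = datetime(2009, 6, 1, 12, 0, 0)   # $SI: makes the file look years old
fn_create = datetime(2013, 6, 10, 3, 22, 5)  # $FN: when the file was actually placed
print(possibly_timestomped(si_create, fn_create))  # True
```

<p>Even a check this simple, run alongside the ShimCache data, would have put the "window of compromise" at weeks rather than years.</p><p>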
As a result, the "window of compromise" was determined (and reported) to be 4 yrs, rather than 3 weeks, all because the analyst had relied on a single artifact, in isolation, particularly one that they'd misinterpreted.</p><p><i>Breakin' It Down</i><br />The fundamental issues here are that (a) analysts are not thinking in terms of validating findings through the use of multiple data sources and artifact constellations, and (b) that accountability is extremely limited.</p><p>Let's start with that first one...what we see a good bit of in open reporting is analysts relying on a single artifact, in isolation and very often misinterpreted, to support their findings. From above, refer back to <a href="https://www.securityartwork.es/2023/09/07/raspberry-robin-caso-real-de-analisis-forense/">this blog post</a>, which includes the statement shown in figure 4.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4hl5OdHioWzHbm_kKBE_79andBmbCr7negss2XSEjp93reckaLqqwkX143spIMZtsST3Hgm0vHZyrh6Wx8ox5IPRRTJRFMmaKGBnKtECvjJ8DtODdhPHlMjTfGpieB3igqta4lCVKfhBtfXf3Zvh1CGwSrQerj84mZaGZSRInFwcMFkINmA/s770/rr.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="177" data-original-width="770" height="93" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4hl5OdHioWzHbm_kKBE_79andBmbCr7negss2XSEjp93reckaLqqwkX143spIMZtsST3Hgm0vHZyrh6Wx8ox5IPRRTJRFMmaKGBnKtECvjJ8DtODdhPHlMjTfGpieB3igqta4lCVKfhBtfXf3Zvh1CGwSrQerj84mZaGZSRInFwcMFkINmA/w400-h93/rr.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig. 
4: Blog statement regarding evidence of execution</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>First, as I've tried to illustrate through this post, this artifact is regularly misinterpreted, as are others. Further, it is clearly an artifact viewed in isolation; when <i>msiexec</i> commands are run, we would expect to find <a href="https://docs.logrhythm.com/devices/docs/evid-1040-1042-msiinstaller">MsiInstaller records</a> in the Application Event Log, so there are corroborating artifacts within the constellation. These can be very useful in identifying the start of the installation attempt, as well as the success or failure of the installation, as was observed in <a href="https://www.huntress.com/blog/evolution-of-usb-borne-malware-raspberry-robin">this Raspberry Robin blog post</a> from Huntress.</p><p>With respect to "accountability", what does this mean? When a DFIR consulting firm responds to an incident, who reviews the work, and in particular, the final work product? A manager? I'm not a fan of "peer" reviews because what you want is for your work to be reviewed by someone with more knowledge and experience than you, not someone who's on the same level.</p><p>Once the final product (report, briefing, <i>whatevs</i>) is shared with the customer, do <i>they</i> question it? In many cases I've seen, no, they don't. After all, they're relying on the analyst to be the "expert". I've been in the info- and cyber-security industry for 26 yrs, and in that time, I've known of one analyst who was asked by two different customers to review reports from other firms. That's it. I'm not saying that's across the hundreds of cases I've worked, but rather across the thousands of cases worked across all of the analysts, at all of those places where I've been employed.</p><p>The overall point is this...forensic analysis is <i>not</i> about guessing. 
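</p><p>The MsiInstaller corroboration described above is straightforward to automate once the Application Event Log has been parsed. A sketch over hypothetical, already-parsed records (the record layout is illustrative; event IDs 1040 and 1042 are per the LogRhythm reference above):</p>

```python
def msi_corroboration(records, start_ts, end_ts):
    # MsiInstaller event IDs 1040 (install started) and 1042 (install ended)
    # bracket an installation attempt. Match on the (provider, ID) pair,
    # never on the event ID alone -- IDs are not unique across providers.
    wanted = {("MsiInstaller", 1040), ("MsiInstaller", 1042)}
    return [r for r in records
            if (r["provider"], r["event_id"]) in wanted
            and start_ts <= r["timestamp"] <= end_ts]

# Hypothetical parsed Application Event Log records:
app_log = [
    {"provider": "MsiInstaller", "event_id": 1040, "timestamp": 1100},
    {"provider": "MsiInstaller", "event_id": 1042, "timestamp": 1130},
    {"provider": "ESENT",        "event_id": 1040, "timestamp": 1110},  # same ID, different source
]
print(len(msi_corroboration(app_log, 1090, 1140)))  # 2
```

<p>An empty result for a window where an <i>msiexec</i> execution is being claimed is exactly the kind of signal that the single artifact is being misread.</p><p>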
If you're basing your findings on a single artifact, in isolation from everything else that's available, then you're guessing. Someone...whoever is receiving your findings...needs correct information on which to base <i>their</i> decisions, and from which they're going to allocate resources...or not, as the case may be. If the information that analysts are using to keep themselves informed and up-to-date is incorrect, or it's not correctly understood, then this all has a snowball effect, building through the collection, parsing, and analysis phases of the investigation, ultimately crashing on the customer with the analyst's report.</p><p><i><b>Addendum</b></i><br />I ran across <a href="https://twitter.com/DFS_JasonJ/status/1701549072790777923">this tweet</a> from @DFS_JasonJ recently, and what Jason stated in his tweet struck a chord with me. The original tweet that Jason references states that it's "painful to watch" the cross examination...I have to agree, I didn't last 90 seconds (the video is over 4 min and 30 seconds long). Looking through more of his tweets, it's easy to see that Jason has seen other issues with folks "dabbling" in DF work; while he considers this "dangerous" in light of the impact it has (and I agree), I have to say that if the findings are going to be used for something important, then it's incumbent upon the person who's using those results to seek out someone qualified. I've seen legal cases crumble and dissolve because the part-time IT guy was "hired" to do the work.</p><p>Further, as <a href="https://redcanary.com/blog/sec-rules-cybersecurity/">Red Canary recently pointed out</a>, the SEC is now requiring organizations to "show their work"; how soon before that includes specifics of investigations in response to those "material" breaches? 
Not just "what was the root cause?", but also, "...show your work...".</p><p><i><b>Addendum, 17 Sept:</b></i><br />Something I've seen throughout my time in the industry is that we share hypotheticals that eventually become realities. In this case, it became a reality pretty quickly...following publication of the <a href="https://www.vox.com/technology/2023/9/15/23875113/mgm-hack-casino-vishing-cybersecurity-ransomware">ransomware attack against MGM</a>, someone apparently from the ALPHV group <a href="https://gist.githubusercontent.com/BushidoUK/20b81335c6729dc8e0b5997ca83fa35f/raw/a0697117e905f5094e7a5feae928806b2ba65b20/gistfile1.txt">shared a statement clarifying how they went about the attack</a>. Of course, always take such things with a grain of salt, but there you have it, folks...it's already started.<br /><br />On a side note, <a href="https://www.bloomberg.com/news/articles/2023-09-13/caesars-entertainment-paid-millions-in-ransom-in-recent-attack">Caesars was also attacked, apparently by the same group, and paid the ransom</a>.</p>Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-9518042.post-77791803274154772902023-08-28T20:08:00.001-05:002023-08-28T20:08:07.479-05:00Book Review: Effective Threat Investigation for SOC AnalystsI recently had an opportunity to review the book,<i><a href="https://www.amazon.com/Effective-Threat-Investigation-SOC-Analysts/dp/1837634785/"> Effective Threat Investigation for SOC Analysts</a></i>, by<br /> Mostafa Yahia. 
<div><br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPHb3UZKBuzzpuoyV9TcX56QTK2BPjKg86UxAWXSMPfR1rjDp0e5ZPSmds1E2uUx2_tjs8NJ3dtzbvf4c8j_GnudRiRpyEIQhUKxkdsOmYKldL0aGrrd7mY7bOX-VrbT6cfNhCR1IqY5UwIg5XgqLA9OVQ8epFBiDbNPqnQ-SjNk7NSbmmUQ/s1000/mostafa.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1000" data-original-width="811" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPHb3UZKBuzzpuoyV9TcX56QTK2BPjKg86UxAWXSMPfR1rjDp0e5ZPSmds1E2uUx2_tjs8NJ3dtzbvf4c8j_GnudRiRpyEIQhUKxkdsOmYKldL0aGrrd7mY7bOX-VrbT6cfNhCR1IqY5UwIg5XgqLA9OVQ8epFBiDbNPqnQ-SjNk7NSbmmUQ/w163-h200/mostafa.jpg" width="163" /></a></div></div><div>Before I start off with my review of this book, I wanted to share a little bit about my background and perspective. I started my grown-up "career" in 1989 after completing college. I had a "technical" (at the time) role in the military, as a Communications Officer. After earning an MSEE degree, I left active duty and started consulting in the private sector...this is to say that I did <i>not</i> stay with government work. I started off by leading teams conducting vulnerability assessments, and then over 22 yrs ago, moved over to DFIR work, exclusively. Since then, I've done FTE and consulting work, I ran a SOC, and I've written 9 books of my own, all on the topic of digital forensic analysis of Windows systems. Hopefully, this will give you some idea of my "aperture".</div><div><br /></div><div>My primary focus during my review of Mostafa's book was on parts 1, 2, and 4, as based on my experience I am more familiar with the material covered in part 2. 
My review covers about 7 of the 15 listed chapters, not because I didn't read them, but because I wanted to focus more on areas where I could best contribute.</div></div><div><br /></div><div>That being said, this book serves as a good introduction to materials and general information for those looking to transition to being a SOC analyst, or those newly-minted SOC analysts, quite literally in their first month or so. The book addresses some of the data sources that a SOC analyst might expect to encounter, although in my experience, this may not always be the case. However, the familiarization is there, with Mostafa demonstrating examples of each data source addressed in the book, and how to use them.</div><div><br /></div><div>I would hesitate to use the term "effective" in the title of the book, as most of what's provided in the text is introductory material, and should be considered intended for familiarization, as it does not lay the groundwork for what I would consider "effective" investigations.</div><div><br /></div><div><i>Some recommendations, specifically regarding the book:</i></div><div>Be consistent in terminology; refer to the Security Event Log as the "Security Event Log", rather than as "Security log file", the "security event log file", etc. </div><div><br /></div><div>Be clear about what settings are required for various records and fields within those records to be populated. </div><div><br /></div><div>Take more care in the accuracy of statements. For example, figure 6.5 is captioned "PSReadline file content", but the name of the file is "consolehost_history.txt". Figure 7.1 illustrates the Run key found within the Software hive, but the following text of the book incorrectly states that the malware value is "executed upon user login".</div><div><br /></div><div><i>Some recommendations, in general:</i></div><div>Windows event IDs are not unique; as such, records should be referred to by their source/ID pair, rather than solely by event ID. 
While the <i>Microsoft-Windows-Security-Auditing/4624</i> event refers to a successful login, the <i><a href="https://kb.eventtracker.com/evtpass/evtPages/EventId_4624_Microsoft-Windows-EventSystem_61656.asp">EventSystem/4624</a></i> event refers to something completely different. </div><div><br /></div><div>What's logged to the Security Event Log is heavily dependent upon the audit configuration, which is accessible via Group Policies, the Local Security Editor, or <i>auditpol.exe</i>. As such, many of the Security Event Log event IDs described may not be available on the systems being examined. Just this year (2023), I've examined systems where successful login events were not recorded.</div><div><br /></div><div>Analysts should not view artifacts (in this case, Windows Event Log records, Run key values, etc.) in isolation. Instead, viewing artifacts or data sources together, based on time stamps (i.e., timelining) from the beginning of an investigation, rather than manually developing a timeline in a spreadsheet at the end of an investigation, is a much more efficient, comprehensive, and effective process.</div><div><br /></div><div>Multiple data sources, including multiple Windows Event Logs, can provide insight into various activities, such as user logins, etc. Do not focus on a single artifact, such as an event ID, but instead look to develop artifact constellations. For example, with respect to user logins, looking to the Security Event Log can prove fruitful, as can the LocalSessionManager/Operational, User Profile Service, Shell-Core/Operational, and TaskScheduler Event Logs. 
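<br /><br />The multi-log approach can be as simple as normalizing each record to a common tuple and sorting on the time stamp; a sketch, where the sources follow the logs named above but the event IDs and descriptions are illustrative only:<br />

```python
# Normalize records from several Windows Event Logs into one micro-timeline.
events = [
    ("TaskScheduler",                   129,  1694000120, "task process created"),
    ("Security",                        4624, 1694000100, "successful logon"),
    ("LocalSessionManager/Operational", 21,   1694000101, "session logon succeeded"),
    ("Shell-Core/Operational",          9707, 1694000110, "shell launched command"),
]

timeline = sorted(events, key=lambda e: e[2])  # oldest first
for source, event_id, ts, desc in timeline:
    print(f"{ts}  {source}/{event_id}  {desc}")  # source/ID pair, never ID alone
```

<br />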
Developing a timeline at the beginning of the investigation is a great process for developing, observing, and documenting those constellations.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-61373067327471858942023-08-27T06:44:00.001-05:002023-08-27T06:44:28.620-05:00The Next Step: Integrating Yara with RegRipper, pt II<p>Okay, so we've <a href="http://windowsir.blogspot.com/2023/08/integrating-yara-with-regripper.html">integrated Yara into the RegRipper workflow</a>, and created "YARR"...now what? The capability is great...at least, I think so. The next step (in the <a href="http://windowsir.blogspot.com/2023/08/the-next-step-expanding-regripper.html">vein of the series</a>) is really leveraging it by creating rules that allow analysts to realize this capability to its full potential. To take advantage of this, we need to consider the types of data that might be present, leverage what may already be available, and apply it to the use case (data written to Registry values) at hand.</p><p><i>Use Available Rules</i><br />A great place to start is by <a href="https://gist.github.com/RachidAZ/d3c469cde5cf2498a451a7b9ba251b2d">using what is already available</a>, and applying those to our use case; however, not everything will apply. For example, using a Yara rule for something that's never had any indication that it's been written to a Registry value likely won't make a great deal of sense, at least not at first. That doesn't mean that something about the rule won't be useful; I'm simply saying that it might make better sense to start by looking at <a href="https://attack.mitre.org/techniques/T1027/011/">what's being written to Registry values</a> first, and start there.</p><p>It certainly makes sense to use what's already available as a basis for building out your rule set to run against Registry values. 
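</p><p>Even before writing formal Yara rules, a first pass over value data can flag a few generic indicators of this kind; a stdlib Python sketch (the checks and thresholds are illustrative, not exhaustive):</p>

```python
import base64, re

def triage_value_data(data: bytes):
    """First-pass checks over a registry value's raw data; flags are
    leads for review, not verdicts."""
    flags = []
    if data[:2] == b"MZ":            # 0x4D 0x5A -- string or binary value alike
        flags.append("possible PE at offset 0")
    if re.search(rb"https?://", data, re.IGNORECASE):
        flags.append("embedded URL")
    if data.lstrip().startswith(b"TVqQ"):   # base64 of "MZ\x90" -- encoded PE
        s = data.lstrip()
        try:
            if base64.b64decode(s + b"=" * ((-len(s)) % 4))[:2] == b"MZ":
                flags.append("base64-encoded PE")
        except Exception:
            pass
    return flags

print(triage_value_data(b"TVqQAAMAAAAEAAAA"))  # ['base64-encoded PE']
```

<p>From hits like these, what keeps showing up can be promoted into proper Yara rules, and what turns out to be "normal" can be excluded.</p><p>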
Some of the things I've been looking for in what's already out there and available include indications of PE files within Registry values (using techniques that don't rely solely on the data beginning with "MZ"); encoded data; strings that include "http://" or "https://"; etc. From these more general cases, we can start to build a corpus of what we're seeing, and begin excluding those things that we determine to be "normal", and highlighting those things we find to be "suspicious" or "bad".</p><p><i>Writing Rules</i><br />Next, we can <a href="https://yara.readthedocs.io/en/stable/writingrules.html">write our own rules</a>, or modify existing ones, based on what we're seeing in our own case work. After all, this was the intention behind RegRipper in the first place, that analysts would see the value in such a tool, not just as something to run but as something to grow and evolve, to add to and develop.</p><p>For writing your own rules, there are loads and loads of resources available, one of the most recent from Hexacorn, with his <a href="https://www.hexacorn.com/blog/2023/08/26/writing-better-yara-rules-in-2023/">thoughts on writing better Yara rules in 2023</a>. Also be sure to check out <a href="https://github.com/Neo23x0/YARA-Style-Guide">Florian's style guide</a>, as well as any number of repositories you can find via Google.</p><p>Speaking of Florian, did you know that Thor already has <a href="https://thor-manual.nextron-systems.com/en/latest/usage/custom-signatures.html#thor-yara-rules-for-registry-detection">rules for Registry detection</a>? Very cool!</p><p><i>What To Look For</i><br />Okay, writing RegRipper plugins and Yara rules is a bit like detection engineering. Sometimes you have to realize that you won't be able to write the <i>perfect</i> rule or detection, and that it's best to write several detections, starting with a "brittle" detection that, at first glance, is trivial to avoid. 
I get it..."...a good hacker will change what they do the next time...". Sure. But do you know how many times I've seen encoded Powershell used to run <a href="https://github.com/Hackndo/lsassy">lsassy</a>? The only thing that's changed is output file names; most of the actual command doesn't change, making it really easy to recognize. Being associated with SOCs for some time now, and working DFIR investigations as a result, there are a lot of things we see repeatedly, likely due to large campaigns, tool reuse, etc. So there <i>is</i> value in a brittle detection, particularly given the fact that it's really easy to write (and <a href="https://www.hexacorn.com/blog/2023/08/26/writing-better-yara-rules-in-2023/">document</a>), usually taking no more than a few seconds, and if we leverage automation in our processes, it's not something we have to remember to do.</p><p><i>So, What?</i><br />Why is adding Yara capability to RegRipper important or valuable?</p><p>The simple fact is that processes are created, executed, and measured by people. As such, they <i>will</i> break or fail.</p><p>In <a href="https://militaryjusticeforall.com/1991/02/">1991, AFOSI was investigating</a> one of their own in the death of his wife. During an interrogation, floppy disks collected from the Sgt's home were placed on the table, and he grabbed some of them and cut them up with shears. This story is usually shared to demonstrate the service's capability to recover data, even when the disk is cut up, which is exactly what was done in this case. However, over the years, few have questioned <i>how</i> the Sgt was able to get the shears into the interrogation room; after all, wouldn't he have been patted down at least once?</p><p>The point is that a process (frisking, checking for hidden weapons) is created, executed, and managed/measured by people, and as a result, things will be missed, steps skipped, things will go unchecked. 
So, by incorporating this capability into RegRipper, we're providing something that many may <i>assume</i> was already done at another point or level, but may have been missed. For example, the <i>findexes.pl</i> plugin looks for Registry values that start with "MZ", but what if the value is a binary data type (instead of a string), and the first two bytes are "4D 5A" instead? Yara provides a fascinating (if, in some cases, overlapping) capability that, when brought to bear against Registry value data, can be very powerful. With one rule file, you can effectively look for executables (in general), specific executables or shell code, encoded data, etc.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-4149939431775286632023-08-22T19:42:00.001-05:002023-08-22T19:42:19.336-05:00Yet Another Glitch In The Matrix<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaRqvwBAvhsxv0YVTRgLWVdgx_kQWW5k6ob4Gr9_iwgtwdpc-a2MgHW5lYrt-k-BFxbcrgk16u6UXk9308V215TXdiK564JxTkAFRCbYhCaQ2cwRUFxG0CCg1WFPTrMQcTN0vYDoqcnhcenCrgO7TlTQJA3lsl8Jl51VAJmwwFjDhFYnggig/s870/matrix.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="489" data-original-width="870" height="180" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgaRqvwBAvhsxv0YVTRgLWVdgx_kQWW5k6ob4Gr9_iwgtwdpc-a2MgHW5lYrt-k-BFxbcrgk16u6UXk9308V215TXdiK564JxTkAFRCbYhCaQ2cwRUFxG0CCg1WFPTrMQcTN0vYDoqcnhcenCrgO7TlTQJA3lsl8Jl51VAJmwwFjDhFYnggig/s320/matrix.jpg" width="320" /></a></div>It's about that time again, isn't it? It's been a while since we've had a significant (or, depending upon your perspective, radical) shift in the cyber crime eco-system, so maybe we're due. <div><br /></div><div>What am I referring to? Back in 2019, we saw a shift in ransomware attacks, where threat actors began not only stealing data, but leveraging it as "double extortion". 
Up to that point, the "plan" had been to encrypt files, maybe post something publicly to let the world know that this organization had been impacted by your efforts, and hope to collect a ransom. The shift to "double extortion" moved things to a whole new level, and while there's some discussion as to whether this started in November 2019 with Maze, or if it actually started sooner...some have anecdotal information but cannot point to any public statement to the effect...the fact remains that the game shifted. In the ensuing four years, we've seen quite a bit of damaging information released, and maybe none was more disturbing than what was discussed in the <a href="https://www.kare11.com/article/news/local/ransomware-criminals-dumping-kids-files-online-after-minneapolis-school-hacks/89-733c04e2-2eb0-4743-b1bc-60198953049f#:~:text=Complete%20sexual%20assault%20case%20folios,medical%20records%20and%20discrimination%20complaints.">ransomware attack against Minnesota Public Schools</a>, in Feb, 2023. The school system refused to pay the ransom, and the stolen data was released publicly...a brief reading of what was in the dump gives you a brief look into the devastation caused by the release of this data.</div><div><br /></div><div>Something else to consider is the impact of the insurance industry on the cyber security market, a topic that was <a href="https://www.danielwoods.info/assets/pdf/WBWS2023_LessonsLost_USENIX.pdf">covered extensively by Woods, et al, at Usenix</a>. The insurance industry itself has, in recent years, started pulling back from the initial surge of issuing policies to developing more stringent requirements and attestations that impact the premium and policy coverage.</div><div><br /></div><div><i>So, what?</i><br />Okay, so, what? Who cares? 
Well,<a href="https://twitter.com/HostileSpectrum/status/1693479556756201915"> here's the change, from @HostileSpectrum</a>:<br /><div><br /></div><div><i>Threat actors monetizing nonpayment negotiations by issuing their own authored breach reporting...</i></div><div><br /></div><div>Yes, and that's exactly what it sounds like. Not convinced? Check out <a href="https://www.linkedin.com/posts/dr-siegfried-rasthofer_now-snatch-published-it-on-their-blog-activity-7099265113529966592-qMK8/?utm_source=share&utm_medium=member_desktop">this LinkedIn post</a> from Dr. Siegfried Rasthofer, regarding the Snatch ransomware actors; "...contact us...you will get a full access gaining report...".</div><div><br /></div><div>I know what you're thinking...so, what? Who cares? The org files a claim with their insurance provider, the provider recommends a DFIR firm, that DFIR firm issues their report and it'll just say that same thing, right?</div><div><br /></div><div>Will it?</div><div><br /></div><div>What happens if counsel tells the DFIR firm, "...no notes, no report..."? <a href="https://twitter.com/RootkitRanger/status/1693379457824735623">RootkitRanger gets it</a>, sees the writing on the wall, as it were. No notes, no report, then how is the DFIR analyst held accountable for their work?</div><div><br /></div><div>Why is this important? <br /><br />For one, there are <a href="https://www.lw.com/admin/upload/SiteAttachments/War-Exclusion-Developments-in-Cyber-Insurance-Policies.pdf">insurance provider war exclusions</a>, and they can have a significant impact on organizations. 
<a href="https://www.securityweek.com/court-rules-in-favor-of-merck-in-1-4-billion-insurance-claim-over-notpetya-cyberattack/#:~:text=Cyber%20Insurance-,Court%20Rules%20in%20Favor%20of%20Merck%20in,Insurance%20Claim%20Over%20NotPetya%20Cyberattack&text=The%20Superior%20Court%20of%20New,by%20the%202017%20NotPetya%20cyberattack.">Merck filed their $1.4B (yes, "billion") claim</a> following the 2017 NotPetya attack, and the judgement wasn't decided until May, 2023, almost 6 yrs later. What happens when attribution based on the DFIR firm's work and the decision made by counsel goes one way, and the threat actor's report goes another?</div><div><br /></div><div>We also need to consider what happens when attestations submitted as part of the process of obtaining a policy turn out to be incorrect. After all, <a href="https://global.lockton.com/us/en/news-insights/travelers-v-ics-underscores-need-to-respond-carefully-to-cyber-insurance">Travelers was able to rescind a policy</a> <i><b>after</b></i> a successful attack against one of their policy holders. So, in addition to having to clean up and recover, ICS did not have their policy/safety net to fall back on. What happens if the threat actor says, "...we purchased access from a broker, and accessed an RDP server with no MFA...", and the org, like ICS, had attestations stating that MFA was in place?</div></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-79403028678636775402023-08-13T17:12:00.002-05:002023-08-13T17:12:19.856-05:00Integrating Yara with RegRipper<p>A lot of writing and training within DFIR about the Registry refers to it as a database where configuration settings and information is maintained. There's really a great deal of value in that, and there is also so much more in the Registry than just "configuration information". Another aspect of the Registry, one we see when discussing "fileless" malware, is its use as a storage facility. 
As Prevailion stated in their DarkWatchman write-up:</p><p><i style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px;">Various parts of DarkWatchman, including configuration strings and the keylogger itself, are stored in the registry to avoid writing to disk.</i></p><p>The important part of that statement is that the Registry is and can be used for storage. Yes, you can store configuration settings, as well as information that can be used to track user activity, connected devices, connected networks, etc., but the Registry can just as easily be used to store other information, as well. As we can see from the <a href="https://attack.mitre.org/techniques/T1027/011/">Fileless Storage page</a> (part of the <a href="https://attack.mitre.org/matrices/enterprise/">MITRE ATT&CK</a> framework) there are quite a few examples of malware that use the Registry for storage. In some cases, the keys and values are specific to the malware, whereas in other instances, the storage location within the hive file itself may change depending upon the variant, or even selections made through a builder. Or, as with Qakbot, the data used by the malware is stored in values beneath a randomly-named key.</p><p>As such, it makes sense to leverage Yara, which is great for detecting a wide range of malware, via RegRipper. One way to find indications of malware that writes to the Registry, specifically storing its configuration information, is by creating a timeline and looking for keys being added or updated during the time of the presumed compromise. Another is to comb through the Registry, looking for indications of malware, shell code, encoded commands, etc., embedded within values, and this is where leveraging Yara can really prove to be powerful.</p><p>One example would be to look for either the string "MZ" or the bytes "4D 5A" at offset 0. 
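</p><p>In Yara terms, that offset-0 check is typically written as the condition <span style="font-family: courier;">uint16(0) == 0x5A4D</span>, which matches the raw bytes whether the value was written as a string or as binary data. The same logic, plus one fallback indicator, sketched as stdlib Python stand-ins (the signature list is illustrative only):</p>

```python
# Stand-in checks mirroring simple Yara conditions.
CHECKS = {
    "pe_header": lambda d: d[:2] == b"MZ",        # Yara: uint16(0) == 0x5A4D
    "pdb_path":  lambda d: b".pdb" in d.lower(),  # debug path surviving in a payload
}

def match_value_data(data: bytes):
    # Return the names of all checks that fire against this value's data.
    return sorted(name for name, test in CHECKS.items() if test(data))

print(match_value_data(b"MZ\x90\x00" + b"c:\\build\\payload.pdb"))  # ['pdb_path', 'pe_header']
```

<p>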
If malware is stored in the Registry with those bytes stripped, then searching for other strings (PDB strings) or sequences of bytes would be an effective approach, and this is something at which Yara excels. As such, <a href="http://windowsir.blogspot.com/2023/08/the-next-step-expanding-regripper.html">leveraging Yara to extend RegRipper</a> makes a great deal of sense.</p><p>Maybe we can call this "YARR", in honor of International Talk Like A Pirate Day.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-44337100443958454822023-08-12T08:04:00.000-05:002023-08-12T08:04:19.299-05:00The Next Step: Expanding RegRipper<p>I thought I'd continue <a href="http://windowsir.blogspot.com/2023/07/the-next-step-vhd-files-and-metadata.html">The Next Step series</a> of blog posts with something a little different. This "The Next Step" blog post is about taking a tool such as RegRipper to "the next step", which is something I started doing in August, 2020. At first, I added MITRE ATT&CK mapping and Analysis Tips, to provide information as to why the plugin was written, and what an analyst should look for in the plugin output. The Analysis Tips also served as a good way of displaying reference URLs, on which the plugin may have been based. While the reference URLs are very often included in the header of the plugin itself, it's often simply much easier to have them available in the output of the plugin, so that they follow along and are available with the data and the case itself. </p><p>So, in the spirit of the blog series, here are a couple of "the next steps" for RegRipper...</p><p><i>JSON<br /></i>Something I've looked at doing is creating plugins that provide JSON-formatted output. This was something a friend asked for, and more importantly, was willing to discuss. 
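</p><p>The core of the JSON idea is simple: map each parsed value/data pair into a record and serialize it. A Python sketch of the output shape (RegRipper plugins are written in Perl, so this is purely illustrative):</p>

```python
import json

# Hypothetical entries parsed from an AppCompatCache value: (path, timestamp)
entries = [
    ("C:\\Users\\Joker\\DCode.exe", "2019-02-15 04:59:23"),
    ("C:\\Windows\\SysWOW64\\OneDriveSetup.exe", "2018-04-11 23:34:02"),
]

records = [{"value": path, "data": ts} for path, ts in entries]
print(json.dumps({"plugin": "appcompatcache", "entries": records}, indent=1))
```

<p>One nice side effect of emitting through a JSON library rather than hand-built strings is that escaping (backslashes in Windows paths, quotes) is handled for you, so the output always parses.</p><p>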
When he asked about the format, my concern was that I would not be able to develop a consistent output format across all plugins, but during the discussion, he made it clear that that wasn't necessary. I was concerned about a consistent, normalized format, and he said that as long as it was JSON format, he could run his searches across the data. I figured, "okay, then", and gave it a shot. I started with the <i>appcompatcache.pl</i> plugin, as it meant just a little bit of code that repeated the process over and over again...an easy win. From there, I modified the <i>run.pl</i> plugin, as well.</p><p>An excerpt of sample output from the <i>appcompatcache_json.pl</i> plugin, run against the System hive from <a href="https://www.ashemery.com/dfir.html#Challenge5">the BSides Amman image</a> appears as follows:</p><p><span style="font-family: courier;"><span><span> </span>{<br /></span> "value": "C:\Users\Joker\DCode.exe",<br /> "data": "2019-02-15 04:59:23"</span></p><p><span style="font-family: courier;"> },</span></p><p><span style="font-family: courier;"><span> {<br /></span> "value": "C:\Windows\SysWOW64\OneDriveSetup.exe",<br /> "data": "2018-04-11 23:34:02"</span></p><p><span style="font-family: courier;"><span> },<br /></span> ]<br />}</span></p><p>So, pretty straightforward. Now, it's a process of expanding to other plugins, and having the ability with the tool itself to select those plugin output types the analyst is most interested in.</p><p><i>Yara</i><br />Something else I've looked at recently is adding the ability to incorporate <a href="https://virustotal.github.io/yara/">Yara</a> into RegRipper. While I was at Nuix, I worked with David Berry's developers to get some <a href="https://www.nuix.com/resources/step-step-guide-adding-yara-and-regripper-nuix-workstation">pretty cool extensions</a> added to the product; one for RegRipper, and one for <a href="https://github.com/Nuix/Yara-Integration">Yara</a>. 
I then thought to myself, why not incorporate Yara into RegRipper in some manner? After all, doing things like detecting malware embedded in value data might be something folks wanted to do; I'm sure that there are a number of use cases.</p><p>Rather than integrating Yara <i>into</i> RegRipper, I thought, why re-invent the wheel when I can just access Yara as an external application? I could take a similar approach to the one used by the Nuix extensions, and run Yara rules against value data. And, it wouldn't have to be <i>all</i> value data, as some value types won't hold base64-encoded data. In other instances, I may only want to look at binary data, such as searching for payloads, executables, etc. Given that there are already plugins that recursively run through a hive file looking at values and separating the actions taken based on data type, it should be pretty easy to gin up a proof of concept.</p><p>And, as it turns out, it was. I used the <i>run.pl</i> plugin as a basis, and instead of just displaying the data for each value, I ran some simple Yara rules against the contents. 
One of the rules in the rule file appears as follows:</p><p><span style="font-family: courier;">rule Test3<br />{<br /> strings: <br /> $str1 = "onedrive" nocase<br /> $str2 = "vmware" nocase</span></p><p><span style="font-family: courier;"> condition: <br /> $str1 or $str2<br />}</span></p><p>Again, very simple, very straightforward, and simply designed to produce some output, nothing more.</p><p>The output from the plugin appears as follows:</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIUBpyUHxfqZ_VPRFnaSjf5IV2We8NL3Mfu8ipUgE9CZy2NzDqORfZ52VpmTf_TESiRtKIYu91TlPbBU0_9OZp6UrO37LojStgbp8KMoSd_dhcrD7SUAS6CT7ZoHAS3cDWkfR5lb3bNLzgmnuTH4MJ7vo0BHhh1Fx9osuExKX-D56_xU5Jkw/s560/yar1.png" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="289" data-original-width="560" height="206" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIUBpyUHxfqZ_VPRFnaSjf5IV2We8NL3Mfu8ipUgE9CZy2NzDqORfZ52VpmTf_TESiRtKIYu91TlPbBU0_9OZp6UrO37LojStgbp8KMoSd_dhcrD7SUAS6CT7ZoHAS3cDWkfR5lb3bNLzgmnuTH4MJ7vo0BHhh1Fx9osuExKX-D56_xU5Jkw/w400-h206/yar1.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Run_yara.pl output</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>Now, I'll admit up front...this is just a proof of concept. However, it illustrates the viability of this technique. Now, using something like the<i> sizes.pl</i> plugin, I can remove the code that determines the number of values beneath a key, and focus on just scanning the value data...all of it. Or, I can have other plugins, such as <i>clsid.pl</i>, comb through a specific key path, looking for payloads, base64-encoded data, etc. 
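</p><p>The overall flow, walking the values and handing each value's data off to the external scanner, can be sketched as follows. This is hypothetical Python rather than the plugin's actual Perl, and it assumes the yara command-line scanner is available on the PATH:</p>

```python
import os
import subprocess
import tempfile

# Value types worth scanning; types such as REG_DWORD won't hold
# base64-encoded data or embedded payloads. (Illustrative list only.)
SCAN_TYPES = {"REG_SZ", "REG_EXPAND_SZ", "REG_MULTI_SZ", "REG_BINARY"}

def should_scan(value_type):
    """Filter out value types that can't hold interesting data."""
    return value_type in SCAN_TYPES

def scan_value(rule_file, value_name, raw_data):
    """Write the value data to a temp file and run yara against it,
    rather than re-inventing the matching engine inside the tool."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(raw_data)
        path = tmp.name
    try:
        proc = subprocess.run(["yara", "-s", rule_file, path],
                              capture_output=True, text=True)
        hits = proc.stdout.strip()
        return (value_name, hits) if hits else None
    finally:
        os.unlink(path)
```

<p>The loop that a plugin such as <i>run.pl</i> already performs simply gains one extra call per value. 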
Why re-write the code when there are Yara rules available that do such a great job,<i> and</i> the rules themselves may already be part of the analyst's kit?</p><p>Techniques like this are pretty powerful, particularly when faced with threat actor TTPs, such as those described by <a href="https://app.box.com/s/56sxvcquj6jzrs1vsl7fccws3j7qqihn">Prevailion in their DarkWatchman write-up</a>: </p><p><i>Various parts of DarkWatchman, including configuration strings and the keylogger itself, are stored in the registry to avoid writing to disk.</i></p><p>So, with things like configuration strings and an entire keylogger written to the Registry, there are surely various ways to go about detecting the presence of these items, including key LastWrite times, the size of value data, and now, the use of Yara to examine data contents.</p><p>As with the JSON output plugins, now it's simply a matter of building out the capability, in a reasonable fashion. </p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-37504496102208820172023-08-07T19:10:00.001-05:002023-08-07T19:10:45.153-05:00Ransomware Attack Timeline<p>The morning of 1 Aug, I found an article in my feed about a ransomware attack against a municipality; specifically, Montclair Township in New Jersey. Ransomware attacks against municipalities are not new, and they can be pretty devastating to staff and citizenry, as well, and this is even <i>before</i> a ransom is paid. 
Services are impacted or halted, and I've even seen reports where folks lost considerable amounts of money because they weren't able to get the necessary documentation to process the purchase of a home.</p><p>I decided to dig a bit and see what other information I could find regarding the issue, and the earliest mention I could find was <a href="https://www.montclairnjusa.org/news/headlines/cyber_incident_hits_township_computer_systems">this page from 6 June 2023</a> that includes a link to a video message from the mayor, informing everyone of a "cyber incident". I also found <a href="https://www.northjersey.com/story/news/essex/montclair/2023/06/06/montclair-township-nj-cyber-attack-mayor/70295593007/">this article from North Jersey dot com</a>, reporting on the mayor's message. Two days later, <a href="https://therecord.media/montclair-new-jersey-cyberattack">this article from The Record</a> goes into a bit more detail, including a mention that the issue was not related to the <a href="https://www.huntress.com/blog/moveit-transfer-critical-vulnerability-rapid-response">MOVEit vulnerability</a>.<br /></p><p>At this point, it looks as if the incident occurred on 5 June 2023. As anyone who's investigated a ransomware attack likely knows, the fact that files were encrypted on 5 June likely means that the threat actor was inside the environment well prior to that...2 days, 2 weeks, 2 months. Who knows. If access was purchased from an initial access broker (IAB), it could be completely variable, and as time passes and artifacts oxidize and decay, as the system just goes about operating, it can become harder and harder to determine that initial access point in time. </p><p>What caught my attention on 28 July was <a href="https://montclairlocal.news/cyber-attack-on-montclair-township-led-to-450k-settlement/">this article from Montclair Local News</a>, which had a bit of a twist on the terminology used in such incidents; rather, should I say, <i>another</i> twist. 
Yes, these are referred to many times as a "cyber incident" or "cyber attack" without specifically identifying it as ransomware, and in this instance, there's this quote from the article (emphasis added):</p><p><i>To end a cyber attack on the Montclair Township’s IT Department, the township’s insurer negotiated a <b>settlement</b> of $450,000 with the attackers.</i></p><p>It's not often that a ransom paid is referred to as a <i>settlement</i>, at least not in articles I've read. I can't claim to have seen all articles associated with such "cyber attacks", but at the same time, I haven't seen this turn of phrase to refer to the ransom payment.</p><p>Shortly after the above statement, the article goes on to say:</p><p><i>Some data belonging to individual users remains to be recovered...</i></p><p>Ah, yes...a lot of times you'll see folks say, "...don't trust the bad guy...", because there's no guarantee that even paying for the decryptor that you'll get <i>all</i> of your data back. This statement would lead us to believe that this is one of those instances.</p><p>Another quote from the article:</p><p><i>To guard against future incidents, the township has installed the most sophisticated dual authentication system available to its own system and it is currently up and running.</i></p><div>Does this say something about the attack? Does this indicate that the overall issue, the initial infection vector, was thought to be some means of remote access that was not protected via MFA?</div><p>Something else this says about the issue - 5 June to 28 July is almost 8 full weeks. 
Let's be conservative here and assume that the reporting on 28 July is not up-to-the-minute, and say that the overall time between encrypted files and ransom (or "settlement") paid is 7 weeks; that's still a long time to be down, not being able to operate a business or a government, and this doesn't even address the impacted services, and the effect upon the community.</p><p>I know that one article mentions a "settlement" or what's more commonly known as a ransom payment, but where does that money really come from?<br /><br />Municipalities (local governments, police departments, etc.) getting ransomed is nothing new. <a href="https://www.tapinto.net/towns/montclair/sections/essex-county-news/articles/newark-city-hall-computers-hacked-with-ransomware-7">Newark was hit with ransomware in April 2017</a>; yes, that was 6 yrs ago, multiple lifetimes in Internet years, but shouldn't that have served as a warning?</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-28920557519697876672023-08-02T17:03:00.001-05:002023-08-03T06:44:33.192-05:00Events Ripper UpdatesI uploaded several new updates to <a href="https://github.com/keydet89/Events-Ripper">Events Ripper</a> plugins in the repo recently...<div><br /></div><div><i>defender.pl</i> - added a check for <a href="https://kirannr.com/2020/07/02/__trashed/#:~:text=Every%20time%20Windows%20Defender%20AV,the%20file%20name%20and%20hash.">event ID 2050 records</a>, indicating that Defender uploaded a sample (as opposed to <a href="https://windowsir.blogspot.com/2022/10/events-ripper.html">event ID 2051 records</a>, indicating that a file could<i> not</i> be sent). The plugin now displays the file path and name, as well as the hash.</div><div><br /></div><div><i>filter.pl</i> - added a check for <a href="https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/event-5152">event ID 5152 records</a>, indicating that WFP blocked a packet. 
The plugin displays the source IP address of the packet, but not the direction (usually inbound), ports, or destination IP address (which will likely be the endpoint itself, or a broadcast address). When looking at this output, keep the endpoint IP address in mind...you may see connection attempts from other subnets, or from public IP addresses.</div><div><br /></div><div><i>scm.pl</i> - added a check for <i><a href="https://www.manageengine.com/products/eventlog/kb/event-7031-service-crash-help.html">Service Control Manager/7031</a></i> service crash events. I did not keep event ID 7039 events; I had actually added them, but found that they produced a lot of noise, and if you're creating a timeline and using Events Ripper as it was intended, you'll still get a pivot point from the new capability.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-9518042.post-9111814679695275372023-07-28T18:10:00.000-05:002023-07-28T18:10:07.109-05:00Thoughts on Tool Features, pt II<p>My <a href="http://windowsir.blogspot.com/2023/07/thoughts-on-tool-features.html">previous post on this topic</a> addressed an apparent dichotomy (admittedly, based on a limited aperture) of thought between vendors and users when it comes to features being added to commercial forensic suites. This was the result of a road I'd <a href="http://windowsir.blogspot.com/2022/11/regripper-value-proposition.html">started down a while back</a>, trying to see if there was any value proposition to RegRipper at all (turns out, no, there isn't), and was simply the most recent pit stop along that road.</p><p>From <a href="http://windowsir.blogspot.com/2023/07/thoughts-on-tool-features.html">my previous post</a>, there seem to be two thoughts on the matter, or maybe it's more correct to say that only two basic positions or perspectives were shared. 
One is from vendors, some of whom rely on users to express the need for features; as such, vendors are relying on users to drive their own investigations, requesting or expressing the need for features as they arise. I'm sure vendors have ways of prioritizing those requests/needs, based on limited/available resources.</p><p>The other perspective is from users of those forensic tools; the views expressed are that if a vendor finds or 'sees' a cool feature, it should simply be added to the tool or framework, regardless of whether anyone actually wants it or not. To me, this seems to be users relying on vendors to drive investigations.</p><p>While I tend to agree more with the vendor perspective, as a user of forensic tools (albeit not commercial products), it seems that I have a different perspective from most users. There have been a few times in my career where I've had to deal with the issue of tool features; some examples follow:</p><p><i>CCN Searches</i><br />Circa 2009-ish, Chris Pogue and I were working a PCI forensic investigation, and found that while Discover and JCB cards had reportedly been processed by the merchant, none were appearing in our searches. As our team had recently grown, we had settled on EnCase (then owned by Guidance Software) as the tool used for all of the PCI-specific searches (CCNs, hashes, file names, etc.); this tool was commonly understood, and we wanted accuracy and consistency above all else.</p><p>We began digging into the issue, even going to the brands and getting test data. We kept reducing the scope of our testing, even to the point of, "here's a file with 3 Discover CCNs in it and nothing else...find all Discover CCNs", and each time, got no hits on either Discover or JCB card numbers. We determined that the <i>IsValidCreditCard()</i> built-in function, which was a closed-source "black box" for us, did not consider either brand of card number valid. 
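</p><p>(For context, a brand-specific check of this sort generally pairs a prefix-and-length pattern with a Luhn check. The sketch below is illustrative Python covering only Discover, not the actual EnScript function we ended up with.)</p>

```python
import re

# Illustrative Discover pattern: 16 digits, prefixes 6011, 644-649, or 65.
# A hypothetical stand-in for one of the seven regexes; not the EnScript.
DISCOVER_RE = re.compile(r"\b6(?:011|4[4-9]\d|5\d\d)\d{12}\b")

def luhn_ok(number):
    """Standard Luhn mod-10 check over a string of digits."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])          # odd positions from the right, as-is
    for d in digits[1::2]:             # even positions are doubled
        d *= 2
        total += d - 9 if d > 9 else d
    return total % 10 == 0

def find_discover(text):
    """Return candidate Discover numbers that also pass the Luhn check."""
    return [m for m in DISCOVER_RE.findall(text) if luhn_ok(m)]

# The widely published Discover test number passes; a near-miss does not.
print(find_discover("test data: 6011111111111117 and 6011111111111111"))
```

<p>Validating candidates with a Luhn check after the regex match is what keeps the false positive rate manageable. 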
We needed this capability <i>now</i>, and we were getting nowhere with the vendor, so we reached out to someone who was known for their EnScripting ability and asked for help. We ultimately ended up with a new function, one that included 7 regexes, that we used to overload the built-in function. Borrowing a trick I learned from one of my professors during my undergrad studies, Chris and I wrote up a quick email to everyone, stating, "copy this file to this location within the EnCase installation, etc.", and got everyone up and running at a consistent level pretty quickly. Yes, the process for searching for CCNs was a tad slower, but it was more accurate now, it was consistent across the team, and Chris and I could easily help troubleshoot any issues folks had in running the new function. After all, the only thing that could really go wrong at that point was that the file we sent was copied to the wrong location.</p><p>This all boiled down to the fact that we recognized that the tool we were using did not have the functionality we needed, even though, across the industry, everyone assumed that it did. We knew what we needed, we knew we needed it immediately, and we knew that we needed to ensure accuracy and consistency across the team. To do so, we sought help where we felt we needed it, and were more than willing to accept, "yeah, okay, we missed that..." along the way, in order to get to our goal. </p><p><i>Carving Records</i><br />In 2012, I attended the DC3 conference in Atlanta, and after spending several days there and returning home, I ran into a fellow attendee at the baggage claim for our flight. We knew of each other within the community, and I had no idea we'd been on the same flight. As we were waiting, we got to chatting, and they mentioned that they'd been "looking at" a system for about 3 months, and were stuck. I wanted to know more, and they said they were stuck trying to figure out how to recover cleared Event Logs. 
As soon as they said that, I said that I'd send them a tool I'd written as soon as I got home...which I did. It was late when I got home, so I kissed my wife, turned my computer on, and sent them the tool along with instructions, as promised.</p><p>In this case, someone working for the government had a situation, an investigation, where they were stalled "looking at" the image...for 3 months. Now, I have no doubt that that didn't amount to 8 hrs a day, 5 days a week, etc., but that it was more on-and-off over the span of 3 months. However, during that entire time, I had a tool that I'd developed that would have been perfect for the situation, one that I'd not only developed but used to great effect. In fact, the tool was originally written because an Event Log on an XP system had 2 more records within the logical .evt file than were reported by the API, which I'd been able to recover by basically writing a carver. As such, what would work on a logical .evt file would work equally well on unallocated space extracted from an image via <a href="http://www.sleuthkit.org/sleuthkit/man/blkls.html">blkls</a>.</p><p><i>RegRipper</i><br />Something I <i>really</i> like about <a href="https://github.com/keydet89/RegRipper3.0">RegRipper</a>, though it wasn't something I specifically thought about during the original design, is how easily it can be updated (the same applies to <a href="https://github.com/keydet89/Events-Ripper">Events Ripper</a>). For example, not long ago, I was looking into <a href="https://nssm.cc/">nssm</a>, the "non-sucking service manager", and ran across the <a href="https://nssm.cc/usage">usage page</a>. This page describes a number of Registry values and keys that are used by <i>nssm</i>, as well as by tools like it, such as MS's <i><a href="https://learn.microsoft.com/en-us/troubleshoot/windows-client/deployment/create-user-defined-service">srvany.exe</a></i> (which nssm replaces). 
Both provide persistence and privilege escalation (Admin to LOCALSYSTEM) capabilities, and I knew that I'd never remember to specifically look for some of these keys and values...so I wrote a RegRipperPro plugin to do it for me; the output of a test is illustrated in the figure below.</p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg_vuVB_BCX2CO01ewtGtcZbRmzLk0NGuZhy9-grErxGyZ-MlHi1o9a7NJ1S6kfmFSF5x99xJ_nvI6kwTHsTBmi3lawA1etVB2fj5_ai1jx11dDMspmbMVUMTOR8UBjEANHET6WJPX0biJKIOeeCUthzpPM9ro2I9vNFcC5Uo57B_OaVo75w/s883/nssm.png" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="275" data-original-width="883" height="125" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhg_vuVB_BCX2CO01ewtGtcZbRmzLk0NGuZhy9-grErxGyZ-MlHi1o9a7NJ1S6kfmFSF5x99xJ_nvI6kwTHsTBmi3lawA1etVB2fj5_ai1jx11dDMspmbMVUMTOR8UBjEANHET6WJPX0biJKIOeeCUthzpPM9ro2I9vNFcC5Uo57B_OaVo75w/w400-h125/nssm.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Fig: Appenvironment.pl Plugin Output</i></td></tr></tbody></table><br /><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p>So, as you're reading this, you're probably thinking to yourself, "...but I can't write my own plugin." I get it. <a href="http://journeyintoir.blogspot.com/2015/08/minor-updates-to-autorip.html">Some folks don't bother thinking, "I can't...",</a> and just do it, but I get it..Perl isn't in common usage any longer, and programming simply intimidates some folks. That's cool...it's fine. </p><p>Because all you need to do is ask for help, and maybe provide some sample data to test against. 
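</p><p>To give a sense of how little logic such a plugin actually needs, here's the core check sketched out. This is illustrative Python rather than the plugin's Perl, the value names are taken from the nssm usage page, and the sample data below is entirely hypothetical:</p>

```python
# Value names associated with nssm/srvany-style wrapped services, per the
# nssm usage page; the dict below stands in for the parsed contents of the
# Services\<name>\Parameters keys from a System hive.
WATCH_VALUES = {"Application", "AppParameters", "AppDirectory",
                "AppEnvironment", "AppEnvironmentExtra"}

def check_services(services):
    """services: {service_name: {value_name: data}}; flag any value
    that suggests an nssm- or srvany-style wrapped service."""
    findings = []
    for svc, params in services.items():
        for name, data in params.items():
            if name in WATCH_VALUES:
                findings.append((svc, name, data))
    return findings

# Hypothetical sample data, standing in for an actual hive.
sample = {
    "updatersvc": {"Application": "C:\\Tools\\payload.exe"},
    "W32Time": {},
}
print(check_services(sample))
```

<p>The point isn't the language; it's that the check is a handful of lines once someone hands you sample data. 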
I've consistently turned working plugins around in about an hour, when folks have asked (which hasn't been very often).</p><p>The plugin I wrote (output in the figure above) took me just a couple of hours in the evening...I had to stop working to make dinner, take the dog out, etc., but it was pretty straightforward to complete. Now, I don't have to specifically remember these values (and the included key) when conducting an investigation; it's trivial to run all available plugins against various hives, so even a year from now, if I never revisit the issue that led to the plugin again, I'll still have the "corporate knowledge" and the query will still be part of my process.</p><p>So, I have something of a different perspective regarding tool features and who drives an investigation. My perspective has always been that as the analyst/investigator, it was my job to drive the investigation. Now, that doesn't prevent me from asking for assistance, or seeking someone else's perspective, but ultimately, the tools I use and the functionality I need are up to me, as an investigator, and I'm not going to structure my investigation around the functionality provided by a tool (or not, as the case may be).</p>Unknownnoreply@blogger.com3