Comments on Windows Incident Response: On Validation, pt III

---

Anonymous - 2023-05-22 09:29

Not that you need another stark example of a county forensic lab failing to validate findings before someone's life was ruined: https://dailyvoice.com/new-jersey/mercer/news/authorities-drop-charges-after-cybersleuth-proves-ramsey-responder-was-wrongfully-accused/835192/

This is a potential miscarriage of justice caused by an examiner failing to follow up on a Dropbox account.

---

Anonymous - 2023-04-26 10:59

"opens the door to where we are now, which is an over-reliance on single artifacts on which to base a finding"

Not having seen the actual forensic reports you're critiquing via these case studies, in practice I'm not convinced that this problem is as widespread as your examples suggest. The examples feel a bit forced, a carefully constructed strawman to be taken down by the author. In my comment I was hoping to encourage you to move beyond dogma and provide the reader with a nuanced framework to navigate their cases by.

In any report I've written or peer reviewed, "findings" are confidence-grounded statements based on clusters of available artifacts (which appears to be what you mean by "validated"). If an analyst gave me a draft report where a conclusion or compromise window was being extrapolated from a single artifact, it would be flagged during my review of that report.

This could be a maturity thing, where some firms just haven't automated enough evidence collection and pre-processing to internally hold each other to that standard. Bad firms do exist, as do bad analysts and analysts having a bad day, but that does not make this a systemic failing of our industry. (My cases were almost exclusively multi-system cyber intrusions where the client had already declined full forensic analysis during scoping.)

"threat actors really do a good job of actually cleaning up and hiding their activity"

Even when they do a bad job, a destructive attack itself does enough damage to the evidence to hinder confidence in an investigation: encrypted prefetch artifacts aren't parsed, volume shadows were deleted to prevent recovery, modified/access times across the whole filesystem are mangled, etc. Beyond that, event logs often do get removed. In these cases analysts do need to stitch a lot of the story together from very shaky evidence; but to your point, it should never be for lack of effort.

Most of the modern forensic triage collection/analysis frameworks attempt a kitchen-sink collection and processing approach. If anyone is out there hand-jamming forensic triage in 2023, they need to be called out on a lot more than single-artifact validation.

---

H. Carvey (https://www.blogger.com/profile/08966595734678290320) - 2023-04-26 06:16

Anonymous,

I appreciate your comment, but I have to disagree.

"... where destruction of evidence were expected..."

Expected? Yes, I agree, we see it...sometimes. But how often do threat actors really do a good job of actually cleaning up and hiding their activity? To say "where destruction of evidence were expected" is, IMHO, to concede.

"PCI is a very specific case type..."

While some specific activities are required of PCI cases, the cases themselves are really no different from any other cyber crime case.

"... what we should be exploring isn't whether or not we should validate..."

See, I disagree. We should always attempt to validate, to the point where we can automate a great deal of what's needed to do so: data collection, data parsing and normalization, data decoration and enrichment, and then presenting the data to the analyst for analysis.

Deciding whether or not to validate opens the door to where we are now, which is an over-reliance on single artifacts on which to base a finding.

---

Anonymous - 2023-04-25 20:10

In the end it really comes down to the case, the value of the particular artifact to the overall findings, and the client's goals and resources.

PCI is a very specific case type, but in a ransomware case, or an intrusion where destruction of evidence was expected, the responsible thing to do is to caveat the report and provide confidence levels for high-level statements.

With this series I haven't disagreed with any factual statement you've made, but I think what we should be exploring isn't whether or not we should validate: it's the circumstances where it's definitely needed or not needed and, most importantly, the grey areas where we all need better guidance, i.e., when an analyst is trying to force a story onto the data rather than trying to strike at ground truth.
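The validate-by-default pipeline Carvey describes (data collection, parsing and normalization, decoration and enrichment, then presentation to the analyst) can be sketched in a few lines. This is a minimal illustration only; every function, class, and field name below is hypothetical, and the "flag single-artifact findings" rule reflects the reviewer's point in the second comment, not any specific tool's behavior.

```python
# Hypothetical sketch of an automated validation pipeline:
# normalize -> enrich -> check that a finding rests on more
# than one artifact source before it is reported.
from dataclasses import dataclass, field

@dataclass
class Event:
    timestamp: str                 # normalized UTC timestamp
    source: str                    # artifact type, e.g. "prefetch"
    description: str
    tags: list = field(default_factory=list)

def normalize(raw_events):
    """Parse raw artifact records into one ordered timeline."""
    return sorted(raw_events, key=lambda e: e.timestamp)

def enrich(events, known_bad):
    """Decorate events with tags, e.g. known-bad indicators."""
    for e in events:
        if e.description in known_bad:
            e.tags.append("known-bad")
    return events

def corroboration(events):
    """A finding backed by a single artifact source is flagged
    for follow-up rather than reported as a conclusion."""
    sources = {e.source for e in events}
    return "corroborated" if len(sources) > 1 else "single-artifact: validate"

timeline = enrich(
    normalize([
        Event("2023-04-01T10:00:00Z", "prefetch", "evil.exe executed"),
        Event("2023-04-01T10:00:03Z", "amcache", "evil.exe first seen"),
    ]),
    known_bad={"evil.exe executed"},
)
print(corroboration(timeline))  # -> corroborated (two artifact sources)
```

The point of the sketch is the last function: once collection and normalization are automated, checking that a finding is supported by more than one artifact becomes a cheap, mechanical step rather than an optional extra.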