RegRipper Plugin Update
Okay, this isn't so much an update as it is a new plugin. Patrick Seagren sent me a plugin called cortana.pl, which he's been using to extract Cortana searches from the Registry hives. Patrick sent the plugin and some test data, so I tested the plugin out and added it to the repository.
Process Creation Monitoring
When it comes to process creation monitoring, there appears to be a new kid on the block. NoVirusThanks is offering their Process Logger Service free for personal use.
Looking at the web site, the service appears to record process creation event information in a flat text file, including the date and time, the process ID, and the parent process ID. While this does capture some basic information about the processes, the output doesn't look like the easiest thing to parse and incorporate into current analysis techniques.
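That said, even a flat text log like that can be wrangled into something more analysis-friendly with a few lines of code. Here's a minimal sketch in Python; note that the pipe-delimited line format (date/time, PID, PPID, image path) is an assumption on my part, so adjust the parsing to match whatever the Process Logger Service actually writes:

import csv
import sys

# Assumed line format: "2016-03-14 10:23:45|1234|5678|C:\path\to\proc.exe"
# The actual Process Logger output may differ; adjust the split() accordingly.
def parse_process_log(log_path, out_csv):
    with open(log_path) as fh, open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["timestamp", "pid", "ppid", "image"])
        for line in fh:
            parts = [p.strip() for p in line.split("|")]
            if len(parts) == 4:
                writer.writerow(parts)

if __name__ == "__main__":
    parse_process_log(sys.argv[1], sys.argv[2])

Once the data is in CSV form, it can be sorted, filtered, or added to a timeline alongside other sources.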
Other alternatives include native Windows auditing for Process Tracking (along with an update to improve the data collected), installing Sysmon, or opting for a commercial solution such as Carbon Black. Note that incorporating the process creation information into the Windows Event Log (via either of the first two approaches) means that the data can be pulled from live systems via WMI or Powershell, forwarded to a central logging server (Splunk?), or extracted from an acquired image.
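As a quick illustration of pulling process creation events from a live system, here's a hedged sketch using the third-party Python wmi module to query Security event ID 4688 records; it assumes Process Tracking auditing is enabled and that the query is run with sufficient privileges to read the Security log:

import wmi  # third-party module (pip install wmi); wraps WMI via pywin32

# Connect to the local system; remote systems can be queried by passing
# computer/user/password arguments to wmi.WMI().
c = wmi.WMI()

query = ("SELECT RecordNumber, TimeGenerated, Message FROM Win32_NTLogEvent "
         "WHERE Logfile = 'Security' AND EventCode = 4688")

for event in c.query(query):
    # Each record describes a new process; the Message field contains the
    # process name and (on updated systems) additional parent process detail.
    print(event.RecordNumber, event.TimeGenerated)
    print(event.Message)

The same data can be retrieved with Powershell's Get-WinEvent, or forwarded off the system entirely via Windows Event Forwarding.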
Process creation monitoring can be extremely valuable for detecting and responding to things such as Powershell Malware, as well as providing critical information for responders to determine the root cause of a ransomware infection.
AirBusCyberSecurity recently published this post that walks through dynamic analysis of "fileless" malware; in this case, Kovter. While it's interesting that they went with a pre-compiled platform, pre-stocked with monitoring tools, the results of their analysis did demonstrate how powerful and valuable this sort of technique (monitoring process creation) can be, particularly when it comes to detection of issues.
As a side note, while I greatly appreciate the work that was done to produce and publish that blog post, there are a couple of things that I don't necessarily agree with in the content that begin with this statement:
Kovter is able to conceal itself in the registry and maintain persistence through the use of several concealed run keys.
None of what's done is "concealed". The Registry is essentially a "file system within a file", and none of what the malware does with respect to persistence is particularly "concealed". "Run" keys have been used for persistence since the Registry was first used; if you're doing any form of DFIR work and not looking in the Run keys, well, that still doesn't make them "concealed".
Also, I'm not really sure I agree with this "fileless" thing. Just because persistence is maintained via Registry value doesn't make something "fileless".
Ransomware and Attribution
Speaking of ransomware engagements, a couple of interesting articles have popped up recently with respect to ransomware attacks and attribution. This recent Reuters article shares some thoughts from analysts regarding attribution for observed attacks. Shortly after this article came out, Val Smith expanded upon information from the article in his blog post, and this ThreatPost article went on to suggest that what analysts are seeing is really "false flag" operations.
While there are clearly theories regarding attribution for the attacks, there doesn't appear to be any clear indicators or evidence...not that are shared, anyway...that tie the attacks to a particular group, or geographic location.
This article popped up recently, describing how another hospital was hit with ransomware. What's interesting about the article is that there is NO information about how the bad guys gained access to the systems, but the author of the article refers to and quotes a TrustWave blog post; is the implication that this may be how the systems were infected? Who knows?
Carving
David Cowen recently posted a very interesting article, in which he shared the results of tool testing, specifically of several file carving tools. I've seen comments and reviews from others who've read the same post saying that David ranked one tool or another "near the bottom", but to be honest, that doesn't appear to be the case at all. The key to this sort of testing is to understand the strengths and "weaknesses" of the various tools. For example, bulk extractor was listed as the fastest tool in the test, but David also included the statement that it would benefit from more filters, and BE was the only free option.
Testing such as this, as well as what folks like Mari have done, is extremely valuable in not only extending our knowledge as a community, but also for showing others how this sort of thing can be done, and then shared.
Malware Analysis and Threat Intel
I ran across this interesting post regarding Dridex analysis recently...what attracted my attention was this statement:
...detail how I go about analyzing various samples, instead of just presenting my findings...
While I do think that discussing not just the "what" but also the "how" is extremely beneficial, I'm going to jump off of the beaten path here for a bit and take a look at the following statement:
...got the loader binary off virustotal...
The author of the post is clearly stating where they got the copy of the malware that they're analyzing in the post, but this statement jumped out at me for an entirely different reason altogether.
When I read posts such as this, as well as what is shared as "threat intel", I look at it from the perspective of a DF analyst and an incident responder, asking myself, "...how can I use this on an engagement?" While I greatly appreciate the effort that goes into creating this sort of content, I also realize that very often, a good deal of "threat intel" is developed purely through open source collection, without the benefit of context from an active engagement. Now, this is not a bad thing...not at all. But it is something that needs to be kept in mind.
In examples such as this one, understanding that the analysis relies primarily on a malware sample collected from VT should tell us that any mention of the initial infection vector (IIV) is likely going to be speculation, or the result of open source collection, as well. The corollary is that the IIV is not going to be the result of seeing this during an active incident.
I'll say it again...information such as this post, as well as other material shared as "threat intel"...is a valuable part of what we do. However, at the same time, we do need to understand the source of this information. Information shared as a result of open source collection and analysis can be used to create filters or triggers, which can then be used to detect these issues much earlier in the infection process, allowing responders to then get to affected systems sooner, and conduct analysis to determine the IIV.
Monday, March 14, 2016
Links and Stuff
Sysmon
I've spent some time discussing MS's Sysmon in this blog, describing how useful it can be, not only in a malware testing environment, but also in a corporate environment.
Mark Russinovich gives a great use case for Sysmon, from RSA in this online PowerPoint presentation. If you have any questions about Sysmon and how useful it can be, this presentation is definitely worth the time to browse through it.
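If you've got Sysmon deployed and just want to see what it's collecting, the operational log can be queried with tools that ship with Windows. Here's a minimal sketch using wevtutil from Python to pull recent process creation events (Sysmon event ID 1); it assumes Sysmon is installed and that the script runs with sufficient privileges:

import subprocess
import xml.etree.ElementTree as ET

# Pull the 20 most recent Sysmon events as XML; wevtutil ships with Windows.
xml_out = subprocess.run(
    ["wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
     "/c:20", "/rd:true", "/f:xml"],
    capture_output=True, text=True, check=True,
).stdout

# wevtutil emits a series of <Event> elements with no root node, so wrap them.
root = ET.fromstring("<Events>" + xml_out + "</Events>")
ns = "{http://schemas.microsoft.com/win/2004/08/events/event}"

for event in root:
    event_id = event.find(f"{ns}System/{ns}EventID").text
    data = {d.get("Name"): d.text for d in event.iter(f"{ns}Data")}
    if event_id == "1":  # Sysmon event ID 1 = process creation
        print(data.get("ParentImage"), "->", data.get("Image"))
        print("  ", data.get("CommandLine"))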
Ransomware and Computer Speech
Ransomware has been in the "news" quite a bit lately; not just the opportunistic stuff like Locky, but also what appears to be more targeted stuff. IntelSecurity has an interesting write-up on the Samsam targeted ransomware, although a great deal of the content of that PDF is spent on the code of the ransomware, and not so much the TTPs employed by the threat group. This sort of thing is a great way to showcase your RE skillz, but may not be as relevant to folks who are trying to find this stuff within their infrastructure.
I ran across a write-up regarding a new type of ransomware recently, this one called Cerber. Apparently, this particular ransomware, as part of the infection, drops a *.vbs script on the system that makes the computer tell the user that it's infected. Wait...what?
Well, I started looking into it and found several sites that discussed this, and provided examples of how to do it. It turns out that it's really pretty simple, and depending upon the version of Windows you're using, you may have a greater range of options available. For example, per this site, on Windows 10 you can select a different voice (a female "Anna" voice, rather than the default male "David" voice), as well as change the speed or volume of the speech.
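If you want to see (or rather, hear) this for yourself, the speech functionality is exposed through the SAPI COM object, which is pretty easy to poke at from a script. Here's a minimal sketch using Python and pywin32; the voice selection assumes a second voice is actually installed on the system:

import win32com.client  # pywin32

voice = win32com.client.Dispatch("SAPI.SpVoice")
voice.Rate = 0        # -10 (slow) through 10 (fast)
voice.Volume = 100    # 0 through 100

# Switch to a different installed voice, if one exists
voices = voice.GetVoices()
if voices.Count > 1:
    voice.Voice = voices.Item(1)

voice.Speak("Attention. Your documents, photos, and databases have been encrypted.")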
...and then, much hilarity ensued...
Document Macros
Okay, back to the topic of ransomware: some of it (Locky, for example) ends up on systems as a result of the user opening a document that contains macros, and choosing to enable the content.
If you do find a system that's been infected (not just with ransomware, but anything else, really...), and you find a suspicious document, this presentation from Decalage provides a really good understanding of what macros can do. Also, take a look at this blog post, as well as Mari's post, to help you determine if the user chose to enable the content of the MS Office document.
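For live-system triage (on an acquired image, you'd parse the same value out of the user's NTUSER.DAT hive instead), the TrustRecords key can be checked with a few lines of code. This is a hedged sketch; the Office version in the path ("15.0" for Office 2013) will vary, and the interpretation of the final four bytes of the value data is based on the research referenced above:

import winreg

# Office version in the path is an assumption; 14.0/15.0/16.0 are common.
PATH = r"Software\Microsoft\Office\15.0\Word\Security\Trusted Documents\TrustRecords"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PATH) as key:
    value_count = winreg.QueryInfoKey(key)[1]
    for i in range(value_count):
        name, data, _ = winreg.EnumValue(key, i)   # the value name is the document path
        # Per the referenced research, the last four bytes of the value data
        # are set to FF FF FF 7F when the user clicked "Enable Content".
        enabled = data[-4:] == b"\xff\xff\xff\x7f"
        print(name, "-> macros enabled" if enabled else "-> opened only")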
Why bother with this at all? That's a great question, particularly in the face of ransomware attacks, where some organizations are paying tens or hundreds of thousands of dollars to get their documents back...how can they then justify paying for a DFIR investigation? Well, my point is this...if you don't know how the stuff gets in, you're not going to stop it the next time it (or something else) gets in.
You need to do a root cause investigation.
Do not...I repeat, do NOT...base any decisions made after an infection, compromise, or breach on assumption or emotion. Base them on actual data, and facts. Base them on findings developed from all the data available, not just some of it, with the gaps filled in with speculation.
Jump Lists
One way to determine which files a user had accessed, and with which application, is by analyzing Jump Lists. Jump Lists are a "new" artifact, as of Windows 7, and they persist through to Windows 10. Eric Zimmerman recently posted on understanding Jump Lists in depth; as you would expect, his post is written from the perspective of a tool developer.
Eric noted that the format for the DestList stream on Windows 10 systems has changed slightly...that an offset changed. It's important to know and understand this, as it does affect how tools will work.
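If you want to check which DestList format you're dealing with before pointing a parser at it, the version number at the start of the stream is a quick tell. Here's a minimal sketch using the third-party olefile module; the file name is made up, and the version-to-OS mapping in the comment is drawn from Eric's post (1 on Windows 7/8, 3 or 4 on Windows 10 builds):

import struct
import olefile  # third-party; *.automaticDestinations-ms files are OLE compound files

# Hypothetical file name; these live under
# %AppData%\Microsoft\Windows\Recent\AutomaticDestinations
jl = olefile.OleFileIO("5f7b5f1e01b83767.automaticDestinations-ms")
destlist = jl.openstream("DestList").read()

# The first four bytes of the DestList header hold the version number.
version = struct.unpack_from("<I", destlist, 0)[0]
print("DestList version:", version)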
Mysterious Records in the Index.dat File
I was conducting analysis on a Windows 2003 server recently, and I found that a user account created in Dec 2015 contained activity within the IE index.dat file dating back to 2013...and like any other analyst, I thought, "okay, that's weird". I noted it in my case notes and continued on with my analysis, knowing that I'd get to the bottom of this issue.
First, parsing the index.dat. Similar to the Event Logs, I've got a couple of tools that I use, one that parses the file based on the header information, and the other that bypasses the header information all together and parses the file on a binary basis. These tools provide me with visibility into the records recorded within the files, as well as allowing me to add those records to a timeline as necessary. I've also developed a modified version of Jon Glass's WebCacheV01.dat parsing script that I use to incorporate the contents of IE10+ web activity database files in timelines.
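For reference, the TLN format those tools produce is just a pipe-delimited, five-field line: time (32-bit Unix epoch, UTC), source, system, user, and description. Here's a minimal sketch of emitting an event in that layout; the sample record values are made up for illustration:

from datetime import datetime, timezone

def tln(dt, source, system, user, desc):
    """Emit a five-field TLN line: time|source|system|user|description."""
    epoch = int(dt.replace(tzinfo=timezone.utc).timestamp())
    return "%d|%s|%s|%s|%s" % (epoch, source, system, user, desc)

# e.g., a parsed index.dat URL record (values are illustrative only)
print(tln(datetime(2013, 6, 4, 12, 30, 15), "IE_INDEX.DAT", "SERVER01",
          "Default User", "URL visit - http://bad.example.com/gate.php"))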
So, back to why the index.dat file for a user account (and profile) created in Dec 2015 contained activity from 2013. Essentially, there was malware on the system in 2013 running with System privileges and utilizing the WinInet API, which resulted in web browser activity being recorded in the index.dat file within the "Default User" profile. As such, when the new user account was created in Dec 2015, and the account was used to access the system, the profile was created by copying content from the "Default User" profile. As IE wasn't being used/launched via the Windows Explorer shell (another program was using the WinInet API), the index.dat file was not subject to the cache clearance mechanisms we might usually expect to see (by default, using IE on a regular basis causes the cache to be cleared every 20 days).
Getting to the bottom of the analysis didn't take days or weeks of analysis...it just took a few minutes to finish up documenting (yes, I do that...) what I'd already found, and then circling back to confirm some findings, based on a targeted approach to analysis.
Sunday, March 13, 2016
Event Logs
I've discussed Windows Event Log analysis in this blog before (here, and here), but it's been a while, and I've recently been involved in some analysis that has led me to believe that it might be a good idea to bring up the topic again.
Formats
I've said time and again...to the point that many of you are very likely tired of hearing me say it...that the version of Windows that you're analyzing matters. The available artifacts and their formats differ significantly between versions of Windows, and any discussion of (Windows) Event Logs is a great example of this fact.
Windows XP and 2003 use (I say "use" because I'm still seeing these systems in my analysis; in the past month alone I've analyzed a small handful of Windows 2003 server images) a logging format referred to as "Event Logs". MS does a great job of documenting the structure of the Event Log/*.evt file format, the header, the records, and even the EOF record structure. In short, these Event Logs are a "circular buffer" to which individual records are written. The limiting factor for these Event Logs is the file size; as new records are written, older records will simply be overwritten. These systems have three main Event Logs: Security (secevent.evt), System (sysevent.evt), and Application (appevent.evt). There may be others, but they are often application-specific.
Windows Vista systems and beyond use a "binary XML" format for the Windows Event Log/*.evtx files. Besides the different format structure for event records and the files themselves, perhaps one of the most notable aspects of Windows Event Logs is the number of log files available. On a default installation of Windows 7, I counted 140+ *.evtx files; on a Windows 10 system, I counted 289 files. Now, this does not mean that records are written to these logs all the time; in fact, some of the Windows Event Log files may never be written to, based on the configuration and use of the system. However, if you're following clusters of indicators as part of your analysis process (i.e., looking for groups of indicators close together, rather than one single indicator or event, that indicate a particular action), it's likely that you'll find more indications of the event in question.
Tools
Of the tools I use (and provide along with the book materials) in my daily work, there are two specifically related to (Windows Event) Logs.
First, there is evtparse.exe. This tool does not use the Windows API; instead, it parses Event Log/*.evt files on a binary basis, bypassing the header information and basically "carving" the files for valid records.
The ability to parse individual event records from *.evt files, regardless of what the file header says with respect to the number of event records, etc., is valuable. I originally wrote this tool after I ran into a case where the Event Logs had been cleared. When this occurred, the "current" *.evt files were deleted (the sectors comprising the files became part of unallocated space) and "new" *.evt files were created from available sectors within unallocated space. What happened was that one of the *.evt files contained header information that indicated that there were no event records in the file, but there was clearly something there. I was able to recover or "carve" valid event records from the file. I've also used evtparse.pl as the basis for a tool that would carve unstructured data (pagefile, unallocated space, even a memory dump) for *.evt records.
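The carving approach itself is pretty straightforward: each record starts with a four-byte length followed by the magic string "LfLe". Here's a minimal sketch, based on the documented EVENTLOGRECORD layout, of scanning an arbitrary blob of data (pagefile, unallocated space, whatever) for candidate records:

import struct

def carve_evt_records(data):
    """Scan a blob for EVT record signatures ("LfLe") and yield basic fields."""
    pos = data.find(b"LfLe")
    while pos != -1:
        start = pos - 4  # the record length DWORD precedes the signature
        if start >= 0:
            length, = struct.unpack_from("<I", data, start)
            # Sanity-check the length before trusting the record.
            if 0x38 <= length <= 0x10000 and start + length <= len(data):
                rec_num, time_gen, time_written, event_id = struct.unpack_from(
                    "<IIII", data, pos + 4)
                yield {"offset": start, "record_number": rec_num,
                       "time_generated": time_gen,      # 32-bit Unix epoch
                       "event_id": event_id & 0xFFFF}
        pos = data.find(b"LfLe", pos + 4)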
The other tool I use is evtxparse.exe. Note the "x" in the name. This is NOT the same thing as evtparse.exe. Evtxparse.exe is part of a set of tools, used with wevtx.bat, LogParser (a free tool from MS), and eventmap.txt, to parse either an individual *.evtx file or multiple *.evtx files into the timeline/TLN format I use for my analysis. The wevtx.bat file launches LogParser to parse the file(s), writing the parsed records to a temporary file, which is then parsed by evtxparse.exe. During that parsing, the eventmap.txt file is used to apply a modicum of "threat intel" (in short, stuff I've learned from previous engagements...) to the entries being included in the timeline events file, so that it's easier for me to identify pivot points for analysis.
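Just to illustrate the idea behind eventmap.txt (this is not the actual file syntax), the effect is essentially a lookup table mapping event source and ID to a short tag that gets added to the timeline entry:

# Illustrative only; not the actual eventmap.txt syntax.
EVENT_TAGS = {
    ("Microsoft-Windows-Security-Auditing", 4624): "[Logon]",
    ("Microsoft-Windows-Security-Auditing", 4720): "[User account created]",
    ("Microsoft-Windows-TerminalServices-LocalSessionManager", 21): "[RDP session logon]",
}

def tag_event(source, event_id, description):
    """Prepend a short analyst-supplied tag to the timeline description."""
    tag = EVENT_TAGS.get((source, event_id), "")
    return (tag + " " + description).strip()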
A major caveat to the LogParser-based approach is that LogParser relies on the native DLLs/API of the system on which it's being run. This means that you can't successfully run LogParser on Windows XP to parse *.evtx files, nor can you successfully run LogParser on Windows 10 to parse Windows 2003 *.evt files (without first running wevtutil to convert the *.evt files to *.evtx).
Both tools are provided as Windows executables, along with the Perl source code.
When I run across *.evtx files that LogParser has difficulty parsing, my go-to tool is Willi Ballenthin's EVTXtract. There have been several instances where this tool set has worked extremely well, particularly when the Windows Event Logs that I'm interested in are reported by other tools as being "corrupt". In one particular instance, we'd found that the Windows Event Logs had been cleared, and we were able to not only retrieve a number of valid event records from unallocated space, but we were able to locate THE smoking gun record that we were looking for.
Gaps
Not long ago, I was asked a question about gaps in Windows Event Logs; specifically, is there something out there that allows someone to remove specific records from an active Windows Event Log on a live machine? Actually, this question has come up twice since the beginning of this year alone, in two different contexts.
There has been talk about there being, or that there have been, tools for removing specific records from Windows Event Logs on live systems, but all the talk comes back to the same thing...no one I've ever spoken to has any actual data showing that this actually happened. There's been mention of a tool called "WinZapper" likely having been used, but when I've asked if the records were parsed and sorted by record number to confirm this, no one has any explicit data to support the fact that the tool had been used; it all comes back to speculation, and "it could have been used".
As I mentioned, this is pretty trivial to check. Wevtx.bat, for example, contains a LogParser command line that includes printing the record number for each event. You can run this command line on a Windows 7 (or 10) system to parse *.evtx files, or on a Windows XP system to parse *.evt files, and get similar results.
Evtparse.exe (note that there is no "x" in the tool name...) includes a switch for listing event records sequentially, displaying only the record number and time generated value for each event record. Using either tool, you can then simply import the output into Excel, sort on the record numbers, and look for gaps manually/visually, or write a script that looks for gaps in the record numbers.
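Here's a minimal sketch of that gap check in Python, given a list of record numbers pulled from either tool's output:

def find_gaps(record_numbers):
    """Return (previous, next) pairs where the record number sequence jumps."""
    recs = sorted(set(record_numbers))
    return [(a, b) for a, b in zip(recs, recs[1:]) if b - a > 1]

# e.g., record numbers lifted from a parsed CSV
print(find_gaps([1001, 1002, 1003, 1007, 1008]))   # -> [(1003, 1007)]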
So, when someone asks me if it's possible that specific event records were removed from a log, the first question I would ask in response would be, were records removed from the log? After all, this is pretty trivial to check, and if there are no gaps, then the question itself becomes academic.
Creating Event Records
There are a number of ways to create event records on live systems, should you be inclined to do so. For example, MS includes the eventcreate.exe tool, which allows you to create event records (with limitations; be sure to read the documentation).
Visual Basic can be used to write to the Event Log; for an example, see this StackOverflow post. Note that the post also links to this MSDN page, but as is often the case on the InterWebs, the second response goes off-topic.
You can also use Powershell to create new Windows Event Logs, or create event records.
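For completeness, the same thing can be done programmatically. Here's a minimal sketch using pywin32; the source name and message are made up for illustration, and writing to the Application log requires appropriate privileges:

import win32evtlog
import win32evtlogutil  # pywin32

win32evtlogutil.ReportEvent(
    "TestSource",                                    # event source (illustrative)
    1000,                                            # event ID
    eventType=win32evtlog.EVENTLOG_INFORMATION_TYPE,
    strings=["Test event written for analysis/testing purposes"],
)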