Saturday, 07 June 2008

NoVA Sec Meeting Memory Analysis Notes

On 24 April we were lucky to have Aaron Walters of Volatile Systems speak to our NoVA Sec group on memory analysis.

I just found my notes so I'd like to post a few thoughts. There is no way I can summarize his talk. I recommend seeing him the next time he speaks at a conference.

Aaron noted that the PyFlag forensics suite has integrated the Volatility Framework for memory analysis. Aaron also mentioned FATkit and VADtools.

In addition to Aaron speaking, we were very surprised to see George M. Garner, Jr., author of Forensic Acquisition Utilities and KnTTools with KnTList. George noted that he wrote FAU at the first SANSFIRE, held in 2001 in DC (which I also attended), after hearing there was no equivalent way to copy Windows memory using dd, as one could with Unix.

George sets the standard for software used to acquire memory from Windows systems, so using his KnTTools to collect memory for analysis by KnTList and/or Volatility Framework is a great approach.

While Aaron's talk was very technical, George spent a little more time on forensic philosophy. I was able to capture more of this in my notes. George noted that any forensic scenario usually involves three steps:


  1. Isolate the evidence, so the perpetrator or others cannot keep changing the evidence

  2. Preserve the evidence, so others can reproduce analytical results later (a quick hashing sketch follows this list)

  3. Document what works and what does not
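
The preservation step is the easiest one to make concrete. Below is a minimal sketch, in Python, of the habit it implies: hash everything you acquire and record those hashes with a timestamp, so a later examiner can verify the evidence has not changed since you collected it. The file names and manifest format are my own illustration, not anything George prescribed.

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def sha256_of(path, chunk_size=1024 * 1024):
        """Hash a file in chunks so large evidence files do not exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def write_manifest(evidence_paths, manifest_path="manifest.json"):
        """Record name, size, SHA-256, and collection time for each evidence file."""
        entries = []
        for p in map(Path, evidence_paths):
            entries.append({
                "file": p.name,
                "bytes": p.stat().st_size,
                "sha256": sha256_of(p),
                "recorded_utc": datetime.now(timezone.utc).isoformat(),
            })
        Path(manifest_path).write_text(json.dumps(entries, indent=2))
        return entries

    # Hypothetical usage with made-up file names:
    # write_manifest(["memory.dd", "disk.dd"])

Keep in mind a related point George makes below: a hash like this only proves the copy has not changed since it was recorded, not that the copy was accurate in the first place.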


At this point I had two thoughts. First, this work is tough and complicated. You need to rely upon a trustworthy party for tools and tactics, but also you must test your results to see if they can be trusted. Second, as a general principle, raw data is always superior to anything else because raw data can be subjected to a variety of tools and techniques far into the future. Processed data has lost some or all of its granularity.

George confirmed my first intuition by stating that there is no truly trustworthy way to acquire memory. This reminded me of statements made by Joanna Rutkowska. George noted that whatever method he could use, running as a kernel driver, to acquire memory could be hooked by an adversary already in the kernel. It's a classic arms race: the person trying to capture evidence from within a compromised system must find a way to get that data without being fooled by the intruder.

George talked about how nVidia and ATI have brought GPU programming to the developer world, and noted that there is no safe way to read GPU memory. Apparently intruders can sit in the GPU, move memory between the GPU and system RAM, and disable code signing.

I was really floored to learn the following. George stated that a hard drive is a computer. It has error correction algorithms that, while "pretty good," are not perfect. In other words, you could encounter a situation where you cannot obtain a reliable "image" of a hard drive from one acquisition to the next. He contributed an excellent post here that emphasizes this point:

One final problem is that the data read from a failing drive actually may change from one acquisition to another. If you encounter a "bad block," that means that the error rate has overwhelmed the error correction algorithm in use by the drive. A disk drive is not a paper document. If a drive actually yields different data each time it is read, is that an acquisition "error"? Or have you accurately acquired the contents of the drive at that particular moment in time? Perhaps you have as many originals as acquired "images." Maybe it is a question of semantics, but it is a semantic that goes to the heart of DIGITAL forensics.

Remember that hashes do not guarantee that an "image" is accurate. They prove that it has not changed since it was acquired.


I just heard the brains of all the cops-turned-forensic-guys explode.

This post has more technical details.
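
To see the instability George describes for yourself, the check is simple: read the same source twice, hash it in fixed-size blocks, and report any block that differed between passes. Here is a minimal Python sketch of my own, with a hypothetical device path; on healthy media you should see zero mismatches.

    import hashlib

    def block_hashes(path, block_size=4096):
        """Return one SHA-256 digest per fixed-size block of the source."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(block_size)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def compare_passes(path):
        """Read the source twice and report blocks that changed between reads."""
        first = block_hashes(path)
        second = block_hashes(path)
        mismatches = [i for i, (a, b) in enumerate(zip(first, second)) if a != b]
        print(f"{len(first)} blocks read, {len(mismatches)} differed between passes")
        return mismatches

    # Hypothetical path; reading a raw device normally requires elevated privileges.
    # compare_passes("/dev/sdb")

Note that two matching passes still only show the reads agreed with each other, not that they reflect what a healthy drive would have returned.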

So what's a forensic examiner to do? It turns out that one of the so-called "foundations" of digital forensics -- the "bit-for-bit copy" -- is no such foundation at all, at least if you're a "real" forensic investigator. George cited Statistics and the Evaluation of Evidence for Forensic Scientists by C. G. G. Aitken and Franco Taroni to refute "traditional" computer forensics. Forensic reliability isn't derived from a bit-for-bit copy; it's derived from increasing the probability that your evidence is reliable. You increase that probability by increasing the number of evidence samples -- preferably collected using multiple methods.
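
To put toy numbers on that idea (my arithmetic, not a figure from the talk): if each independent evidence source had, say, a 10% chance of being wrong or tampered with, the chance that every source misleads you at once shrinks geometrically as you add sources.

    def chance_all_sources_wrong(per_source_error, n_sources):
        """Probability that every source is wrong, assuming independent sources."""
        return per_source_error ** n_sources

    # Toy numbers only: a 10% error rate per source, assumed independent.
    for n in (1, 2, 3, 4):
        print(f"{n} source(s): {chance_all_sources_wrong(0.10, n):.4%}")

Real sources are never perfectly independent, of course, which is exactly why you want different collection methods and not just more copies of the same log.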

What does this mean in practice? George said you build a robust case, for example, by gathering, analyzing, and integrating ISP logs, firewall logs, IDS logs, system logs, volatile memory, media, and so on. Wait, what does that sound like? You remember -- it's how Keith Jones provided the evidence to prove Roger Duronio was guilty of hacking UBS. It gets better; this technique is also called "fused intelligence" in my former Air Force world. You trust what you are reporting when it is independently corroborated by multiple sources.
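
If you wanted to sketch that kind of fusion in code, the core of it is just counting independent corroboration per indicator. The sources and records below are made up for illustration; this is not a reconstruction of how the UBS case was actually built.

    from collections import defaultdict

    # Hypothetical, simplified observations: (evidence source, suspect IP).
    observations = [
        ("isp_logs",      "203.0.113.7"),
        ("firewall_logs", "203.0.113.7"),
        ("ids_alerts",    "203.0.113.7"),
        ("system_logs",   "198.51.100.4"),
    ]

    def corroboration(obs):
        """Count how many distinct sources report each indicator."""
        sources_by_ip = defaultdict(set)
        for source, ip in obs:
            sources_by_ip[ip].add(source)
        return {ip: len(sources) for ip, sources in sources_by_ip.items()}

    for ip, count in corroboration(observations).items():
        print(f"{ip}: reported by {count} independent source(s)")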

If this all sounds blatantly obvious, it's because it is. Unfortunately, when you're stuck in a world where the process says "pull the plug and image the hard drive," it's hard to introduce some sanity. What's actually forcing these dinosaurs to change is their inability to handle 1 TB hard drives and multi-TB SANs.

As you can tell I was pretty excited by the talks that night. Thanks again to Aaron and George for presenting.
