Tuesday, June 24, 2008

Pascal Meunier Is Right About Virtualization

I love Pascal Meunier's post Virtualization Is Successful Because Operating Systems Are Weak:

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems...

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat, virtualization...

I’m now not convinced that a virtualization solution + guest OS is significantly more secure or functional than just one well-designed OS could be, in theory...

I believe that all the special things that a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done as well by a trustworthy, properly designed OS.


Please read the whole post to see all of Pascal's points. I had similar thoughts on my mind when I wrote the following in my post NSM vs Encrypted Traffic, Plus Virtualization:

[R]eally nothing about virtualization is new. Once upon a time computers could only run one program at a time for one user. Then programmers added the ability to run multiple programs at one time, fooling each application into thinking that it had individual use of the computer. Soon we had the ability to log multiple users into one computer, fooling each user into thinking he or she had individual use. Now with virtualization, we're convincing applications or even entire operating systems that they have the attention of the computer...

Thanks to those who noted the author was Pascal Meunier and not Spaf!

Saturday, June 14, 2008

Verizon Study Continues to Demolish Myths

I just read Patching Conundrum by Verizon's Russ Cooper. Wow, keep going, guys. As before, I recommend reading the whole post. Below are my favorite excerpts:

Our data shows that in only 18% of cases in the hacking category (see Figure 11) did the attack have anything to do with a “patchable” vulnerability. Further analysis in the study (Figure 12) showed that 90% of those attacks would have been prevented had patches been applied that were six months in age or older! Significantly, patching more frequently than monthly would have mitigated no additional cases.

Given average current patching strategies, it would appear that strategies to patch faster are perhaps less important than strategies to apply patches more comprehensively...

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates. And companies who did other simpler countermeasures, like lightweight standard configurations, had very strong correlations with reduced risk. The Verizon Business 2008 Data Breach Investigations Report supports very similar conclusions.
(emphasis added)

It gets even better.

In summary, the Sasser worm study analysis found that companies who had succeeded at “patching fast” were significantly worse off than “average” companies in the same study. This seemed to be because, as a group, these companies tended toward less use of broad, generic countermeasures. They also thought they had patched everyone, when in reality they hadn’t. You might say they spent more of their energy and money on patching and less on routing, ACLs, standard configurations, user response training, and similar “broad and fundamental” controls...

A control like patching, which has very simple and predictable behavior when used on individual computers, (i.e., home computers) seems to have more complex control effectiveness behavior when used in a community of computers (as in our enterprises).
(emphasis added)

So patching quickly doesn't seem to matter, and those who rely on quick patching end up worse off than those with a broader security program. I can believe this. How often do you hear "We're patched and we have anti-virus -- we're good!"?

Also, I can't emphasize enough how pleased I was to see the report reinforce my thoughts that Of Course Insiders Cause Fewer Security Incidents.

Friday, June 13, 2008

Logging Web Traffic with Httpry

I don't need to tell anyone that a lot of interesting command-and-control traffic is sailing through our Web proxies right now. I encourage decent logging for anyone using Web proxies. Below are three example entries from a Squid access.log. This is "squid" format with entries for user-agent and referer tacked to the end.

Incidentally, here is a diff of my Squid configuration that shows how I set up Squid.

r200a# diff /usr/local/etc/squid/squid.conf /usr/local/etc/squid/squid.conf.orig
632,633c632,633
< acl our_networks src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
< http_access allow our_networks
---
> #acl our_networks src 192.168.1.0/24 192.168.2.0/24
> #http_access allow our_networks
936c936
< http_port 172.16.2.1:3128
---
> http_port 3128
1990,1992d1989
< logformat squid-extended %ts.%03tu %6tr %>a %Ss/%03Hs %<st
%rm %ru %un %Sh/%<A %mt "%{Referer}>h" "%{User-Agent}>h"
<
<
2022c2019
< access_log /usr/local/squid/logs/access.log squid-extended
---
> access_log /usr/local/squid/logs/access.log squid
2216c2213
< strip_query_terms off
---
> # strip_query_terms on
3056d3052
< visible_hostname r200a.taosecurity.com

If you worry I'm exposing this to the world, don't worry too much. I find the value of having this information in a place where I can find it outweighs the possibility that someone will use this data to exploit me. There are much easier ways to do that, I think.

The first record shows a Google query for the term "dia", where the referer was a query for "fbi". The second record is a Firefox prefetch of the first record. The third record is a query for a .gif.

1213383786.614 255 192.168.2.103 TCP_MISS/200 9263
GET http://www.google.com/search?hl=en&client=firefox-a&rls=
com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
DIRECT/64.233.169.103 text/html "http://www.google.com/search
?q=fbi&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a"
"Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601
Firefox/2.0.0.14 (Ubuntu-edgy)"

1213383786.704 76 192.168.2.103 TCP_MISS/200 2775
GET http://www.google.com/pfetch/dchart?s=DIA -
DIRECT/64.233.169.147 image/gif
"http://www.google.com/search?hl=en&client=firefox-a&rls=com.ubuntu%3A
en-US%3Aofficial&hs=Hqt&q=dia&btnG=Search" "Mozilla/5.0 (X11; U; Linux
i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14 (Ubuntu-edgy)"

1213383786.717 81 192.168.2.103 TCP_MISS/200 1146
GET http://www.google.com/images/blogsearch-onebox.gif -
DIRECT/64.233.169.99 image/gif "http://www.google.com/search?hl=en
&client=firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search"
"Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601
Firefox/2.0.0.14 (Ubuntu-edgy)"
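
With the squid-extended format above, the referer and user-agent are the last two double-quoted fields, so a quick shell pass can summarize them. The one-liners below are only a sketch against my log path; adjust the path and field numbers if your logformat differs.

# Count the User-Agent strings seen in the extended access.log
# (the User-Agent is the last double-quoted field).
awk -F'"' '{print $4}' /usr/local/squid/logs/access.log | sort | uniq -c | sort -rn | head

# Same idea for the Referer field, the second double-quoted field.
awk -F'"' '{print $2}' /usr/local/squid/logs/access.log | sort | uniq -c | sort -rn | head

Odd or rarely seen user-agent strings are often a quick way to spot command-and-control traffic hiding among ordinary browser requests.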

What if you're a security person who can't access Web logs, but you have an NSM sensor in the vicinity? You might use Bro to log this activity, but last year I found something much simpler, written by Jason Bittel: Httpry.

r200a# httpry -h
httpry version 0.1.3 -- HTTP logging and information retrieval tool
Copyright (c) 2005-2008 Jason Bittel
Usage: httpry [ -dhpq ] [ -i device ] [ -n count ] [ -o file ] [ -r file ]
[ -s format ] [ -u user ] [ 'expression' ]

-d run as daemon
-h print this help information
-i device listen on this interface
-n count set number of HTTP packets to parse
-o file write output to a file
-p disable promiscuous mode
-q suppress non-critical output
-r file read packets from input file
-s format specify output format string
-u user set process owner
expression specify a bpf-style capture filter

Additional information can be found at:
http://dumpsterventures.com/jason/httpry

In the following example I run Httpry against a trace of the traffic taken when I visited the site shown in the Squid logs earlier.

r200a# httpry -i bge0 -o /tmp/httprytest3.txt -q -u richard
-s timestamp,source-ip,x-forwarded-for,direction,dest-ip,method,host,
request-uri,user-agent,referer,status-code,http-version,reason-phrase
-r /tmp/test3.pcap
r200a# cat /tmp/httprytest3.txt

# httpry version 0.1.3
# Fields: timestamp,source-ip,x-forwarded-for,direction,dest-ip,method,host,
request-uri,user-agent,referer,status-code,http-version,reason-phrase

06/13/2008 15:03:06 68.48.240.186 - > 64.233.169.103
GET www.google.com /search?hl=en&client=firefox-a&rls=com.ubuntu
%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?q=fbi&ie=utf-8&
oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a -
HTTP/1.0 -

06/13/2008 15:03:06 64.233.169.103 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK

06/13/2008 15:03:06 68.48.240.186 192.168.2.103 > 64.233.169.147
GET www.google.com /pfetch/dchart?s=DIA Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?hl=en&client=
firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
HTTP/1.0 -

06/13/2008 15:03:06 68.48.240.186 192.168.2.103 > 64.233.169.99
GET www.google.com /images/blogsearch-onebox.gif Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?hl=en&client=
firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
HTTP/1.0 -

06/13/2008 15:03:06 64.233.169.147 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK
06/13/2008 15:03:06 64.233.169.99 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK

As you can see, the format here is generally request followed by reply, although the last four records appear as request, request, reply, reply.

Although I first tried Httpry straight from the source code, in this case I tested an upcoming FreeBSD port created by my friend WXS. If you give Httpry a try, let me know what you think and how you like to invoke it on the command line. I plan to daemonize it in production and run it against a live interface, not traces.
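
For reference, here is a sketch of how I might daemonize Httpry against a live interface, based on the options shown in the help output above. The interface, output path, user, and capture filter are just examples, not a recommended production configuration.

# Run httpry as a daemon on a live interface, dropping privileges,
# suppressing non-critical output, and capturing only port 80 traffic.
httpry -d -q -u richard -i bge0 -o /var/log/httpry.log \
 -s timestamp,source-ip,x-forwarded-for,direction,dest-ip,method,host,request-uri,user-agent,referer,status-code,http-version,reason-phrase \
 'tcp port 80'

Log rotation would still need to be handled separately.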

Thursday, June 12, 2008

Sourcefire Best of Open Source Security Conference

Sourcefire is sponsoring a Best of Open Source Security (BOSS) conference 8-10 February in Las Vegas, NV, with the main activities happening on 9-10 February. Sourcefire is holding the event simultaneously with their annual users conference. I am on the committee evaluating speakers, so I look forward to seeing what people want to present.

Wednesday, June 11, 2008

Verizon Business Report Speaks Volumes

This morning I attended a call discussing the new Verizon Business 2008 Data Breach Investigations Report. I'd like to quote the linked blog post and a previous article titled I Was an Anti-MSS Zealot, both of which I recommend reading in their entirety. First I cite some background on the study.

Verizon Business began an initiative in 2007 to identify a comprehensive set of metrics to record during each data compromise investigation. As a result of this effort, we pursued a post-mortem examination of over 500 security breach and data compromise engagements between 2004 and 2007 which provided us with the vast amount of factual evidence used to compile this study. This data covers 230 million compromised records. Amongst these are roughly one-quarter of all publicly disclosed data breaches in both 2006 and 2007, including three of the five largest data breaches ever reported.

The Verizon Business 2008 Data Breach Investigations Report contains first-hand information on actual security breaches...
(emphasis added)

That's awesome -- a study based on what Verizon's Incident Response Team found during their work. Next let's read some thoughts from one of Verizon's security team.

I used to think that Intrusion Detection Systems (IDS) and Managed Security Services (MSS) were a waste of time. After all, most attacks that I had worked on began, and were over, within seconds, and were typically totally automated...

But the Verizon Business 2008 Data Breach Investigations Report tells a very different story. The successful attacks were almost universally multi-faceted and the various timeframes are truly astounding. The series of pie charts in Figure 21 are the most interesting data.



The first chart shows that more than half of attacks take days, weeks, or months from the point of entry of the attack (the first successful attack step) to the point of data compromise (not simply system compromise, but the point at which the criminal has actually done material harm). 90% take more than hours and over 50% take days or longer. Clearly if an appropriate log was instrumented and being regularly reviewed or an IDS alarm occurred, you would notice and could stop the attack in the vast majority of our cases.

The second pie chart in the series reveals that 63% of companies do not discover the compromise for months and that almost 80% of cases do not learn of attacks for weeks after they occur. In 95% of cases it took the organization longer than days after the compromise to learn of the attack. There are hundreds of cases in which the inside team either didn’t look at the logs (in 82% of the breaches in the study, the evidence was manifested in their logs), or for some other reason (were frustrated, tired, overwhelmed by the logs, found them to be not-interesting, felt they were too noisy after a few days or weeks) simply quit looking...
(emphasis added)

That is amazing. Consider the following regarding patching.

[O]nly 22% of our cases involved exploitation of a vulnerability, of which, more than 80% were known, and of those all had a patch available at the time of the attack. This is not to say that patching is not effective, or necessary, but we do suggest that the emphasis on it is misplaced and inappropriately exaggerated by most organizations. For the sake of clarity, 78% of the breaches we handled would have still occurred if systems had been 100% patched the instance a patch was available. Clearly patching isn’t the solution to the majority of breaches we investigated.

How about the source of attacks?

While criminals more often came from external sources, and insider attacks result in the greatest losses, criminals at, or via partner connections actually represent the greatest risk. This is due to our risk equation: Threat X Impact = Risk

  • External criminals pose the greatest threat (73%), but achieve the least impact (30,000 compromised records), resulting in a Pseudo Risk Score of 21,900

  • Insiders pose the least threat (18%), and achieve the greatest impact (375,000 compromised records), resulting in a Pseudo Risk Score of 67,500

  • Partners are middle in both (39% and 187,500), resulting in a Pseudo Risk Score of 73,125


While these are rudimentary numbers, the relative risk scores are reasonable and discernable. It is also worth noting that the Partner numbers rose 5-fold over the duration of the study, making partner crime the leading factor in breaches. This is likely due to the ever increasing number of partner connections businesses are establishing, while doing little to nothing to increase their ability to monitor or control their partner’s security posture. Perhaps as expected, insider breaches are the result of your IT Administrators 50% of the time.
(Note the original blog post doesn't say 39%, although the report and briefing do.)

I think that's consistent with what I've said: external attacks are the most prevalent, but insiders can cause the worst damage. (The authors note the definition of "insiders" can be fuzzy, with partners sometimes considered insiders.)

This chart is one of the saddest of all.



Unfortunately, it confirms my own experience and that of my colleagues.

I'll add a few more items:



    • Three quarters of all breaches are not discovered by the victim

    • Attacks are typically not terribly difficult or do not require advanced skills

    • 85% of attacks are opportunistic rather than targeted

    • 87% could have been prevented by reasonable measures any company should have been capable of implementing or performing



    Sounds like my Caveman post from last year.

    I am really glad Verizon published this report and I look forward to the next edition in the fall.
    House of Representatives v China

    Thanks to one of my colleagues for pointing out Lawmaker says Chinese hacked Capitol computers:

    By PETE YOST and LARA JAKES JORDAN – 3 hours ago

    WASHINGTON (AP) — A congressman said Wednesday the FBI has found that four of his government computers have been hacked by sources working out of China.

    Rep. Frank Wolf, a Virginia Republican, said that similar incidents — also originating from China — have taken place on computers of other members of the House and at least one House committee.

    A spokesman for Wolf said the four computers in his office were being used by staff members working on human rights issues and that the hacking began in August 2006. Wolf is a longtime critic of the Chinese government's human rights record.

    The congressman suggested the problem probably goes further. "If it's been done in the House, don't you think that they're doing the same thing in the Senate?" he asked.


    For a record of others hacked by China, see my earlier posts.

    Monday, June 9, 2008

    Publicity: BSD Associate Examinations

    I was asked to mention that BSD Associate examinations will take place at the following three events:

    1. RMLL: Mont-de-Marsan, France, Jul 02, 2008

    2. OpenKyiv 2008: Kiev, Ukraine, Aug 02, 2008

    3. LinuxWorld: San Francisco, CA, Aug 06-07, 2008


    From the BSDA description:

    The BSDA certification is designed to be an entry-level certification on BSD Unix systems administration. Testing candidates with a general Unix background, but less than six months of work experience as a BSD systems administrator (or who wish to obtain employment as a BSD systems administrator) will benefit most from this certification. Human resource departments should consider the successful BSDA certified applicant to be knowledgeable in the daily maintenance of existing BSD systems under the direction and supervision of a more senior administrator.

    Sunday, June 8, 2008

    The Best Single Day Class Ever

    I had the great fortune to attend Edward Tufte's one day class Presenting Data and Information. I only knew Tufte from advertisements in the Economist. For example, the image at left was frequently used as an ad in the print magazine. I had not read any of his books although I knew of his criticism of PowerPoint, specifically with respect to the Challenger disaster.

    This was the best one day class I have ever taken. It profoundly altered the way I think about presenting information and making arguments. If any part of your professional life involves delivering presentations, you must attend this class. It's a complete bargain for the price. I would like to see every professional at my company take this course. Following Tufte's advice would provide the single biggest productivity improvement and corresponding "return on investment" we are likely to see in my tenure.

    There is no way for me to summarize Tufte's course. You should attend yourself, and read the four (!) textbooks he provides. I will try to capture points which made an impact upon me.

    Substance, not Structure: When delivering a presentation, do whatever it takes to make your point. Be substance-driven, not method-driven. This means you determine what information you need to convey, not what you should put into PowerPoint. This really impressed me. PowerPoint is the currency for just about every presentation, conference, or other public event I attend. Imagine if we approached every event by deciding what effect we want to have upon the audience, instead of what slides we should create. Tufte stressed the power of sentences, saying sentences have "agency" but PowerPoint bullets do not. Sentences are tougher to write because they have nouns, verbs, and objects; bullets may have all, some, or none of those. PowerPoint also cripples arguments by stacking information in time and relying on the audience's short-term memory. Instead, information should be arrayed in space, with as much spread out at once as possible. The latter approach capitalizes on the human eye's "bandwidth of 80 Mbps per eye."

    Credibility: Tufte emphasized that detail builds credibility, and audiences are constantly assessing the credibility of the speaker. Everything that can be documented and referenced and sourced should be; this resonated with my history degree. Every item of information should be communicative and should provide reasons to believe the speaker. Credibility arises from delivering an argument backed by evidence, and that material can be believed until an alternative explanation for the evidence, with as much rigor as the first explanation, appears. Speakers expand their credibility by explicitly addressing alternative explanations, rather than avoiding them.

    Making an Impact: Too many of us exist in "flatland," i.e., the world of the computer screen, paper, and related media. To grab your audience's attention, bring something real from the 3D world to your presentation. This resonated with me too. At a recent week-long class for work with 42 other managers, I was told that some of the people in the class remembered me long after my initial introduction because I had a prop. The BusinessWeek magazine on "e-spionage" was on a table near me, so I told the class "I do this."

    Image ref: Writing is About Putting Yourself to Words.

    Presentation Design: Tufte advocates using what a colleague of mine calls a "placemat" format for delivering information. Tufte calls it a "tech report." Rather than standing in front of a PowerPoint slide deck, create a two-sided, 11" X 17" handout for the audience. Copy a format you've seen elsewhere. Pay attention to the pros; Tufte recommends Nature magazine for elite scientific and technical reporting or the New York Times for non-technical reporting. Include what Tufte calls a "supergraphic," an image that captures the audience's attention, like a highly detailed aerial photograph. (Whatever it is, ensure it is relevant to the audience!) He likes the Gill Sans font. Include data on performance. There is no such thing as "information overload," only bad design. To clarify, add detail -- don't remove it.

    Fundamental Principles of Analytical Design:

    1. Show comparisons.

    2. Show causality.

    3. Show multivariate data.

    4. Integrate evidence; don't segregate by mode of production. (For example, Sports Illustrated's Web site has a "Video" section. Why aren't those videos simply next to the appropriate news stories?)

    5. Document evidence.

    6. Content above all else.



    PowerPoint: PowerPoint only helps the bottom ten percent of speakers who would have no idea what to say without it. PowerPoint doesn't hinder the top ten percent of speakers who probably ignore their slides. PowerPoint devastates the middle 80 percent of speakers who think they are delivering information, when really they are (unconsciously) treating the audience as if they are too stupid to digest information in any other format. People can read 2-3 times faster than they can speak, so why should a presenter waste so much time with bullet points? Presentations should be a "high resolution data dump" (like a paper) and not a "press conference." Provide information in problem -> relevance -> solution format with a paragraph for each, with images, tables, "sparklines," and such integrated. You may use PowerPoint as a "projector operating system" ("POS," get it, get it?) to display tables, movies, or other media as necessary, but not as a bullet delivery tool.

    Image ref: Presentation Zen: Contrasts in presentation style: Yoda vs. Darth Vader. Note the bullets are sentences, so they are actually more content-oriented than the usual PowerPoint bullets!

    Active Person: The active person should be the audience, not the "speaker". Let the audience learn using its own cognitive style, not the method chosen by the presenter. Presenters should let the audience read the "placemat" or "tech report," then offer to answer questions. Asking questions is a sign that the audience actually cares about the material. Leading the audience along a path chosen by the speaker, at the speaker's speed, using the speaker's cognitive style, and refusing to take questions because it "disrupts flow" is a disaster.

    That's it for me. If you look a little you'll find other people's coverage of these training classes, like Colliers Atlas Blog or 21Apples.

    What does this mean for me? I recently taught a one-day class on Network Security Operations. I printed the entire slide deck I've used for the last few years, suitable for a two or three day class, but used that material solely as a reference, like Tufte uses his textbooks in his own classes. I asked the students what problems they were trying to solve in their own enterprises. Then I selected themes and spoke to them, using some of my slides as background or reference. I am trying to decide how to integrate this approach into my upcoming TCP/IP Weapons School class at Black Hat, which is mostly an examination of packet traces using Wireshark. I don't rely on slides for it.

    Saturday, June 7, 2008

    NoVA Sec Meeting Memory Analysis Notes

    On 24 April we were lucky to have Aaron Walters of Volatile Systems speak to our NoVA Sec group on memory analysis.

    I just found my notes so I'd like to post a few thoughts. There is no way I can summarize his talk. I recommend seeing him the next time he speaks at a conference.

    Aaron noted that the PyFlag forensics suite has integrated the Volatility Framework for memory analysis. Aaron also mentioned FATkit and VADtools.

    In addition to Aaron speaking, we were very surprised to see George M. Garner, Jr., author of Forensic Acquisition Utilities and KnTTools with KnTList. George noted that he wrote FAU at the first SANSFIRE, in 2001 in DC (which I attended too) after hearing there was no equivalent way to copy Windows memory using dd, as one could with Unix.

    George sets the standard for software used to acquire memory from Windows systems, so using his KnTTools to collect memory for analysis by KnTList and/or Volatility Framework is a great approach.

    While Aaron's talk was very technical, George spent a little more time on forensic philosophy. I was able to capture more of this in my notes. George noted that any forensic scenario usually involves three steps:

    1. Isolate the evidence, so the perpetrator or others cannot keep changing the evidence

    2. Preserve the evidence, so others can reproduce analytical results later

    3. Document what works and what does not


    At this point I had two thoughts. First, this work is tough and complicated. You need to rely upon a trustworthy party for tools and tactics, but also you must test your results to see if they can be trusted. Second, as a general principle, raw data is always superior to anything else because raw data can be subjected to a variety of tools and techniques far into the future. Processed data has lost some or all of its granularity.

    George confirmed my first intuition by stating that there is no truly trustworthy way to acquire memory. This reminded me of statements made by Joanna Rutkowska. George noted that whatever method he could use, running as a kernel driver, to acquire memory could be hooked by an adversary already in the kernel. It's a classic arms race, where the person trying to capture evidence from within a compromised system must try to find a way to get that data without being fooled by the intruder.

    George talked about how nVidia and ATI have brought GPU programming to the developer world, and that there is no safe way to read GPU memory. Apparently intruders can sit in the GPU, move memory to and from the GPU and system RAM, and disable code signing.

    I was really floored to learn the following. George stated that a hard drive is a computer. It has error correction algorithms that, while "pretty good," are not perfect. In other words, you could encounter a situation where you cannot obtain a reliable "image" of a hard drive from one acquisition to the next. He contributed an excellent post here which emphasizes this point:

    One final problem is that the data read from a failing drive actually may change from one acquisition to another. If you encounter a "bad block" that means that the error rate has overwhelmed the error correction algorithm in use by the drive. A disk drive is not a paper document. If a drive actually yields different data each time it is read is that an acquisition "error." Or have you accurately acquired the contents of the drive at that particular moment in time. Perhaps you have as many originals as acquired "images." Maybe it is a question of semantics, but it is a semantic that goes to the heart of DIGITAL forensics.

    Remember that hashes do not guarantee that an "image" is accurate. They prove that it has not changed since it was acquired.


    I just heard the brains of all the cops-turned-forensic-guys explode.

    This post has more technical details.

    So what's a forensic examiner to do? It turns out that one of the so-called "foundations" of digital forensics -- the "bit-for-bit copy" -- is no such foundation at all, at least if you're a "real" forensic investigator. George cited Statistics and the Evaluation of Evidence for Forensic Scientists by C. G. G. Aitken and Franco Taroni (pictured at left) to refute "traditional" computer forensics. Forensic reliability isn't derived from a bit-for-bit copy; it's derived from increasing the probability of reliability. You don't have to rely on a bit-for-bit copy. Increase reliability by increasing the number of evidence samples -- preferably using multiple methods.

    What does this mean in practice? George said you build a robust case, for example, by gathering, analyzing, and integrating ISP logs, firewall logs, IDS logs, system logs, volatile memory, media, and so on. Wait, what does that sound like? You remember -- it's how Keith Jones provided the evidence to prove Roger Duronio was guilty of hacking UBS. It gets better; this technique is also called "fused intelligence" in my former Air Force world. You trust what you are reporting when independently corroborated by multiple sources.

    If this all sounds blatantly obvious, it's because it is. Unfortunately, when you're stuck in a world where the process says "pull the plug and image the hard drive," it's hard to introduce some sanity. What's actually forcing these dinosaurs to change is their inability to handle 1 TB hard drives and multi-TB SANs.

    As you can tell I was pretty excited by the talks that night. Thanks again to Aaron and George for presenting.

    Recycling Security Technology

    Remember when IDS was supposed to be dead? I thought it was funny to see the very same inspection technologies that concentrated on inbound traffic suddenly turned around to watch outbound traffic. Never mind that the so-called "IPS" that rendered the "IDS" dead used the same technology. Now, thanks to VMware VMsafe APIs, vendors looking for something else to do with their packet inspection code can watch traffic between VMs, as reported by the hypervisor.

    We've seen Solera, Altor, and others jump into this space. It's popular and helpful to wonder if having the ability to monitor traffic on the ESX server is a feature or a product. I consider it a feature. The very same code that can be found in products from Sourcefire and other established players is likely to be much more robust than something a startup is going to assemble, assuming the startup isn't using Snort anyway! Once the traditional plug-into-the-wire vendors hear of this requirement from their customers, they will acquire or, more likely, squash any "pure virtualization" bit players. Traffic collected via VMsafe will just be another packet feed.

    Although I am a big fan of visibility, it seems a little disheartening to think we must resort to adding a packet inspection product to VMware in order to determine if the VMs are behaving -- never mind the fact that the hypervisor itself could be compromised and omitting traffic sent to the VM-based network inspection product. Sigh.

    Intel Premier IT Magazine on "War Gaming"

    Intel Premier IT Magazine published an article titled Wargaming: How Intel Creates a Company-Wide Security Force. (Access granted after registration with whatever you want to input.) What Intel calls "war gaming" sounds like three activities.

    For reference I differentiated between Threat and Attack Models last year.

    1. Threat Modeling: Identifying parties with the capabilities and intentions to exploit a vulnerability in an asset

    2. Attack Modeling: Identifying vectors by which any threat could exploit an asset; i.e., the identity of the threat is irrelevant -- the method matters here

    3. Adversary Imagination and Simulation: The former involves thinking about how an adversary would act like a threat and perform an attack. The latter is actually acting as the threat upon production assets. The article mentions doing the latter for computer concerns.


    I am not a big fan of adversary imagination as the end result of any activity. It's far too likely to rest on untested assumptions and you end up with defense or management by belief instead of by fact.

    I thought it was helpful to see that a big company like Intel works to integrate personnel from across its business into these exercises to stimulate security awareness and guide resistance, detection, and response.

    I found this excerpt interesting too:

    Finally, we make it clear when we invite people to a war game that they are not required to fix the vulnerabilities they discover. Within Intel's corporate culture, we take pride in identifying a problem and then owning the solution, but telling participants "you find it, you fix it" could discourage them from speaking up.

    Agreed.

    Review of Nmap in the Enterprise Posted

    Amazon.com just published my 3 star review of Nmap in the Enterprise by Angela Orebaugh and Becky Pinkard. From the review:

    Initially I hoped Nmap in the Enterprise (NITE) would live up to its title. I was excited to see "Automate Tasks with the Nmap Scripting Engine (NSE)" on the cover, in addition to the "Enterprise" focus. It turns out that beyond a few command line options of which I was not previously aware, and some good info on interpreting OS fingerprinting output in Ch 6, I didn't learn much by reading NITE. If you are new to Nmap or network scanning you will probably like NITE, but if you want a real enterprise focus or information on NSE you will be disappointed.

    Review of No Tech Hacking Posted

    Amazon.com just posted my 4 star review of No Tech Hacking by Johnny Long. From the review:

    No Tech Hacking (NTH) again demonstrates that the fewer the number of authors a Syngress book advertises, the better the book. With security star Johnny Long as the main author, the book adds a section in Ch 5 (Social Engineering) by Techno Security organizer Jack Wiles. The "special contributors" no doubt worked with Johnny to answer his questions, but it's clear that relying on a primary author resulted in a better-than-average Syngress title. (Harlan Carvey's Windows Forensic Analysis is another example of this phenomenon.)

    Review of Botnets Posted

    Amazon.com just posted my 2 star review of Botnets by Craig Schiller, et al. From the review:

    I am wary of Syngress books that consist of a collection of contributions. The quality of the books usually decreases as the number of authors increases. Botnets is no exception, unfortunately. You will probably enjoy chapters by Gadi Evron (Ch 3, Alternative Botnet C&Cs) and Carsten Willems (Ch 10, Using Sandbox Tools for Botnets). I was initially interested in the book because of chapters on Ourmon (Chs 6-9, by Jim Binkley, tool developer). That leaves half the book not worth reading.

    Review of Building a Server with FreeBSD 7

    If you look at the reviews of Building a Server with FreeBSD 7 by Bryan Hong, you'll see my review of the self-published Building an Internet Server With FreeBSD 6, which I gave 4 out of 5 stars. No Starch took the first edition, worked with the author, and published this new book using FreeBSD 7.0 as the base OS. If I could post a new review at Amazon.com, I would also give this book 4 out of 5 stars.

    I think BASWF7 is an excellent companion to Absolute FreeBSD, 2nd Ed by Michael Lucas. Much of my original review pertains to this new edition. The majority of the book explores how to get a variety of popular open source applications running on FreeBSD 7.0 using the ports tree. For each application, the following sections usually appear: summary, resources, required, optional, preparation, install, configure, testing, utilities, config files, log files, and notes. I am really confident I could sit down with the appropriate chapter and get a previously unfamiliar program like SquirrelMail running fairly quickly.

    Does this focus make the book a "FreeBSD book?" To the extent you use FreeBSD to provide services to others, and you want to follow the "FreeBSD way" using the ports tree, I say "yes". If you want really specific FreeBSD OS information, use Michael Lucas' book.

    I have a few comments that perhaps Bryan might answer here. First, why replace the OpenSSH and OpenSSL included in the base OS with those from the ports tree? (I'm not saying that's "wrong;" I'd just like to know his thoughts.) Second, I recommend including some words on using Portsnap to update the ports tree, and pkg_add to install binary packages instead of compiling source through the ports tree. Third, consider replacing net/ntp with net/openntpd. Fourth, the beginning of the book seems to imply that only i386 and amd64 distributions are available. Finally, I don't like the labels attached to the TCP/IP layers in Appendix D.

    Overall, I think those with beginning to intermediate FreeBSD system administration skills will really like this book. I would like to see Bryan accept suggestions for new applications to be included in the next edition for FreeBSD 8.0.

    Friday, June 6, 2008

    FX on Cisco IOS Rootkits

    I saw FX speak on Cisco IOS forensics at Black Hat DC 2008. I just got a chance to read his excellent post On IOS Rootkits. I was impressed to read FX's pointer to his company's Cisco Incident Response - CIR Online Service, with a specific report run on Sebastian 'topo' Muniz's IOS rootkit. Also, consider this from FX's post:

    Now that some people actually talk about IOS rootkits, interesting tidbits show up. One person asked me if we have tested CIR with the Russian IOS rootkit that was for sale a few years ago. No, we didn't, but good to know that these exist.

    Russian IOS rootkit... interesting. How much proof do we need to Monitor our routers?

    Thursday, June 5, 2008

    A Clueful Interview

    If you have ten minutes and want to be genuinely more informed when it's over, read Federico Biancuzzi's excellent interview of Nate Lawson titled Racing Against Reversers. I found this comment interesting:

    Q: It sounds as security through obscurity has some admirers among the DRM designers. What is the role of "secrets" in a DRM system?

    A: In software protection, obscurity is everything. You're ultimately depending on the attacker to not be able to just "see" the key or how the protection works. That sounds weak and against normal security principles but actually works quite well in practice, if you're good at it.


    I think that insight echoes what I said in Fight to Your Strengths last year:

    Apparently several people with a lot of free time have been vigorously arguing that "security through obscurity" is bad in all its forms, period. I don't think any rational security professional would argue that relying only upon security through obscurity is a sound security policy. However, integrating security through obscurity with other measures can help force an intruder to fight your fight.

    Don't get hung up on the obscurity issue if you disagree, however. The interview is awesome.

    Wednesday, June 4, 2008

    NSM vs Encrypted Traffic Revisited

    My last post What Would Galileo Think was originally the first part of this post, but I decided to let it stand on its own. This post is now a follow-on to NSM vs Encrypted Traffic, Plus Virtualization and Snort Report 16 Posted. I received several questions, which I thought deserved a new post. I'm going to answer the first with Galileo in mind.

    LonerVamp asked:

    So can I infer that you would prefer to MITM encrypted channels where you can, so to inspect that traffic on the wire? :)

    On a related note, Ivan Ristic asked:

    Richard, how come you are not mentioning passive SSL decryption as an option?

    I thought I had answered those questions when I said:

    If you loosen your trust boundary, maybe you monitor at the perimeter. If you permit encrypted traffic out of the perimeter, you need to man-in-the-middle the traffic with a SSL accelerator. If you trust the endpoints outside the perimeter, you don't need to.

    Let's reconsider that statement with Galileo in mind. Originally I proposed that those who terminate their trust boundary at their perimeter must find a way to penetrate encrypted traffic traversing that trust boundary. Another way to approach that problem is to perform measurements to try to determine what cost and benefit can be gained by terminating SSL at the perimeter, inspecting clear text, and re-encrypting traffic as it leaves the enterprise. Does that process actually result in identifying and/or limiting intrusions? If yes, use the results to justify the action. If not, abandon the plan or decide to conduct a second round of measurements if conditions are deemed to change at a later date. Don't just argue that "I need to see through SSL" because it's a philosophical standpoint.

    Marcin asked:

    So what do you say and do when your NSM Sensor/SSL Load Balancer/SSL Proxy gets compromised, exposing your most sensitive data (by nature, because it is being encrypted)?

    Am I supposed to rely on my IDS' and my own ability to detect 0day attacks against hardened hosts?


    To answer the first question, I would say check out my TaoSecurity Enterprise Trust Pyramid. The same factors which make data from sensors more reliable also make those sensors more resilient. However, no sensor is immune from compromise, and I recommend taking steps to monitor and contain the sensor itself in a manner appropriate for the level of traffic it inspects. Keep in mind a sensor is not an SSL proxy. The SSL proxy might only log URLs; it might not provide clear text to a separate sensor.

    Answering the second question could take a whole book. Identifying "0day attacks," what I call "first order detection," is increasingly difficult. Performing second order detection, meaning identifying reinforcement, consolidation, and pillage is often more plausible, especially using extrusion detection methods. Performing third order detection, meaning discovering indications of your hosts in someone's botnet or similar unauthorized control, is another technique. Finally, fourth order detection, or seeing your intellectual property in places where it should not be, is a means to discover intrusions.

    Vivek Rajan asked:

    Daemonlogger is cool, but what do you think about more sophisticated approaches like the Time Machine ? ( http://www.net.t-labs.tu-berlin.de/research/tm/ )

    Is there some value in retaining full content of long running (possibly encrypted) sessions?


    I don't consider Time Machine "more sophisticated." It's just a question of trade-offs. Where possible I prefer to log everything, because you can never really be sure before an incident just what might be important later. Regarding encryption, what if you disable collecting traffic on port 443 TCP outbound because it's supposed to be SSL, when you later learn that an intruder is using some weak obfuscation method or no encryption at all?
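
    For example, the sort of capture filter sketched below (interface and file names are made up) is exactly the kind of "optimization" I would avoid, because it silently discards any plaintext or weakly obfuscated traffic an intruder pushes over port 443.

    # Full content collection that deliberately skips TCP port 443 --
    # convenient until the "SSL" turns out not to be SSL at all.
    tcpdump -n -i em0 -s 0 -w /nsm/full_content.pcap 'not tcp port 443'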

    To summarize, implement whatever system you select based on the demonstrable improvement it brings to your security posture, not because you think it is helpful. I am particularly critical when it comes to defensive measures. For measures that improve visibility, my objective is to gather additional data with a benefit that outweighs the costs of collection.

    What Would Galileo Think

    I love history. Studying the past constantly reminds me that we are not any smarter than our predecessors, although we have more knowledge available. The challenge of history is to apply its lessons to modern problems in time to positively impact those problems.

    I offer this post in response to some of the reporting from the Gartner Security Summit 2008, where pearls of wisdom like the following appear:

    What if your network could proactively adapt to threats and the needs of the business? That’s the vision of the adaptive security infrastructure unveiled by Gartner here today.

    Neil MacDonald, vice president and fellow at Gartner, says this is the security model necessary to accommodate the emergence of multiple perimeters and moving parts on the network, and increasingly advanced threats targeting enterprises. “We can’t control everything [in the network] anymore,” MacDonald says. That’s why a policy-based security model that is contextual makes sense, he says.

    “The next generation data center is adaptive – it will do workloads on the fly,” he says. “It will be service-oriented, virtualized, model-driven and contextual. So security has to be, too.”


    Translation? Buzzword, buzzword, how about another buzzword? People are paying to attend this conference and hear this sort of "advice"?

    I humbly offer the following free of charge in the hopes it makes a slight impact on your approach to security. I am confident these ideas are not new to those who study history (like my Three Wise Men and those who follow their lead).

    Let's go back in time. It's the early 17th century. For literally hundreds of years, European "expertise" with the physical world has been judged by one's ability to recite and rehash Aristotelian views of the universe. In other words, one was considered an "expert" not because his (or her) views could be validated by outcomes and real life results, but because he or she could most accurately adhere to statements considered to be authoritative by a philosopher who lived in the fourth century BC. Disagreements were won by the party who best defended the Aristotelian worldview, regardless of its actual relation to ground truth.

    Enter Galileo, his telescope, and his invention of science. Suddenly a man is defending the heliocentric model proposed by Copernicus using measurements and data, not eloquent speech and debating tactics. If you disagree with Galileo (and people tried), you have to debate his experimental results, not his rhetoric. It doesn't matter what you think; it matters what you can demonstrate. Amazing. The world has been different ever since, and for the better.

    Let's return to the early 21st century. For the last several years, "expertise" with the cyber world has been judged by one's ability to recite and rehash audit and regulatory views of cyber security. In other words, one was considered an "expert" not because his (or her) views could be validated by outcomes and real life results, but because he or she could most accurately adhere to rules considered to be authoritative by regulators and others creating "standards." Disagreements were won by the party who best defended the regulatory worldview, regardless of its actual relation to ground truth.

    Does this sound familiar? How many of you have attended meetings where participants debated password complexity policies for at least one hour? How many of you have wondered whether you need to deploy an IPS -- in 2008? How many of you buy new security products without having any idea if deploying said product will make any difference at all?

    What would Galileo think?

    Perhaps he might do the following. Galileo would first take measurements to identify the nature of the "cybersecurity universe," such as it is. (One method is described in my post How Many Burning Homes.) Galileo would then propose a statement that changes some condition, like "remove local Administrator access on Windows systems", and devise an experiment to identify the effect of such a change. One could select a control group within the population and contrast its state with a group who loses local Administrator control, assessing the security posture of each group after a period (like a month or two). If the change resulted in measurable security improvement, like fewer compromised systems, the result is used to justify further work in that direction. If not, abandon that argument.

    This approach sounds absurdly simple, yet we do not do it. We constantly implement new defensive security measures and have little or no idea if the benefit, if any (who is measuring anyway?), outweighs the cost (never mind just money -- what about inconvenience, etc.). Instead of saying "I can show that removing local Administrator access will drive our compromised host ratio down 10%," we say "Regulation X says we need anti-virus on all servers" or "Guideline Y says we should have password complexity policy Z."

    Please, let's consider changing the way we make security decisions. We have an excellent model to follow, even if it is four hundred years old.

    Phone Book Full Disclosure

    The following story is all over the local media. From the Hagerstown (MD) Herald-Mail, which broke the story:

    A mistake by Verizon that led to the printing of about 12,500 unlisted or nonpublished telephone numbers and corresponding addresses in a telephone book has prompted fear and anger in some of those affected...

    In March, Verizon inadvertently sold the numbers to Ogden Directory Inc. for publication in the phone book...

    The phone books were in the process of being distributed by the post office, but Ogden officials last week asked that distribution be halted after the problem was discovered.

    [T]he publication of the phone numbers can be rectified by Verizon providing new numbers, but the damage caused by publishing addresses is irreversible.


    If you need examples why this is a big deal, please read the article.

    When I heard this story yesterday, I thought: "I would not have known about this if the local media did not report it." I wondered if it would have been more appropriate for Verizon and Ogden to have mailed each of the 12,500 people affected. By openly broadcasting this story, the very sorts of undesirable people who would want access to the unlisted and nonpublished numbers now have a much higher chance of learning of this disclosure.

    Now I think a quiet disclosure strategy would not have worked. More than one person receiving such a letter would have publicly complained to the authorities or press, and we would be in the current situation. That's probably what happened in this case, minus the letters of notification.

    There doesn't appear to be a good answer to this problem. Because those affected by the disclosure have so few options (change phone numbers and relocate), and the latter option is so burdensome, I doubt the benefits of the disclosure (warning those affected) outweigh the costs (greater awareness on the part of evil-doers).

    By the way, I'm reporting my thoughts here because all of the notification damage has already been done.

    Tuesday, June 3, 2008

    Old School Layer 2 Hacking

    When I designed my TCP/IP Weapons School class my intent was to teach TCP/IP at an advanced level using traffic generated by security tools. I thought the standard approach of showing all normal traffic was boring. Sometimes students (or those on the sidelines) wonder why I should bother teaching a technique like ARP spoofing at all, when layer 7 attacks are what the cool kids are doing these days. One answer is below.

    Ref: Sunbelt Blog

    How could this happen? It turns out it wasn't the fault of the Metasploit Project. Rather, a server in the same VLAN as the Metasploit Project was compromised and used to ARP spoof the gateway of the Metasploit Project Web site. See Full Disclosure: Re: Metasploit - Hack ? and this for details.

    HD Moore responded to the incident by adding the proper MAC address for his Web hoster's gateway as a static entry to his ARP cache.
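
    On a Unix-like host that static entry can be added with arp; the gateway address and MAC below are hypothetical placeholders, not the real values from the incident.

    # Pin the gateway's MAC address so a spoofed ARP reply can no longer
    # overwrite the local cache entry (example values only).
    arp -s 10.1.1.1 00:11:22:33:44:55

    The entry has to be maintained by hand if the hosting provider ever changes the gateway hardware, which is part of why this is a workaround rather than a fix.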

    This is a great example of a cloud security problem. You host your content at a third party, and you rely upon that third party -- and potentially other customers of that third party -- to implement adequate security. In this case, at least one other customer was vulnerable, and the Web hosting company didn't take adequate measures to protect its switching infrastructure. Of course the intruder who ran the ARP spoofing attack is really at fault, but this event demonstrates the trade-off associated with relying upon third parties.

    Incidentally, this marks the third event of "modern history" involving ARP spoofing I've documented here. Earlier incidents included Freenode admin credentials and injecting malicious IFRAMEs at another Web hosting provider.

    If you're interested in my Black Hat class, we increased the seat count to 80 per class (instead of 60). Registration is still open.

    Sunday, June 1, 2008

    Snort Report 16 Posted

    My 16th Snort Report titled When Snort Is Not Enough has been posted. From the article:

    [I]t's important to understand how a network intrusion detection system (IDS) like Snort and techniques based upon its use fit into a holistic detection and response operation. Placing Snort within an entire security program is too broad a topic to cover in this Snort Report. Rather, let's consider when a tool like Snort is independently helpful and when you should support Snort with complementary tools and techniques.