Tuesday, June 24, 2008

Pascal Meunier Is Right About Virtualization

I love Pascal Meunier's post Virtualization Is Successful Because Operating Systems Are Weak:

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems...

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat, virtualization...

I’m now not convinced that a virtualization solution + guest OS is significantly more secure or functional than just one well-designed OS could be, in theory...

I believe that all the special things that a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done as well by a trustworthy, properly designed OS.


Please read the whole post to see all of Pascal's points. I had similar thoughts on my mind when I wrote the following in my post NSM vs Encrypted Traffic, Plus Virtualization:

[R]eally nothing about virtualization is new. Once upon a time computers could only run one program at a time for one user. Then programmers added the ability to run multiple programs at one time, fooling each application into thinking that it had individual use of the computer. Soon we had the ability to log multiple users into one computer, fooling each user into thinking he or she had individual use. Now with virtualization, we're convincing applications or even entire operating systems that they have the attention of the computer...

Thanks to those who noted the author was Pascal Meunier and not Spaf!

Saturday, June 14, 2008

Verizon Study Continues to Demolish Myths

I just read Patching Conundrum by Verizon's Russ Cooper. Wow, keep going guys. As before, I recommend reading the whole post. Below are my favorite excerpts:

Our data shows that in only 18% of cases in the hacking category (see Figure 11) did the attack have anything to do with a “patchable” vulnerability. Further analysis in the study (Figure 12) showed that 90% of those attacks would have been prevented had patches been applied that were six months in age or older! Significantly, patching more frequently than monthly would have mitigated no additional cases.

Given average current patching strategies, it would appear that strategies to patch faster are perhaps less important than strategies to apply patches more comprehensively...

To summarize the findings in our “Control Effectiveness Study”, companies who did a great job of patching (or AV updates) did not have statistically significant less hacking or malicious code experience than companies who said they did an average job of patching or AV updates. And companies who did other simpler countermeasures, like lightweight standard configurations, had very strong correlations with reduced risk. The Verizon Business 2008 Data Breach Investigations Report supports very similar conclusions.
(emphasis added)

It gets even better.

In summary, the Sasser worm study analysis found that companies who had succeeded at “patching fast” were significantly worse off than “average” companies in the same study. This seemed to be because, as a group, these companies tended toward less use of broad, generic countermeasures. They also thought they had patched everyone, when in reality they hadn’t. You might say they spent more of their energy and money on patching and less on routing, ACLs, standard configurations, user response training, and similar “broad and fundamental” controls...

A control like patching, which has very simple and predictable behavior when used on individual computers, (i.e., home computers) seems to have more complex control effectiveness behavior when used in a community of computers (as in our enterprises).
(emphasis added)

So patching quickly doesn't seem to matter, and those who rely on quick patching end up worse off than those with a broader security program. I can believe this. How often do you hear "We're patched and we have anti-virus -- we're good"?

Also, I can't emphasize enough how pleased I was to see the report reinforce my thoughts that Of Course Insiders Cause Fewer Security Incidents.

Friday, June 13, 2008

Logging Web Traffic with Httpry

I don't need to tell anyone that a lot of interesting command-and-control traffic is sailing through our Web proxies right now. I encourage decent logging for anyone using Web proxies. Below are three example entries from a Squid access.log. This is "squid" format with entries for user-agent and referer tacked to the end.

Incidentally, here is a diff of my Squid configuration that shows how I set it up.

r200a# diff /usr/local/etc/squid/squid.conf /usr/local/etc/squid/squid.conf.orig
632,633c632,633
< acl our_networks src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
< http_access allow our_networks
---
> #acl our_networks src 192.168.1.0/24 192.168.2.0/24
> #http_access allow our_networks
936c936
< http_port 172.16.2.1:3128
---
> http_port 3128
1990,1992d1989
< logformat squid-extended %ts.%03tu %6tr %>a %Ss/%03Hs %<st
%rm %ru %un %Sh/%<A %mt "%{Referer}>h" "%{User-Agent}>h"
<
<
2022c2019
< access_log /usr/local/squid/logs/access.log squid-extended
---
> access_log /usr/local/squid/logs/access.log squid
2216c2213
< strip_query_terms off
---
> # strip_query_terms on
3056d3052
< visible_hostname r200a.taosecurity.com

If you worry I'm exposing this to the world, don't worry too much. I find the value of having this information in a place I can find it outweighs the possibility that someone will use this data to exploit me. There are much easier ways to do that, I think.

The first record shows a Google query for the term "dia", where the referer was a query for "fbi". The second record is a Firefox prefetch triggered by the first record. The third record is a request for a .gif.

1213383786.614 255 192.168.2.103 TCP_MISS/200 9263
GET http://www.google.com/search?hl=en&client=firefox-a&rls=
com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
DIRECT/64.233.169.103 text/html "http://www.google.com/search
?q=fbi&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a"
"Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601
Firefox/2.0.0.14 (Ubuntu-edgy)"

1213383786.704 76 192.168.2.103 TCP_MISS/200 2775
GET http://www.google.com/pfetch/dchart?s=DIA -
DIRECT/64.233.169.147 image/gif
"http://www.google.com/search?hl=en&client=firefox-a&rls=com.ubuntu%3A
en-US%3Aofficial&hs=Hqt&q=dia&btnG=Search" "Mozilla/5.0 (X11; U; Linux
i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14 (Ubuntu-edgy)"

1213383786.717 81 192.168.2.103 TCP_MISS/200 1146
GET http://www.google.com/images/blogsearch-onebox.gif -
DIRECT/64.233.169.99 image/gif "http://www.google.com/search?hl=en
&client=firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search"
"Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601
Firefox/2.0.0.14 (Ubuntu-edgy)"
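
If you want to slice entries like these programmatically, a minimal Python sketch follows. The field labels are my own names for the squid-extended logformat defined in the diff above, and the sample line is a shortened version of the first record:

```python
import shlex

# My own labels for the squid-extended logformat fields, in order:
# %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
# "%{Referer}>h" "%{User-Agent}>h"
FIELDS = ["timestamp", "elapsed_ms", "client", "result_status", "bytes",
          "method", "url", "user", "hierarchy_peer", "mime_type",
          "referer", "user_agent"]

def parse_entry(line):
    """Parse one squid-extended access.log entry into a dict.

    shlex.split keeps the quoted Referer and User-Agent strings intact.
    """
    return dict(zip(FIELDS, shlex.split(line)))

entry = parse_entry(
    '1213383786.614 255 192.168.2.103 TCP_MISS/200 9263 GET '
    'http://www.google.com/search?q=dia - DIRECT/64.233.169.103 text/html '
    '"http://www.google.com/search?q=fbi" "Mozilla/5.0 (X11; U; Linux i686)"')
print(entry["client"], "->", entry["url"], "referer:", entry["referer"])
```

I use shlex rather than a plain split because the Referer and User-Agent fields are quoted and contain embedded spaces.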

What if you're a security person who can't access Web logs, but you have an NSM sensor in the vicinity? You might use Bro to log this activity, but last year I found something much simpler, by Jason Bittel: Httpry.

r200a# httpry -h
httpry version 0.1.3 -- HTTP logging and information retrieval tool
Copyright (c) 2005-2008 Jason Bittel
Usage: httpry [ -dhpq ] [ -i device ] [ -n count ] [ -o file ] [ -r file ]
[ -s format ] [ -u user ] [ 'expression' ]

-d run as daemon
-h print this help information
-i device listen on this interface
-n count set number of HTTP packets to parse
-o file write output to a file
-p disable promiscuous mode
-q suppress non-critical output
-r file read packets from input file
-s format specify output format string
-u user set process owner
expression specify a bpf-style capture filter

Additional information can be found at:
http://dumpsterventures.com/jason/httpry

In the following example I run Httpry against a trace of the traffic taken when I visited the site shown in the Squid logs earlier.

r200a# httpry -i bge0 -o /tmp/httprytest3.txt -q -u richard
-s timestamp,source-ip,x-forwarded-for,direction,dest-ip,method,host,
request-uri,user-agent,referer,status-code,http-version,reason-phrase
-r /tmp/test3.pcap
r200a# cat /tmp/httprytest3.txt

# httpry version 0.1.3
# Fields: timestamp,source-ip,x-forwarded-for,direction,dest-ip,method,host,
request-uri,user-agent,referer,status-code,http-version,reason-phrase

06/13/2008 15:03:06 68.48.240.186 - > 64.233.169.103
GET www.google.com /search?hl=en&client=firefox-a&rls=com.ubuntu
%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?q=fbi&ie=utf-8&
oe=utf-8&aq=t&rls=com.ubuntu:en-US:official&client=firefox-a -
HTTP/1.0 -

06/13/2008 15:03:06 64.233.169.103 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK

06/13/2008 15:03:06 68.48.240.186 192.168.2.103 > 64.233.169.147
GET www.google.com /pfetch/dchart?s=DIA Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?hl=en&client=
firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
HTTP/1.0 -

06/13/2008 15:03:06 68.48.240.186 192.168.2.103 > 64.233.169.99
GET www.google.com /images/blogsearch-onebox.gif Mozilla/5.0
(X11; U; Linux i686; en-US; rv:1.8.1.14) Gecko/20060601 Firefox/2.0.0.14
(Ubuntu-edgy) http://www.google.com/search?hl=en&client=
firefox-a&rls=com.ubuntu%3Aen-US%3Aofficial&hs=Hqt&q=dia&btnG=Search -
HTTP/1.0 -

06/13/2008 15:03:06 64.233.169.147 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK
06/13/2008 15:03:06 64.233.169.99 - < 68.48.240.186
- - - - - 200 HTTP/1.0 OK

As you can see, the format here is generally request followed by reply, although the last four records appear as request, request, reply, reply.
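
To work with this output in a script, something like the following Python sketch may serve. One assumption to verify against your own build: I treat httpry's fields as tab-delimited, with "-" marking an empty field.

```python
def parse_httpry(line, fields):
    """Split one httpry record into a dict keyed by the -s field names.

    Assumes tab-delimited output with "-" for empty fields; adjust the
    delimiter if your httpry build writes something different.
    """
    values = [v if v != "-" else None
              for v in line.rstrip("\n").split("\t")]
    return dict(zip(fields, values))

# The same field list passed to httpry -s in the example above.
fields = ["timestamp", "source-ip", "x-forwarded-for", "direction",
          "dest-ip", "method", "host", "request-uri", "user-agent",
          "referer", "status-code", "http-version", "reason-phrase"]

# A reply record modeled on the second entry in the output above.
record = parse_httpry(
    "06/13/2008 15:03:06\t64.233.169.103\t-\t<\t68.48.240.186"
    "\t-\t-\t-\t-\t-\t200\tHTTP/1.0\tOK", fields)
print(record["direction"], record["status-code"], record["reason-phrase"])
```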

Although I first tried Httpry straight from the source code, in this case I tested an upcoming FreeBSD port created by my friend WXS. If you give Httpry a try, let me know what you think and how you like to invoke it on the command line. I plan to daemonize it in production and run it against a live interface, not traces.

Thursday, June 12, 2008

Sourcefire Best of Open Source Security Conference

Sourcefire is sponsoring a Best of Open Source Security (BOSS) conference 8-10 February in Las Vegas, NV, with the main activities happening on 9-10 February. Sourcefire is holding the event simultaneously with its annual users conference. I am on the committee evaluating speakers, so I look forward to seeing what people want to present.

Wednesday, June 11, 2008

Verizon Business Report Speaks Volumes

This morning I attended a call discussing the new Verizon Business 2008 Data Breach Investigations Report. I'd like to quote the linked blog post and a previous article titled I Was an Anti-MSS Zealot, both of which I recommend reading in their entirety. First I cite some background on the study.

Verizon Business began an initiative in 2007 to identify a comprehensive set of metrics to record during each data compromise investigation. As a result of this effort, we pursued a post-mortem examination of over 500 security breach and data compromise engagements between 2004 and 2007 which provided us with the vast amount of factual evidence used to compile this study. This data covers 230 million compromised records. Amongst these are roughly one-quarter of all publicly disclosed data breaches in both 2006 and 2007, including three of the five largest data breaches ever reported.

The Verizon Business 2008 Data Breach Investigations Report contains first-hand information on actual security breaches...
(emphasis added)

That's awesome -- a study based on what Verizon's Incident Response Team found during their work. Next let's read some thoughts from one of Verizon's security team.

I used to think that Intrusion Detection Systems (IDS) and Managed Security Services (MSS) were a waste of time. After all, most attacks that I had worked on began, and were over, within seconds, and were typically totally automated...

But the Verizon Business 2008 Data Breach Investigations Report tells a very different story. The successful attacks were almost universally multi-faceted and the various timeframes are truly astounding. The series of pie charts in Figure 21 are the most interesting data.



The first chart shows that more than half of attacks take days, weeks, or months from the point of entry of the attack (the first successful attack step) to the point of data compromise (not simply system compromise, but the point at which the criminal has actually done material harm). 90% take more than hours and over 50% take days or longer. Clearly if an appropriate log was instrumented and being regularly reviewed or an IDS alarm occurred, you would notice and could stop the attack in the vast majority of our cases.

The second pie chart in the series reveals that 63% of companies do not discover the compromise for months and that almost 80% of cases do not learn of attacks for weeks after they occur. In 95% of cases it took the organization longer than days after the compromise to learn of the attack. There are hundreds of cases in which the inside team either didn’t look at the logs (in 82% of the breaches in the study, the evidence was manifested in their logs), or for some other reason (were frustrated, tired, overwhelmed by the logs, found them to be not-interesting, felt they were too noisy after a few days or weeks) simply quit looking...
(emphasis added)

That is amazing. Consider the following regarding patching.

[O]nly 22% of our cases involved exploitation of a vulnerability, of which, more than 80% were known, and of those all had a patch available at the time of the attack. This is not to say that patching is not effective, or necessary, but we do suggest that the emphasis on it is misplaced and inappropriately exaggerated by most organizations. For the sake of clarity, 78% of the breaches we handled would have still occurred if systems had been 100% patched the instant a patch was available. Clearly patching isn’t the solution to the majority of breaches we investigated.
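
The 78% figure is just the complement of the 22% exploitation share, as a one-line check of the quoted numbers shows:

```python
exploit_share = 0.22            # breaches involving a "patchable" vulnerability
unaffected = 1 - exploit_share  # breaches perfect patching could not have stopped
print(f"{unaffected:.0%}")      # prints 78%
```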

How about the source of attacks?

While criminals more often came from external sources, and insider attacks result in the greatest losses, criminals at, or via partner connections actually represent the greatest risk. This is due to our risk equation: Threat X Impact = Risk

  • External criminals pose the greatest threat (73%), but achieve the least impact (30,000 compromised records), resulting in a Pseudo Risk Score of 21,900

  • Insiders pose the least threat (18%), and achieve the greatest impact (375,000 compromised records), resulting in a Pseudo Risk Score of 67,500

  • Partners are middle in both (39% and 187,500), resulting in a Pseudo Risk Score of 73,125
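
Those Pseudo Risk Scores are easy to reproduce; here is a quick Python sketch using only the percentages and record counts quoted in the bullets above:

```python
# Threat (share of cases) x Impact (compromised records) = Pseudo Risk
# Score, per the report's simple risk equation.
sources = {
    "external": (0.73, 30_000),
    "insider":  (0.18, 375_000),
    "partner":  (0.39, 187_500),
}

for name, (threat, impact) in sources.items():
    print(f"{name}: pseudo risk score {round(threat * impact):,}")
# external: pseudo risk score 21,900
# insider: pseudo risk score 67,500
# partner: pseudo risk score 73,125
```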


While these are rudimentary numbers, the relative risk scores are reasonable and discernable. It is also worth noting that the Partner numbers rose 5-fold over the duration of the study, making partner crime the leading factor in breaches. This is likely due to the ever increasing number of partner connections businesses are establishing, while doing little to nothing to increase their ability to monitor or control their partner’s security posture. Perhaps as expected, insider breaches are the result of your IT Administrators 50% of the time.
(Note the original blog post doesn't say 39%, although the report and briefing does.)

I think that's consistent with what I've said: external attacks are the most prevalent, but insiders can cause the worst damage. (The authors note the definition of "insiders" can be fuzzy, with partners sometimes considered insiders.)

This chart is one of the saddest of all.



Unfortunately, it confirms my own experience and that of my colleagues.

I'll add a few more items:

  • Three quarters of all breaches are not discovered by the victim

  • Attacks are typically not terribly difficult and do not require advanced skills

  • 85% of attacks are opportunistic rather than targeted

  • 87% could have been prevented by reasonable measures any company should have been capable of implementing or performing



Sounds like my Caveman post from last year.

I am really glad Verizon published this report, and I look forward to the next edition in the fall.

House of Representatives v China

Thanks to one of my colleagues for pointing out Lawmaker says Chinese hacked Capitol computers:

By PETE YOST and LARA JAKES JORDAN – 3 hours ago

WASHINGTON (AP) — A congressman said Wednesday the FBI has found that four of his government computers have been hacked by sources working out of China.

Rep. Frank Wolf, a Virginia Republican, said that similar incidents — also originating from China — have taken place on computers of other members of the House and at least one House committee.

A spokesman for Wolf said the four computers in his office were being used by staff members working on human rights issues and that the hacking began in August 2006. Wolf is a longtime critic of the Chinese government's human rights record.

The congressman suggested the problem probably goes further. "If it's been done in the House, don't you think that they're doing the same thing in the Senate?" he asked.

For a record of others hacked by China, see my earlier posts.

Monday, June 9, 2008

Publicity: BSD Associate Examinations

I was asked to mention that the following BSD Associate examinations will take place at three events:

1. RMLL: Mont-de-Marsan, France, Jul 02, 2008

2. OpenKyiv 2008: Kiev, Ukraine, Aug 02, 2008

3. LinuxWorld: San Francisco, CA, Aug 06-07, 2008

From the BSDA description:

The BSDA certification is designed to be an entry-level certification on BSD Unix systems administration. Testing candidates with a general Unix background, but less than six months of work experience as a BSD systems administrator (or who wish to obtain employment as a BSD systems administrator) will benefit most from this certification. Human resource departments should consider the successful BSDA certified applicant to be knowledgeable in the daily maintenance of existing BSD systems under the direction and supervision of a more senior administrator.