Wednesday, July 30, 2008

Snort Report 17 Posted

My 17th Snort Report, titled How to find new features in Snort 2.8.2, has been posted. It was delayed in production for a while, but it still applies to Snort 2.8.2.1. From the article:

Service provider takeaway: Service providers will learn about new features in Snort 2.8.2 that they can deploy at customer sites.

We last looked at new Snort options with Snort 2.8.0, released in late September 2007. Since then, Snort 2.8.0.1, 2.8.0.2 and 2.8.1 have been published. At the time of writing, Snort 2.8.2-8-rc1 is the latest version, although release candidate versions should generally not be deployed in production environments. However, RC editions do provide a look at the newest elements of Snort available to the general public. This Snort Report provides an overview of some of the new features in the latest editions of Snort while explaining how to identify these new features.


I'm now working on a new Snort Report looking at the new SSP Beta 2.

Tuesday, July 29, 2008

Counterintelligence: Worse than Security?

As a former Air Force intelligence officer, I'm very interested in counterintelligence. I've written about counterintelligence and the cyber-threat before. I'm reading a book about counterintelligence failures, and the following occurred to me. It is seldom in the self-interest of any single individual, department, or agency to identify homegrown spies. In other words, hardly anyone in the CIA wants to find a Russian spy working at Langley. If you disagree, examine the history of any agency suffering similar breaches. It isn't pretty; the degree to which people deny reality and then seek to cover it up is incredible.

In some ways this makes sense. Nothing good comes from identifying a spy, other than (hopefully) a damage assessment of the spy's impact. Overall, the national security of the country can be incredibly damaged, never mind the lives lost or harmed by the spy's actions. However, in case after case, the appeal to higher national security interests is buried beneath individual and institutional self-interest.

Reading this book, it also occurred to me that security has exactly the same problem. Spies are worse in most respects, and they could be equated to insider threats. However, just as with spies, in security it is seldom in the self-interest of any single individual, department, or agency to identify compromises. Despite the fact that an intruder is the perpetrator, the victim is often blamed for security breaches. The term "security failure" implies that something bad has happened, so it must be the fault of the IT and security groups. (Imagine if a mugging were called a "personal security failure.")

Because of this reality, it seems that the only way to counter these self-interests is to task a central group, organizationally detached from the individual agencies, with identifying security breaches. In CI, this should be the job of the National Counterintelligence Executive (NCIX), but the Office of the Director of National Intelligence appears to have neutered the NCIX. In digital security, a headquarters-level group should independently assess the security of its constituents. These central groups must have the support of top management, or they will be ineffective.

Update: Fixed the "seldom" problem!

Security Operations: Do You CAER?

Security operations can be reduced to four main tasks. A mature security operation performs all four tasks proactively.

  1. Collection is the process of acquiring the evidence necessary to identify activity of interest.

  2. Analysis is the process of investigating evidence to identify suspicious and malicious activity that could qualify as incidents.

  3. Escalation is the process of reporting incidents to responsible parties.

  4. Resolution is the process of handling incidents to recover to a desired state.


The goal of every mature security operation is to reduce the mean time to resolution, i.e., to accomplish all four tasks as quickly and efficiently as possible.
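
To make the MTTR idea concrete, here is a minimal sketch (mine, not from the original post) of tracking the four CAER timestamps per incident and computing mean time to resolution; the incident records and field names are hypothetical.

# Hypothetical sketch: record when each CAER phase completed for an
# incident, then compute mean time to resolution (MTTR) in hours.
# Field names and sample data are illustrative only.
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Incident:
    collected: datetime   # evidence acquired
    analyzed: datetime    # activity confirmed as an incident
    escalated: datetime   # responsible party notified
    resolved: datetime    # recovered to the desired state

def mean_time_to_resolution(incidents):
    """Average hours from first evidence collection to resolution."""
    return mean((i.resolved - i.collected).total_seconds() / 3600 for i in incidents)

incidents = [
    Incident(datetime(2008, 7, 1, 9), datetime(2008, 7, 1, 11),
             datetime(2008, 7, 1, 13), datetime(2008, 7, 2, 9)),
    Incident(datetime(2008, 7, 10, 8), datetime(2008, 7, 10, 9),
             datetime(2008, 7, 10, 10), datetime(2008, 7, 10, 18)),
]
print(f"MTTR: {mean_time_to_resolution(incidents):.1f} hours")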

As has been noted elsewhere (e.g., Anton Chuvakin's Blog), some organizations which aren't even performing collection yet view achieving that first step as their end goal. ("Whew, we got the logs. Check!") Collecting evidence is no easy task, to be sure. Increasingly the logs generated by security devices are less relevant to real investigations. Application-level data can sometimes be the only way to understand what is happening, yet programmers aren't really practicing security application instrumentation by building visibility in (yet).

Various regulatory frameworks are beginning to drive recalcitrant organizations further into security operations by requiring analysis and not just collection. Besides meeting legal requirements, it should be obvious that identifying security failures as early as possible reduces the ultimate cost of resolving those problems, just as purging bugs from software early in the development process is cheaper than developing patches for software in the field. Competent analysis is probably the most difficult aspect of security operations. Understanding applications, the environment, and attack models is increasingly difficult, and the human resources to perform this task well are seldom inexpensive or willing to relocate in large numbers.

Assuming one has the capability to do decent enough analysis to discover trouble, knowing whom to notify (escalation) becomes the next step. In a large organization this is no trivial task. Simply performing asset inventory, naming responsible parties, and establishing incident response procedures is a project unto itself. Worse, none of these details are static. Any system which depends upon administrators to manually enumerate their networks, data, systems, applications, and personal information will become stale within days. These processes should be automated, so a human incident handler can escalate without wasting time tracking down missing information.
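
As a rough sketch of the kind of automation I mean (my own toy example; the inventory data, fields, and contacts are invented), an incident handler's escalation lookup might be as simple as:

# Hypothetical sketch: automated asset-owner lookup for escalation, so a
# handler does not waste time chasing responsible parties by hand. The
# inventory here is a hard-coded dict; in practice it would be fed by an
# automatically maintained asset database.
ASSET_INVENTORY = {
    "10.1.2.3": {"system": "hr-web-01", "owner": "HR IT", "contact": "hr-it@example.com"},
    "10.1.5.9": {"system": "erp-db-02", "owner": "ERP Ops", "contact": "erp-ops@example.com"},
}

def escalation_target(ip_address):
    """Return the responsible party for a compromised host, if known."""
    record = ASSET_INVENTORY.get(ip_address)
    if record is None:
        return f"{ip_address}: no inventory record; escalate to the default IR queue"
    return f"{ip_address}: notify {record['owner']} ({record['contact']}) about {record['system']}"

print(escalation_target("10.1.5.9"))
print(escalation_target("192.168.99.99"))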

Finally we come to resolution. Several problems arise here. First, the "responsible" party may deny the incident, despite evidence to the contrary. Although providing evidence may help, in some cases the "responsible" party may ignore the incident handler while quietly recovering from the event. Second, the "responsible" party may ignore the incident. The person may simply not care at all, or not care enough to direct the resources needed to resolve the incident. Third, the responsible party may want to resolve the incident, but may not be politically or technically capable of doing so. All three cases justify giving incident handlers the authority and knowledge to guide incident resolution as needed, to include deploying an augmentation team in serious cases.

Note that an organization may be forced to do escalation and resolution even when it does no collection or analysis. External parties, like law enforcement, the military, customers, regulators, and peers, are frequently informing organizations that they have been compromised. This is a very poor situation, because a victim not doing independent collection and analysis has few options when the cops come knocking. Usually administrators must scramble to salvage whatever data might exist, wasting time as log sources (which may not even exist) are located. Resources to make sense of the data are lacking, so the victim is helpless. Management unwilling to support a security operation is going to be flummoxed when confronted by a serious incident. More time will be wasted. The only winner is the intruder.

Avoiding this situation requires us to fully CAER. It's cheaper and faster, and at some point I believe it will be demanded by the government and the market anyway.

Monday, July 28, 2008

Notes for Black Hat Students

The following is directed at students of my TCP/IP Weapons School (TWS) at Black Hat USA 2008 on 2-3 and 4-5 August 2008, at Caesars Palace, Las Vegas, NV. Please disregard otherwise.

TWS is an advanced network traffic analysis class. We expect students to have some experience looking at network traffic using tools like Wireshark. We also expect students to have some experience working in Unix-like operating systems.

We want you to get the most value from TWS. Students may participate in three ways.

  1. Students may simply observe while the instructor explains the network traffic and attacks which generated the traces. Students do not need anything to enjoy this aspect of the class.

  2. Students are encouraged to review traces as the instructor explains the network traffic and attacks. Students will need a laptop running Wireshark and a DVD drive to enjoy this aspect of the class.

  3. Students are encouraged to perform hands-on exercises which demonstrate tools and techniques to create interesting network traffic. Students will need a laptop with 10 GB free and a DVD drive.

    The laptop must have a VMware product installed. The instructor tested the VMs with VMware Server 1.0.6 on Ubuntu 8.04 and Windows XP SP2. The instructor expects the VMs to work on VMware Player (free), VMware Workstation (not free) and VMware Fusion (not free), although they were not tested. Students are strongly discouraged from relying on VMware Player, which only allows one VM to run at a time. Students will receive 3 VMs, and some labs require all 3 to be running simultaneously.

    At least one of the VMs is compressed using the 7z format. Windows users can use 7-zip and Unix-like users can use p7zip to extract the VM(s).


We hope you choose to participate by examining network traces using Wireshark and running the labs, so please bring the appropriate software and hardware to class. Extracting the VMs from the DVD may take an hour or more depending on hardware speeds, so hands-on labs will not start until late morning or early afternoon of the first day of class.

If you have any questions, please email taosecurity -at- gmail -dot- com.

Saturday, July 26, 2008

Review of The New School of Information Security Posted

Amazon.com just published my four star review of The New School of Information Security by Adam Shostack and Andrew Stewart. From the review:

If you don't "get" Allan Schiffman's 2004 phrase "amateurs study cryptography; professionals study economics," if you don't know who Prof. Ross Anderson is, and if you think anti-virus and a firewall are required simply because they are "best practices," you need to read The New School of Information Security (TNSOIS). If you already recognize why I highlight these issues, you will not find much beyond an explanation of these central tenets in TNSOIS.

Review of Nmap Network Scanning

Recently Fyodor sent me a pre-publication review copy of his new self-published book Nmap Network Scanning (NNS). I had heard of Fyodor's book when I wrote my review of Nmap in the Enterprise last month, but I wasn't consciously considering what could be in Fyodor's version compared to the Syngress title. Although the copy I read was labelled "Pre-Release Beta Version," I was very impressed by this book. In short, if you are looking for the book on Nmap, the search is over: NNS is a winner.

I've reviewed dedicated "tool" books before, including titles about Snort, Nessus, and Nagios. NNS dives into the internals of Nmap unlike any other title I've read. Without Nmap creator Fyodor as the author, I think any competitor would need to have thoroughly read the source code of the application to have a chance at duplicating the level of detail Fyodor includes in NNS. Instead of just describing how to use Nmap, Fyodor explains how Nmap works. Going even further, he describes the algorithms used to implement various tests, and why he chose those approaches. The "Idle Scan Implementation Algorithms" section in Ch 5 is a great example of this sort of material. I will probably just refer students of my TCP/IP Weapons School class to this part of NNS when we discuss the technique!

One of the best parts of NNS, mentioned but explained in no other text, is the Nmap Scripting Engine (NSE). Ch 9 is all about NSE, with a brief intro to Lua and excellent documentation of using and building upon NSE. Beyond this groundbreaking material, readers will find many examples of Nmap case studies from users. This and other sections help make NNS a practical book, showing how people use Nmap in their environments for a variety of purposes.

NNS is a five star book, and when it's posted at Amazon.com I'll upload this review there. You can learn more about the book at nmap.org/book, and see it in paper at Def Con next month.

Dark Visitor Podcast: Real "Truth About Chinese Hackers"

I just listened to the first edition of the Dark Visitor Podcast. You may remember my February post titled Review of The Dark Visitor, where I discussed a book by The Dark Visitor Blog author Scott Henderson. In the podcast, fellow blogger Jumper speaks with Henderson (aka "Heike" on the blog) about various issues related to Chinese hackers. The pair make it clear that they base their posts on "open sources," meaning information available to anyone with a Web browser and an understanding of the Chinese language.

Chinese hackers are a hot topic. The latest Information Security Magazine features a "face-off" between Marcus Ranum and Bruce Schneier on the subject. I think you would learn more reading the Dark Visitor Blog regularly. For example, they responded directly to Bruce Schneier's thesis.

I think anyone who likes my blog will enjoy listening to this new podcast. If anyone knows of a similar English-language site covering Russian and/or east European hackers, please let me know.

Friday, July 25, 2008

DNS and the Cyber TARDIS Problem

It's been 16 days since I responded to public notification of DNS problems in Thoughts on Latest Kaminsky DNS Issue, and 4 days since Halvar Flake's post On Dan's request for "no speculation please". Apparently the tubes are still working, since I presume you're reading this post via the Internet and not carrier pigeon. It's still been a remarkable period, characterized by the acronym in the title of this post.

I'm not referring to the TARDIS of Doctor Who, although the centrality of "Time" is the reason I used the TARDIS theme. I mean Time and Relative Data in Security. Time and relative data were the key factors in the DNS issue. Who knew more about the problem, and when? Halvar understood this in his post, when he estimated that a savvy attacker would need 1/4 the time of a normal security person to understand the nature of the DNS problem, given the same starting point.

Since Halvar's speculation, Matasano's confirmation, Metasploit's weaponization, and Dan's elaboration, there's been a flurry of offensive and defensive activity. It reminds me somewhat of Y2K: am I still able to use the Internet because DNS administrators have been patching, or because not enough bad guys are trying to bother me? It would be nice to see some academics query whatever data (hint) they can find on recent DNS activity to produce some practical research, rather than trying to decipher five-year-old worm data or yet another port scan. According to this Arbor Networks Blog post by Jose Nazario, his group might have some data to share soon.

I'd like to highlight some of my favorite thoughts from the past few days. I liked FX's post Perception of Vulnerabilities:

The Kaminsky DNS attack is definitively regarded as the most important vulnerability this year. This I find highly interesting, as we have seen two other gigantic security failures already in 2008. Debian's NRNG (non-random number generator) is most certainly one of them. But honestly, raise your hands if you have even noticed SNMPv3... SNMPv3 is used to manage routers - the routers that forward all your traffic around the world, including your DNS queries. Managing a router means being able to configure it; a.k.a. super user access. Attackers who can configure a router in your path can redirect everything, without you knowing, not just traffic that relies on name resolution.

The weaponization discussion has been great. On one side are people like Hoff and Rich Mogull, who believe the Metasploit team was wrong to weaponize the exploit. I place myself on the other side. I agree with a lot of Andre Gironda's argument in comments on Rich Mogull's post. I think it's important to be able to test if your DNS implementation is vulnerable, as noted by Ron Gula in But I patched our DNS servers ....

With the growing importance of the cloud, and the customer's increasing reliance on software he/she doesn't control, are we to be satisfied with promises of applied patches, or even the effectiveness of said patches? If you always believe your vendor (i.e., you're naive), answer yes. If you trust but verify, answer no -- and start testing. Metasploit (exercised via a pre-existing, contractual agreement that permits such customer testing) is one way to see if your vendor really is as safe as it claims to be.

People who care about reality -- facts on the ground -- care about testing. Such people also care about monitoring. Prior to Halvar's speculation, probably the best place to try to figure out how to detect what "might" be coming was the Daily Dave mailing list. Since Halvar's post, there have been a lot of monitoring discussions on Emerging-Threats. Monitoring types have been trying to work around implementation challenges in popular tools like Snort, with alternatives like Bro getting more attention. Some historical articles on DNS intricacies have helped people understand DNS better, now that we know exactly what to observe.

I believe the actions of the past week have been for the better. Sure, the bad guys have a tool now, but as Druid noted in the Metasploit blog:

I was personally aware of multiple exploits in various levels of development before, during, and after HD and I wrote ours, so we felt at this point publishing working exploit code was fair game.

Poke around for five minutes and you'll find other implementations of exploit code beyond Metasploit anyway, never mind the private ones.

Public speculation followed by weaponization has elevated the issue for those who had to produce "proof" in order to justify patching, as well as helping level the knowledge field. Those of you who object have got to understand this point: real bad guys always win in the Time and Relative Data arena. Their paid job is to find ways to exploit targets. They have the time and knowledge to identify vulnerabilities in DNS regardless of what Dan Kaminsky says or doesn't say. I know whole teams of people who avoid the most elite public conferences because they don't learn anything new.

Defensive-minded security person -- how do you spend your time? Are you like me, balancing operations, planning, meetings, family, and so on, across thousands of systems, with hundreds of classes of vulnerabilities, and nowhere near enough time or resources to mitigate them? Do you know as much about the latest attacks and defenses as the people who discover and exploit them, for a living? Probably not.

Even assuming such adversaries did not know about the DNS problem prior to Dan's disclosure, as soon as they acquire the scent that problems exist (and especially if patches are released), they point their collective noses at the newest victim and tear into it. Halvar's N/4 estimate was very conservative, although he recognized real bad guys probably work a lot faster than that.

I think Dave Aitel put it best:

The motto of the week is that you can't hint at bugs or people will just find them. Either full disclosure or no-disclosure wins, because there's no point doing anything else.

Saturday, July 19, 2008

What Should Dan Have Done?

I answered a question on the Daily Dave mailing list, so now a few of you are asking "what should Dan have done?" about his DNS discovery. Keeping in mind my thoughts on keeping vulnerabilities in perspective, I have the following suggestions.

  1. Black Hat and/or Def Con should not be the place where "all is revealed." The gravity of the situation (such as it might be) is nullified by what will undoubtedly be a circus. Disclosure of additional details should have been done by a neutral party with no commercial interests. Black Hat and/or Def Con would have made great post-disclosure locations, where Dan explains how he found the vulnerability, along with "the rest of the story." That would have still made a great talk, with plenty of worthwhile attention.

  2. Personal blog posts should be avoided. The disclosure process should have been run exclusively through a group with some nominal "Internet security legitimacy," like CERT-CC and the affiliated US-CERT. Any questions on the issue should have been referred to them.

  3. The person discovering the issue should not have asked us to avoid speculation, while issuing a challenge, e.g.:

    I want you to explore DNS. I want you to try to build off the same bugs I did to figure out what could possibly go wrong. Maybe I missed something — I want you to help me find out if I did, so we can deal with it now instead of later...

    While I’m out there, trying to get all these bugs scrubbed — old and new — please, keep the speculation off the public forums and IRC channels. We’re a curious lot, and we want to know how things break. But the public needs at least a chance to deploy this fix, and from a blatantly selfish perspective, I’d kind of like my thunder not to be completely stolen in Vegas :)

    Now, if you do figure it out, and tell me privately, you’re coming on stage with me at Defcon. So I can at least offer that.


    This essentially says "if you're clever enough to figure this problem out, tell me and join me in the circus."


I think it's remarkable that, despite all the brainpower behind the preparation for these announcements, the DNS-behind-NAT problem first noticed by imipak was missed. If no speculation or discussion of the issue had taken place, how would that problem have been addressed?

There's no easy answer to the fundamental question, but it's fair to ask what is really at stake here. Right now, hundreds of thousands, perhaps millions, of innocent users have unwanted intruders controlling their PCs. That is a realized problem. It is not theoretical. It is not pending. Why is there not a crash program to help those people?

Consider the issue from another angle. Anyone with military experience knows there are procedures in place for dealing with real catastrophes. Absolutely nothing about the current situation has raised any official notice outside of our community. Are there any warnings on CNN? The SANS Internet Threat Level (take it with a grain of salt) is even still green.

This does not diminish the amount of work done by Dan, the vendors, and other parties to fix this issue. It's all for the better to have more robust infrastructure in place. At the very least this situation has raised the question of how vulnerabilities in critical infrastructure should be addressed in the future.

Friday, July 18, 2008

Vulnerabilities in Perspective

It's been nine days since Dan Kaminsky publicized his DNS discovery. Since then, we've seen a Blackberry vulnerability which can be exploited by a malicious .pdf, a Linux kernel flaw which can be remotely exploited to gain root access, Kris Kaspersky promising to present Remote Code Execution Through Intel CPU Bugs this fall, and David Litchfield reporting "a flaw that, when exploited, allows an unauthenticated attacker on the Internet to gain full control of a backend Oracle database server via the front end web server." That sounds like a pretty bad week!

It's bad if you think of R only in terms of V and forget about T and A. What do I mean? Remember the simplistic risk equation, which says Risk = Vulnerability X Threat X Asset value. Those vulnerabilities are all fairly big V's, some bigger than others depending on the intruder's goal. However, R depends on the values of T and A. If there's no T, then R is zero.
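
As a toy illustration of that point (mine, not from the original post), plugging numbers into the simplistic equation shows how even a large V produces zero R when T is zero; the 0-10 scales and sample values are made up.

# Toy illustration of the simplistic risk equation R = V x T x A.
# The 0-10 scales and sample values are invented for illustration.
def risk(vulnerability, threat, asset_value):
    return vulnerability * threat * asset_value

# Big vulnerability, valuable asset, but no threat exercising it:
print(risk(vulnerability=9, threat=0, asset_value=10))   # 0
# Same vulnerability and asset value with an active threat:
print(risk(vulnerability=9, threat=7, asset_value=10))   # 630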

Verizon Business understood this in their post DNS Vulnerability Is Important, but There’s No Reason to Panic:

Cache poisoning attacks are almost as old as the DNS system itself. Enterprises already protect and monitor their DNS systems to prevent and detect cache-poisoning attacks. There has been no increase in reports of cache poisoning attacks and no reports of attacks on this specific vulnerability...

The Internet is not at risk. Even if we started seeing attacks immediately, the reader, Verizon Business, and security and network professionals the world-over exist to make systems work and beat the outlaws. We’re problem-solvers. If, or when, this becomes a practical versus theoretical problem, we’ll put our heads together and solve it. We shouldn’t lose our heads now.

However, this doesn’t mean we discount the potential severity of this vulnerability. We just believe it deserves a place on our To-Do lists. We do not, at this point, need to work nights and weekends, skip meals or break dates any more than we already do. And while important, this isn’t enough of an excuse to escape next Monday’s budget meeting.

It also doesn’t mean we believe someone would be silly to have already patched and to be very concerned about this issue. Every enterprise must make their own risk management decisions. This is our recommendation to our customers. In February of 2002, we advised customers to fix their SNMP instances due to the BER issue discovered by Oulu University, but there have been no widespread attacks on those vulnerabilities for nearly six years now. We were overly cautious. We also said the Debian RNG issue was unlikely to be the target of near-term attacks and recommended routine maintenance or 90 days to update. So far, it appears we are right on target.

There has been no increase in reports of cache poisoning attempts, and none that try to exploit this vulnerability. As such, the threat and the risk are unchanged.


I think the mention of the 2002 SNMP fiasco is spot on. A lot of us had to deal with people running around thinking the end of the world had arrived because everything runs SNMP, and everything is vulnerable. It turns out hardly anything happened at all, and we were watching for it.

Halvar Flake was also right when he said:

I personally think we've seen much worse problems than this in living memory. I'd argue that the Debian Debacle was an order of magnitude (or two) worse, and I'd argue that OpenSSH bugs a few years back were worse.

Looking ahead, I thought this comment on the Kaspersky CPU attacks was interesting: CPU Bug Attacks: Are they really necessary?:

But every year, at every security conference, there are really interesting presentations and a lot of experienced people talking about theoretically serious threats. But this doesn't necessarily mean that an exposed PoC will become a serious threat in the wild. Many of these PoCs require high levels of skill (which most malware authors do not have) to actually make them work in other contexts.

And, I feel sorry to say this, but being in the security industry my thoughts are: do malware writers really need to develop highly complex stuff to get millions of PCs infected? The answer is most likely not.


I think that insight applies to the current DNS problems. Are those seeking to exploit vulnerable machines so desperate that they need to leverage this new DNS technique (whatever it is)? Probably not.

At the end of the day, those of us working in production networks have to make choices about how we prioritize our actions. Evidence-based decision-making is superior to reacting to the latest sensationalist news story. If our monitoring efforts demonstrate the prevalence of one attack vector over another, and our systems are vulnerable, and those systems are very valuable, then we can make decisions about what gets patched or mitigated first.

Friday, July 11, 2008

Packet Anonymization with PktAnon


I noticed a new tool on Packetstorm recently: PktAnon by Christoph P. Mayer, Thomas Gamer, and Dr. Marcus Schöller.

This tool seems powerful because you can apply a variety of anonymization policies based on settings you define in an XML configuration file.

It was easy to install the tool on Debian 4.0:


tws:~# cd /usr/local/src
tws:/usr/local/src# wget http://www.tm.uka.de/pktanon/download/pktanon-1.2.0-dev.tar.gz
...edited...
tws:/usr/local/src# tar -xzf pktanon-1.2.0-dev.tar.gz
tws:/usr/local/src# sudo apt-get install libxerces27-dev libboost-dev
-su: sudo: command not found
tws:/usr/local/src# apt-get install libxerces27-dev libboost-dev
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
libicu36 libxerces27
Suggested packages:
libboost-doc libboost-date-time-dev libboost-filesystem-dev
libboost-graph-dev libboost-iostreams-dev libboost-program-options-dev
libboost-python-dev libboost-regex-dev libboost-serialization-dev
libboost-signals-dev libboost-test-dev libboost-thread-dev libboost-wave-dev
xalan libxerces27-doc
The following NEW packages will be installed:
libboost-dev libicu36 libxerces27 libxerces27-dev
0 upgraded, 4 newly installed, 0 to remove and 3 not upgraded.
Need to get 9259kB of archives.
After unpacking 44.7MB of additional disk space will be used.
Do you want to continue [Y/n]? y
...edited...
tws:/usr/local/src# cd pktanon-1.2.0-dev
tws:/usr/local/src/pktanon-1.2.0-dev# mkdir /usr/local/pktanon
tws:/usr/local/src/pktanon-1.2.0-dev# ./configure --prefix=/usr/local/pktanon
tws:/usr/local/src/pktanon-1.2.0-dev# make
tws:/usr/local/src/pktanon-1.2.0-dev# make install

Next you choose which of the anonymization profiles you want, such as settings_high.xml. To use a configuration file you just tell it where the input is and where the output is.

For example, here is the first, original packet.

tws:/tmp# tcpdump -c 1 -r sample.ftp.pcap -neXvvv

reading from file sample.ftp.pcap, link-type EN10MB (Ethernet)
09:38:37.565642 00:0c:29:2d:6a:a0 > 00:50:56:ee:e5:fc, ethertype IPv4 (0x0800),
length 74: (tos 0x0, ttl 64, id 48680, offset 0, flags [DF], proto: TCP (6),
length: 60) 192.168.255.131.1385 > 62.243.72.50.21: S, cksum 0x7890 (correct),
2888152290:2888152290(0) win 5840 <mss 1460,sackOK,timestamp 199370 0,nop,wscale 2>
0x0000: 4500 003c be28 4000 4006 3542 c0a8 ff83 E..<.(@.@.5B....
0x0010: 3ef3 4832 0569 0015 ac25 b4e2 0000 0000 >.H2.i...%......
0x0020: a002 16d0 7890 0000 0204 05b4 0402 080a ....x...........
0x0030: 0003 0aca 0000 0000 0103 0302 ............

Here is the settings_low profile output.

tws:/tmp# tcpdump -c 1 -r anon.low.ftp.pcap -neXvvv

reading from file anon.low.ftp.pcap, link-type EN10MB (Ethernet)
09:38:37.565642 00:0c:29:2d:6a:a0 > 00:50:56:ee:e5:fc, ethertype IPv4 (0x0800),
length 74: (tos 0x0, ttl 64, id 48680, offset 0, flags [DF], proto: TCP (6),
length: 60) 246.142.91.186.1385 > 90.113.151.13.21: S, cksum 0x7c1a (correct),
2888152290:2888152290(0) win 5840 <mss 1460,sackOK,timestamp 199370 0,nop,wscale 2>
0x0000: 4500 003c be28 4000 4006 38cc f68e 5bba E..<.(@.@.8...[.
0x0010: 5a71 970d 0569 0015 ac25 b4e2 0000 0000 Zq...i...%......
0x0020: a002 16d0 7c1a 0000 0204 05b4 0402 080a ....|...........
0x0030: 0003 0aca 0000 0000 0103 0302 ............

I decided I wanted a low profile that also modified MAC addresses, so I copied the low setting and then made this change:

<configitem anon="AnonBytewiseHashSha1" name="MacSource"/>
<configitem anon="AnonBytewiseHashSha1" name="MacDest"/>

This was the result.

tws:/tmp# tcpdump -c 1 -r anon.low-mac.ftp.pcap -neXvvv
reading from file anon.low-mac.ftp.pcap, link-type EN10MB (Ethernet)
09:38:37.565642 da:cb:dc:54:d2:51 > da:28:8d:39:ef:7b, ethertype IPv4 (0x0800),
length 74: (tos 0x0, ttl 64, id 48680, offset 0, flags [DF], proto: TCP (6),
length: 60) 246.142.91.186.1385 > 90.113.151.13.21: S, cksum 0x7c1a (correct),
2888152290:2888152290(0) win 5840 <mss 1460,sackOK,timestamp 199370 0,nop,wscale 2>
0x0000: 4500 003c be28 4000 4006 38cc f68e 5bba E..<.(@.@.8...[.
0x0010: 5a71 970d 0569 0015 ac25 b4e2 0000 0000 Zq...i...%......
0x0020: a002 16d0 7c1a 0000 0204 05b4 0402 080a ....|...........
0x0030: 0003 0aca 0000 0000 0103 0302 ............

Finally I ran the medium and high settings.

tws:/tmp# tcpdump -c 1 -r anon.medium.ftp.pcap -neXvvv
reading from file anon.medium.ftp.pcap, link-type EN10MB (Ethernet)
09:38:37.565642 da:cb:dc:54:d2:51 > da:28:8d:39:ef:7b, ethertype IPv4 (0x0800),
length 60: (tos 0x0, ttl 116, id 48680, offset 0, flags [DF], proto: TCP (6),
length: 40) 21.248.227.61.19357 > 172.148.57.189.56062: S, cksum 0x31e7
(correct), 2888152290:2888152290(0) win 5840
0x0000: 4500 0028 be28 4000 7406 6920 15f8 e33d E..(.(@.t.i....=
0x0010: ac94 39bd 4b9d dafe ac25 b4e2 0000 0000 ..9.K....%......
0x0020: 5002 16d0 31e7 0000 0000 0000 0000 P...1.........

tws:/tmp# tcpdump -c 1 -r anon.high.ftp.pcap -neXvvv
reading from file anon.high.ftp.pcap, link-type EN10MB (Ethernet)
09:38:37.565642 55:3e:4d:bf:1f:e8 > 55:35:a0:67:f1:3a, ethertype IPv4 (0x0800),
length 60: (tos 0x0, ttl 126, id 48680, offset 0, flags [DF], proto: TCP (6),
length: 40) 162.131.129.172.20319 > 97.102.43.234.21842: S, cksum 0xb113
(correct), 2888279266:2888279266(0) win 5907
0x0000: 4500 0028 be28 4000 7e06 8d27 a283 81ac E..(.(@.~..'....
0x0010: 6166 2bea 4f5f 5552 ac27 a4e2 2080 2000 af+.O_UR.'......
0x0020: 5002 1713 b113 0000 0000 0000 0000 P.............
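
Incidentally, here is my own rough sketch of what a bytewise hashing primitive like AnonBytewiseHashSha1 might do conceptually. I'm assuming it maps each input byte independently through a SHA-1-derived substitution table, which hides the original values while keeping equal bytes equal; consult the PktAnon documentation for the actual algorithm.

# Rough conceptual sketch (my assumption, not PktAnon's actual code):
# anonymize a MAC address by mapping each octet through a SHA-1-derived
# substitution table, so identical octets anonymize consistently.
import hashlib

def bytewise_sha1_table(secret=b"example-key"):
    """Build a 256-entry substitution table from SHA-1 digests."""
    return [hashlib.sha1(secret + bytes([b])).digest()[0] for b in range(256)]

def anonymize_mac(mac, table):
    octets = [int(o, 16) for o in mac.split(":")]
    return ":".join(f"{table[o]:02x}" for o in octets)

table = bytewise_sha1_table()
print(anonymize_mac("00:0c:29:2d:6a:a0", table))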

We should be able to try this tool with OpenPacket.org. Let me know what you think.

For details on the anonymization policies please read the documentation.

Robert Graham on TurboCap

I liked Robert Graham's post on CACE Technologies TurboCap. I don't necessarily think TurboCap is that exciting, but I learned a lot of tricks reading Robert's explanation of how to collect packets quickly for traffic inspection purposes. I've discussed some of them, like device polling on FreeBSD.

By the way, don't forget to upgrade to Wireshark 1.0.2.

Hint of Visibility in the Cloud

Visibility in the cloud is one of my concerns these days. When someone else hosts and processes your data, how can you tell if it is "secure?" I found Robert Graham's post Gmail now shows IP address log to be very interesting. Robert explains how a Gmail session started over HTTPS doesn't always stay HTTPS (which is old news, as he says), but monitoring (of a sort) is now available to determine if someone else is using your account. According to the Gmail blog, Gmail will soon make available logs of the IPs using your Gmail account. I agree that the technique could be applied to other Web and cloud applications. How about a record of access to my Amazon S3 account?

Proposed Air Force Cyber Badge

The Air Force published New cyberspace career fields, training paths, badge proposed earlier this month. I found the proposed cyber badge to be interesting. From the story:

The badge features: lightning bolts to signify the cyberspace domain; center bolts taken from the navigator badge and the Air Force Seal to signify cyberspace's worldwide power and reach and its common lineage and history of electronic warfare officers; and orbits to signify cyberspace's space-related mission elements. And, like other specialty badges, it will identify skill (certification) levels. Final approval and specifics of the wear criteria is under review at the air staff.

For comparison I've posted the intelligence badge I used to wear. Wikipedia's Badges of the US Air Force is a nice reference.

The Air Force also published a proposed Cyberspace Training Path for Operators and Specialists.

Since we're talking military cyber operations, a blog reader asked for my opinion of the news story U.S. Army challenges USAF on network warfare. I saw this firsthand at a cyber conference recently. The Air Force colonel who will be vice commander of Cyber Command, Tony Buntyn, spoke, followed by an Army colonel, John Blaine, from NetCom. Col Blaine said the Army had been doing cyber operations for years, seemingly in contrast to the "new" Air Force Cyber Command. Of course, my previous history post noted that the Air Force Information Warfare Center was established in 1993, and the AFCERT was created a year earlier. Air Force cyber history is very extensive, especially if you expand to electronic warfare in Vietnam.

Wednesday, July 9, 2008

Thoughts on Latest Kaminsky DNS Issue

It seems Dan Kaminsky has discovered a more effective way to poison the DNS cache of vulnerable name servers. This is not a new problem, but Dan's technique apparently makes it easier to accomplish.

One problem is we do not know exactly what Dan's technique is. He is saving the details for Black Hat. Instead of publishing the vulnerability details and the patches simultaneously, Dan is just notifying the world a problem exists while announcing coordinated patch notifications.

I would keep an eye on the following for details as they emerge:

I think this person figured out the server side:

"Allen Baranov Jul 9

Having read the comments and checked out the site mentioned in the blog I have the following theory:

1. I connect to vulnerable DNS Server and query my very own domain. I note what the UDP source port is.

2. I connect to the DNS Server again ASAP and query another of my very own domains. I note what that UDP source port is.

3. Assuming they are close together I do a query of “microsoft.com” or some such and send a UDP reply to the server with a bogus IP address. I could probably send 20 replies so that I get the correct port.

4. Cache is poisoned."

That is the server side of the equation. In other words, if your DNS server is vulnerable, and an intruder poisons the cache, then the intruder now effectively controls the responses sent by your DNS server to anyone who queries it. As noted in Steve Pinkham's comment on a previously mentioned blog, "The patches for bind turn on query port randomization by default, and allow a larger range of source ports."
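
To see why query port randomization matters, consider some back-of-the-envelope numbers (my own, not from the referenced posts): with a fixed source port an off-path attacker only has to guess the 16-bit transaction ID, while randomizing the source port multiplies the search space by the number of possible ports. A quick sketch, ignoring the attacker's ability to retry across many queries:

# Back-of-the-envelope: chance that at least one of N spoofed replies
# matches the outstanding query's transaction ID (and source port, if
# randomized). Values are illustrative assumptions.
def hit_probability(spoofed_replies, txid_space=2**16, port_space=1):
    search_space = txid_space * port_space
    return 1 - (1 - 1 / search_space) ** spoofed_replies

replies = 20  # spoofed replies per attempt, purely illustrative
print(f"Fixed source port:     {hit_probability(replies):.6f}")
print(f"~64000 random ports:   {hit_probability(replies, port_space=64000):.10f}")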

There is also a client side of the equation. Microsoft issued its own advisory in April for the client side: MS08-020.

That reveals how the client side works. Because the Microsoft (and other) DNS clients use insufficiently random transaction IDs, attackers can apparently issue their own responses directly to vulnerable clients issuing DNS requests, if the attacker can reach the host issuing the request.

If I am wrong about any of this please post a comment. When possible I will keep checking the two sources I noted above. I posted this note because I have been receiving questions about this, and it would be helpful to understand what is happening.

Sunday, July 6, 2008

Reviews of FreeBSD Books Posted

Amazon.com just published my four star review of BSD UNIX Toolbox: 1000+ Commands for FreeBSD, OpenBSD and NetBSD by Christopher Negus and Francois Caen. From the review:

BSD Unix Toolbox (BUT) is a straightforward system administration book that could apply to many Unix-like operating systems. The title mentions "BSD" but the BSD-specific material is FreeBSD-oriented. The non-FreeBSD sections (such as using a shell) could apply to any Unix-like OS, so in that sense other BSDs like OpenBSD or NetBSD are "covered." However, sections like Ch 2 (Installing FreeBSD and Adding Software) have no OpenBSD or NetBSD equivalents. Nevertheless, I recommend BUT for anyone looking for a rapid introduction to BSD system administration.

Amazon.com also just published my three star review of Network Administration with FreeBSD 7 by Babak Farrokhi. From the review:

I am always glad to see new books on FreeBSD. The best authors look at the current market, identify gaps, and fill or expand beyond them with good material. I believe Network Administration with FreeBSD 7 (NAWF7) could be that book if the author takes a look at the competition and decides where his book should fit. Right now it's a combination of standard FreeBSD system administration advice plus fairly interesting, higher-end guidance. I strongly suggest the author remove all of the standard material, tell the reader to look elsewhere for basics, and focus squarely on advanced FreeBSD system administration. Add a copyeditor who proofs for grammar (in addition to the technical editor who proofs for content) and you could see a five star second edition.

Saturday, July 5, 2008

Air Force Cyber Panel

Last month I participated in a panel hosted by the US Air Force. One of my co-panelists, Jim Stogdill, summarized some of the event in his recent post Sharing vs. Protecting, Generativity on DoD Networks.

I'd like to add the following thoughts. Before the event most of the panelists met for breakfast. One of the subjects we discussed was the so-called "People's Army" China uses for conducting cyber operations. You can read about this phenomenon in the great book The Dark Visitor.

In the US, our DoD relies upon professional, uniformed military members, government civilians, and an immense contracting force to defend the nation and project its military power. In China, the PLA mixes uniformed military with ordinary civilians, some of whom act at the behest of the military and government, with others acting on their own for "patriotic" ends.

This latter model is almost unheard of in the US and completely outside any formalized mechanism offered by the DoD. Imagine a group of "patriotic" teenagers approaching the DoD, saying they had hacked into some uber-secret Chinese network! How would generals even wrap their heads around such a scenario? That's illegal! Those kids aren't cleared! Government officials cannot accept donations!

This creates an amazing scenario. In one corner, the military-industrial complex. In the other, the People's Army. Who will win?

During the panel the question of recruiting "cyber warriors" was raised. I responded that recruitment wasn't the real problem; retention is. I left the Air Force Information Warfare Center (along with 31 of my 32 fellow company grade officers) because there was no career path that could keep me "in front of a computer screen." (That reminds me of the problems pilots have "staying in the cockpit.") When I was told it was "time to move," I was given the choice of being a protocol officer, a logistics officer, or an executive officer. The Air Force calls this "career broadening." I decided to broaden my way right out of the service rather than accept any of those non-intelligence, non-cyber jobs. I am hopeful the new Cyber Command will give young officers a real future conducting computer operations.

We discussed open source software briefly. I told the audience that if Windows XP were open source, no one would really care if Microsoft ended support. If the OS were truly that important to the mission, and it was an open source product, the Air Force could fork it and maintain its own patches and development. I am constantly amazed that some people advocate Microsoft's commercial "support" for XP as a reason for shunning open source software, when those "customers" are being instructed by Microsoft to migrate to Vista as XP's support ends.

I still think the Air Force's decision to stick with Microsoft was stupid. Can you imagine it's been almost four years since the AF-Microsoft super deal was signed? Think of all the Microsoft-targeting client-side attacks that could have been avoided if the client had not been running applications on Microsoft Windows.

Yes, I know, other operating systems have problems, other applications have problems, client-side attacks aren't everything, blah blah. Shifting to something other than Windows would still have increased the intruder's cost of exploitation. Suddenly instead of focusing all their R&D on attacking Windows, the bad guy has to open a second exploit development shop, and be far more careful when attacking the Air Force. What did NSA spend all that effort on SELinux for anyway?

Overall, I really enjoyed the panel and even got to visit a few friends from way back in the Air Force CERT who also attended the conference. I met some cool people on the panel too. Please feel free to reunite us anytime!

Making Decisions Using Randomized Evaluations

I really liked this article from a recent Economist: Economics focus: Control freaks; Are “randomised evaluations” a better way of doing aid and development policy?:

Laboratory scientists peer into microscopes to observe the behaviour of bugs. Epidemiologists track sickness in populations. Drug-company researchers run clinical trials. Economists have traditionally had a smaller toolkit. When studying growth, they put individual countries under the microscope or conduct cross-country macroeconomic studies (a bit like epidemiology). But they had nothing like drug trials. Economic data were based on observation and modelling, not controlled experiment.

That is changing. A tribe of economists, most from Harvard University and the Massachusetts Institute of Technology (MIT), have begun to champion the latest thing in development economics: “randomised evaluations” in which different policies—to boost school attendance, say—are tested by randomly assigning them to different groups...

Randomised evaluations are a good way to answer microeconomic questions... often, they provide information that could be got in no other way. To take bednets: supporters of distributing free benefits say that only this approach can spread the use of nets quickly enough to eradicate malaria. Supporters of charging retort that cost-sharing is necessary to establish a reliable system of supply and because people value what they pay for. Both ideas sound plausible and there was no way of telling in advance who was right. But the trial clearly showed how people behave...


Reading the whole article is best, but the core idea is that it might be helpful to conduct experiments on samples before applying policies to entire populations. In other words, don't just rely on theories, "conventional wisdom," "best practices," and so on... try to determine what actually works, and then expand the successful approaches to the overall group.

I thought immediately of the application to digital security, where, for example, bloggers write posts like Challenges to sell Information Security products and services:

Everyone knows (I hope) that some security measures are simply necessary — period. Firewalls and Antivirus, for example, are by common sense necessary.

Care to test that "common sense" in an experiment?
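
Here is the sort of minimal setup such a test might take (a toy sketch with fabricated data, not a real study): randomly assign comparable sites to keep or change the control in question, then compare outcomes before deciding for the whole population.

# Toy randomized evaluation: randomly assign sites to a new control,
# then compare incident counts between treatment and control groups.
# All data here is fabricated for illustration only.
import random
from statistics import mean

random.seed(42)
sites = [f"site-{n:02d}" for n in range(20)]
random.shuffle(sites)
treatment, control = sites[:10], sites[10:]

# Pretend we observed incident counts for a quarter after assignment.
observed = {s: random.randint(0, 8) for s in sites}

print(f"Treatment mean incidents: {mean(observed[s] for s in treatment):.1f}")
print(f"Control mean incidents:   {mean(observed[s] for s in control):.1f}")
# A real evaluation would test whether any difference is statistically
# significant before expanding (or dropping) the control everywhere.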

Friday, July 4, 2008

Green Security

You all know how environmentally conscious I am. Actually, I don't consider myself to be all that "green," aside from the environmental science merit badge I earned as a Scout. However, working for a global company (and especially the Air Force, in a prior life) reinforces one of my personal tenets: move data, not people. In other words, I look for ways to acquire security data remotely and move it to me. I'd rather not fly to a location where the information resides; data centers are too distributed, cold, noisy, and cramped for me to want to spend a lot of time there.

So, when Bill Brenner of CSO asked if I had thoughts on "Green IT," I think I surprised him by answering positively. You can read some of what I said in his article Cost-Cutting Through Green IT Security: Real or Myth?

For Richard Bejtlich, director of incident response at General Electric, the biggest green security challenge is in how the company moves people around. Incident response investigations often require people to fly to offices spread across the country. But travel can be expensive and the environment certainly doesn't benefit from the jet fuel that's burned in the process.

Bejtlich's solution is to find more remote ways for employees to conduct incident response.

"Rather than have the carbon footprint of a plane trip, we can instead focus on moving the data we need (for incident response) instead of moving the people," he says. Bejtlich says a lot of the work can get done using virtual technology without reducing the quality of the security.

To achieve this at GE, Bejtlich has made use of F-Response, a vendor neutral, patent-pending software utility that allows an investigator to conduct live forensics, data recovery, and e-discovery over an IP network using the tools of their choice. "For $5,000 we can use the F-Response enterprise product throughout the company," he says. "It's a very good deal."

Bejtlich is also a believer in letting employees work from home. Like the reduction in air travel, working from home means fewer people burning gas on the way to the office.

"We encourage people to work from home so they don't waste energy on travel. The incident response team is all over the world anyway, so we really don't need to be in an office," he says. "Doing the job virtually makes budgetary sense, we spend more time getting the work done, and the bonus is it lowers our carbon footprint."

Virtual wonders

Bejtlich's success with virtual technology is music to the ears of Evolutionary IT's Guarino, who sees virtualization as a key to consolidating the IT environment and achieving green security.


Let me make a few clarifications. First, no one at GE uses F-Response. I mentioned it to Bill as an example of the sort of tool one could use to do remote forensics. I have a copy ready to test and I spent an hour on the phone speaking with Matt Shannon from F-Response, and I have high hopes for the product. Please don't read this as an endorsement of any single product. I mentioned F-Response to help get my point across to Bill.

Second, I don't see the "virtual technology" angle here. I didn't talk about "virtualization," so maybe the term was just used inappropriately.

Otherwise, I agree with my quotes on remote IR and working from home offices. They are key initiatives I would encourage other companies to adopt.

In fact, you could think of the home office as an example of move work, not people. Keep the people in place and move the job to them. In an increasingly competitive market where people with true skills are scarce, it's unreasonable to expect talent to uproot and migrate to an employer's location.