Thursday, 31 May 2007

I Have Seen the Future, and It Is Monitored

Today I spoke at the ISS World Spring 2007 conference in Alexandria, VA. ISS stands for Intelligence Support Systems. The speakers, attendees, and vendors are part of or support legal and government agencies that perform Lawful Intercept (LI) and associated monitoring activities. Many attendees appeared to be from county, state, and federal law enforcement agencies (LEAs). Others were wired and wireless service providers who are responsible for fulfilling LI requests.

This was a very different crowd. Even when cops attend security conferences (like Fed, I mean Black, Hat) the vibe is different. At security cons it's seen to be cool if one has mad offensive sk1llz. This group was all about acquiring the information needed to go to court to convict bad guys.

One theme immediately grabbed my attention, and it's going to eventually affect every entity that provides technological services:

Today lawful intercept monitors lines. Tomorrow lawful intercept will monitor services.

I cannot emphasize this enough. What does it mean?

Today (and previously), if I wanted to perform surveillance against a target, I would tap his phone line. In the very old days I would physically attach to phone lines, but these days I work with the telephone company to obtain electronic access. The telco is a service provider and as such is subject to CALEA, which mandates providing various snooping capabilities for LEA use.

Also today, and definitely tomorrow, targets are using VoIP. VoIP can be monitored by watching broadband lines, but "tapping a line" is not sufficient. The classic deficiency is call forwarding. As described at the conference today, assume a LEA is watching all broadband traffic to and from a target. If the target enables call forwarding through his VoIP provider, a LEA watching network traffic will not see a call come in if the VoIP provider forwards it elsewhere.

Therefore, gaining access to that critical information requires monitoring the service, not the line.

Extend the services to be monitored beyond VoIP. Suddenly you can probably imagine many scenarios where LEAs would want to essentially be inside the service, or able to tap data directly from the service. The line to the target is secondary. For example, why try to follow a target from Internet cafe to Internet cafe if you can just watch his chat room, Web forum, or other meeting place directly?

This seems less like Big Brother and more like Embedded Brother. Any application which law enforcement might consider a source of data on a target could be compelled by law to provide a means for LEAs to perform lawful intercept. Already we are seeing signs of this through various data retention directives. One of the conference panelists mentioned a story from Germany that makes this point. He said Germany (or at least part of it) has a system that tracks cars paying tolls. When the system was deployed it was forbidden to use such data for tracking car owners, even if crimes were committed. However, a person was run down at a toll booth. After the crime, an outcry erupted to use the toll logs to identify the culprit. This is the sort of "emergency thinking" that results in powers being granted to LEAs to embed themselves ever deeper in technology services.

One financial note: consider buying stock in log management and storage vendors. All of this data needs to be managed and stored.

My previous thoughts on this subject appear in posts containing the phrase The Revolution Will Be Monitored.

In one of my classes I list the reasons why people monitor, in this progression:

  1. Performance: is our service working well?

  2. Fault: why does our service fail?

  3. Security: is our service compromised?

  4. Compliance: is our service meeting legal and regulatory mandates?


Many companies are still at step 2. Step 3 might be leapfrogged and step 4 might be here sooner than you think. Hopefully data collected for step 4 will inform step 3, thereby serving a non-LEA purpose as well.

Incidentally I did not hear the term encryption mentioned as a challenge for law enforcement. I'll let the conspiracy theorists chew on that one. In a service-oriented lawful intercept world, I would imagine LEAs could access data unencrypted at the service provider if end-to-end encryption were not part of the service. In other words, maybe your VoIP call is encrypted from you to the provider, and from the provider to the recipient, but the LEA can intercept at the hub of the communication.

Update: I want people to understand that my predicting this development does not mean I agree with it. I prefer privacy to what's going to happen.

Interview with Designing BSD Rootkits Author

If you like rootkits and/or FreeBSD, try reading this interview with Designing BSD Rootkits author Joseph Kong. This amazes me:

Could you introduce yourself?

Joseph Kong: I am a relatively young (24 years old) self-taught computer enthusiast who enjoys working (or playing, depending on how you look at it) in the field of computer security; specifically, at the low-level...

When did you hear about rootkits for the first time?

Joseph Kong: The first time I heard the term "rootkits" was in 2004--straight out of the mouth of Greg Hoglund, who was at the time promoting his new book Exploiting Software: How to Break Code. That's actually how I got into rootkit programming. Thanks Greg. :)


Wow. Zero to book on rootkits in 3 years -- that's cool.

Now for a bit of wisdom:

Do you know any anti-rootkit tool/product for *BSD?

I know a lot of people who refer to rootkits and rootkit-detectors as being in a big game of cat and mouse. However, it's really more like follow the leader--with rootkit authors always being the leader. Kind of grim, but that's really how it is. Until someone reveals how a specific (or certain class of) rootkit works, nobody thinks about protecting that part of the system. And when they do, the rootkit authors just find a way around it...


Contrast that with this bit of marketing:

[vendor marketing image, not reproduced here]

Guess which one is correct?

Finally, I appreciated seeing this:

Keep in mind that although I am extolling the virtues of prevention, as other computer security professionals (such as, Richard Bejtlich) have said, prevention eventually fails (e.g., Loïc Duflot showed that you can bypass secure levels in SMM), and detection is just as important. The problem is rootkit detection, as I said earlier, is difficult.

This ties in to what I wrote concerning Joanna Rutkowska's views earlier this year.

Owning the Platform

At AusCERT last week one of the speakers mentioned the regular autumn spike in malicious traffic from malware-infested student laptops joining the university network. Apparently the university supports whatever equipment students inevitably bring to school, because it requires, or at least expects, students to possess computing hardware. The university owns the infrastructure, but the students own the platform. This has been the norm at universities for years.

A week earlier I attended a different session where the "consumerization" of information technology was the subject. I got to meet Greg Shipley from Neohapsis, incidentally -- great guy. This question was asked: if companies don't provide cellphones for employees, why do companies provide laptops? Extend this issue a few years into the future and you see that many of our cellphones will be as powerful as our laptops are now. If you consider the possibility of server-centric, thin client computing, most of the horsepower will need to be elsewhere anyway. Several large companies are already considering the "no company laptop" approach, so what does that mean for digital security?

You must now see the connection. University students are the corporate employees of the near future. If you want to learn some tricks for dealing with employee-owned hardware on company-owned infrastructure manipulating mixed-ownership data (business and personal), consider going back to college. I think we're going to have to focus on Enterprise Rights Management, which is a popular topic. That still won't make a difference if the employee smartphone is 0wned by an intruder who is taking screen captures, unless some form of hardware-enforced Digital Rights Management frustrates this attack. Regardless, I think the next corporate laptop you receive might be your last.

Electronic Discovery Resources

The Economist recently published Electronic discovery: Of bytes and briefs. To summarize:

As technology changes the way people communicate, the legal system is stumbling to keep up. The “discovery” process, whereby both parties to a lawsuit share relevant documents with each other, used to involve physically handing over a few boxes of papers. But now that most documents are created and stored electronically, it is mostly about retrieving files from computers. This has two important consequences...

First, e-discovery is more intrusive than the traditional sort...

Second, e-discovery is more burdensome than the old sort.


I think I first mentioned ediscovery last year in Forensics Warnings from CIO Magazine. I am acquainting myself with the intrusiveness and burden of this process in preparation for some new work. The article mentioned the Institute for the Advancement of the American Legal System (IAALS), which published Navigating the Hazards of E-Discovery: A Manual for Judges in State Courts Across the Nation. This is a free .pdf. I am exceptionally interested in their next report:

Later this summer, we will release a publication on the implications of these changes for America’s businesses.

The IAALS article also mentions Suggested Protocol for Discovery of Electronically Stored Information (“ESI”) (.pdf), a Maryland document that elaborates on ediscovery.

MRAPs Lose to Arms Race

Three weeks ago I wrote about Vulnerability-Centric Security regarding the Mine Resistant Ambush Protected (MRAP) vehicle, the US Army's replacement for the Humvee. I consider the MRAP an example of the failures of vulnerability-centric security. This morning USA Today's story MRAPs can't stop newest weapon validates my thoughts:

New military vehicles that are supposed to better protect troops from roadside explosions in Iraq aren't strong enough to withstand the latest type of bombs used by insurgents, according to Pentagon documents and military officials.

As a result, the vehicles need more armor added to them, according to a January Marine Corps document provided to USA TODAY...

"Ricocheting hull fragments, equipment debris and the penetrating slugs themselves shred vulnerable vehicle occupants who are in their path," said the document...

EFPs are explosives capped by a metal disk. The blast turns the disk into a high-speed slug that can penetrate armor.


Even with additional armor, the augmented MRAPs will still be vulnerable. This is because attackers possess advantages that defenses cannot overcome. In April I wrote Threat Advantages, which describes the strengths I see digital threats offering.

At least John Pike understands this problem.

It's doubtful new armor can stop all EFPs, said John Pike, director of Globalsecurity, a Washington-based defense think tank.

"Short of victory, they're going to continue to figure out ways to kill Americans," Pike said of the insurgents. "In any war, it is measure and countermeasure."


Investor's Business Daily agrees:

[W]e know the insurgency won't be put down with such defensive technologies. Better armor won't kill jihadists and suicide bombers. Better intelligence and better offensive tactics will.

In the digital realm, offense means actions to deter, investigate, apprehend, prosecute, and incarcerate threats. Sitting behind higher, deeper walls is not the answer. Neither is trusting the victim (the hardware, OS, application, or data) to defend itself.

Tuesday, 29 May 2007

Review of Inside the Machine Posted

Amazon.com just posted my four star review of Inside the Machine. From the review:

Let me say that I wish I could give this book 4 1/2 stars. It's just shy of 5 stars, but I couldn't place this book alongside some of my favorite 5-star books of all time. Still, I really enjoyed reading Inside the Machine -- it's a great book that will answer many questions for the devoted technical reader.

At the end of the review I mention Scott Mueller's Upgrading and Repairing PCs. In a nice show of synchronicity, the chapter from Scott's book on Microprocessor Types and Specifications is available online in .pdf format.

Clueless Consultants

I'm seeing a common "business of security" theme today, following my post The Peril of Speaker-Sponsors. Ira Winkler writes in If You Have to Ask, You Shouldn't Be Asking:

[S]omeone once attended a presentation that I gave on penetration testing, and then contacted me a year later with an e-mail that basically said, “I finally talked a client into letting me perform a pen test. I don’t know what to do, how to do it, what to charge, or any special legal language that should be in the contract.” My response was basically, “You shouldn’t do the work...”

In today’s message, a consultant from a very large integration firm sent out a message saying that one of their clients wants to scope out integration of a NOC/SOC. He gave a very wide variety of requirements for the facility, and then wanted feedback from a wide variety of people not associated with his company. While I am normally all for helping out a colleague, this person should have either sought this info inside his own organization, which has access to such experts, or just told the client he doesn’t have a clue and to go elsewhere.


I see this problem all the time, in two forms. First, I am frequently asked to perform a variety of tasks for which I do not consider myself an expert. Blog visitors, book readers, and students sometimes expect me to be an expert in another area of security after seeing my work in network security monitoring, network forensics, incident response, and related subjects. When asked to work outside those areas, I always refer the work to colleagues whom I consider to be experts in the task in question. In return, my colleagues pass me work they would prefer me to do.

Second, I know many service/consulting companies who will take any job, period. They are managed by people who only care about making "bodies chargeable," preferably over 100% for the week. (That means billing over 40 hours of work to a client, per consultant, per week.) The consultants (1) suffer silently, for fear of losing their jobs; (2) think they can become experts in anything in "10 minutes" (I hear that often); or (3) don't realize that they are clueless, and probably never will. The end result is the service delivered to the client is sub-par at best, or a disaster at worst.

I agree with Ira's last statement:

[T]he mark of a good consultant is one who knows when to turn away work.

In light of that wisdom, consider asking the following question when shopping for a consultant:

What work would you not want to do?

If the answer is "nothing," then walk away.

Bejtlich on Sites Collide Podcast

Tyrel McMahan interviewed me at CONFidence for his Sites Collide podcast. It's in QuickTime format. We talk about what smaller businesses should do with regards to monitoring and I discuss ideas from my conference presentation. Thanks to Tyrel for the interview.

Security Language

Gunnar Peterson's post on the new Common Attack Pattern Enumeration and Classification (CAPEC) project reminded me that MITRE is hosting a ton of these sorts of frameworks. Most of them are listed at measurablesecurity.mitre.org so I intend to refer to that portal from now on. It would be great to see related projects cooperate with MITRE's work. For example, the Web Application Security Consortium "Threat" Classification should be renamed to be an attack classification, consistent with the MITRE CAPEC enumeration. Similarly, it would be nice to see the Open Web Application Security Project Top Ten speak in terms of "attacks" rather than "flaws."

Overall I would like to see some rigorous thought applied to the use of security terms. For example, a recent SANS NewsBites said:

We are planning for the 2007 Top20 Internet Security Threats report. If you have any experience with Top20 reports over the past six years, could you tell us whether you think an annual or semi-annual or quarterly summary report is necessary or valuable?

Is this another identity crisis for the SANS Top 20 (as covered in my post Further Thoughts on SANS Top 20) or is someone saying "threat" when they mean "vulnerability," or...?

We need to have our terminology straight or we will continue to talk past each other.

The Peril of Speaker-Sponsors

One of the interesting aspects of being an independent consultant is having other companies think TaoSecurity exists as a mighty corporate entity with plenty of cash to spend. This has exposed me to some of the seedier aspects of corporate life, namely "speaker-sponsorship." Have you ever attended a keynote address, or other talk at a conference, and wondered how such a person could ever have been accepted to speak? There's a good chance that person paid for the slot.

Two instances of this come to mind. First, several months ago I was contacted by the producer of a television program to appear on their show. The program was hosted by Terry Bradshaw (no kidding) and was looking for speakers to discuss the state of the digital security market. This sounded like it was almost too good to be true, and guess what -- it was. A few minutes into the conversation with the producer I learned that TaoSecurity would be expected to pay a $15,000 sponsorship fee to "defray costs" for Mr. Bradshaw, and other expenses. Essentially I would be buying a spot on the show, but it would be a "fabulous marketing experience." I said forget it.

Second, I just received a call from someone organizing a "security event." This person was looking for "experts" on PCI and other topics for briefings in September. I told him I was not available at the specified time, so he asked to be switched to the TaoSecurity marketing department since what he really wanted was "speaker-sponsors." In other words, people speaking at this event will have paid for their slots. Again, I said forget it.

Keep these thoughts in mind the next time you see a lame talk at a security conference by a marketing person.

Attacker 3.0

Gunnar Peterson mentioned a few terms that, for me, brilliantly describe the problem we face in digital security. To paraphrase Gunnar, the digital world consists of the following:

  • Security 1.0

  • Web 2.0

  • Attacker 3.0


To that I might add the following:

  • Government -1.0

  • User 0.5

  • Application Developer 2.5


What do I mean by all of this?

  • Government -1.0: in general, hopelessly clueless legislation leads to worse security than without such legislation -- often due to unintended consequences

  • User 0.5: users are largely unaware and essentially helpless, but I wouldn't expect them to improve -- I'm not an automobile designer or electrical engineer, yet I can drive my car and watch TV

  • Security 1.0: security tools and techniques are just about good enough to address yesterday's attacks

  • Web 2.0: this is what is here, with more on the way -- essentially indefensible applications all running over port 80 TCP (or at least HTTP) that no developer really understands and for which no one takes responsibility

  • Application Developer 2.5: by this I do not mean developers are ahead of anyone with respect to security; rather, they are introducing new features and capabilities without regard to security, thereby exposing vulnerabilities that no one (except intruders and some security researchers) really understands

  • Attacker 3.0: in Tao I said because some intruders are smarter than us and unpredictable, prevention eventually fails -- it's more true now than ever


The only way I know to deal with this problem is to stay aware of it through monitoring and to deter, prosecute, and incarcerate threats. If Attacker 3.0 were not free to exploit at will, without fear of attribution or retribution, I would care less about these problems.

Prof Starbird Mathematics Courses

I'm a big fan of courses produced by The Teaching Company, so I bet similarly-minded blog readers might also enjoy such courses. My favorite instructor is Prof Michael Starbird. I noticed that three of his four courses are on sale until 14 June.

When I say "sale" I mean "buy these now or wait another year until they are on sale again," because a course currently selling for $69.95 will be $254.95 most of the year.

I took all sorts of math courses through college and probability and statistics courses through graduate school, but I never developed the sense of understanding that Prof Starbird conveyed.

After watching Prof Starbird's first course, The Joy of Thinking: The Beauty and Power of Classical Mathematical Ideas, my wife and I visited Prof Starbird at his office at the University of Texas. I don't think he ever had a "fan" visit before, because he gave us a prop from the course (triangles used to prove the Pythagorean Theorem, I think).

I saw that Prof Starbird published a new book titled Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas. I have to admit I still haven't read the first edition of his book The Heart of Mathematics, so I should try to bring that book on a plane soon.

I also like history courses from The Teaching Company and I've even watched a course on music, but that's not what I expect my fellow technophiles to want to read in this blog.

Sunday, 27 May 2007

Brief Thought on FreeBSD X.org Update

Since I do not run X on my FreeBSD servers, and my laptop now runs Ubuntu (heretical but productive, I know), I have not been affected by the update of X.org to 7.2 on FreeBSD. I read Updating Firefox 2 and FreeBSD 6.2 and the response Not everybody will be happy with the X.org upgrade. Basically there's a difference of opinion concerning the appropriateness of radically changing a key addition to the operating system mid-stream, i.e., during the life of 6.2.

If I were running FreeBSD 6.2 with X, I probably would have tried avoiding X.org 7.2 if possible. Losing X is a very disruptive event if the upgrade fails, and with so many ports affected it would be very invasive. I would have waited until the release of FreeBSD 6.3 or 7.0 before using X.org 7.2. Alternatively, I might have reinstalled 6.2 without X.org, and then added it and all other software as packages.

I understand the developers wanting to get X.org 7.2 into users' hands as soon as possible, given the amount of work involved and their desire to have finished months ago. However, changing from a monolithic version of X.org to a modular one seems disruptive enough to have waited for coordination with the release of FreeBSD 6.3 and 7.0. I'm not a developer, but those are my thoughts on the matter. I would be curious to hear how others might be handling this issue.

Another Anti-Virus Problem, Again

In February I blogged about a vulnerability in a Trend Micro product that exposed systems "protected" by this anti-virus software to remote exploitation. Symantec provides another example that running anti-virus is not cost free: Symantec false positive cripples thousands of Chinese PCs.

Now, according to Symantec may compensate Chinese users hit by buggy update, Symantec may pay companies affected by its botched signature update. Trend Micro apparently had a similar problem in 2005, before I was blogging about these dangers; it cost TM $8.2 million.

Please keep these stories in mind when you hear people claim that adding any security software to a system is automatically good and justified because of "defense in depth."

On a related note, this story pointed me towards the English language edition of the Chinese Internet Security Response Team blog.

Reminder: Time Running Out for Bejtlich at GFIRST

I'll be teaching and speaking at the 2007 GFIRST conference in Orlando, FL in June 2007. This is pro-bono since DHS isn't paying airfare, hotel, meals, or a speaking honorarium. On Monday 25 June 2007 I'll be teaching two half-day tutorials. The first will cover Network Incident Response and the second will cover Network Forensics. On Tuesday 26 June at 1415 I will deliver the talk I gave at Shmoocon -- Traditional IDS Should Be Dead. I spoke at the 2006 and 2005 GFIRST conferences as well.

GFIRST still hasn't updated their training page to reflect my class, but I will be there teaching.

Reminder: Early Registration Ends Soon for Bejtlich at SANSFIRE 2007

I'll be teaching a special one-day course, Enterprise Network Instrumentation, at SANSFIRE 2007 in Washington, DC on 25 July 2007.

ENI is a one-day course designed to teach all methods of network traffic access. If you have a network you need to monitor, ENI will teach you what equipment is available (hubs, switch SPAN ports, taps, bypass switches, matrix switches, and so on) and how to use it effectively. Everyone else assumes network instrumentation is a given. ENI teaches the reality and provides practical solutions.
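
To give a taste of one access method the class covers, here is a sketch (my illustration, not course material; the session number and interface names are assumptions) that prints the Cisco IOS commands for a basic local SPAN session:

    def span_config(session: int, source: str, dest: str) -> str:
        """Build IOS commands that mirror both directions of one switch
        port to the port where a sensor listens."""
        return "\n".join([
            f"monitor session {session} source interface {source} both",
            f"monitor session {session} destination interface {dest}",
        ])

    # Hypothetical interfaces: mirror Fa0/1 to a sensor on Fa0/24.
    print(span_config(1, "FastEthernet0/1", "FastEthernet0/24"))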

Please register while there are still seats available. My class is the day before all the six-day tracks begin. If you register before 6 June you will save $250. If you register by 27 June you will save $150. If you take this one-day class with a full SANS track my class only costs $450. Please note SANS set all of these prices and schedules.

This is the only time I'll be teaching this class in 2007. Thank you.

Update: I cancelled the class. If you want reasons please email me privately. Thank you.

Bejtlich Teaching Network Security Operations in Chicago

I am happy to announce that I will be teaching a three day edition of my Network Security Operations training class in Chicago, IL on 27-29 August 2007. This is a public class, although I will be speaking at the 30 August meeting of the Chicago Electronic Crimes Task Force. Please register here. The early discount applies to registrations before midnight 27 July. ISSA members get an additional discount on top of the early registration discount.

Network Security Operations addresses the following topics:

  • Network Security Monitoring


    • NSM theory

    • Building and deploying NSM sensors

    • Accessing wired and wireless traffic

    • Full content tools: Tcpdump, Ethereal/Tethereal, Snort as packet logger, Daemonlogger

    • Additional data analysis tools: Tcpreplay, Tcpflow, Ngrep, Netdude

    • Session data tools: Cisco NetFlow, Fprobe, Flow-tools, Argus, SANCP

    • Statistical data tools: Ipcad, Trafshow, Tcpdstat, Cisco accounting records

    • Sguil (sguil.sf.net)

    • Case studies, personal war stories, and attendee participation


  • Network Incident Response


    • Simple steps to take now that make incident response easier later

    • Characteristics of intruders, such as their motivation, skill levels, and
      techniques

    • Common ways intruders are detected, and reasons they are often initially
      missed

    • Improved ways to detect intruders based on network security monitoring
      principles

    • First response actions and related best practices

    • Secure communications among IR team members, and consequences of negligence

    • Approaches to remediation when facing a high-end attacker

    • Short, medium, and long-term verification of the remediation plan to keep the
      intruder out


  • Network Forensics


    • Collecting network traffic as evidence

    • Protecting and preserving traffic from tampering, either by careless
      helpers or the intruder himself

    • Analyzing network evidence using a variety of open source tools, based
      on network security monitoring (NSM) principles

    • Presenting findings to lay persons, such as management, juries, or judges

    • Defending the conclusions reached during an investigation, even in the
      face of adversarial defense attorneys or skeptical business leaders



This is one of only two Network Security Operations courses left for 2007. Please consider attending this class if you want to understand how to detect, inspect, and eject network intruders.

Bejtlich Teaching Network Security Operations in Cincinnati

I am happy to announce that I will be teaching a three day edition of my Network Security Operations training class in Cincinnati, OH on 21-23 August 2007. The Cincinnati ISSA chapter is hosting the class. Please register here. The early discount applies to registrations before 20 July. ISSA members get an additional discount on top of the early registration discount.

Network Security Operations addresses the following topics:

  • Network Security Monitoring


    • NSM theory

    • Building and deploying NSM sensors

    • Accessing wired and wireless traffic

    • Full content tools: Tcpdump, Ethereal/Tethereal, Snort as packet logger, Daemonlogger

    • Additional data analysis tools: Tcpreplay, Tcpflow, Ngrep, Netdude

    • Session data tools: Cisco NetFlow, Fprobe, Flow-tools, Argus, SANCP

    • Statistical data tools: Ipcad, Trafshow, Tcpdstat, Cisco accounting records

    • Sguil (sguil.sf.net)

    • Case studies, personal war stories, and attendee participation


  • Network Incident Response


    • Simple steps to take now that make incident response easier later

    • Characteristics of intruders, such as their motivation, skill levels, and
      techniques

    • Common ways intruders are detected, and reasons they are often initially
      missed

    • Improved ways to detect intruders based on network security monitoring
      principles

    • First response actions and related best practices

    • Secure communications among IR team members, and consequences of negligence

    • Approaches to remediation when facing a high-end attacker

    • Short, medium, and long-term verification of the remediation plan to keep the
      intruder out


  • Network Forensics


    • Collecting network traffic as evidence

    • Protecting and preserving traffic from tampering, either by careless
      helpers or the intruder himself

    • Analyzing network evidence using a variety of open source tools, based
      on network security monitoring (NSM) principles

    • Presenting findings to lay persons, such as management, juries, or judges

    • Defending the conclusions reached during an investigation, even in the
      face of adversarial defense attorneys or skeptical business leaders



This is one of only two Network Security Operations courses left for 2007. Please consider attending this class if you want to understand how to detect, inspect, and eject network intruders.

4000 Helpful Votes at Amazon.com

Last week the "Helpful Votes" count for my Amazon.com reviews reached 4,000. I hit 3,000 in January 2006 and 1,500 in December 2003. Since reaching the 3,000 mark I've read and reviewed 55 additional books. Thank you to everyone who votes my reviews "helpful."

If you want to see what I have on my shelf and plan to read next, please check out my reading list. If you want to see the books I hope to see soon, please visit my Amazon.com Wish List.

If you want general recommendations read my Amazon.com Listmania Lists. In 2005 Bookpool published my favorite 10 books from the past 10 years.

My reading pace has slowed since becoming an independent consultant and father of two, but I try to read when flying hither and yon.

Bejtlich Teaching at USENIX Security

USENIX just posted details on USENIX Security 2007, 6-10 August in Boston, MA. I will be teaching TCP/IP Weapons School, Layers 4-7 on 6-7 August.

This is a sequel to TCP/IP Weapons School, Layers 2-3 at USENIX Annual 2007 in Santa Clara, CA on 21-22 June and TCP/IP Weapons School, Layers 2-3 at Techno Security 2007 in Myrtle Beach, SC on 6-7 June.

The two-day class I'm teaching at Black Hat on 28-29 and 30-31 July is a condensed version of the four-day series (broken into layers 2-3 and 4-7) for USENIX. I also plan to teach this condensed edition at ForenSec in Regina, SK in September.

Snort Report 6 Posted

My sixth Snort Report -- Output options for Snort data -- has been posted. From the introduction:

Output modes are the methods by which Snort reports its findings when run in IDS mode. As discussed in the first Snort Report, Snort can also run in sniffer and packet logger modes. In sniffer mode, Snort writes traffic directly to the console. As a packet logger, Snort writes packets to disk in Libpcap format. This article describes output options for IDS mode, called via the -c [snort.conf] switch. Only IDS mode offers output options.

This is the first of two Snort Reports in which I address output options. Without output options, consultants and VARs can't produce Snort data in a meaningful manner. Because output options vary widely, it's important to understand the capabilities and limitations of different features. In this edition of Snort Report, I describe output options available from the command line and their equivalent options (if available) in the snort.conf file. I don't discuss the Unix socket option (-A unsock or alert_unixsock). I will conclude with a description of logging directly to a MySQL database, which I don't recommend but explain for completeness.


In the next edition I will discuss Barnyard.
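
Until then, here is a minimal sketch of exercising the command-line output options described above. This is my example, not the article's, and the config path and interface name are assumptions:

    import subprocess

    # Run Snort in IDS mode (-c) with one-line "fast" alerts plus binary
    # packet logs, all written under the directory given with -l.
    cmd = [
        "snort",
        "-c", "/usr/local/etc/snort/snort.conf",  # assumed config path
        "-i", "em0",                              # assumed sensor interface
        "-A", "fast",                             # one-line alerts
        "-l", "/var/log/snort",                   # log directory
        "-b",                                     # log packets in libpcap binary format
    ]
    subprocess.run(cmd, check=True)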

Friday, 25 May 2007

Heading Home from Australia

My whirlwind Australia trip is coming to a close. I'll be boarding a flight from Sydney to LAX soon. I'd like to thank Christian Heinrich and John Dale from Secure Agility for hosting me in Sydney, and everyone at AusCERT for helping me with my classes in Gold Coast.

I'd like to briefly record a few thoughts on the AusCERT conference.

  • Andrea Barisani gave a great talk on the rsync1.it.gentoo.org compromise of December 2003. He emphasized that preventing incidents is nice, but security monitoring and awareness are absolutely critical. I need to try his Tenshi log monitoring tool.

  • Greg Castle introduced his Whitetrash whitelisting Web redirector for Squid. I think his approach is very innovative and I plan to try Whitetrash with my lab Squid proxy. He also showed how Google Mobile could avoid some URL inspectors, with URLs like http://google.com/gwt/n?u=http:%3a%2f%2fslashdot.org. (See the decoding sketch after this list.)

  • Mike Newton from Stanford explained his Argus infrastructure, which collects 35 GB of data per day; he reduces that to 11 GB per day with bzip2 and then to 3 GB per day with custom filtering. He keeps 30 days online in raw format, then compresses and stores 400 days. He watches 5 class B networks with 45,000 hosts. Based on his analysis, Stanford is segmenting itself into 300 zones using virtual firewalls (?). He said that one of the important reasons to monitor with Argus is to avoid having to disclose incident details, because Argus data can show that compromise of sensitive data was unlikely or did not occur.

  • John McHugh (formerly of CERT) gave a great talk on network situational awareness using SiLK, right after my talk. I need to try some of the tools at the Network Situational Awareness group at CERT. I had dinner with John and I hope to do a guest lecture at some point at his school.

  • Cristine Hoepers from the Brazil CERT spoke on spam research using open proxy honeypots. Her talk reminded me that I should consider honeypots as a way to collect threat information in locations where monitoring production traffic is sensitive. If I monitor only the honeypot, I can limit privacy complaints about seeing other people's traffic.
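
Regarding the Google Mobile example in Greg's talk, here is a toy sketch showing why such URLs evade naive inspectors: the true destination rides percent-encoded in the u parameter, so a filter that checks only the outer hostname sees google.com.

    from urllib.parse import urlparse, parse_qs

    # Decode the embedded destination from the redirector-style URL above.
    url = "http://google.com/gwt/n?u=http:%3a%2f%2fslashdot.org"
    params = parse_qs(urlparse(url).query)  # parse_qs percent-decodes values
    print(params["u"][0])                   # the inner URL the filter never saw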

Sunday, 20 May 2007

Latest Plane Reading

I'm on the road again, en route to Gold Coast for AusCERT, followed by a public course on Network Security Monitoring in Sydney on Friday 25 May 2007. There are still seats left -- check it out if you want to attend!

Here are a few thoughts on items I read on my flight from IAD to LAX.


  • The latest Cisco IP Journal article on DNS Infrastructure by Steve Gibbard is awesome. Read it if you really want to understand global DNS in a few pages.

  • The Hotbots paper Peer-to-Peer Botnets (.pdf) is awesome. I question the use of PerilEyez for forensic work, but I haven't tried it before. I need to check out Trojan.Peacomm and Kademlia.

  • Baller Herbst has helpful CALEA docs. I also liked the Aqsacom LAWFUL INTERCEPTION FOR IP NETWORKS White Paper (.pdf).

  • Kudos to Matt Blaze for more cool research, specifically his co-authored paper The Eavesdropper's Dilemma. If you think you're doing network forensics you need to develop a strategy to address his conclusion:

    Internet eavesdropping systems suffer from the eavesdropper’s dilemma. For electronic wiretapping systems to be reliable, they must exhibit correct behavior with regard to both sensitivity and selectivity. Since capturing traffic is a requisite of any monitoring system, considerable research has focused on preventing evasion attacks and otherwise improving sensitivity. However, little attention has been paid to enhancing selectivity or even recognizing the issue in the Internet context. Traditional wisdom has held that eavesdropping is sufficiently reliable as long as the communicating parties do not participate in a bilateral effort to conceal their messages.

    We have demonstrated that even in the absence of cooperation between the communicating endpoints, reliable Internet eavesdropping is more difficult than simply capturing packets. If an eavesdropper cannot definitively and correctly select the pertinent messages from the captured traffic, the validity of the reconstructed conversation can be called into question. By injecting noise into the communication channel, unilateral or third-party confusion can make the selectivity process much more difficult and therefore further diminishes the reliability of electronic eavesdropping.


    Life just got more complicated. (A toy illustration of such noise injection appears after this list.)

  • We need to take out Hackistan.

  • CIO Magazine has a good article with percentages of companies not in compliance with various rules and regulations. It contains gems like:

    Compliance with federal, state, and international privacy and security laws and regulations often is more an interpretive art than an empirical science—and it is frequently a matter for negotiation. How to (or, for some CIOs, even whether to) follow regulations is neither a simple question with a simple answer nor a straightforward issue of following instructions. This makes it more an exercise in risk management than governance. Often, doing the right thing means doing what’s right for the bottom line, not necessarily what’s right in terms of the regulation or even what’s right for the customer...

    “We’re trying to remain profitable for our shareholders, and we literally could go broke trying to cover for everything. So, you make risk-based decisions: What’re the most important things that are absolutely required by law?”...

    The CISO told Taylor that she had received an e-mail from one of her programmers informing her that the school may have experienced a breach that may have exposed students’ personal information. The programmer was unsure if the law required the school to report the incident and asked the CISO for guidance.

    Taylor asked her what she did. She said she wrote back to the programmer telling him not to do anything. Taylor told the CISO that the university should have reported the breach. The CISO disagreed, saying, essentially, that because very few people review system log files and because only one or two people at the university understood the systems and the data in them, it was probable that the breach would go unremarked and undiscovered...

    The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss,” he suggests...


    All of this rings true to me.

  • Who's Had a Taste of Your Intellectual Property? in Information Security magazine is good.

    According to a 2006 report from the office of the United States Trade Representative (USTR), U.S. businesses are losing approximately $250 billion annually from trade secret theft. Federal law enforcement officials say the most targeted industries include biotechnologies and pharmaceutical research, advanced materials, weapons systems not yet classified, communications and encryption technologies, nanotechnology and quantum computing...

    [I]t can take years until a trade secret theft is detected, says Smith: "You wouldn't even know it [your IP] was missing for five years, when a competitor would suddenly introduce a product that sold for one-third to one-fifth of the price of yours."..

    For organizations that depend heavily on commercializing the product of their R&D activities, trade secrets are particularly important. Patents are equally important, but trade secrets differ from patents in a significant way. They are--as their name implies--secret. Whereas patents represent a set of exclusive rights granted by the government in exchange for the public disclosure of an invention, a trade secret is internal information or knowledge that a company claims it alone knows, and which is a valuable intangible asset.

    While patent owners have certain legal protections from anyone using their patents without permission, companies are responsible for proving they have the right to legal protection of their trade secrets. According to the UTSA, your company must demonstrate that the specific information or knowledge is not generally known to the public, therefore it derives independent economic value; and that you have made reasonable efforts to make sure the knowledge remains secret.

    A trade secret's validity can only be proven via litigation; there's no automatic protection just because your company believes it possesses one. Ironically, a trade secret must be stolen or compromised before you can attempt to demonstrate it is legally a trade secret. Once in litigation, your company must convince the court of three points: secrecy, value and security. Inevitably, the most difficult element to demonstrate is that your company had reasonable controls in place to protect the secrecy of the IP in question...

    John Landwehr, Adobe's director of security solutions and strategy, believes that the best protection of sensitive data happens at the document level: "Given the range of devices that IP can live on--from desktops, to laptops, to PDAs and mobile phones--we think that the only viable way to persistently protect that information is if the protection travels with the document."

    However, a word of caution about some of these products designed to protect confidential data: Because the vast majority are based on rule-set driven engines, the number of false positives they generate can be significant.


    Oh, that last point sounds too much like IDS. It must be bad?
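
Returning to the Blaze paper's conclusion, here is a toy scapy sketch of the kind of noise injection the authors describe; the address, ports, and sequence number are assumptions. A segment sent with a tiny TTL dies in transit, yet a sensor near the sender still captures it and may wrongly weave its payload into a reconstructed stream.

    from scapy.all import IP, TCP, send  # scapy assumed installed

    # Confusion packet: ttl=2 expires before reaching the endpoint, so
    # only an on-path eavesdropper ever sees this "decoy" payload.
    noise = (IP(dst="192.0.2.10", ttl=2)
             / TCP(sport=4444, dport=80, flags="PA", seq=1000)
             / b"decoy bytes the endpoint never receives")
    send(noise, verbose=False)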

Friday, 18 May 2007

It's Only a Flesh Wound

The slide referenced here (not reproduced) is from Gartner analyst Greg Young's presentation at the Gartner IT Security Summit 2006, Deconfusicating Network Intrusion Prevention (.pdf). "Deconfusicating" appears to be a fake synonym for simplifying. I bet that word was supposed to confuse an IDS, but not an IPS. Funny that stopping an attack requires detecting it, but never mind.

Someone recently recommended I read this presentation, so I took a look. It's basically a push for Gartner's vision of "Next Generation Firewalls" (NGFW), which I agree are do-everything boxes that will eventually collapse into security switches or Stiennon-esque "secure network fabric." The funny thing about all those IPS deployments is that I continue to hear about organizations that utilize only a fraction or none of the IPS blocking capability, and instead use them as -- wait for it -- IDS. Hmm.

That still doesn't account for the major problem with a prevention-only mindset. Let's face the facts: there are events which transpire on the network which worry you, but about which you can't reliably make a policy-based allow or deny decision. When business realities rule (which they always do) you let the traffic through. Where's the IPS now? It's an IDS.

There are also events you have no idea how to identify until nontechnical means reveal an incident. If you care at all about security you're going to want to keep track of what's happening on the network so you can scope the incident once you know what to look for. I call that one form of Network Security Monitoring (NSM).
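
As a small illustration of that retrospective mindset, here is a sketch (mine, with an assumed file name; dpkt is one of several suitable pcap libraries) that reduces a stored trace to session records you can search once an indicator surfaces:

    import socket
    import dpkt  # assumed available

    # Summarize a stored trace into per-session records: who talked to
    # whom, on which ports, starting when, and with how many packets.
    flows = {}
    with open("sensor.pcap", "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            ip = eth.data
            if not isinstance(ip, dpkt.ip.IP) or not isinstance(ip.data, dpkt.tcp.TCP):
                continue
            tcp = ip.data
            key = (socket.inet_ntoa(ip.src), tcp.sport,
                   socket.inet_ntoa(ip.dst), tcp.dport)
            pkts, first_seen = flows.get(key, (0, ts))
            flows[key] = (pkts + 1, first_seen)

    for (src, sport, dst, dport), (pkts, first) in sorted(flows.items()):
        print(f"{first:.0f} {src}:{sport} -> {dst}:{dport} packets={pkts}")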

At about the same time I saw the 2006 Gartner slides I read IDS in Mid-Morph, an interview with Gene Schultz, long time security veteran. The interview states:

Schultz says there are already signs of new life. For one thing, IDS data is being used as part of intelligence-collection for forensics, he says. "People are gathering a wide range of data about behavior in machines, the state of memory, etc. and combining it to find patterns of attacks.

Intrusion detection is one rendition of going more toward the route of intelligence-collection. Instead of focusing on micro-details like packet dumps, [security analysts] are looking at patterns of activity through intensive system and network analysis on a global scale, to determine what the potential threats are."

Schultz attributes this to a new breed of intrusion detection analyst, "more like an intelligence analyst, especially in the government."


I wonder if Gene read any of my books or articles? For the last five years I've defined NSM as the

collection, analysis, and escalation of indications and warnings to detect and respond to intrusions.

Chapter one from Tao is online and must say the word intelligence a dozen times.

Incidentally, if you're near Sydney I'll be teaching my NSM course on 25 May 2007. If you're near Santa Clara I'll be teaching on 20 June 2007. Thank you.

Thoughts on Latest CISSP Requirements Change

You all know I am a big fan of the CISSP certification. (If you don't recognize that as sarcasm, please read some old posts.) I wasn't going to comment on the press release (ISC)²® to Increase Requirements for CISSP® Credential to Validate Information Security Expertise, but no one else really has.

First, a little history. The last time a requirements change was announced was January 2002, in the press release (ISC)² TO IMPLEMENT NEW CISSP REQUIREMENTS IN 2003. That article stated:

...new requirements for the Certified Information Systems Security Professional (CISSP) certification, effective Jan. 1, 2003.

As of that date, the minimum experience requirement for certification will be four years or three years with a college degree or equivalent life experience. The current requirements for the CISSP call for three years of experience...

The "equivalent life experience" provision is intended for mature professionals who did not obtain a college degree but are in positions where a college degree would normally be required...


You may remember these changes were announced about a month after 16-year-old Namit Merchant passed the CISSP exam, according to a December 2001 SecurityFocus report.

I passed the CISSP in late 2001 as well (I was almost 30, not 16), so all I needed was three years of relevant work experience. Since 1 January 2003, you could have three years' experience plus one of the approved credentials. Those include many certs from SANS, for example.

The new requirements for the CISSP, announced this week, are:

Effective 1 October 2007, the minimum experience requirement for certification will be five years of relevant work experience in two or more of the 10 domains of the CISSP CBK®, a taxonomy of information security topics recognized by professionals worldwide, or four years of work experience with an applicable college degree or a credential from the (ISC)²-approved list.

Currently, CISSP candidates are required to have four years of work experience or three years of experience with an applicable college degree or a credential from the (ISC)²-approved list, in one or more of the 10 domains of the CISSP CBK.


I am not sure why (ISC)² is increasing the experience requirement. I don't think five years of "experience" will make that much of a difference compared to four years of experience plus a degree or credential. Honestly, equating a degree with a certification like CompTIA Security+ (on the "approved list") is really a joke, or should be.

Experience is not the only change:

Also effective 1 October, CISSP candidates will be required to obtain an endorsement of their candidature exclusively from an (ISC)²-certified professional in good standing.

Currently, candidates can be endorsed by an officer from the candidate’s organization if no CISSP endorsement can be obtained. The professional endorsing the candidate can hold any (ISC)² base certification – CISSP, Systems Security Certified Practitioner (SSCP®) or Certification and Accreditation Professional (CAPCM).


This is an anti-fraud attempt. I think it is too late. From the rumblings I've heard, cheating on exams like CISSP is not uncommon. One bad apple can "earn" the CISSP and then "endorse" all his buddies.

Maybe (ISC)² is finally starting to behave like employed French workers, protecting those who already have the certification at the expense of those on the outside? In other words, are there too many CISSPs chasing too few jobs? The latest press release states:

“With an estimated 1.5 million people working in information security globally, the nearly 50,000 CISSPs remain an elite group of professionals that are leading this industry,” Zeitler said. “(ISC)² will continue to assess its certification criteria and processes, as well its examinations and educational programs, to ensure that remains the case.”

50,000! Less than five years ago the press release (ISC)² RECOGNIZES 10,000th CISSP said only 2,000 CISSPs were certified in 1999, and 10,000 was reached in October 2002.

I still think the CISSP exam, and the certification in general, is a waste of time. For the latest example why, read How I Prepared and Passed CISSP:

I chose a self study route, and devoted around 2 months for the preparation. Locked myself in and had very little to no time for the family, I’d told them what I was up to, both my wife and son were very supporting. Every weekday I would dedicate 3 to 4 hours, and on weekends 5 to 6 hours for preparation. The last week before exam, I took leave from work and dedicated around 12 hours straight everyday for 7 days. To cope with the physical and mental tensions I did 45 minutes yoga in the morning and 20 minutes meditation in the afternoon. I took a break or stretched for 5 to 15 minutes after every 1 or 2 hours of studies.

That is ridiculous. I would expect someone who wants to be considered as a "security professional" to be well-enough versed in the CISSP material to not require seven straight days of 12 hour studying sessions, beyond the previous seven weeks of study.

I prepared for the test in 2001 by reading the first edition of the Krutz and Vines CISSP guide, followed by the Exam Cram the night before. That was it. No boot camp, no study marathons, no weeks of study groups. I had about four years experience and I figured that if (ISC)² required three years, I should be ok. I finished the test in 90 minutes and that was it.

If you're wondering how I would replace the CISSP, please read my 2005 post What the CISSP Should Be. I think Peter Stephenson's requirements for certifications are good guidelines as well.

Database Forensics

Database ninja David Litchfield told me he posted the latest in a series of lengthy articles on investigating Oracle database incidents. Specifically, he asked me to review the newest article on Live Response (.pdf) given my background. I recommend checking out the whole set of articles at Database Security.

Speaking of database security, I got a chance to see Alexander Kornbrust of Red-Database-Security GmbH talk about Oracle (in)security at CONFidence 2007. His talk reminded me of comments Thomas Ptacek once made about certain software being indefensible ten years ago, whereas now we have a fighting chance with some software. After hearing Alex's talk I think Oracle belongs in the indefensible category. Oracle appears to be at least five years behind their peer group in terms of producing "secure" code.

(I put "secure" in quotation marks because I don't believe anything is really "secure," but on relative terms Oracle seems far behind those with more robust secure development lifecycles and patch response processes.)


Sunday, 13 May 2007

Third of the Three Wise Men

I just listened to my third of the Three Wise Men, Ross Anderson, courtesy of Gary McGraw's Silver Bullet Podcast. This is another must-heed. During the podcast Prof. Anderson mentioned the following:

  • With respect to secure software development: As tools improve, we continue to "build bigger and better disasters." That echoes a theme in my previous posts.

  • "If someone is going to call themselves a security engineer, then they have to learn how things fail." This means studying history and contemporary security disasters. That's an argument for my National Digital Security Board.

  • Prof. Anderson mentioned potential compulsory registration for security professionals in the UK as a consequence of legislation requiring the registration of bouncers at clubs. Beware such an event here. Talk about unintended consequences.

  • Finally, Prof. Anderson warned of vulnerabilities in Near Field Communication (NFC) technology. For goodness sake, can we slow down the deployment of fundamentally broken technologies?


By the way, not only is the excellent Security Engineering now online, but the first 7 chapters can be downloaded in .mp3 format.

Second of the Three Wise Men

I just blogged about a new podcast by the first of my Three Wise Men, namely Marcus Ranum. My second of the Three Wise Men for today is Dan Geer. I just noticed his testimony to the Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology last month has been published. This is another must-heed collection of smart ideas. Brian Krebs summarized the hearing in his story Nation's Cyber Plan Outdated, Lawmakers Told. Dr. Geer's testimony included this gem:

I urge the Congress to put explaining the past, particularly for the purpose of assigning blame, behind itself. Demanding report cards, legislating under the influence of adrenaline, imagining that cybersecurity is an end rather than merely a means — all these and more inevitably prolong a world in which we are procedurally correct but factually stupid.

Amen. Also:

Information security is perhaps the hardest technical field on the planet. Nothing is stable, surprise is constant, and all defenders work at a permanent, structural disadvantage compared to the attackers. Because the demands for expertise so outstrip the supply, the fraction of all practitioners who are charlatans is rising. Because the demands of expertise are so difficult, the training deficit is critical. We do not have the time to create, as if from scratch, all the skills required. We must steal them from other fields where parallel challenges exist.


I wonder if the fraction of all practitioners with CISSP certifications is rising too?

The opposition is professional. It is no longer joyriders or braggarts. Because of the sheer complexity of modern, distributed, interdigitated, networked computer systems, the number of hiding places for unwanted software and unwanted visitors is very large.

The complexity, for the most part, comes from competitive pressure to add feature-richness to products; there is no market-leading product where one or a small group of people knows it in its entirety, and components from any pervasive system tend to be used and re-used in ways that even their designers did not anticipate.

Were there no attackers, this would be a miracle of efficiency and goodness. But unlike any other industrial product, information systems are at risk not from accident, not from cosmic radiation, and not from clumsy operation but from sentient opponents. The risk is not, as some would blithely say, “evolving” if by evolving the speaker means to invoke the course of Nature. The risk is due to intelligent design, and there is nothing random about it.


This is why one cannot legislate "security" for computers as one could try to legislate "safety" for automobiles. If people were crushing cars with boulders off bridges, shooting out car windows with AK-47s, or running over cars with tanks, no one would be blaming car manufacturers. They would (rightly!) be blaming the threats, as we should be doing with software and digital intruders.

I could easily cite the entire published testimony. Please read it.

RFC 4890: Recommendations for Filtering ICMPv6 Messages in Firewalls

All you fans of mindlessly blocking ICMP traffic are going to be in trouble if you try that strategy with IPv6. Luckily, RFC 4890: Recommendations for Filtering ICMPv6 Messages in Firewalls was published this month. This Informational RFC provides concrete guidance using these categories:

  • Traffic That Must Not Be Dropped

  • Traffic That Normally Should Not Be Dropped

  • Traffic That Will Be Dropped Anyway -- No Special Attention Needed

  • Traffic for Which a Policy Should Be Defined

  • Traffic That Should Be Dropped Unless a Good Case Can Be Made


This is a nice reference for those who wish to implement some degree of control over ICMPv6, which is an integral part of IPv6 and not something one can blindly block.
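For readers who want to turn those categories into an actual ruleset, here is a minimal, hypothetical ip6tables sketch covering only the first category, using numeric ICMPv6 types. A real RFC 4890 policy also needs the Neighbor Discovery rules and the code-level distinctions the RFC makes within types 3 and 4; treat this as a starting point, not a complete implementation.

# Error messages RFC 4890 says must not be dropped
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 1 -j ACCEPT    # Destination Unreachable
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 2 -j ACCEPT    # Packet Too Big
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 3 -j ACCEPT    # Time Exceeded
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 4 -j ACCEPT    # Parameter Problem
# Connectivity-checking messages, also in the "must not drop" category
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 128 -j ACCEPT  # Echo Request
ip6tables -A FORWARD -p icmpv6 --icmpv6-type 129 -j ACCEPT  # Echo Reply
# Remaining ICMPv6 deserves an explicit policy decision, not a blind drop
ip6tables -A FORWARD -p icmpv6 -j LOG --log-prefix "icmpv6-policy: "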

CONFidence Wrap-Up

This morning I delivered a talk at CONFidence 2007 in Krakow, Poland. I'd like to thank Andrzej Targosz and Jacek Artymiak for being the best hosts I've met at any conference. They picked me up at the airport, took me to dinner (along with dozens of others), and will take me back to the airport (at 0430, no less!) tomorrow. I spent a good amount of time with Anton Chuvakin, Daniel Cid, and Stefano Zanero, which was very cool.

I'd like to mention two talks. First, I watched Paweł Pokrywka talk about a neat way to discover layer two LAN topology with crafted ARP packets. Unfortunately, his talk was in Polish and I didn't exactly learn how he does it! I spoke to Paweł briefly before my own talk, and he said he plans to release a paper (in English) and his code (called Etherbat), so I look forward to seeing both.

Second, I attended Dinis Cruz's talk on buffer overflows in .NET and ASP.NET. I'm afraid I can't say anything intelligent about his talk. Dinis is a coding ninja and I really only left his talk with one idea: all general-computing platforms can be broken. What's funny is I'm not even sure Dinis would agree with me. His point seemed to be that .NET and ASP.NET (as well as other managed code environments) are breakable, but that, if implemented "properly," they could be made unbreakable.

Let's think about that for a moment. I'm sure the people who dreamed up .NET and ASP.NET are really smart. However, there are problems that render them vulnerable to people like Dinis. "Fine," you say. "Let Dinis help Microsoft fix the problems." Ok, Dinis helps implement a new version of this framework. A year or so later someone with a different insight or skill comes along and breaks the new version. And so on. This is the history of general purpose computing. I don't see a way to break the cycle if we continue to want developers to be able to write general purpose software. I am not speaking as a developer, but as an historian. We have been walking this path for over 20 years and I don't see any improvements.

Update: I forgot to mention that I liked Anton Chuvakin's definition of forensics:

Computer forensics is the application of the scientific method to digital media to establish factual information for judicial review.

Thoughts on Rear Guard Security Podcast

I just listened to the first episode of Marcus Ranum's new podcast Rear Guard Security. A previous commenter got it right; it's like listening to an academic lecture. If that gives you a negative impression, let me be clear: Marcus is a good academic lecturer. These are the sorts of lessons you might buy through The Teaching Company, for example.

Marcus isn't talking about the latest and greatest m4d sk1llz that 31337 d00ds use to 0wn j00. Instead, he's questioning the very fundamentals of digital security and trying to equip the listener with a deep understanding of difficult problems. Most vendors will hate what he says and others will think he's far too pessimistic. I think Marcus is largely right because (although he doesn't say this outright) he believes vulnerability-centric security is doomed to failure. (I noticed Matt Franz thinks I may be right, too.) Once we realize that nothing we do will ultimately remove all vulnerabilities, we've got to improve our ability to deter, investigate, apprehend, prosecute, and incarcerate threats. (I'll say a little more on this in a future post.)

One area in which I disagree with Marcus is penetration testing. I think he might accept my position if framed properly, since he is a proponent of "science" to the degree we can aspire to that standard. In my post Follow-Up to Donn Parker Story I wrote:

Rather than spending resources measuring risk, I would prefer to see measurements like the following:

  1. Time for a pen testing team of [low/high] skill with [external/internal] access to obtain unauthorized [stealthy/unstealthy] control of a specified asset using [public/custom] tools and [zero/complete] target knowledge. Note this measurement contains variables affecting the time to successfully compromise the asset.

  2. Time for a target's intrusion detection team to identify said intruder (pen tester), and escalate incident details to the incident response team.

  3. Time for a target's incident response team to contain and remove said intruder, and reconstitute the asset.


These are the operational sorts of problems that matter in the real world.


Yes, I did slightly modify number one to clarify meaning.

In Answering Penetration Testing Questions I added a few more comments, specifically mentioning a source like SensePost Combat Grading as an example of how to rate the [low/high] variable. That's not necessarily the standard I would use (since I haven't seen it) but it shows professional pen testers do think about such issues. (Maybe I can chat with them at Black Hat?)

Marcus defines pen testing as attempting to determine the quality of an unknown quantity using another unknown quantity and a constantly varying set of conditions. In my #1 metric I try to reduce the number of variables so that the unknowns are fewer. I don't think it's ever possible to eliminate those variables, because the unit to be tested (the enterprise, usually) is never in a fixed state.

That reflects the real world. The enterprise attacked on Tuesday may not be like the enterprise on Wednesday. As much as I advocate knowing your network, I recognize that comprehensive, perfect knowledge cannot be obtained, due mainly to complexity and aggravated by many other factors. However, the same factors that complicate our defense can complicate the intruder's offense. Overall I do not see the problem with finding out how long it takes for a pen testing team operating within my chosen parameters to achieve a specified objective.

This is why I think there's room in Marcus' world for my point of view. I believe there is value in the outcome of these tests. In other words, a single test is worth a thousand theories. I cannot count the number of times I've dealt with security people who refuse to believe a given incident has occurred (e.g., that their box is rooted, that it had no patches, and so on). Once you show them data, there's no room for excuses.

If it takes 30 minutes for a pen testing team of low skill with external access to obtain unauthorized unstealthy control of a specified asset using public tools and zero target knowledge, there's a problem.

If it takes an estimated 6 months for a pen testing team of high skill with internal access to obtain unauthorized stealthy control of a specified asset using private tools and full target knowledge, the situation is a lot different! (I say "estimated 6 months" because few if any customers are going to hire a pen team for that long. It is possible for pen teams to survey an architecture and estimate how long it would take for them to research, develop, and execute a custom zero-day.)

There is a reason the DoD and DoE staff robust red teams (i.e., pen testers). The report Defense Science Board Task Force on The Role and Status of DoD Red Teaming Activities is very helpful.

Incidentally, I'd rather not be the guy who debates Marcus on this issue if he wants to argue with a "pen tester." I don't do pen tests for a living. If he just wants an opposing point of view, I can probably provide that.

LBNL/ICSI Enterprise Tracing Project

Thanks to ronaldo in #snort-gui I learned about the LBNL/ICSI Enterprise Tracing Project. According to the site:

A goal of this project is to characterize internal enterprise traffic recorded at a medium-sized site, and to determine ways in which modern enterprise traffic is similar to wide-area Internet traffic, and ways in which it is quite different.

We have collected packet traces that span more than 100 hours of activity from a total of several thousand internal hosts. This wealth of data, which we are publicly releasing in anonymized form, spans a wide range of dimensions.


I decided to take a look at this data through the lens of Structured Traffic Analysis, which I discuss in Extrusion Detection and (IN)SECURE Magazine. I downloaded lbl-internal.20041004-1303.port001.dump.anon and took the following actions.

First I hashed the trace to document its integrity, then ran capinfos to get a sense of the nature of the trace.

$ sha256 lbl-internal.20041004-1303.port001.dump.anon \
  > lbl-internal.20041004-1303.port001.dump.anon.sha256
$ capinfos lbl-internal.20041004-1303.port001.dump.anon
File name: lbl-internal.20041004-1303.port001.dump.anon
File type: libpcap (tcpdump, Ethereal, etc.)
Number of packets: 84574
File size: 5907016 bytes
Data size: 33872987 bytes
Capture duration: 600.507393 seconds
Start time: Mon Oct 4 16:03:41 2004
End time: Mon Oct 4 16:13:41 2004
Data rate: 56407.28 bytes/s
Data rate: 451258.22 bits/s
Average packet size: 400.51 bytes

We can see this trace spans 10 minutes in October 2004, averages 451 Kbps, and contains 84574 packets.

Next I ran Tcpdstat to learn a little more about the traffic.

$ tcpdstat lbl-internal.20041004-1303.port001.dump.anon

DumpFile: lbl-internal.20041004-1303.port001.dump.anon
FileSize: 5.63MB
Id: 200410041603
StartTime: Mon Oct 4 16:03:41 2004
EndTime: Mon Oct 4 16:13:41 2004
TotalTime: 600.51 seconds
TotalCapSize: 4.34MB CapLen: 74 bytes
# of packets: 84574 (32.30MB)
AvgRate: 451.17Kbps stddev:304.48K

### IP flow (unique src/dst pair) Information ###
# of flows: 260 (avg. 325.28 pkts/flow)
Top 10 big flow size (bytes/total in %):
37.9% 18.0% 15.8% 7.4% 6.8% 5.0% 1.3% 1.1% 0.7% 0.7%

### IP address Information ###
# of IPv4 addresses: 143
Top 10 bandwidth usage (bytes/total in %):
56.1% 55.9% 35.0% 23.0% 12.5% 2.7% 1.7% 1.3% 1.3% 1.0%
### Packet Size Distribution (including MAC headers) ###
<<<<
[ 32- 63]: 12784
[ 64- 127]: 17662
[ 128- 255]: 27008
[ 256- 511]: 7531
[ 512- 1023]: 2416
[ 1024- 2047]: 17173
>>>>


### Protocol Breakdown ###
<<<<
protocol packets bytes bytes/pkt
------------------------------------------------------------------------
[0] total 84574 (100.00%) 33872987 (100.00%) 400.51
[1] ip 84514 ( 99.93%) 33859701 ( 99.96%) 400.64
[2] tcp 82817 ( 97.92%) 33278039 ( 98.24%) 401.83
[3] http(s) 1727 ( 2.04%) 1251300 ( 3.69%) 724.55
[3] http(c) 1579 ( 1.87%) 267624 ( 0.79%) 169.49
[3] imap 488 ( 0.58%) 122352 ( 0.36%) 250.72
[3] ssh 176 ( 0.21%) 26337 ( 0.08%) 149.64
[3] other 78847 ( 93.23%) 31610426 ( 93.32%) 400.91
[2] udp 399 ( 0.47%) 88116 ( 0.26%) 220.84
[3] dns 50 ( 0.06%) 8669 ( 0.03%) 173.38
[3] other 349 ( 0.41%) 79447 ( 0.23%) 227.64
[2] icmp 375 ( 0.44%) 35880 ( 0.11%) 95.68
[2] ipsec 923 ( 1.09%) 457666 ( 1.35%) 495.85
>>>>

You get some of the same information Capinfos provides, but you also get some primitive protocol breakdowns. Unfortunately, 93.23% of all packets fall into the unrecognized TCP "other" category.

Let's see if Tethereal does any better:

taosecurity:/home/analyst/lbl$ tethereal -n -r lbl-internal.20041004-1303.port001.dump.anon -q -z io,phs

===================================================================
Protocol Hierarchy Statistics
Filter: frame

frame  frames:84574 bytes:33872987
  eth  frames:84574 bytes:33872987
    ip  frames:84514 bytes:33859701
      tcp  frames:82817 bytes:33278039
      udp  frames:399 bytes:88116
        isakmp  frames:176 bytes:53996
          short  frames:176 bytes:53996
        short  frames:207 bytes:32742
      short  frames:923 bytes:457666
      icmp  frames:375 bytes:35880
        short  frames:30 bytes:11340
    arp  frames:28 bytes:1792
===================================================================

Unfortunately, Tethereal's statistics don't really tell you anything different from Tcpdstat's. Usually Tethereal statistics are more informative, but not here. For the sake of comparison, here is what the Wireshark GUI statistics tell you.

[Screenshot: Wireshark's Protocol Hierarchy statistics window for the same trace]
Notice the format is different (but more human-friendly), and there is no way to copy or save it to a file. That would be a nice feature. (Tshark shows the same output as Tethereal, incidentally.)
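One workaround: since Tethereal and Tshark write the same statistics to standard output, a shell redirect captures what the GUI won't save (the output file name here is just an example):

$ tethereal -n -r lbl-internal.20041004-1303.port001.dump.anon -q -z io,phs > protocol-hierarchy.txt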

The next step is to let Argus parse the file and then let Argus summarize the protocols it sees.

taosecurity:/home/analyst/lbl$ argus -r lbl-internal.20041004-1303.port001.dump.anon -w lbl.arg

taosecurity:/home/analyst/lbl$ ragator -r lbl.arg -w lbl.arg.ragator

taosecurity:/home/analyst/lbl$ racount -ar lbl.arg.ragator
racount   records  total_pkts  src_pkts  dst_pkts  total_bytes  src_bytes  dst_bytes
tcp           234       82817     39423     43394     33203201   10825712   22377489
udp            84         399       341        58        87969      77032      10937
icmp           36         375       224       151        35682      21416      14266
arp             4          28        28         0         1792       1792          0
non-ip          4          32        32         0        11494      11494          0
sum           363       83651     40048     43603     33340138   10937446   22402692

The next step is to see the IP addresses involved in this trace.

taosecurity:/home/analyst/lbl$ rahosts -nr lbl.arg.ragator
13.59.236.185
33.115.84.19
56.173.106.169
57.161.221.95
57.172.228.116
59.11.88.73
59.79.189.88
59.133.234.45
59.152.11.128
59.214.234.155
59.223.4.38
59.223.8.17
69.152.121.223
92.1.70.86
92.2.245.156
118.123.53.121
118.132.250.187
118.133.86.156
118.133.157.28
118.160.89.230
118.172.218.242
128.3.2.67
128.3.44.26
128.3.44.90
128.3.44.94
128.3.44.98
128.3.44.101
128.3.44.112
128.3.44.167
128.3.44.242
128.3.45.7
128.3.45.10
128.3.45.84
128.3.45.105
128.3.45.128
128.3.45.164
128.3.45.225
128.3.45.232
128.3.46.51
128.3.46.146
128.3.46.165
128.3.46.179
128.3.46.190
128.3.46.202
128.3.46.232
128.3.46.246
128.3.46.252
128.3.47.46
128.3.47.49
128.3.47.58
128.3.47.114
128.3.47.119
128.3.47.161
128.3.47.183
128.3.47.191
128.3.47.207
128.3.47.209
128.3.47.255
128.3.70.147
128.3.71.140
128.3.95.149
128.3.96.157
128.3.96.230
128.3.97.58
128.3.97.204
128.3.99.54
128.3.99.102
128.3.99.118
128.3.100.81
128.3.100.204
128.3.148.125
128.3.161.74
128.3.161.96
128.3.161.98
128.3.161.165
128.3.161.182
128.3.161.223
128.3.161.230
128.3.162.146
128.3.164.191
128.3.164.194
128.3.164.203
128.3.189.187
128.3.189.248
128.3.190.85
128.3.193.169
128.3.193.172
128.3.194.133
128.3.194.169
128.3.194.231
128.3.204.42
128.3.209.152
128.3.212.21
128.3.212.208
131.243.63.245
131.243.89.55
131.243.89.131
131.243.91.153
131.243.91.229
131.243.140.105
131.243.140.156
131.243.141.187
131.243.160.216
131.243.208.56
131.243.208.210
131.243.219.216
137.107.86.84
148.184.171.6
148.184.171.104
148.184.175.97
148.184.191.214
159.29.113.169
163.27.195.211
163.27.232.226
167.130.77.99
169.182.111.161
172.16.34.231
194.80.36.186
198.166.39.133
201.52.39.133
202.46.87.173
203.13.173.243
204.116.246.71
205.103.33.197
207.215.132.184
207.235.114.53
207.235.115.253
207.235.214.252
207.235.255.108
207.245.43.126
208.0.11.26
208.233.189.150
208.235.59.226
216.192.122.101
218.105.16.20
218.131.115.53
218.165.163.184
218.195.4.173
218.201.93.0

That's a lot of addresses for a 10-minute trace. Given the preponderance of 128.3.0.0/16 addresses, I'm guessing that is the HOME_NET.
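A quick way to test that guess is to count addresses by their first two octets. This one-liner is just a sketch combining the rahosts output above with standard Unix text tools:

$ rahosts -nr lbl.arg.ragator | cut -d . -f 1,2 | sort | uniq -c | sort -rn

If 128.3 dominates the counts, the HOME_NET guess holds.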

The next step involves creating what I call session combinations. Essentially I remove the source port as a factor and I group on source IP, destination IP, and destination port.

taosecurity:/home/analyst/lbl$ ra -nn -r lbl.arg.ragator -s saddr daddr dport proto |
  sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | uniq -c

1 a6:c6:c9:23:cc: a9:71:1d:9f:85: 321
1 3b:d:21:32:30:a 80:b:98:3b:b9:e 2457
1 33.115.84.19 128.3.47.46.5554 tcp
1 33.115.84.19 128.3.47.46.9898 tcp
1 33.115.84.19 128.3.44.101.5554 tcp
1 33.115.84.19 128.3.44.101.9898 tcp
1 33.115.84.19 128.3.45.105.5554 tcp
1 33.115.84.19 128.3.45.105.9898 tcp
1 33.115.84.19 128.3.46.146.5554 tcp
1 33.115.84.19 128.3.46.146.9898 tcp
1 33.115.84.19 128.3.46.202.5554 tcp
1 33.115.84.19 128.3.46.202.9898 tcp
1 33.115.84.19 128.3.46.232.5554 tcp
1 33.115.84.19 128.3.46.232.9898 tcp
1 33.115.84.19 128.3.47.209.5554 tcp
1 33.115.84.19 128.3.47.209.9898 tcp
1 34:c9:c8:fa:af: a9:71:1d:9f:85: 381
1 34:c9:c8:fa:af: a9:71:1d:9f:85: 390
1 69.152.121.223 128.3.46.179 icmp
1 118.132.250.187 128.3.44.112.1518 tcp
1 118.132.250.187 128.3.44.112.1525 tcp
4 128.3.44.26 128.3.190.85.143 tcp
1 128.3.44.26 128.3.47.255.138 udp
1 128.3.44.26 128.3.97.204.53 udp
4 128.3.44.26 128.3.164.194.143 tcp
1 128.3.44.26 128.3.189.187.138 udp
1 128.3.44.26 128.3.189.248 icmp
1 128.3.44.26 128.3.189.248.138 udp
1 128.3.44.26 128.3.189.248.139 tcp
1 128.3.44.26 128.3.189.248.2074 tcp
1 128.3.44.90 128.3.212.208.514 udp
1 128.3.44.98 128.3.97.204.53 udp
2 128.3.44.98 128.3.99.118.993 tcp
1 128.3.44.98 128.3.164.191.5730 tcp
1 128.3.44.101 128.3.97.58.123 udp
1 128.3.44.101 128.3.99.54.123 udp
2 128.3.44.112 59.11.88.73.80 tcp
5 128.3.44.112 59.223.4.38.80 tcp
2 128.3.44.112 59.223.8.17.80 tcp
1 128.3.44.112 128.3.47.255.137 udp
1 128.3.44.112 128.3.47.255.138 udp
3 128.3.44.112 128.3.97.204.53 udp
2 128.3.44.112 218.201.93.0.443 tcp
6 128.3.44.112 59.79.189.88.80 tcp
1 128.3.44.112 128.3.164.194.143 tcp
1 128.3.44.112 148.184.171.6 icmp
1 128.3.44.112 148.184.171.6.135 tcp
2 128.3.44.112 148.184.171.6.139 tcp
1 128.3.44.112 148.184.171.6.389 udp
2 128.3.44.112 148.184.171.6.445 tcp
2 128.3.44.112 218.105.16.20.80 tcp
2 128.3.44.112 218.195.4.173.80 tcp
2 128.3.44.112 118.133.157.28.80 tcp
4 128.3.44.112 118.133.86.156.80 tcp
2 128.3.44.112 148.184.175.97 icmp
1 128.3.44.112 148.184.175.97.135 tcp
1 128.3.44.112 148.184.175.97.139 tcp
2 128.3.44.112 148.184.175.97.389 udp
1 128.3.44.112 148.184.175.97.445 tcp
1 128.3.44.112 163.27.195.211.443 tcp
2 128.3.44.112 163.27.195.211.80 tcp
1 128.3.44.112 163.27.232.226.80 tcp
1 128.3.44.112 205.103.33.197.80 tcp
2 128.3.44.112 208.235.59.226.80 tcp
4 128.3.44.112 118.132.250.187.443 tcp
1 128.3.44.112 148.184.171.104 icmp
1 128.3.44.112 148.184.171.104.139 tcp
1 128.3.44.112 148.184.171.104.445 tcp
2 128.3.44.112 148.184.191.214.389 udp
2 128.3.44.112 207.235.214.252.80 tcp
1 128.3.44.112 207.235.255.108.5002 tcp
1 128.3.44.167 131.243.208.56.123 udp
1 128.3.44.242 128.3.212.208.514 udp
1 128.3.45.7 128.3.96.157.22 tcp
1 128.3.45.7 128.3.99.102.53 udp
1 128.3.45.10 208.0.11.26.80 tcp
1 128.3.45.10 128.3.47.255.137 udp
1 128.3.45.10 128.3.47.255.138 udp
1 128.3.45.10 128.3.97.204 icmp
2 128.3.45.10 128.3.97.204.53 udp
1 128.3.45.10 128.3.148.125.1521 tcp
26 128.3.45.10 137.107.86.84.80 tcp
1 128.3.45.10 203.13.173.243 icmp
1 128.3.45.10 203.13.173.243.53 udp
1 128.3.45.10 56.173.106.169.80 tcp
1 128.3.45.10 59.214.234.155.80 tcp
2 128.3.45.10 169.182.111.161.80 tcp
1 128.3.45.84 128.3.212.208.514 udp
1 128.3.45.105 128.3.96.157.67 udp
1 128.3.45.128 118.123.53.121.80 tcp
5 128.3.45.128 207.245.43.126.80 tcp
55 128.3.45.128 218.131.115.53.80 tcp
1 128.3.45.128 207.215.132.184.80 tcp
14 128.3.45.128 208.233.189.150.80 tcp
1 128.3.45.164 128.3.97.204.53 udp
1 128.3.45.164 128.3.161.182.139 tcp
1 128.3.45.164 128.3.161.223.138 udp
1 128.3.45.164 167.130.77.99.80 tcp
1 128.3.45.225 128.3.47.255.138 udp
1 128.3.45.225 128.3.70.147.161 udp
1 128.3.45.225 128.3.71.140.161 udp
6 128.3.45.225 128.3.97.204.53 udp
1 128.3.45.225 172.16.34.231.161 udp
1 128.3.45.232 202.46.87.173.80 tcp
1 128.3.46.51 128.3.212.208.514 udp
1 128.3.46.146 128.3.212.21 2054
1 128.3.46.146 128.3.96.230 2054
1 128.3.46.146 33.115.84.19 2054
1 128.3.46.146 128.3.162.146 2054
1 128.3.46.165 128.3.161.223.138 udp
1 128.3.46.165 128.3.161.223.139 tcp
1 128.3.46.165 128.3.161.223.2645 tcp
1 128.3.46.165 128.3.164.194.993 tcp
1 128.3.46.165 128.3.209.152 icmp
1 128.3.46.190 128.3.161.74 icmp
1 128.3.46.190 128.3.47.255.138 udp
1 128.3.46.190 128.3.161.165 icmp
1 128.3.46.190 128.3.161.223.139 tcp
1 128.3.46.190 128.3.161.230 icmp
1 128.3.46.190 128.3.164.194.993 tcp
1 128.3.46.190 131.243.141.187 icmp
1 128.3.46.246 128.3.209.152 icmp
4 128.3.46.252 128.3.95.149.111 udp
1 128.3.47.46 128.3.212.208.514 udp
1 128.3.47.49 131.243.219.216.137 udp
1 128.3.47.58 128.3.209.152 icmp
1 128.3.47.114 128.3.212.208.514 udp
1 128.3.47.119 128.3.47.255.138 udp
1 128.3.47.119 128.3.193.169.139 tcp
1 128.3.47.119 128.3.209.152 icmp
2 128.3.47.161 128.3.164.194.993 tcp
1 128.3.47.161 128.3.164.203.389 tcp
1 128.3.47.183 128.3.47.255.138 udp
1 128.3.47.183 128.3.189.248.139 tcp
1 128.3.47.183 204.116.246.71.1863 tcp
6 128.3.47.183 218.165.163.184.80 tcp
1 128.3.47.191 128.3.47.255.138 udp
1 128.3.47.191 131.243.89.131.161 udp
1 128.3.47.191 131.243.91.153.161 udp
1 128.3.47.191 131.243.91.229.161 udp
3 128.3.47.207 128.3.2.67.80 tcp
1 128.3.47.207 128.3.161.96.88 tcp
1 128.3.47.207 128.3.97.204.53 udp
1 128.3.47.207 128.3.164.194.993 tcp
1 128.3.47.207 128.3.193.169.139 tcp
1 128.3.47.207 128.3.193.169.80 tcp
2 128.3.47.207 128.3.193.172.80 tcp
1 128.3.47.207 128.3.194.133.161 udp
1 128.3.47.207 128.3.194.169.161 udp
1 128.3.47.207 128.3.194.231.161 udp
1 128.3.47.207 131.243.140.156 icmp
1 128.3.47.207 131.243.140.156.1026 tcp
1 128.3.47.207 131.243.140.156.135 tcp
1 128.3.47.207 131.243.140.156.445 tcp
1 128.3.96.157 128.3.45.105 icmp
1 128.3.96.230 128.3.47.46 icmp
1 128.3.96.230 128.3.44.101 icmp
1 128.3.96.230 128.3.45.105 icmp
1 128.3.96.230 128.3.46.146 icmp
7 128.3.96.230 128.3.46.146.161 udp
1 128.3.96.230 128.3.46.202 icmp
1 128.3.96.230 128.3.46.232 icmp
1 128.3.96.230 128.3.47.209 icmp
1 128.3.100.81 57.161.221.95.500 udp
1 128.3.100.81 59.133.234.45.500 udp
1 128.3.100.81 57.172.228.116.500 udp
1 128.3.100.81 118.172.218.242.500 udp
1 128.3.100.204 92.1.70.86.500 udp
1 128.3.100.204 92.2.245.156.500 udp
1 128.3.100.204 118.160.89.230.500 udp
1 128.3.100.204 131.243.63.245.500 udp
1 128.3.161.98 128.3.46.190.1050 tcp
1 128.3.161.165 128.3.46.190.1047 tcp
1 128.3.161.165 128.3.46.190.1048 tcp
1 128.3.161.223 128.3.46.165.139 tcp
1 128.3.162.146 128.3.46.146 icmp
1 128.3.164.191 128.3.44.98.4543 tcp
1 128.3.164.194 128.3.44.112.1395 tcp
1 128.3.204.42 128.3.44.26.38293 udp
1 128.3.209.152 128.3.47.58.38293 udp
1 128.3.209.152 128.3.46.165.38293 udp
1 128.3.209.152 128.3.46.246.38293 udp
1 128.3.209.152 128.3.47.119.38293 udp
1 128.3.212.21 128.3.46.146 icmp
1 128.3.212.208 128.3.44.90 icmp
1 128.3.212.208 128.3.44.94.137 udp
1 128.3.212.208 128.3.45.84 icmp
1 128.3.212.208 128.3.45.84.137 udp
1 128.3.212.208 128.3.46.51 icmp
1 128.3.212.208 128.3.46.51.137 udp
1 128.3.212.208 128.3.47.46 icmp
1 128.3.212.208 128.3.44.242 icmp
1 128.3.212.208 128.3.47.114 icmp
1 128.3.212.208 128.3.47.114.137 udp
1 131.243.89.55 128.3.47.58.139 tcp
1 131.243.140.105 128.3.46.190.1057 tcp
1 131.243.160.216 128.3.46.190.1119 tcp
1 131.243.208.210 128.3.44.167 icmp
1 148.184.191.214 128.3.44.112 icmp
1 194.80.36.186 128.3.46.232 icmp
1 207.235.114.53 128.3.47.183.4206 tcp
1 207.235.115.253 128.3.44.112.4973 tcp
1 216.192.122.101 128.3.44.94.49201 tcp
1 229.97.122.203 1 0 man

I like creating these session combinations because they show me connections to hosts and destination ports. I can review these target ports, for example, to look for sessions which might be interesting. This is as far as we can go, because all of the application layer details for these sessions have been eliminated by the Tcpmkpub anonymization tool.
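As one example of reviewing target ports, the paired connections to ports 5554 and 9898 from 33.115.84.19 near the top of the list stand out; those ports are commonly associated with the Sasser worm's FTP backdoor and the Dabber worm that exploits it. Assuming the usual Tcpdump-style filter syntax the Argus clients accept, one could pull just those sessions for closer review:

$ ra -nn -r lbl.arg.ragator - tcp and \( port 5554 or port 9898 \)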

At some point I plan to update this methodology using Argus 3.0, and automate the process.
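Until then, here is a rough sketch of the manual steps above strung together as a script. It simply replays the commands used in this post; FreeBSD's sha256 utility is assumed (use sha256sum on Linux), and Argus 3.0 option names may differ.

#!/bin/sh
# Hypothetical wrapper for the structured traffic analysis steps shown above
TRACE=$1
sha256 "$TRACE" > "$TRACE".sha256                # document trace integrity
capinfos "$TRACE"                                # basic trace statistics
tcpdstat "$TRACE"                                # primitive protocol breakdown
tethereal -n -r "$TRACE" -q -z io,phs            # protocol hierarchy statistics
argus -r "$TRACE" -w "$TRACE".arg                # build Argus session records
ragator -r "$TRACE".arg -w "$TRACE".arg.ragator  # aggregate the session records
racount -ar "$TRACE".arg.ragator                 # per-protocol session counts
rahosts -nr "$TRACE".arg.ragator                 # IP addresses seen in the trace
# Session combinations: source IP, destination IP, destination port
ra -nn -r "$TRACE".arg.ragator -s saddr daddr dport proto |
  sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | uniq -c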