Thursday, 31 December 2009

Best Book Bejtlich Read in 2009

It's the end of the year, which means it's time to name the winner of the Best Book Bejtlich Read award for 2009!



Although I've been reading and reviewing digital security books seriously since 2000, this is only the fourth time I've formally announced a winner; see 2008, 2007, and 2006.



2009 was a slow year, due to a lack of long-haul air travel (where I might read a whole book on one leg) and the general bleed-over from my day work into my outside-work time.



My ratings for 2009 can be summarized as follows:



  • 5 stars: 6 books


  • 4 stars: 5 books


  • 3 stars: 4 books


  • 2 stars: 0 books


  • 1 star: 0 books




Here's my overall ranking of the five-star reviews; this means all of the following are excellent books.



And, the winner of the Best Book Bejtlich Read in 2009 award is...



1. SQL Injection Attacks and Defense by Justin Clarke, et al; Syngress. This was a really tough call. Any of the top 4 books could easily have been the best book I read in 2009. Congratulations to Syngress for publishing another winner. SQL injection is probably the number one problem for any server-side application, and this book is unequaled in its coverage.



Looking at the publisher count, top honors in 2009 go to Syngress for 2 titles, followed by Wiley, Cisco Press, O'Reilly, and devGuide.net, each with one.



Thank you to all publishers who sent me books in 2009. I have plenty more to read in 2010.



Congratulations to all the authors who wrote great books in 2009, and who are publishing titles in 2010!

Wednesday, 30 December 2009

Every Software Vendor Must Read and Heed

Matt Olney and I spoke about the role of a Product Security Incident Response Team (PSIRT) at my SANS Incident Detection Summit this month. I asked if he would share his thoughts on how software vendors should handle vulnerability discovery in their software products.

I am really pleased to report that Matt wrote a thorough, public blog post titled Matt's Guide to Vendor Response. Every software vendor must read and heed this post. "Software vendor" includes any company that sells a product that runs software, whether it is a PC, mobile device, or a hardware platform executing firmware. Hmm, that includes just about everyone these days, except the little old ladies selling fabric at the hobby store.

Seriously, let's make 2010 the year of the PSIRT -- the year companies make dealing with vulnerabilities in their software an operational priority. I'm not talking about "building security in" -- that's been going on for a while. Until I can visit a variation of company.com/psirt, I'm not satisfied. For that matter, I'd like to see company.com/cirt as well, so outsiders can contact a company that might be inadvertently causing trouble for Internet users. (And yes, if you're wondering, we're working on both at my company!)

Difference Between Bejtlich Class and SANS Class

In a comment on my last post, Reminder: Bejtlich Teaching at Black Hat DC 2010, a reader asked:

I am trying to get my company's sponsorship for your class at Black Hat. However, I was asked to justify the choice between your class and SANS 503, Intrusion Detection In-Depth.

Would you be able to provide some advice?


That's a good question, but it's easy enough to answer. The overall point to keep in mind is that TCP/IP Weapons School 2.0 is a new class, and when I create a new class I design it to be different from everything that's currently on the market. It doesn't make sense to me to teach the same topics, or use the same teaching techniques, found in classes already being offered. Therefore, when I first taught TWS2 at Black Hat DC last year, I made sure it was unlike anything provided by SANS or other trainers.

Beyond being unique, here are some specific points to consider. I'm sure I'll get some howls of protest from the SANS folks, but they have their own platform to justify their approach. The two classes are very different, each with a unique focus. It's up to the student to decide what sort of material he or she wants to learn, in what environment, using whatever methods he or she prefers. I don't see anything specifically "wrong" with the SANS approach, but I maintain that a student will learn skills more appropriate for their environment in my class.

  • TWS2 is a case-driven, hands-on, lab-centric class. SANS is largely a slide-driven class.

    When you attend my class you get three handouts: 1) a workbook explaining how to analyze digital evidence; 2) a workbook with questions for 15 cases; and 3) a teacher's guide answering all of the questions for the 15 cases. There are no slides aside from a few housekeeping items and a diagram or two to explain how the class is set up.

    When you attend SANS you will receive several sets of slide decks that the instructor will show during the course of the class. You will also have labs but they are not the focus of the class.

  • I designed TWS2 to meet the needs of a wide range of students, from beginners to advanced practitioners. TWS2 attendees typically finish 5-7 cases per class, with the remainder suitable for "homework." Students can work at their own pace, although we cover certain cases at checkpoints during the class. A few students have completed all 15 cases, and I often ask if those students are looking for a new opportunity with my team!

  • TWS2 is about investigating digital evidence, primarily in the form of network traffic, logs, and some memory captures. The focus is overwhelmingly on the content and not the container. SANS spends more time on the container and less on the content.

    For example, if you look at the SANS course overview, you'll see they spend the first three days on TCP/IP headers and analysis with Tcpdump. Again, there's nothing wrong with that, but I don't care so much about which bit in the TCP header corresponds to the RST flag. That was mildly interesting in the late 1990s, when that part of the SANS course was written, but the content of a network conversation has been more important this decade. Therefore, my class focuses on what is being said and less on how it was transmitted. (See the short tcpdump illustration after this list.)

  • TWS2 is not about Snort. While students do have access to a fully-functional Sguil instance with Snort alerts, SANCP session data, and full content libpcap network traffic, I do not spend time explaining how to write Snort alerts. SANS spends at least one day talking about Snort.

  • TWS is not about SIM/SEM/SIEM. Any "correlation" between various forms of evidence takes place in the student's mind, or using the free Splunk instance containing the logs collected from each case. If you consider dumping evidence into a system like Splunk, and then querying that evidence, to be "correlation," then we have "correlation." (Please see Defining Security Event Correlation for my thoughts on that subject.) SANS spends two days on fairly simple open source options for "correlation" and "traffic analysis."

  • TWS cases cover a wide variety of activity, while SANS is narrowly focused on suspicious and malicious network traffic. I decided to write cases that cover many of the sorts of activities I expect an enterprise incident detector and responder to encounter during his or her professional duties.

    I also do not dictate any single approach to investigating each case. Just like real life, I want the student to produce an answer. I care less about how he or she analyzed the data to produce that answer, as long as the chain of reasoning is sound and the student can justify and repeat his or her methodology.
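To make the header-versus-content point above concrete, here is a rough tcpdump illustration of my own (this is not class material, and evidence.pcap is just a placeholder file name). The first command cares only about a header bit; the second cares about what the conversation actually says:

$ tcpdump -nn -r evidence.pcap 'tcp[tcpflags] & tcp-rst != 0'
$ tcpdump -nn -A -s 0 -r evidence.pcap 'tcp port 80'

The first line lists only segments with the RST bit set; the second dumps the ASCII payload of each web session so you can read the content itself.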


I hope that helps prospective students make a choice. I'll note that I don't send any of my analysts to the SANS "intrusion detection" class. We provide in-house training that includes my material but also focuses on the sorts of decision-making and evidence sources we find to be most effective in my company. Also please note this post concentrated on the differences between my class and the SANS "intrusion detection" class, and does not apply to other SANS classes.

Sunday, 20 December 2009

Reminder: Bejtlich Teaching at Black Hat DC 2010

Black Hat was kind enough to invite me back to teach multiple sessions of my 2-day course this year.

First up is Black Hat DC 2010 Training on 31 January and 01 February 2010 at Grand Hyatt Crystal City in Arlington, VA.

I will be teaching TCP/IP Weapons School 2.0.

Registration is now open. Black Hat set five price points and deadlines for registration, but only these three are left.

  • Regular ends 15 Jan

  • Late ends 30 Jan

  • Onsite starts at the conference


Seats are filling -- it pays to register early!

If you review the Sample Lab I posted earlier this year, you'll see this class is all about developing an investigative mindset through hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work -- an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

Feedback from my 2009 sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

I will also be teaching in Barcelona and Las Vegas, but I will announce those dates later.

I strongly recommend attending the Briefings on 2-3 Feb. Maybe it's just my interests, but I find the scheduled speaker list to be very compelling.

I look forward to seeing you. Thank you.

Friday, 18 December 2009

Favorite Speaker Quotes from SANS Incident Detection Summit

Taking another look at my notes, I found a bunch of quotes from speakers that I thought you might like to hear.

  • "If you think you're not using a MSSP, you already are. It's called anti-virus." Can anyone claim that, from the CIRTs and MSSPs panel?

  • Seth Hall said "Bro is a programming language with a -i switch to sniff traffic."

  • Seth Hall said "You're going to lose." Matt Olney agreed and expanded on that by saying "Hopefully you're going to lose in a way you recognize."

  • Matt Olney also said "Give your analyst a chance." ["All we are sayyy-ing..."]

  • Matt Jonkman said "Don't be afraid of blocking." It's not 2004 anymore. Matt emphasized the utility of reputation when triggering signatures, for example firing an alert when an Amazon.com-style URL request is sent to a non-Amazon.com server.

  • Ron Shaffer said "Bad guys are following the rules of your network to accomplish their mission."

  • Steve Sturges said "Snort 3.0 is a research project."

  • Gunter Ollmann said "Threats have a declining interest in persistence. Just exploit the browser and disappear when closed. Users are expected to repeat risky behavior, and become compromised again anyway."


Thanks again to all of our speakers!

Notes from Tony Sager Keynote at SANS

I took a few notes at the SANS Incident Detection Summit keynote by Tony Sager last week. I thought you might like to see what I recorded.

All of the speakers made many interesting comments, but it was really only during the start of the second day, when Tony spoke, that I had time to write down some insights.

If you're not familiar with Tony, he is chief of the Vulnerability Analysis and Operations (VAO) Group in NSA.

  • These days, the US goes to war with its friends (i.e., allies fight alongside the US against a common adversary). However, the US doesn't know its friends until the day before the war, and not all of the US's friends like each other. These realities complicate information assurance.

  • Commanders have been trained to accept a certain level of error in physical space. They do not expect to know the exact number of bullets on hand before a battle, for example. However, they often expect to know exactly how many computers they have at hand, as well as their state. Commanders will need to develop a level of comfort with uncertainty.

  • Far too much information assurance is at the front line, where the burden rests with the least trained, least experienced, yet well-meaning, people. Think of the soldier fresh from tech school responsible for "making it work" in the field. Hence, Tony's emphasis on shifting the burden to vendors where possible.

  • "When nations compete, everybody cheats." [Note: this is another way to remember that with information assurance, the difference is the intelligent adversary.]

  • The bad guy's business model is more efficient than the good guy's business model. They are global, competitive, distributed, efficient, and agile. [My take on that is that financially motivated computer criminals actually earn ROI from their activities because they are making money. Defenders are simply avoiding losses.]

  • The best way to defeat the adversary is to increase his cost, level of uncertainty, and exposure. Introducing these, especially uncertainty, causes the adversary to stop, wait, and rethink his activity.

  • Defenders can't afford perfection, and the definition changes by the minute anyway. [This is another form of the Defender's Dilemma -- what should we try to save, and what should we sacrifice? On the other hand we have the Intruder's Dilemma, which Aaron Walters calls the Persistence Paradox -- how to accomplish a mission that changes a system while remaining undetected.]

  • Our problems are currently characterized by coordination and knowledge management, and less by technical issues.

  • Human-to-human contact doesn't scale. Neither does narrative text. Hence Tony's promotion of standards-based communication.


Thanks again to Tony and our day one keynote Ron Gula!

Tuesday, 15 December 2009

Security as Interdepartmental Conflict...

I received this message in my Hotmail this morning:



Why does Microsoft get dinged for this type of presentation? Why does it happen? On a small scale it was probably because the Hotmail Calendar team wasn't talking with the Hotmail Security team. But that doesn't explain much. Computer security is still, in almost all industries and architectures, an "add-in". It is overlaid on top of existing products and architectures. The "security guys" are on separate teams, their training is separate, and their recommendations are "integrated" into existing products. The practice of security never fully integrates into the test suites for most product development because it can't be marketed like a popsicle. It is sold as an immunity, a dose of antibiotic, a pill. Compatibility of security architecture with existing product development has ambiguous ownership.

Saturday, 12 December 2009

Keeping FreeBSD Up-to-Date in BSD Magazine

Keep your eyes open for the latest printed BSD Magazine, with my article Keeping FreeBSD Up-To-Date: OS Essentials. This article is something like 18 pages long, because at the last minute the publishers had several authors withdraw articles. The publishers decided to print the extended version of my article, so it's far longer than I expected! We're currently editing the companion piece on keeping FreeBSD applications up-to-date. I expect to also submit an article on running Sguil on FreeBSD 8.0 when I get a chance to test the latest version in my lab.

Thanks for a Great Incident Detection Summit

We had a great SANS WhatWorks in Incident Detection Summit 2009 this week! About 100 people attended. I'd like to thank those who joined the event as attendees; those who participated as keynote speakers (great work, Ron Gula and Tony Sager), guest moderators (Rocky DeStefano, Mike Cloppert, and Stephen Windsor), speakers, and panelists; Debbie Grewe and Carol Calhoun from SANS for their excellent logistics and planning, along with our facilitators, sound crew, and staff; our sponsors, Allen Corp., McAfee, NetWitness, and Splunk; and also Alan Paller for creating the two-day "WhatWorks" format.

I appreciate the feedback from everyone who spoke to me. It sounds like the mix of speakers and panels was a hit. I borrowed this format from Rob Lee and his Incident Response and Computer Forensics summits, so I am glad people liked it. I think the sweet spot for the number of panelists might be 4 or 5, depending on the topic. If it's more theoretical, with a greater chance of audience questions, a smaller number is better. If it's more of a "share what you know," like the tools and techniques panel, then a bigger number is OK.

Probably the best news from the Summit was the fact that SANS already scheduled the second edition -- the SANS WhatWorks in Incident Detection Summit 2010, 8-9 December 2010 in DC. I still need to talk to SANS about how it will work. They've asked me to combine log management with incident detection. I think that is interesting, since I included content on logs in this year's incident detection event. I'd like to preserve the single-track nature of the Summit, but it might be useful to have a few break-outs for people who want to concentrate on a single technology or technique.

I appreciate the blog coverage from Tyler Hudak and Matt Olney so far. Please let me know what you thought of the last event, and if you have any requests for the next one.

Before December 2010, however, I'm looking forward to the SANS What Works in Forensics and Incident Response Summit 2010, 8-9 July 2010, also in DC.

The very next training event for me is my TCP/IP Weapons School 2.0 at Black Hat in DC, 31 Jan - 1 Feb. Regular registration ends 15 January, so sign up while there are still seats left! This class tends to sell out due to the number of defense industry participants in the National Capital Region.

Sunday, 06 December 2009

Troubleshooting FreeBSD Wireless Problem

My main personal workstation is a Thinkpad x60s. As I wrote in Triple-Boot Thinkpad x60s, I have Windows XP, Ubuntu Linux, and FreeBSD installed. However, I rarely use the FreeBSD side. I haven't run FreeBSD on the desktop for several years, but I like to keep FreeBSD on the laptop in case I encounter a situation on the road where I know how to solve a problem with FreeBSD but not Windows or Linux. (Yes I know about [insert favorite VM product here]. I use them. Sometimes there is no substitute for a bare-metal OS.)

When I first installed FreeBSD on the x60s (named "neely" here), the wireless NIC, an Intel(R) PRO/Wireless 3945ABG, was not supported on FreeBSD 6.2. So, I used a wireless bridge. That's how the situation stayed until I recently read M.C. Widerkrantz's FreeBSD 7.2 on the Lenovo Thinkpad X60s. It looked easy enough to get the wireless NIC running now that it was supported by the wpi driver. I had used freebsd-update to upgrade from 6.2 to 7.0, then 7.0 to 7.1, and finally 7.1 to 7.2. This is where the apparent madness began.
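For reference, each of those binary upgrades follows the same freebsd-update pattern. A minimal sketch of a single hop, reconstructed from memory rather than pasted from my shell history (substitute the target release as appropriate):

# freebsd-update -r 7.2-RELEASE upgrade
# freebsd-update install
# shutdown -r now
(after the reboot)
# freebsd-update install

Repeat the pattern for each hop: 6.2 to 7.0, 7.0 to 7.1, 7.1 to 7.2.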

I couldn't find the if_wpi.ko or wpifw.ko kernel modules in /boot/kernel. However, on another system (named "r200a"), which I believe had started life as a FreeBSD 7.0 box (but now also ran 7.2), I found both missing kernel modules. Taking a closer look, I simply listed the files in my laptop's /boot/kernel and compared the count to the same directory on the other FreeBSD 7.2 system.
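The lists themselves are nothing fancy. Something like the following produces them on each box, and a diff of the two files pinpoints what is missing (these exact commands are my reconstruction, not a paste from my terminal):

$ ls /boot/kernel > boot-kernel-neely.06dec09a.txt
$ ls /boot/kernel > boot-kernel-r200a.06dec09a.txt
$ diff boot-kernel-neely.06dec09a.txt boot-kernel-r200a.06dec09a.txt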

$ wc -l boot-kernel-neely.06dec09a.txt
545 boot-kernel-neely.06dec09a.txt
$ wc -l boot-kernel-r200a.06dec09a.txt
1135 boot-kernel-r200a.06dec09a.txt

Wow, that is a big difference. Apparently, the upgrade process from 6.2 to 7.x did not bring over almost 600 files that are present on a system that started life running 7.x.

Since all I really cared about was getting wireless running, I copied the missing kernel modules to /boot/kernel on the laptop. I added the following to /boot/loader.conf:

legal.intel_wpi.license_ack=1
if_wpi_load="YES"

After rebooting I was able to see the wpi0 device.

wpi0: <Intel(R) PRO/Wireless 3945ABG> mem 0xedf00000-0xedf00fff irq 17 at device 0.0 on pci3
wpi0: Ethernet address: [my MAC]
wpi0: [ITHREAD]
wpi0: timeout resetting Tx ring 1
wpi0: timeout resetting Tx ring 3
wpi0: timeout resetting Tx ring 4
wpi0: link state changed to UP
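
Seeing wpi0 attach is only half the job; to actually associate with an access point on 7.2 I still need the usual wireless configuration. A minimal sketch, with a placeholder SSID and passphrase since I'm not showing my real settings, looks like this. In /etc/rc.conf:

ifconfig_wpi0="WPA DHCP"

And in /etc/wpa_supplicant.conf:

network={
        ssid="myssid"
        psk="mypassphrase"
}

Note that 8.0 switches to the cloned wlan(4) style (e.g. wlans_wpi0="wlan0" and ifconfig_wlan0="WPA DHCP"), which is worth remembering before the upgrade I mention below.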

I think I will try upgrading the 7.2 system to 8.0 using freebsd-update, then compare the results to a third system that started life as 7.0, then upgraded from 7.2 to 8.0. If the /boot/kernel directories are still different, I might reinstall 8.0 on the laptop from media or the network.

Saturday, 05 December 2009

Cell Tracking

This is the link to an absolutely extraordinary post on privacy by Christopher Soghoian:
http://paranoia.dubfire.net/2009/12/8-million-reasons-for-real-surveillance.html. Mr. Soghoian's post describes the evolution of "Cell Tracking", an issue the EFF has discussed for a number of years at http://www.eff.org/issues/cell-tracking. An exceptional video on the current status of the law for "cell tracking" and "mobility tracking" can be found here: http://www.youtube.com/watch?v=YFo2VcfWCBQ&feature=channel/

The information reminds me that the OS inside most cell phones is effectively a "black box". Because I run midpssh, I can usually find my cell phone's IP address in the netstat tables of my SSH server (see the example below). I can see there may be some filtered ports on my phone. But I cannot:
(1) access a console or SSH prompt
(2) run a network sniffer or IDS on my cell phone to see if someone is "pinging" my location or hacking me.
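
For what it's worth, spotting the phone's address on the server side is just a matter of looking for the established SSH session. On a Linux SSH server something like this works, assuming sshd listens on the default port 22:

$ netstat -tn | grep ':22 '

The foreign address column of the ESTABLISHED line is the phone -- or, often, whatever NAT gateway the carrier put in front of it.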

Your cell phone is a tracking device that denies you root access.

Thursday, 03 December 2009

Let a Hundred Flowers Blossom


I know many of us work in large, diverse organizations. The larger or more complex the organization, the more difficult it is to enforce uniform security countermeasures. The larger the population to be "secure," the more likely exceptions will bloom. Any standard tends to devolve to the least common denominator. There are some exceptions, such as FDCC, but I do not know how widespread that standard configuration is inside the government.

Beyond the difficulty of applying a uniform, worthwhile standard, we run into the diversity vs monoculture argument from 2005. I tend to side with the diversity point of view, because diversity tends to increase the cost borne by an intruder. In other words, it's cheaper to develop exploitation methods for a target who 1) has broadly similar, if not identical, systems and 2) publishes that standard so the intruder can test attacks prior to "game day."

At the end of the day, the focus on uniform standards is a manifestation of the battle between two schools of thought: Control-Compliant vs Field-Assessed Security. The control-compliant team believes that developing the "best standard," and then applying that standard everywhere, is the most important aspect of security. The field-assessed team (where I devote my effort) believes the result is more important than how you get there.

I am not opposed to developing standards, but I do think that the control-compliant school of thought is only half the battle -- and that controls occupy far more time and effort than they are worth. If the standard withers in the face of battle, i.e., once field-assessed it is found to be lacking, then the standard is a failure. Compliance with a failed standard is worthless at that point.

However, I'd like to propose a variation of my original argument. What if you abandon uniform standards completely? What if you make the focus of the activity field-assessed instead of control-compliant, by conducting assessments of systems? In other words, let a hundred flowers blossom.

(If you don't appreciate the irony, do a little research and remember the sorts of threats that occupy much of the time of many of this blog's readers!)

So what do I mean? Rather than making compliance with controls the focus of security activity, make assessment of the results the priority. Conduct blue and red team assessments of information assets to determine if they meet various resistance and (maybe) "survivability" metrics. In other words, we won't care how you manage to keep an intruder from exploiting your system, as long as it takes a blue or red assessor with skill level Y and initial access level Z longer than time X to do so (or something to that effect).

In such a world, there's plenty of room for the person who wants to run Plan 9 without anti-virus, the person who runs FreeBSD with no graphical display or Web browser, the person who runs another "nonstandard" platform or system -- as long as their system defies the field assessment conducted by the blue and red teams. (Please note the one "standard" I would apply to all assets is that they 1) do no harm to other assets and 2) do not break any laws by running illegal or unlicensed software.)

If a "hundred flowers" is too radical, maybe consider 10. Too tough to manage all that? Guess what -- you are likely managing it already. So-called "unmanaged" assets are everywhere. You probably already have 1000 variations, never mind 100. Maybe it's time to make the system's inability to survive against blue and red teams the measure of failure, not whether the system is "compliant" with a standard, the measure of failure?

Now, I'm sure there is likely to be a high degree of correlation between "unmanaged" and vulnerable in many organizations. There's probably also a moderate degree of correlation between "exceptional" (as in, this box is too "special" to be considered "managed") and vulnerable. In other instances, the exceptional systems may be impervious to all but the most dedicated intruders. In any case, accepting that diversity is a fact of life on modern networks, and deciding to test the resistance level of those assets, might be more productive than seeking to develop and apply uniform standards.

What do you think?