Saturday, 31 December 2005

Thank You for Another Great Year

Exactly one year ago today I posted a thank-you note for the great year of blogging in 2004. A look at the 2004 statistics shows that as recently as July 2004, this blog had fewer than 6,000 visitors per month, as tracked by Sitemeter. I have no idea how Atom, RSS, and other republishing affects those statistics. Soon after my first book was published, we broke through the 10,000 per month mark and have never looked back.

As you can see from the 2005 chart above, we're at the 22,000 per month mark now, and broke through 25,000 in August during my coverage of Ciscogate. This blog continues to be a nonpaying venture, despite offers to commercialize, syndicate, and repackage the content elsewhere. Others already do this without my permission, but I thank those more responsible people who ask before posting my content elsewhere. For example, I've given the great publisher Apress blanket permission to quote anything I say here. This is my small way to say thank you for the books they've sent me to review. One of my New Year's resolutions for 2006 is to dedicate specific time early each morning (before my one-year-old daughter wakes up) to read, review, and recommend books. I managed to read and review 26 technical books in 2005, but I have a backlog of over 50 waiting for attention.


I read every book on which I comment at Amazon.com, unlike some others who consider a rehash of a book's back cover to be a "review." I also try to avoid bad books, so don't expect too many low-star reviews.

I have found your comments to be one of the best parts of blogging in 2005. I really appreciate hearing what you have to say, either publicly as a blog comment or privately via email. I don't have time to reply to the few of you who send me multi-page running commentaries on everything I publish or blog, but I appreciate your thoughts nevertheless.

In 2006 I plan to continue blogging about subjects which interest me, like network security monitoring, incident response, forensics, FreeBSD, and related topics. I welcome any thoughts on other issues you find pressing. If you want to see how I keep track of world security events, please visit my interests page. Those are my bookmarks; I avoid browser bookmarks whenever possible.

In 2006 I also plan to devote time and resources to OpenPacket.org. Many of you have offered some form of support. As that project develops I will request assistance, either here or on the OpenPacket.org Blog. 2006 should also be a big year for TaoSecurity, my company. I am not sure if 2006 will be the year I decide to hire employees, but I am considering hiring contract help for some in-house coding projects. These projects would support the company's consulting, incident response, and forensics services. Should anything be of use to the wider community, it will appear on the TaoSecurity products page. If you would be interested in working for TaoSecurity, please feel free to send your resume in .pdf format to richard at taosecurity dot com. I am always interested in meeting security practitioners who can administer systems properly, perform network- and host-centric incident response and forensics, write security tools, speak and publish original material, and seek to save the world one packet at a time.

I have ideas for additional, specialized training courses for 2006. At the moment, demand for private 4-day Network Security Operations classes is strong. I am working with a few different customers to support specialized training outside the core NSO focus. Some of those endeavors may be offered to the public. I will also submit proposals to speak at a few more USENIX conferences, which are public opportunities for training in network security monitoring. I post word of any place I intend to speak at my events list.

I do not have any new books scheduled for writing in 2006. Having authored or co-authored three books in three years, I expect to take a break. I have ideas for more articles like the one in Information Security Magazine. I should have an article in the February 2006 Sys Admin Magazine on keeping FreeBSD up-to-date.

My family and I wish all of you a prosperous 2006!

Last Day to Register for Discounted Black Hat Federal 2006

I just registered for the two-day Black Hat Federal Briefings 2006 in Crystal City, Arlington, VA. Tomorrow (1 Jan 06) appears to be the last day to register for the conference at a discounted rate. I decided to pay my way to the briefings because the event is local and the lineup looks very good. The rate until tomorrow is $895; after that the price is $1095.

Friday, 30 December 2005

Comments on Internal Monitoring

Victor Oppleman, co-author of a great book called Extreme Exploits, is writing a new book. The title is The Secrets to Carrier Class Network Security, and it should be published this summer. Victor asked me to write a chapter on network security monitoring for the new book. Since I do not recycle material, I am writing this chapter from scratch. I intend to discuss internal monitoring because I am consulting on such a case now.

Do any of you have stories, comments, suggestions, or other ideas that might make good additions to this chapter? For example, I am considering addressing threat-centric vs. target-centric sensor positioning, internal network segmentation to facilitate visibility, tapping trunks, new sorts of taps, sensor clusters, and stealthy internal sensor deployment. Does that give any of you ideas?

Anything submitted will be given credit via an inline name reference like "Bamm Visscher points out that..." or a footnote with your name and a reference to "personal communication" or "blog comment." The chapter is due to Victor next week, so I am not looking for any large contributions. A few paragraphs or even a request to cover a certain topic would be helpful. Thank you.

Thursday, 29 December 2005

Ethereal 0.10.14 Available

Ethereal version 0.10.14 was released Tuesday. It addresses vulnerabilities in the IRC, GTP, and OSPF protocol dissectors. Smart botnet IRC operators could inject evil traffic to attack security researchers looking at command and control messages. That's a great reason not to collect traffic directly with Ethereal. Instead, collect it with Tcpdump, then review it as a non-root user using Ethereal.
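For example, here is a minimal sketch of that workflow. The interface name em0 and the file path are assumptions; substitute your own.

# As root: capture full packets to a file with Tcpdump
tcpdump -i em0 -s 1515 -w /tmp/capture.pcap

# Later, as a non-root user: review the saved trace with Ethereal
ethereal -r /tmp/capture.pcap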

Wednesday, 28 December 2005

First Sguil VM Available

I am happy to announce the availability of the first public Sguil sensor, server, and database in VM format. It's about 91 MB. Once it has been shared with all of the Sourceforge mirrors, you can download it here. I built it using the script described earlier.

So how do you use this? First, you need to have something like the free VMware Player for Windows or Linux. You can also use VMware Workstation or another variant if you like. When you download sguil0-6-0p1_freebsd6-0_1024mb.zip and expand it, you will find a directory like this:

FreeBSD.nvram
FreeBSD.vmsd
FreeBSD.vmx
FreeBSD-000001-cl1.vmdk

By opening the FreeBSD.vmx file in VMware Player, you should be able to start the VM.

Here are some important details.

  • The root password is r00t.

  • The user analyst is a member of the wheel group, so it can su to root. The analyst password is analyst.

  • The user sguil is not a member of the wheel group, so it cannot directly su to root. The sguil password is sguil.

  • The host's management IP is 192.168.2.121. It is assigned to the lnc0 interface, which is bridged via VMware.

  • The netmask is 255.255.255.0 and the default gateway is 192.168.2.1.

  • The default nameserver is 192.168.2.1.

  • Interface lnc1 is also bridged. It is not assigned an IP because it is used for sniffing.


You will probably want to change these parameters manually to meet your own network needs. For example, as root and logged in to the terminal:

ifconfig lnc0 down
ifconfig lnc0 inet 192.168.3.3 netmask 255.255.255.0 up
# the default route should point at your router, not at the host itself
route add default 192.168.3.1
echo "nameserver 192.168.3.254" > /etc/resolv.conf

Make similar changes to the values in /etc/rc.conf if you want the new network scheme to survive a reboot.
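As a sketch, here are the matching /etc/rc.conf entries, assuming the same example addresses (and that 192.168.3.1 is your router):

ifconfig_lnc0="inet 192.168.3.3 netmask 255.255.255.0"
defaultrouter="192.168.3.1"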

You'll probably also want to change /etc/hosts to reflect your new IPs.

Important: As soon as you have network connectivity to the Internet, you must update the system time. When my VM wakes up, it still thinks it is Wednesday night. If you try connecting to it with a Sguil client, the times will not match properly. I recommend running something simple like the following as root on the VM:

ntpdate clock.isc.org

This will validate outside Internet connectivity and update the time. You can also manually set the time with the 'date' command. Note this VM does not have any man pages installed. If you need them for FreeBSD, look here.
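For example, to set the clock by hand instead, remember that FreeBSD's date takes [[cc]yy]mmddHHMM; the value below is an arbitrary example meaning 31 Dec 2005, 09:30:

# set the clock manually to 31 December 2005, 09:30
date 0512310930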

You should also change the account passwords if you plan to connect this VM anywhere outside a lab. Once the VM boots, I recommend logging in to two terminals. In one terminal, log in as user sguil. Execute the three scripts in sguil's home directory, namely the following, in this order:

sguild_start.sh
sensor_agent_start.sh
barnyard_start.sh

This will start the Sguil server, sensor, and Barnyard.

In the second terminal, log in as root. Start the following scripts:

sancp_start.sh
snort_start.sh
/usr/local/bin/log_packets.sh restart

This will start SANCP, Snort, and log_packets.sh, which uses a second instance of Snort to log full content data.

Once all the components are running, you need to connect to the Sguil server using a Sguil client. I did not install the Sguil client on the VM in order to save space (and to simplify this first round of work).

The easiest way to get a Sguil client running is to download and install the free standard ActiveTcl distribution for Windows. (Yes, Windows has the easiest client install, thanks to ActiveTcl. Linux might be as easy, but I don't have a Linux desktop to test.)

Once ActiveTcl is installed, download the Sguil client for Windows. It is a .zip that you need to extract. Once you do, change into the sguil-0.6.0p1/client directory. You'll see sguil.conf. Make the following edits:

# set ETHEREAL_PATH /usr/sbin/ethereal
# win32 example
set ETHEREAL_PATH "c:/progra~1/ethereal/ethereal.exe"
# Where to save the temporary raw data files on the client system
# You need to remember to delete these yourself.
# set ETHEREAL_STORE_DIR /tmp
# win32 example
set ETHEREAL_STORE_DIR "c:/tmp"
# Favorite browser for looking at sig info on snort.org
# set BROWSER_PATH /usr/bin/mozilla
# win32 example (IE)
set BROWSER_PATH c:/progra~1/intern~1/iexplore.exe

Next, edit the sguil.tk file to make one change as shown next:

set VERSION "SGUIL-0.6.0"

Now create a c:\tmp directory, and make sure you have Ethereal installed if you want to look at full content data in Ethereal.

You're ready to try the client.

Start Sguil by double-clicking on the sguil.tk icon in Windows Explorer. Initially Windows will not know how to run .tk files. Associate this file and other .tk files with the C:\Tcl\bin\wish84.exe program.

The Sguil host is the IP address of the Sguil server. In my VM that is 192.168.2.121. If you leave the demo.sguil.net address, you will connect to Bamm's demo server.

The default port of 7734 is the right port. For the Sguil user and password, the VM uses user sguil, password sguil.

Do not enable OpenSSL encryption. The VM is not built to include that. Select the sensor shown (gruden in the VM), and then click Start Sguil. You should next see the client.

If you want to get Snort to trip on traffic, try using Nmap to perform an OS identification (nmap -O) on the management IP address of the VM.
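For example, against the default management address:

nmap -O 192.168.2.121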

If you have any questions, please post them here. Better yet, visit us at irc.freenode.net in channel #snort-gui.

My next idea is to add a Sguil client, and document and script the process. That may wait until Sguil 0.6.1 is released, however.

UPDATE: For a new VM with the client, please see this post.

Rough Sguil Installation Script

My last Sguil Installation Guide, for Sguil 0.5.3, was a mix of English description and command line statements. This did not help much when I needed to install a new Sguil deployment. I essentially followed my guide and typed everything by hand.

Today I decided that would be the end of that process. I am excited by the new InstantNSM project, and I intend to support it with respect to FreeBSD. But for today, I decided to just script as many Sguil installation commands as possible. For items that I couldn't easily script (due to my weak script-fu), I decided to edit the files manually and generate a patch for each one.

This post describes the end result, which you can download at www.bejtlich.net/sguil_install_v0.1.sh. I should warn you that this is not meant for public production use. However, someone trying to install Sguil might find it useful.

The purpose of this script is to automate, as much as possible, the creation of a Sguil sensor, server, and database on a FreeBSD 6.0/i386 platform. The platform is a VMware image whose hostname is gruden.taosecurity.com and whose management IP address is 192.168.2.121. I have stored several files at www.bejtlich.net to facilitate the installation. I will explain where that matters as I progress.


#!/bin/sh
#
# Sguil installation script by Richard Bejtlich (richard@taosecurity.com)
# v0.1, 28 December 2005
#
# Tested on FreeBSD 6.0 RELEASE
#
# This script sets up all Sguil components on a single FreeBSD 6.0 system
# This is not intended for production use where separate sensor, server,
# and client boxes are recommended

echo "Sguil Installation Script"
echo
echo "By Richard Bejtlich"
echo
echo "This is mainly for personal use, but it documents how to build"
echo "a FreeBSD 6.0 system with Sguil sensor, server, and database"
echo "components. The Sguil client must be deployed separately."

First I update the time. I am running this in a VM and time can be problematic. With FreeBSD 6 as a guest OS on VMware Workstation, I create /boot/loader.conf with 'hint.apic.0.disabled=1' to mitigate time issues.

# Update date and time

ntpdate clock.isc.org

Next I set some environment variables. I designate my proxy server, which received heavy use as I tested this script. Note that using a proxy server means copies of patches and other files are cached. To clear the cache after changing a file and uploading it to www.bejtlich.net, the process involves stopping Squid, clearing the cache map with 'echo "" >> /usr/local/squid/cache/swap.state', and restarting Squid.

# Set environment variable for package adds

# Use proxy server if you have it!

HTTP_PROXY=192.168.2.7:3128; export HTTP_PROXY

By default this script uses FreeBSD 6 packages.

# Use the following for FreeBSD 5 packages

#PACKAGESITE=ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/Latest/; export PACKAGESITE

# FreeBSD 6 packages

PACKAGESITE=ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-6-stable/Latest/; export PACKAGESITE

Here is where the sensor name is determined. In other places (like patch files) I use the sensor name, gruden, explicitly.

# Determine sensor name

SENSOR=`hostname -s`

# Set Sguil version

SGUIL=sguil-0.6.0p1

# Set Snort major version

SNORTMV=2.4

Now I create directories used by Sguil components.

# Create directories

mkdir -p /nsm/$SENSOR/dailylogs
mkdir -p /nsm/$SENSOR/portscans
mkdir -p /nsm/$SENSOR/sancp
mkdir -p /nsm/rules/$SENSOR
mkdir -p /var/log/snort
mkdir -p /usr/local/etc/nsm
mkdir -p /usr/local/src
mkdir -p /nsm/archive


chown -R sguil:sguil /nsm

chown -R sguil:sguil /var/log/snort

chown -R sguil:sguil /usr/local/etc/nsm

Now I start getting software packages and archives.

# Retrieve software

cd /usr/local/src
fetch http://internap.dl.sourceforge.net/sourceforge/sguil/$SGUIL.tar.gz
tar -xzf $SGUIL.tar.gz

# Install Snort

pkg_add -r snort

cd /nsm/rules/$SENSOR
fetch http://www.snort.org/pub-bin/downloads.cgi/Download/vrt_pr/snortrules-pr-$SNORTMV.tar.gz
tar -xzf snortrules-pr-$SNORTMV.tar.gz
mv /nsm/rules/$SENSOR/rules/* /nsm/rules/$SENSOR

chown -R sguil:sguil /usr/local/etc/snort

cd /root

# Install Tcl

pkg_add -r tcl84
mv /usr/local/bin/tclsh /usr/local/bin/tclsh.orig
ln -s /usr/local/bin/tclsh8.4 /usr/local/bin/tclsh

The installation of Barnyard uses a package I built, as described here, because the stock Barnyard package does not support Sguil 0.6.0p1.

# Install Barnyard

cd /tmp
fetch http://www.bejtlich.net/barnyard-0.2.0.tbz
pkg_add barnyard-0.2.0.tbz

# Install SANCP

pkg_add -r sancp

# Install MySQL

pkg_add -r mysql50-server
/usr/local/bin/mysql_install_db --user=mysql
/usr/local/bin/mysqld_safe --user=mysql &

# Install Tcltls

pkg_add -r tcltls

# Install Tcllib

pkg_add -r tcllib

# Install TclX

pkg_add -r tclX

I have to install my own version of MySQLTcl. This was not as complicated as Barnyard. The problem with the stock package is that it is compiled against MySQL 4.1.x, and I am using MySQL 5.0.x. Simply building my own package on sguilref, a FreeBSD 6 host with MySQL 5.0.16 installed, is enough to create the proper mysqltcl package.

# Install MySQLTcl from own version compiled for MySQL 5.x

fetch http://www.bejtlich.net/mysqltcl-3.01.tbz
pkg_add mysqltcl-3.01.tbz

# Install P0f

pkg_add -r p0f

# Install Tcpflow

pkg_add -r tcpflow

Now I copy some configuration files and set up the Sguil database.

# Copy configuration files

cp /usr/local/src/$SGUIL/sensor/sensor_agent.conf /usr/local/etc/nsm
cp /usr/local/src/$SGUIL/server/sguild.conf /usr/local/etc/nsm
cp /usr/local/etc/snort/snort.conf-sample /usr/local/etc/nsm/snort.conf
cp /usr/local/etc/barnyard.conf-sample /usr/local/etc/nsm/barnyard.conf
cp /usr/local/src/$SGUIL/sensor/sancp/sancp.conf /usr/local/etc/nsm
cp /usr/local/src/$SGUIL/sensor/log_packets.sh /usr/local/etc/nsm

# Set up database

/usr/local/bin/mysql -e "CREATE DATABASE sguildb"
/usr/local/bin/mysql -D sguildb < /usr/local/src/$SGUIL/server/sql_scripts/create_sguildb.sql
/usr/local/bin/mysql -e "GRANT ALL on sguildb.* to sguil@localhost"
/usr/local/bin/mysql -e "GRANT FILE on *.* to sguil@localhost"
/usr/local/bin/mysql -e "SET password for sguil@localhost=password('sguil')"
/usr/local/bin/mysql -e "SHOW TABLES" sguildb
/usr/local/bin/mysql -e "SET password for root@localhost=password('r00t')"
/usr/local/bin/mysql --password=r00t -e "FLUSH PRIVILEGES"
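As a quick sanity check -- my own suggestion, not part of the script -- you can confirm the sguil account and schema work:

/usr/local/bin/mysql -u sguil --password=sguil -e "SHOW TABLES" sguildb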

I couldn't think of an easy way to apply changes to the configuration files, so I edited them by hand to suit my needs and generated patches.

Here is my patch generation procedure for sensor_agent.conf.patch as an example.

First, make a copy that will contain the changes.

cp sensor_agent.conf sensor_agent.conf.diff

Now edit sensor_agent.conf.diff to include the desired changes. I use vi. Next create the patch.

diff -u sensor_agent.conf sensor_agent.conf.diff > sensor_agent.conf.patch

The sensor_agent.conf.patch looks like this:

--- sensor_agent.conf Wed Dec 28 14:57:30 2005
+++ sensor_agent.conf.diff Wed Dec 28 14:58:33 2005
@@ -13,7 +13,7 @@
set DAEMON 0

# Name of sguild server
-set SERVER_HOST 192.168.8.8
+set SERVER_HOST localhost
# Port sguild listens on for sensor connects
set SERVER_PORT 7736
# Port sensor_agent lisens on for barnyard connects
@@ -22,10 +22,10 @@
# Note: Sensors monitoring multiple interfaces need to use a unique 'hostname'
# for each interface. Make sure this name is the same in the respective
# log_packets.sh
-set HOSTNAME gateway
+set HOSTNAME gruden

# The root of your log dir for data like pcap, portscans, sessions, etc
-set LOG_DIR /snort_data
+set LOG_DIR /nsm

# Where to look for files created by modded spp_portscan
set PORTSCAN_DIR ${LOG_DIR}/${HOSTNAME}/portscans
@@ -49,7 +49,7 @@
# 2: sancp (http://www.metre.net/sancp.html)

#Enable Stream4 keep_stats (1=enable 0=disable)
-set S4_KEEP_STATS 1
+set S4_KEEP_STATS 0
# Where to look for ssn files created by modded spp_stream4
set SSN_DIR ${LOG_DIR}/${HOSTNAME}/ssn_logs

I do not think this is a bad way to handle the issue, although I welcome simpler suggestions. If you wanted to use my script, for example, you could copy the patches, edit them, and then have the script apply them as shown below. Note that this is one place where the sensor name and IP address matter; the patch above explicitly mentions the sensor name, gruden.

# Fetch text file patches

cd /usr/local/etc/nsm

fetch http://www.bejtlich.net/sensor_agent.conf.patch
fetch http://www.bejtlich.net/sguild.conf.patch
fetch http://www.bejtlich.net/snort.conf.patch
fetch http://www.bejtlich.net/barnyard.conf.patch
fetch http://www.bejtlich.net/sancp.conf.patch
fetch http://www.bejtlich.net/log_packets.sh.patch
fetch http://www.bejtlich.net/log_packets.sh.crontab

# Apply patches

patch -p0 < sensor_agent.conf.patch
patch -p0 < sguild.conf.patch
patch -p0 < snort.conf.patch
patch -p0 < barnyard.conf.patch
patch -p0 < sancp.conf.patch
patch -p0 < log_packets.sh.patch
crontab -u root log_packets.sh.crontab
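I do not show the contents of log_packets.sh.crontab above; a minimal assumed version, restarting full content logging at the top of every hour, would look like this:

# hypothetical log_packets.sh.crontab: roll pcap files hourly
0 * * * * /usr/local/bin/log_packets.sh restart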

Next I put log_packets.sh where it belongs, move some Snort configuration files, and retrieve some simple startup scripts.

# Install log_packets.sh

cp /usr/local/etc/nsm/log_packets.sh /usr/local/bin

# Copy Snort conf files

cp /nsm/rules/$SENSOR/classification.config /usr/local/etc/nsm
cp /nsm/rules/$SENSOR/gen-msg.map /usr/local/etc/nsm
cp /nsm/rules/$SENSOR/reference.config /usr/local/etc/nsm
cp /nsm/rules/$SENSOR/sid-msg.map /usr/local/etc/nsm
cp /nsm/rules/$SENSOR/threshold.conf /usr/local/etc/nsm
cp /nsm/rules/$SENSOR/unicode.map /usr/local/etc/nsm

# Get startup scripts

cd /home/sguil
fetch http://www.bejtlich.net/barnyard_start.sh
fetch http://www.bejtlich.net/sguild_start.sh
fetch http://www.bejtlich.net/sensor_agent_start.sh

chown sguil:sguil /home/sguil/*.sh
chmod +x /home/sguil/*.sh

cd /root
fetch http://www.bejtlich.net/snort_start.sh
fetch http://www.bejtlich.net/sancp_start.sh

chmod +x /root/*.sh

Now I modify /etc/rc.conf so MySQL will start at boot, but only listen on localhost. The sniffing interface on this system is lnc1, so I bring it up without the capability to arp.

# Modify /etc/rc.conf

echo "mysql_enable=YES" >> /etc/rc.conf
echo "mysql_args=--bind-address=127.0.0.1" >> /etc/rc.conf
echo "ifconfig_lnc1=-arp" >> /etc/rc.conf

Several of the Sguil components, like barnyard, sensor_agent, and SANCP, run as user sguil and need to write their PID files to /var/run. I decided to make /var/run mode 777 to let them write to the directory. This is not the best idea, so I might change it.

# Set up /var/run

chmod 777 /var/run
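A tighter alternative -- an untested assumption on my part, not what the script does -- would be a dedicated PID directory owned by the sguil user:

# hypothetical alternative to mode 777 on /var/run
mkdir -p /var/run/sguil
chown sguil:sguil /var/run/sguil
chmod 755 /var/run/sguil
# each component's config would then need its PID file path
# changed to point at /var/run/sguil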

Finally I add the user 'sguil' with password 'sguil' so clients can access the Sguil server.

# Add Sguil client user

echo "Create a Sguil client user password when prompted."
cd /usr/local/src/$SGUIL/server
./sguild -c sguild.conf -u sguild.users -adduser sguil

In this last section I tell how to get all of the components running. By default all of them run in the background. Each *start.sh script can instead run its component in the foreground for debugging, if you uncomment the foreground option and comment out the background option.
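To illustrate, here is a sketch of what one of those scripts might contain. The exact contents of my sguild_start.sh may differ; the -D flag runs sguild as a daemon.

#!/bin/sh
# background (default): run sguild as a daemon
/usr/local/src/sguil-0.6.0p1/server/sguild -D -c /usr/local/etc/nsm/sguild.conf
# foreground, for debugging: comment the line above and uncomment this one
#/usr/local/src/sguil-0.6.0p1/server/sguild -c /usr/local/etc/nsm/sguild.conf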

# Messages to users

echo "To start Sguil, execute the following."
echo
echo "As user sguil:"
echo
echo "/home/sguil/sguild_start.sh"
echo "/home/sguil/sensor_agent_start.sh"
echo "/home/sguil/barnyard_start.sh"
echo
echo "Next, as user root:"
echo
echo "/root/sancp_start.sh"
echo "/root/snort_start.sh"
echo "/usr/local/bin/log_packets.sh restart"
echo
echo "You will then be able to connect using the separate Sguil client."

Once you have this script installed on a suitable FreeBSD 6/i386 system, you can run it. Here is the partition layout I created, using only 1024 MB. I installed the "minimal" distribution, which is the smallest non-custom distro.

$ uname -a
FreeBSD gruden.taosecurity.com 6.0-RELEASE FreeBSD
6.0-RELEASE #0: Thu Nov 3 09:36:13 UTC 2005
root@x64.samsco.home:/usr/obj/usr/src/sys/GENERIC i386
$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 124M 55M 59M 48% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1g 62M 40K 57M 0% /home
/dev/ad0s1f 124M 4.0K 114M 0% /nsm
/dev/ad0s1h 62M 12K 57M 0% /tmp
/dev/ad0s1d 248M 100M 128M 44% /usr
/dev/ad0s1e 124M 206K 114M 0% /var

I added two users.

  • User analyst is a member of the wheel group and can therefore su - to root.

  • User sguil is not a member of the wheel group. However, I run as many parts of Sguil as possible using this user.


Here is how to invoke the script:

$ su -
Password:
gruden# fetch http://www.bejtlich.net/sguil_install_v0.1.sh
sguil_install_v0.1.sh 100% of 6023 B 83 kBps
gruden# chmod +x sguil_install_v0.1.sh
gruden# ./sguil_install_v0.1.sh

When the sguil client user password prompt appears, enter something like 'sguil'. This is the only pause in the script.

The end result of running this script inside a FreeBSD VM I created is a Sguil sensor, server, and database. I'll describe that in my next post.

Manually Patching Barnyard Package

I'm currently working on a VM image of FreeBSD 6.0 with the components needed for a demonstration Sguil sensor, server, and database deployment. I'm using a minimal FreeBSD installation; /usr, for example, began at 100 MB.

I intend to install as many Sguil components as possible using precompiled packages. Unfortunately, the Barnyard package used to read Snort unified output spool files does not contain support for the latest version of Sguil. To deal with this problem, I am creating a custom Barnyard package.

I'm not building the package on the host that will eventually run Barnyard. That host, gruden, does not have a compiler and other development tools. Instead I'm working on the package on another FreeBSD 6.0/i386 host, sguilref. First I see what packages Barnyard needs to build.

sguilref:/usr/ports/security/barnyard# make pretty-print-build-depends-list
This port requires package(s) "autoconf-2.59_2 m4-1.4.3 perl-5.8.7" to build.

I know sguilref has these packages already installed, so I am ready to start. First I retrieve the source code with 'make fetch'.

sguilref:/usr/ports/security/barnyard# make fetch
===> WARNING: Vulnerability database out of date, checking anyway
===> Found saved configuration for barnyard-0.2.0
=> barnyard-0.2.0.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch from http://heanet.dl.sourceforge.net/sourceforge/barnyard/.
barnyard-0.2.0.tar.gz 100% of 157 kB 107 kBps

Now I extract it.

sguilref:/usr/ports/security/barnyard# make extract
===> WARNING: Vulnerability database out of date, checking anyway
===> Found saved configuration for barnyard-0.2.0
===> Extracting for barnyard-0.2.0
=> MD5 Checksum OK for barnyard-0.2.0.tar.gz.
=> No SHA256 checksum recorded for barnyard-0.2.0.tar.gz.

At this point I need to edit the Makefile. I make a copy called Makefile.orig for reference. Then I edit the Makefile to include a new option, WITH_SGUIL, that I will be able to use when invoking 'make'. You can see the contents of the new Makefile with the diff command.

sguilref:/usr/ports/security/barnyard# diff -u Makefile.orig Makefile
--- Makefile.orig Wed Dec 28 11:30:24 2005
+++ Makefile Wed Dec 28 11:34:05 2005
@@ -18,7 +18,8 @@
RUN_DEPENDS= ${LOCALBASE}/bin/snort:${PORTSDIR}/security/snort

OPTIONS= MYSQL "Enable MySQL support" off \
-	POSTGRESQL "Enable PostgreSQL support" off
+	POSTGRESQL "Enable PostgreSQL support" off \
+	SGUIL "Enable Sguil support" off

USE_AUTOCONF_VER= 259
USE_AUTOHEADER_VER= 259
@@ -43,6 +44,11 @@
.if defined(WITH_POSTGRESQL)
USE_PGSQL= yes
CONFIGURE_ARGS+= --enable-postgres
+.endif
+
+.if defined(WITH_SGUIL)
+USE_SGUIL= yes
+CONFIGURE_ARGS+= --enable-tcl --with-tcl=/usr/local/lib/tcl8.4
.endif

post-patch:

Now I am ready to copy the Sguil-modified files from my Sguil source distribution.

sguilref:/usr/ports/security/barnyard# cd work/barnyard-0.2.0
sguilref:/usr/ports/security/barnyard/work/barnyard-0.2.0# cp
/usr/local/src/sguil-0.6.0p1/sensor/barnyard_mods/op_sguil.* src/output-plugins/
sguilref:/usr/ports/security/barnyard/work/barnyard-0.2.0# cp
/usr/local/src/sguil-0.6.0p1/sensor/barnyard_mods/configure.in .

Now I can apply the op_plugbase.c patch from the same barnyard_mods directory.

sguilref:/usr/ports/security/barnyard/work/barnyard-0.2.0# cd src/output-plugins/
sguilref:/usr/ports/security/barnyard/work/barnyard-0.2.0/src/output-plugins# patch -p0 < /usr/local/src/sguil-0.6.0p1/sensor/barnyard_mods/op_plugbase.c.patch
Hmm... Looks like a new-style context diff to me...
The text leading up to this was:
--------------------------
|*** op_plugbase.c.old Sun Mar 28 18:14:19 2004
|--- op_plugbase.c Mon Apr 4 10:39:54 2005
--------------------------
Patching file op_plugbase.c using Plan A...
Hunk #1 succeeded at 27.
Hunk #2 succeeded at 47.
done

With the right files patched, I can make a custom Barnyard package.

sguilref:/usr/ports/security/barnyard/work/barnyard-0.2.0/src/output-plugins# cd ../../../..
sguilref:/usr/ports/security/barnyard# make package WITH_SGUIL=yes
===> Patching for barnyard-0.2.0
===> Applying FreeBSD patches for barnyard-0.2.0
===> barnyard-0.2.0 depends on file: /usr/local/bin/autoconf259 - found
===> Configuring for barnyard-0.2.0
...edited...
checking for tclsh8.4... tclsh8.4
checking for the tcl version number... 8.4, patchlevel .11
...edited...
===> Registering installation for barnyard-0.2.0
===> Building package for barnyard-0.2.0
Creating package /usr/ports/packages/All/barnyard-0.2.0.tbz
Registering depends: snort-2.4.3_1 pcre-6.4.
Creating bzip'd tar ball in '/usr/ports/packages/All/barnyard-0.2.0.tbz'

Now I have a custom Barnyard package in /usr/ports/packages/All/barnyard-0.2.0.tbz. The last step is to see what packages Barnyard needs when it runs.

sguilref:/usr/ports/security/barnyard# make pretty-print-run-depends-list
This port requires package(s) "pcre-6.4 snort-2.4.3_1" to run.

I know that pcre-6.4 and snort-2.4.3_1 will be installed when I put Snort on this system. That means I can do a 'pkg_add barnyard-0.2.0.tbz' and the process will only look for pcre-6.4 and snort-2.4.3_1, which will be installed prior to Barnyard.

I plan to submit these steps to the Barnyard package maintainer to see if he might be able to get them merged.

Tuesday, 27 December 2005

The October 2005 and December 2005 issues of ;login: magazine feature some interesting articles.

  • Michael W. Lucas wrote FreeBSD 5 SMPng, which does not appear to be online and will be available to non-USENIX members in October 2006. Michael uses layman-friendly language to explain architectural decisions made to properly implement SMP in FreeBSD 5.x and beyond. He explains that removing the Big Giant Lock involved deciding to "make it run" first and then "make it fast" second. Given the arrival of dual-core on the laptop, desktop, and server, with more cores on the way, FreeBSD's SMP work is being validated.

  • Marc Fiuczynski wrote Better Tools for Kernel Evolution, Please! about the problems with the current Linux kernel development model. I am not sure his proposed solution, C4 (CrossCutting C Compiler), is the answer. As mentioned in the conference report on Marc's talk at HotOS X, "Jay Lepreau commented that the problem is that Linux has a pope model -- there’s only one integrator."

  • Peter Baer Galvin wrote about Solaris 10 Containers. This article explained some of the concepts behind containers, which are a way to run multiple instances of the same version of Solaris on a single Solaris system. They sound more advanced than FreeBSD jails.

  • Hobbit wrote DNS-based Spam Rejection, which uses pattern matching for DNS records to reject mail. Yes, that is the same Hobbit who wrote Netcat.

  • The December Security issue began strong with musings by new ;login: editor Rik Farrow. He makes some great points about weakness in depth. He notes that Microsoft's research OS Singularity, "like [Cisco] IOS, runs entirely in Ring 0, avoiding the performance penalties for context switches -- Singularity can switch between processes almost two orders of magnitude faster than BSD, which goes through context switching. Again, the penalty is the reduction in security by running all processes in Ring 0." Now, I am not even close to being a kernel developer, but I cannot believe Microsoft is toying with the idea of running everything in Ring 0. Is this just hubris on the part of Microsoft's developers? Do they seriously think they are smarter than everyone else who came before, and that they are going to get Singularity "right"?

  • Last week I ranted against the folly of a "pull the plug first" mentality in host-based forensics. Thankfully, Using Memory Dumps in Digital Forensics, by Sam Stover and Matt Dickerson, explains why it is not a good idea to power down immediately.


Getting free copies of these magazines is almost a good enough reason to attend USENIX conferences!

Taps and Hubs, Part Deux

Yesterday I described why the scenario depicted above does not work. Notice, however, that the hub in the figure is an EN104TP 10 Mbps hub. Sensors plugged into the hub see erratic traffic.

If that 10 Mbps hub is replaced with a 10/100 Mbps hub, like the DS108, however, the situation changes.



With a 100 Mbps hub, each sensor can see traffic without any problems. Apparently the original issue involved the 10 Mbps hub not handling traffic from the single interface of the port aggregator tap, which must have operated at 100 Mbps and failed to autonegotiate to 10 Mbps properly.

We also previously explained why the next setup is a terrible idea:



In a very helpful comment to the last post, Joshua suggested the following setup:



This arrangement takes the output of a traditional two-output tap and sends each output to a separate 100 Mbps hub. Sensors can then connect one of their two sniffing interfaces to each hub. The sensor must take care of bonding the traffic on its two interfaces. This arrangement is novel because it allows more than one sensor to receive tap output. In the situation depicted, up to seven sensors could receive tap output.

So what is the bottom line? It remains true that hubs can never be used to combine the outputs of a traditional two-output tap into a single interface. However, it is possible to use them in the arrangements depicted in this post.

Monday, 26 December 2005

Network Monitoring Platforms on VMware Workstation

Several of you have asked about my experiences using FreeBSD sensors inside VMware Workstation. I use VMs in my Network Security Operations class. I especially use VMs on the final day of training, when each team in the class gets access to a VM attack host, a VM target, a VM sensor, and a VM to be monitored defensively. As currently configured, each host has at least one NIC bridged to the network. The sensor VMs have a second interface with no IP also bridged to the network. When any VM takes action against another, the sensors see it. This scenario does not describe how a VM sensor might watch traffic from a tap, however.

I decided to document how to use VMware to create a sensor that sniffs traffic from a tap. I outline two scenarios. The first uses a port aggregator tap with a single interface out to a sensor. The second uses a traditional tap with two interfaces out to a sensor. The VMware Workstation host OS in this study is Windows Server 2003 Enterprise x64 Edition Service Pack 1 on a Shuttle SB81P with a Broadcom Gigabit NIC and a quad port 10/100 Adaptec PCI NIC. I should mention at this point that this scenario is strictly for use in the classroom. I would never deploy an operational sensor inside a VM on a Windows server platform. I might consider running a sensor in a VM on a Linux server platform. Windows is not built for sniffing duties. Even with the DHCP service disabled, I still cannot tell the Windows interfaces to be configured without an IP address. If anyone has comments on this, please share.

The first step I take is to identify the interface I wish to use for management and the interfaces I wish to use for sniffing. A look at the Network Connections for this system shows the following interfaces are available:



I am using one of the Adaptec interfaces as a host management interface. The Broadcom Gigabit NIC is plugged into the single output from a port aggregator tap. Two other Adaptec interfaces are plugged into the two outputs of a traditional tap. The remaining Adaptec interface is not connected to anything.

Three of the NICs are in the process of "Acquiring network addresses" even with DHCP disabled on the server. Overall this output is somewhat confusing, especially if you want to match up interfaces to physical NIC ports. Here is output from ipconfig /all:



Windows is calling the management interface Ethernet adapter Local Area Connection 3. You can see it has the highest of the four Adaptec MAC addresses -- 00-00-D1-EC-F5. I do not know why Windows decided to call it LAC 3. LAC 2 is disconnected. LAC (which doesn't have a number at all -- it's simply Ethernet adapter Local Area Connection) is the Broadcom Gigabit NIC connected to the port aggregator tap. LACs 3 and 4 are connected to the two outputs of the traditional tap.

Notice the LAC does not correspond to the name of the interface shown in the screen shots! For example, LAC 3 is called Ethernet Adapter #4. (Why again did I choose to demonstrate this on Windows?)

With our NICs identified, we can match them up to VMware interfaces. Here is the summary page for the VMware Virtual Network Editor.



This screen is a little cramped, so take a look at the next screen shot showing the Host Virtual Network Mapping.



What I'm doing here is specifically assigning VMnets to individual physical interfaces. This will allow me to assign these VMnets to virtual interfaces on each VM, which I do next. Before starting that process, here is the auto interface bridging selection tab in VMware Workstation:



This shows that three of my adapters are specifically selected to not be automatically bridged.

Now let's look at the host configuration for the VM sensor. The box has two interfaces. The first is automatically bridged. The second has a custom setup.



The first interface, lnc0, uses the automatic bridge settings to connect to an automatically chosen adapter. This will be LAC 3.



The second interface has a custom setting. Here it will listen to the Broadcom Gigabit interface plugged into the port aggregator tap.



Once I boot the sensor VM, I can SSH to its management interface (lnc0) and see ifconfig output:

# ifconfig -a
lnc0: flags=108843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,NEEDSGIANT> mtu 1500
inet 192.168.2.91 netmask 0xffffff00 broadcast 192.168.2.255
inet6 fe80::20c:29ff:fe5f:51ea%lnc0 prefixlen 64 scopeid 0x1
ether 00:0c:29:5f:51:ea
lnc1: flags=108802<BROADCAST,SIMPLEX,MULTICAST,NEEDSGIANT> mtu 1500
ether 00:0c:29:5f:51:f4
plip0: flags=108810<POINTOPOINT,SIMPLEX,MULTICAST,NEEDSGIANT> mtu 1500
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4

Once I bring up the lnc1 interface, it sees ICMP traffic as I desire:

# ifconfig lnc1 up -arp
# tcpdump -c 4 -n -i lnc1 -s 1515 icmp
tcpdump: WARNING: lnc1: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lnc1, link-type EN10MB (Ethernet), capture size 1515 bytes
23:21:06.853616 IP 69.243.40.166 > 216.239.37.99: icmp 40: echo request seq 2304
23:21:06.925324 IP 216.239.37.99 > 69.243.40.166: icmp 40: echo reply seq 2304
23:21:07.829845 IP 69.243.40.166 > 216.239.37.99: icmp 40: echo request seq 2560
23:21:07.874166 IP 216.239.37.99 > 69.243.40.166: icmp 40: echo reply seq 2560
4 packets captured
259 packets received by filter
0 packets dropped by kernel

No problem. Now let's see how we can handle combining dual outputs from the traditional tap.

The first issue is dealing with the limitation of having only three virtual NICs in any VM. To address this, we will redeploy lnc1 to watch one of the outputs from the traditional tap, and create lnc2 to watch the other.

With this setup, Ethernet 2 is watching VMnet 3 and Ethernet 3 is watching VMnet 4.



With the interfaces created and the sensor booted, I bond them with the following script:

#!/bin/sh
# load the netgraph ethernet support module
kldload ng_ether
# bring both sniffing interfaces up, promiscuous, no ARP
ifconfig lnc1 promisc -arp up
ifconfig lnc2 promisc -arp up

# create the virtual ngeth0 interface, attach a one2many node to its
# lower hook, and connect each physical interface as a "many" link
ngctl mkpeer . eiface hook ether
ngctl mkpeer ngeth0: one2many lower one
ngctl connect lnc1: ngeth0:lower lower many0
ngctl connect lnc2: ngeth0:lower lower many1

ifconfig ngeth0 -arp up

When done, I have ngeth0 ready and I can sniff with it:

# tcpdump -c 4 -n -i ngeth0 -s 1515 icmp
tcpdump: WARNING: ngeth0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ngeth0, link-type EN10MB (Ethernet), capture size 1515 bytes
23:29:49.524910 IP 69.243.40.166 > 216.239.37.99: icmp 40: echo request seq 3328
23:29:49.567338 IP 216.239.37.99 > 69.243.40.166: icmp 40: echo reply seq 3328
23:29:50.510332 IP 69.243.40.166 > 216.239.37.99: icmp 40: echo request seq 3584
23:29:50.555589 IP 216.239.37.99 > 69.243.40.166: icmp 40: echo reply seq 3584
4 packets captured
58 packets received by filter
0 packets dropped by kernel

This testing shows you can get a sensor running in a VM to see the same sorts of traffic a physical sensor might see.

Incidentally, the folks at VMware have noticed I use their products, given their October 2005 post to the VMTN Blog.

I would be interested in hearing if you run sensors inside VMs, for either research or production purposes.

Taps and Hubs Never, Ever Mix

I've written about not using taps with hubs in January 2004 and again in a prereview of Snort Cookbook. The diagram below shows why it's a bad idea to try to "combine" outputs from a traditional tap into a hub.



The diagram shows a traditional two-output tap connecting to a hub. Why would someone do this? This unfortunate idea tries to give a sensor with a single sniffing interface the ability to see traffic from both tap outputs simultaneously. The proper way to address the issue is shown below.



A method to bond interfaces with FreeBSD is listed here. We could avoid the interface bonding issue if we replace the dual output tap with a so-called port aggregator tap, like the one pictured at left. As long as the total aggregate bandwidth of the monitored link does not exceed 100 Mbps (for a 100 Mbps tap), then we can use it as shown below.



What do we do if we have more than one sensor platform? In other words, we may have an IDS and some other device that needs to inspect traffic provided by the port aggregator tap. We might be tempted to do the following, which shows putting the single output from the port aggregator tap into a hub, then plugging the two sensors into the hub.



This is a bad idea. The interface provided by the single port aggregator tap output is full duplex. It will not work properly when connected to the inherently half duplex interface on the hub. When each sensor interface is plugged into the hub, they will auto-negotiate at half duplex as well. Subtle problems will appear when they try to monitor traffic sent from the tap. Consider the following ICMP traffic sniffed using a scenario like that shown above. Host 69.243.40.166 used the -s 256 option for ping to send larger than normal ICMP packets.

16:20:15.930505 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 0
16:20:15.942576 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 0
16:20:16.934919 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 1
16:20:16.947981 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 1
16:20:17.956721 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 2
...edited...
16:20:25.010988 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 9
16:20:25.022211 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 9
16:20:27.030011 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 11
16:20:27.042325 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 11
16:20:28.039553 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 12
16:20:28.050413 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 12
16:20:29.048828 IP truncated-ip - 248 bytes missing!
69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 13
16:20:29.060525 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 13
...edited...
16:20:40.153733 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 24
16:20:41.163216 IP 69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 25
16:20:41.175272 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 25
16:20:42.172485 IP truncated-ip - 248 bytes missing!
69.243.40.166 > 216.109.117.107: icmp 264: echo request seq 26
16:20:42.185592 IP 216.109.117.107 > 69.243.40.166: icmp 264: echo reply seq 26

The first four packets look ok: echo request seq 0 is matched by echo reply seq 0, and request seq 1 by reply seq 1. The same doesn't hold for seq 2, which is missing its echo request. Later seq 9 appears, after no problems with ICMP seq 3-8; suddenly there is no mention of seq 10. Seq 12 is ok, but the echo request for seq 13 is abnormally truncated! Later we see the echo request for seq 24, but no reply. We see the echo request and reply for seq 25, only to be followed by an abnormally truncated echo request for seq 26. This is definitely troublesome. From the perspective of the host sending the ICMP traffic, no packets were dropped or received abnormally.

The proper way to address this problem, if port aggregation is desired, is to use a dual port aggregator tap, as shown below.



That solution provides a single tap output interface to each sensor. If one does not want to use port aggregation, and one can have the sensor bond interfaces, something like the Regeneration Tap shown at left can be used. In this case, two outputs are provided for each sensor, and they bond them together to see a single full duplex traffic stream.

Notice that in no circumstances can one combine a tap and a hub. Therefore, taps and hubs never, ever, mix. Remember that this holiday season!

Update: Ok, that is not entirely accurate. It is accurate for the scenarios depicted here, but some creative thinking and a very helpful comment by Joshua resulted in this follow-on post!

Where Should I Be in 2006?

I just updated my events site at TaoSecurity. I keep track of speaking engagements there. For example, I will speak at DoD Cybercrime, ShmooCon 2006, RSA Conference 2006, the 2006 Rocky Mountain Information Security Conference, and the 2006 Computer and Enterprise Investigations Conference.

I will submit tutorial proposals for USENIX 2006 and USENIX Security 2006, and Black Hat USA Training 2006.

What conferences do you attend? Do you think I should try to speak there? Based on your knowledge of my interests (through this blog), what do you think I should discuss? Should I speak to your company or organization? At the moment I have several private Network Security Operations classes on tap for 2006, and my schedule for the first half of the year is already filling.

I appreciate your feedback!

Pulling the Plug in 2005

Every time I attend a USENIX conference, I gather free copies of the ;login: magazine published by the association. The August 2005 issue features some great stories, with some of them available right now to non-USENIX members. (USENIX makes all magazine articles open to the public one year after publication. For example, anyone can now read the entire December 2004 issue.)

An article which caught my eye was Forensics for System Administrators by Sean Peisert. Although the USENIX copy of the article won't be published until August 2006, you can read Sean's copy here (.pdf).

I thought the article was proceeding well until I came across this advice.

"What happens when there is some past event that a system administrator wishes to understand on their system? Where should the administrator, now a novice forensic analyst, begin? There are many variables and questions that must be answered to make proper decisions about this. Under almost all circumstances in which the system can be taken down to do the analysis, the ideal thing to do is halt or power-off the system using a hardware method." (emphasis added)

Is he serious? The article continues:

"[T]he x86 BIOS does not have a monitor mode that supports this [a hardware interrupt]. The solution for everyone else? Pull the plug. The machine will power off, the disk will remain as-is, and there will be no possibility of further contamination of the evidence through some sort of clean-up script left by the intruder, as long as the disk is not booted off or mounted in read/write mode again. The reason for stopping a machine is that it prevents further alteration of the evidence. The reason for halting with a hardware interrupt, rather than using the UNIX halt or shutdown command is that if a root compromise occurred, those commands could have been trojaned by an intruder to clean up evidence."

I can't believe I'm reading this advice in 2005, only 6 days from 2006. This is the advice I heard nearly 10 years ago. "Pulling the plug" as the first step in a forensic investigation is absolutely terrible advice. I am not a host-based forensics guru, but I know that a live response, first described in the June 2001 book Incident Response by Mandia, Prosise, and Pepe, should be part of even the most basic forensically-minded sys admin's techniques. Sean could have even looked into the ;login: archives to find Keith Jones' article in the November 2001 issue describing live response.

Live response is a technique to retrieve volatile information from a running system in a forensically sound manner. Live response can be frustrated by some binary and kernel alteration techniques, but it is a good (non-network-centric) first step whenever a host is suspected of being compromised. Those who want to know more about live response, and see how helpful the advice can be, will enjoy reading Real Digital Forensics.

Sean tries to defend pulling the plug here:

"In our first example intrusion, I took a preliminary look at the syslog and saw that dates of suspicious logins went back at least three weeks. Given that the intrusion seemed to be going on for so long, I decided that I could no longer trust the system to reliably and accurately report evidence about itself. Therefore, pulling the plug on the machine was the best option."

That is a really weak excuse. Certainly a non-ankle-biter attacker will take steps to hide his presence. That does not mean that no attempt should be made to collect volatile system information!

Sean continues:

"It is certainly the case that halting a system can help perserve more evidence, particularly that in swap, slack, or otherwise unallocated space on disk. But it also can destroy some evidence. For example, halting a system will wipe out the contents of memory, hindering the ability of an analyst to dump a memory image to disk. However, in the forensic discussions in this article, slack space and memory dumps are outside the scope of our analysis. In our case, halting a system merely helped to preserve real evidence, and had the intrusion in our first example been discovered sooner, and the system sooner halted as a result, the intruder would have had less time to cover their tracks. Then, as I will discuss, certain helpful log files that were deleted may have been recoverable."

If Sean is worried that an intruder will take actions to "cover their tracks," then the live response can be performed after the victim host has been cut off from the Internet. Sure, the most 31337 attackers may detect this and start self-cleansing procedures, but how often does that happen? Also, collecting live response data does not usually trigger any cleaning mechanisms. The sort of data one collects is the normal information a system administrator might inspect during the course of regular duties.
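For example, here is a minimal live response sketch. The trusted-binary path /cdrom and the collection host 192.168.2.5 are assumptions, and the old-style 'nc -l -p' listener must be restarted before each command.

# on the collection host, before each command: nc -l -p 9999 >> victim-data.txt
# on the victim, using trusted binaries from read-only media:
/cdrom/date | nc 192.168.2.5 9999
/cdrom/ps auxww | nc 192.168.2.5 9999
/cdrom/netstat -an | nc 192.168.2.5 9999
/cdrom/sockstat -4 | nc 192.168.2.5 9999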

The fundamental issue here is whether pulling the plug should be the first response activity or not. In my experience, cutting off remote access is the first step. Analysis of NSM data involving the target host is second. Live response is the third. Forensic duplication and analysis is the fourth, if the previous two steps point to compromise and the resources for investigation are available.

This part of the article makes me sad:

"This material is based on work sponsored by the United States Air Force and supported by the Air Force Research Laboratory under Contract F30602-03-C-0075 and performed in conjunction with Lockheed Martin Information Assurance. Thanks to Sid Karin, Abe Singer, Matt Bishop, and Keith Marzullo, who provided valuable discussions during the writing of this article."

First, why is the Air Force paying for advice that should have been abandoned in 1998, the last time I remember the Air Force suggesting these sorts of actions? Second, why didn't any of the article reviewers speak out against this bad advice?

Saturday, 24 December 2005

Reprinting Security Tools and Exploits

Yesterday I blogged about reprinted material in Syngress' "new" Writing Security Tools and Exploits. A comment on that post made me take another look at this book in light of other books by James Foster already published by Syngress. Here is what I found.

  • Chapter 3, "Exploits: Stack" is the same as Chapter 5, "Stack Overflows" in Buffer Overflow Attacks, published several months ago.

  • Chapter 4, "Exploits: Heap" is the same as Chapter 6, "Heap Corruption" in Buffer.

  • Chapter 5, "Exploits: Format String" is the same as Chapter 7, "Format String Attacks" in Buffer.

  • Chapter 6, "Writing Exploits I" is the same as Chapter 10, "Writing Exploits I" in Sockets, Shellcode, Porting, and Coding, another Syngress book by Foster published several months ago.

  • Chapter 7, "Writing Exploits II" is the same as Chapter 11, "Writing Exploits II" in Sockets.

  • Chapter 8, "Coding for Ethereal" appears to be Chapters 11, "Capture File Formats", and 12, "Protocol Dissectors", from Nessus, Snort, and Ethereal Power Tools.

  • Chapter 9, "Coding for Nessus" is the same as Chapter 2, "NASL Scripting" in Sockets and Chapter 9 in Penetration Tester's Open Source Toolkit.

  • Appendix A, "Data Conversion Reference" is the same as Appendix A in Buffer.

  • Appendix B, "Syscall Reference" is the same as Appendix B in Buffer and Appendix D in Sockets.


At the end of the day, this 12-chapter Writing book offers only Chapters 1, 2, 10, 11, and 12 as new material.

I decided to next take a look at Sockets, Shellcode, Porting, and Coding to see what material it may have duplicated. Here is what I found.

  • Chapter 8, "Writing Shellcode I" appears the same as Chapter 2, "Understanding Shellcode" in the previously published Buffer Overflow Attacks.

  • Chapter 9, "Writing Shellcode II" appears the same as Chapter 3, "Writing Shellcode" in Buffer.

  • Several of the case studies appear to be duplicates of material from Buffer, like "xlockmore User-Supplied Format String Vulnerability", "X11R6 4.2 XLOCALEDIR Overflow", and "OpenSSL SSLv2 Malformed Client Key Remote Buffer".


I guess it's easier to be "authored in over fifteen books" when your material is recycled.

Friday, 23 December 2005

Pre-Review: Writing Security Tools and Exploits

Yesterday I posted a pre-review for Penetration Tester's Open Source Toolkit. I wrote that I thought the two chapters on Metasploit looked interesting. Today I received a review copy of the new Syngress book pictured at left, Writing Security Tools and Exploits by James Foster, Vincent Liu, et al. This looks like a great book, with chapters on various sorts of exploits, plus sections on extending Nessus, Ethereal, and Metasploit.

Metasploit, hmm. I looked at chapters 10 and 11 in Writing and found them to be identical to chapters 12 and 13 in Penetration. Identical! I can't remember the last time I saw a publisher print the same chapters in two different books. I assume James Foster wanted the chapters he wrote for Penetration to appear in Writing because he follows with a new chapter 12 on more Metasploit extensions.

This realization made me remember another Syngress book that I received earlier this year -- Nessus, Snort, & Ethereal Power Tools. I saw that Noam Rathaus had written chapters on Nessus for both Power Tools and Penetration. Could they be the same? Sure enough, chapters 3 and 4 in Power Tools match chapters 10 and 11 in Penetration.

So, 4 out of the 13 chapters in Penetration are published in other books. I would enjoy hearing someone at Syngress explain this, or perhaps one of the authors could comment?

Windows Via Real Thin Clients

Real thin clients, like the Sun Ray 170, don't run operating systems like Windows or Linux. I like the Sun Ray, since its Sun Ray Server Software runs on either Solaris or Red Hat Enterprise Linux. That's fine for users who want to access applications on Solaris or Linux. What about those who need Windows? I can think of four options:

  1. Run a Windows VM inside the free VMware Player on the Red Hat Enterprise Linux user's desktop (see the sketch after this list).

  2. Run VMware Workstation on each user's desktop.

  3. Run VMware GSX Server on the Red Hat Enterprise Linux server running Sun Ray Server Software, and let users connect to the Windows VMs using the VMware Virtual Machine Console.

  4. Run VMware ESX Server on a separate platform, and let users connect to the Windows VMs using the Remote Console.
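
For option 1, launching the guest is a one-liner once the VM exists. Here is a minimal sketch; I'm assuming the usual vmplayer binary and a hypothetical .vmx path:

# start an existing Windows guest under the free VMware Player
# /vm/winxp/winxp.vmx is a hypothetical guest configuration file
vmplayer /vm/winxp/winxp.vmx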


Is anyone trying this already?

Update: I noticed a similar issue appeared in the VMTN Blog.

Notes on Trafshow 5

Trafshow is an ncurses-based program that shows a snapshot of active network sessions in near real time. I like to use it within OpenSSH sessions on sensors to get a quick look at hosts that might be hogging bandwidth. Recently Trafshow 5 became available in the FreeBSD ports tree (net/trafshow), so I have started using it.
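
Starting it is simple. On a sensor I run something like this (a minimal sketch; fxp0 is my interface, so substitute your own):

# -i selects the capture interface; -n shows numeric addresses and ports
orr:/home/richard$ sudo trafshow -n -i fxp0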

When I showed it in class last week, I realized I did not recognize the color scheme depicted in the screen shot above. I learned that the configuration file /usr/local/etc/trafshow controls these colors:

# The colors are:
# black red green yellow blue magenta cyan white
#
# The upper-case Fcolor mean bright *on* and Bcolor blink *on*.
#

#default white:blue

# following color settings looks nice under black-on-gray xterm (xterm-color)

# Private IP Addresses will be alarmed by Red foreground.
# Source Destination Color

10.0.0.0/8 any Red
any 10.0.0.0/8 Red
127.0.0.1/8 any Red
any 127.0.0.1/8 Red
172.16.0.0/16 any Red
any 172.16.0.0/16 Red
192.168.0.0/16 any Red
any 192.168.0.0/16 Red

# Network Services.
# Service Color Comments

135 Red # netbios
137 red # netbios
138 red # netbios
139 red # netbios

snmp white
smux white
162 White # snmp-trap
67 white # bootp/dhcp-server
68 white # bootp/dhcp-client
546 white # dhcpv6-client
547 white # dhcpv6-server
timed white
who white

domain cyan
389 cyan # ldap
636 cyan # ldaps
*/icmp Cyan

http blue
https blue
3128 Blue # http-proxy
8080 Blue # http-proxy

smtp Green
nntp Green
pop3 green
995 green # pop3s
143 green # imap2,4
220 green # imap3

ftp yellow
20 Yellow # ftp-data
tftp Yellow
nfs Yellow
6000 Yellow # X11

ssh magenta
telnet Magenta
sunrpc Magenta
513/tcp Magenta # rsh
514/tcp Magenta # rcmd
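
The syntax is easy to extend. Since the screen shot shows IRC and NTP sessions that the shipped file does not color, I could append lines like these (hypothetical additions of my own, not part of the shipped file):

# highlight IRC, and give NTP its own color
6667 Magenta # irc
ntp cyan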

As you can see in the screen shot, we have SSH, WHOIS, ICMP, DNS, IRC, and NTP active.

You may notice records without port information. For example, the 7th record shows source 69.243.40.166 and destination 204.152.184.73 speaking protocol 6 (TCP). No ports are listed. However, the first two records list the two sides of a conversation between those two hosts. Similarly, the last two records show traffic involving 69.243.40.166 and 65.201.175.103, with no ports. If we look at the 9th record, however, we see those two IPs speaking on port 43 TCP (WHOIS).

A quick look at Argus data from yesterday (when I took this screenshot) reveals that the port 43 TCP traffic was the only conversation between those two hosts:

ra -nn -r argus2.arg -L0 -A - host 65.201.175.103

StartTime Flgs Type SrcAddr Sport Dir DstAddr Dport
SrcPkt DstPkt SAppBytes DAppBytes State

22 Dec 05 17:11:52 tcp 69.243.40.166.49202 -> 65.201.175.103.43
6 6 16 2736 FIN

This indicates to me that the records without port data are related to those with port data, because in this second case only one session involved both IPs.

I will contact Trafshow's author to confirm this.

One aspect of the new Trafshow I do not like is the way it opens a port to listen for NetFlow records:

orr:/home/richard$ sockstat -4 | grep trafshow
root trafshow 1078 4 udp4 *:9995 *:*

To disable this NetFlow collector function, invoke Trafshow with the '-u 0' option.
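
In other words, on a sensor I now invoke it like this (same assumed interface as before):

# -u 0 disables the NetFlow collector, so no UDP listener is opened
orr:/home/richard$ sudo trafshow -n -i fxp0 -u 0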

One feature of Trafshow 5 that I like is the ability to listen on an interface that does not have an IP address assigned. Previous Trafshow versions would complain and fail if they were told to listen on an interface with no IP.

Kamis, 22 Desember 2005

Pre-Review: Penetration Tester's Open Source Toolkit

Today I received a copy of the new Syngress book Penetration Tester's Open Source Toolkit by Johnny Long, Chris Hurley, SensePost, Mark Wolfgang, Mike Petruzzi, et al. This book appears unnecessarily massive; it's probably half again as thick as my first book, yet at 704 pages it's nearly 100 pages shorter than Tao. I think Syngress used thicker, "softer" paper, if that makes sense to anyone.

The majority of the book appears to be the standard sort of hacker stuff one finds in books like Hacking Exposed, with some exceptions. The book contains two chapters on Metasploit which look helpful. I do not know yet how well these Metasploit 2.0-based chapters apply to the new Metasploit 3.0, whose alpha stage was announced last week. Similarly, chapters on Nessus may not hold up well for Nessus 3.0, also recently released.

A major selling point of the new book is its integration of the Auditor live CD. I learned that Auditor is going to merge with "competitor" IWHAX to produce BackTrack in early 2006. Consolidation among similar open source projects to pool resources and create better results? Heresy!

Remote Heap Overflow in VMware Products

Thanks to a heads-up from "yomama" in the #snort channel, I learned of this advisory from Tim Shelton:

"A vulnerability was identified in VMware Workstation (And others) vmnat.exe, which could be exploited by remote attackers to execute arbitrary commands.

This vulnerability allows the escape from a VMware Virtual Machine into userland space and compromising the host.

'Vmnat' is unable to process specially crafted 'EPRT' and 'PORT' FTP Requests."

This implies that someone who connects to an FTP server, using traffic that is processed by vmnat.exe, can exploit vmnat.exe.

As a VMware Workstation user, I am glad to see they have published a new version to address the vulnerability.

Rabu, 21 Desember 2005

Two Great Wiretapping Articles

Given the recent coverage of wiretapping in the mainstream media, I thought I would point out two excellent articles in the latest issue of IEEE Security & Privacy Magazine. Thankfully, both are available online.

Both concentrate on technical issues of wiretapping. The first concentrates on how to tap a physical line or switch, and ways to defeat those taps. The second describes why incorporating wiretap features into VoIP is a bad idea. Each article discusses relevant laws.

Brief Thoughts on Cisco AON

I received my copy of Cisco's Packet Magazine, Fourth Quarter 2005 recently. The new digital format for the magazine makes linking to anything impossible, but I found the relevant article as a .pdf.

It describes the company's Application-Oriented Networking (AON) initiative. According to this story that quotes Cisco personnel, AON "is a network-embedded intelligent message routing system that integrates application message-level communication, visibility, and security into the fabric of the network." According to this document:

Cisco AON is currently available in two products that integrate into Cisco switches and routers:

  • Cisco Catalyst® 6500 Series AON module, which is primarily deployed in enterprise core or data centers

  • Cisco 2600/2800/3700/3800 series AON module, which is primarily deployed at branch offices


AON is part of Cisco's Intelligent Information Network project. From the article:

"The Cisco AON module in the branch puts intelligent decision-making at the network edge. It can intercept and analyze traffic in various message formats and protocols and bridge between them, provide security, and validate messages, creating a transparent interface between trading partners and, in effect, a good business-to-business gateway. It can manage remote devices that send messages to the Cisco Integrated Services Router in the branch. It can also filter messages from multiple sources that come into the branch router for duplicates or by other criteria, aggregate them, make decisions according to instructions, and transmit selected messages to a sister AON module deployed in the data center." (emphasis added)

I find this aspect very interesting. It sounds like AON could be used to enforce protocol and security policies. I wonder if this might eventually happen on a per-port basis? Security on a per-port basis would allow validation of network traffic itself, not just whether a host should be accessing the network. Per-port security would move the job of enforcing security away from choke-point products like firewalls (which include IPSs, application firewalls, whatever) and into switches.

This is not necessarily a great idea, as this Register article confirms. One of the strengths of the Internet has been the fact that it inverted the telecom model, where the network was smart and the end device (the phone) was dumb. The traditional Internet featured a relatively dumb network whose main job was to get traffic from point A to point B. The intelligence was found in those end points. This Internet model simplified troubleshooting and allowed a plethora of protocols to be carried from point A to point B.

With so-called "intelligent networking," points A and B have to be sure that the network will transmit their conversation, and not block, modify, or otherwise interfere with that exchange to the detriment of the end hosts. As a security person I am obviously in favor of efforts to enforce security policies, but I am not in favor of adding another layer of complexity on top of existing infrastructures if it can be avoided.

Navy Installing Sun Ray Thin Clients

I've written about Sun's Sun Ray 170 thin client before. The Sun Ray is a true thin client, and to me it is the best way for enterprises to win the battle of the desktop against Microsoft-centric threats. Accordingly, I would like to congratulate the US Navy after reading Navy opts for thin-client systems onboard ships:

"Bob Stephenson, chief technology officer for command, control, communications, computers and intelligence operations at Spawar, said the Navy plans to use the thin-client systems from Sun Microsystems on all major surface ships in the fleet.

Thin clients will be installed on 160 vessels, Stephenson said. ...

Mario Diaz, Sun Microsystems' Navy sales manager, said the Navy will deploy the company's Sun Ray thin clients connected to servers running the Trusted Solaris operating system, which can collapse multiple networks onto a single network while providing separate levels of classification."

As a former Air Force officer, I'm biased towards the Air Force. However, I've written that I think the Air Force is fighting the last war, having decided to adopt "standardized and securely configured Microsoft software throughout the service." Whee, that only took what, 10 years? Kudos to the Navy for stepping forward with an innovative solution.

Changes Coming in Sguil 0.6.1

Sguil 0.6.0p1 introduced the use of MERGE tables in MySQL to improve database performance.

Sguil 0.6.1, in development now, will bring UNION functionality to database queries. This will also improve performance.

Consider the following standard event or alert query in Sguil. This query says return Snort alerts where 151.201.11.227 is the source IP OR the destination IP. OR is a slow operation compared to UNION. Sguil 0.6.1 will use a new query.

Here we look for Snort alerts where 220.98.198.35 is the source IP address, and use UNION to return those results with alerts where 220.98.198.35 is the destination IP address.

UNION functionality was not available in MySQL 3.x, but it appeared in 4.x. Many Sguil users are running MySQL 5.x now.

Those screen shots just show the WHERE portions of the database queries. Here is what each version of a similar query looks like in its entirety:

Sguil 0.5.3 and older:

SELECT sensor.hostname, sancp.sancpid, sancp.start_time as datetime, sancp.end_time,
INET_NTOA(sancp.src_ip), sancp.src_port, INET_NTOA(sancp.dst_ip), sancp.dst_port,
sancp.ip_proto, sancp.src_pkts, sancp.src_bytes, sancp.dst_pkts, sancp.dst_bytes
FROM sancp
IGNORE INDEX (p_key)
INNER JOIN sensor ON sancp.sid=sensor.sid
WHERE sancp.start_time > '2005-08-02' AND ( sancp.src_ip = INET_ATON('82.96.96.3') OR
sancp.dst_ip = INET_ATON('82.96.96.3') )

EXPLAIN
+----+-------------+--------+--------+--------------------------+---------+---------+-------------------+-----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+--------+--------------------------+---------+---------+-------------------+-----------+-------------+
| 1 | SIMPLE | sancp | ALL | src_ip,dst_ip,start_time | NULL | NULL | NULL | 100458818 | Using where |
| 1 | SIMPLE | sensor | eq_ref | PRIMARY | PRIMARY | 4 | sguildb.sancp.sid | 1 | |
+----+-------------+--------+--------+--------------------------+---------+---------+-------------------+-----------+-------------+

The actual query returns an empty set after 5 mins 29.14 secs on Bamm's database.

Sguil 0.6.0p1:

(
SELECT sensor.hostname, sancp.sancpid, sancp.start_time as datetime, sancp.end_time,
INET_NTOA(sancp.src_ip), sancp.src_port, INET_NTOA(sancp.dst_ip), sancp.dst_port,
sancp.ip_proto, sancp.src_pkts, sancp.src_bytes, sancp.dst_pkts, sancp.dst_bytes
FROM sancp
IGNORE INDEX (p_key)
INNER JOIN sensor ON sancp.sid=sensor.sid
WHERE sancp.start_time > '2005-08-02' AND sancp.src_ip = INET_ATON('82.96.96.3')
) UNION (
SELECT sensor.hostname, sancp.sancpid, sancp.start_time as datetime, sancp.end_time,
INET_NTOA(sancp.src_ip), sancp.src_port, INET_NTOA(sancp.dst_ip), sancp.dst_port,
sancp.ip_proto, sancp.src_pkts, sancp.src_bytes, sancp.dst_pkts, sancp.dst_bytes
FROM sancp
IGNORE INDEX (p_key)
INNER JOIN sensor ON sancp.sid=sensor.sid
WHERE sancp.start_time > '2005-08-02' AND sancp.dst_ip = INET_ATON('82.96.96.3')
)

EXPLAIN
+----+--------------+------------+--------+-------------------+---------+---------+-------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------+------------+--------+-------------------+---------+---------+-------------------+------+-------------+
| 1 | PRIMARY | sancp | ref | src_ip,start_time | src_ip | 5 | const | 108 | Using where |
| 1 | PRIMARY | sensor | eq_ref | PRIMARY | PRIMARY | 4 | sguildb.sancp.sid | 1 | |
| 2 | UNION | sancp | ref | dst_ip,start_time | dst_ip | 5 | const | 108 | Using where |
| 2 | UNION | sensor | eq_ref | PRIMARY | PRIMARY | 4 | sguildb.sancp.sid | 1 | |
|NULL| UNION RESULT | union1,2 | ALL | NULL | NULL | NULL | NULL | NULL | |
+----+--------------+------------+--------+-------------------+---------+---------+-------------------+------+-------------+

The actual query returns an empty set in 0.33 secs on Bamm's database.
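
If you want to reproduce this comparison against your own database, timing each version through the mysql client works (a sketch; the account and .sql file names are hypothetical):

# save each query to a file, then time its execution
time mysql -u sguil -p sguildb < or_query.sql > /dev/null
time mysql -u sguil -p sguildb < union_query.sql > /dev/null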

Selasa, 20 Desember 2005

Guidance Software 0wn3d

This morning I read stories by Brian Krebs and Joris Evers explaining how Guidance Software, maker of the host-based forensics suite EnCase, was compromised. Guidance CEO John Colbert said "a person compromised one of our servers," exposing the "names, addresses and credit card details" of 3,800 Guidance customers. Guidance claims to have learned about the intrusion on 7 December. Victim Kessler International reports the following:

"Our credit card fraud goes back to Nov. 25. If Guidance knew about it on Dec. 7, they should have immediately sent out e-mails. Why send out letters through U.S. mail while we could have blocked our credit cards?"

Guidance could face severe financial trouble. According to reporter Joris Evers:

"Guidance stored customer names and addresses and retained card value verification, or CVV, numbers, Colbert said. The CVV number is a three-digit code found on the back of most credit cards that is used to prevent fraud in online and telephone sales. Visa and MasterCard prohibit sellers from retaining CVV once a transaction has been completed."

Reporter Krebs explains the implications:

"Companies that violate those standards can be fined $500,000 per violation. Credit card issuers generally levee such fines against the bank that processes payment transactions for the merchant that commits the violations. The fines usually are passed on to the offending company."

Since Guidance's customers include "hundreds of security researchers and law enforcement agencies worldwide, including the U.S. Secret Service, the FBI and New York City police," I don't think those customers will tolerate this breach of trust.

Why did it take Guidance at least 12 days (from the first known fraudulent purchases on 25 Nov to the reported discovery on 7 Dec) to learn they were owned? I think this is an example of a company familiar with creating host-centric forensic software, but unfamiliar with sound operational security and proper policy, architecture, and monitoring to prevent or at least detect intrusions. Furthermore, who will be fired and/or fined for storing CVVs indefinitely?

Senin, 19 Desember 2005

Disk Ring Buffer in Tcpdump 3.9.4

I finally got a chance to try Tcpdump 3.9.4 and Libpcap 0.9.4 on FreeBSD using the net/tcpdump and net/libpcap ports. I was unable to install them using packages, so I used the ports tree. I initially got the following error:

===> Extracting for tcpdump-3.9.4
=> MD5 Checksum OK for tcpdump-3.9.4.tar.gz.
=> SHA256 Checksum OK for tcpdump-3.9.4.tar.gz.
===> Patching for tcpdump-3.9.4
===> tcpdump-3.9.4 depends on shared library: pcap.2 - not found
===> Verifying install for pcap.2 in /usr/ports/net/libpcap
===> WARNING: Vulnerability database out of date, checking anyway
=> libpcap-0.9.4.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
=> Attempting to fetch from http://www.tcpdump.org/release/.
libpcap-0.9.4.tar.gz 100% of 415 kB 73 kBps
===> Extracting for libpcap-0.9.4
=> MD5 Checksum OK for libpcap-0.9.4.tar.gz.
=> SHA256 Checksum OK for libpcap-0.9.4.tar.gz.
===> Patching for libpcap-0.9.4
...edited...
===> Installing for libpcap-0.9.4
===> Generating temporary packing list
===> Checking if net/libpcap already installed
[ -d /usr/local/lib ] || (mkdir -p /usr/local/lib; chmod 755
/usr/local/lib)
install -o root -g wheel -m 444 libpcap.a /usr/local/lib/libpcap.a
ranlib /usr/local/lib/libpcap.a
[ -d /usr/local/include ] || (mkdir -p /usr/local/include; chmod 755
/usr/local/include)
install -o root -g wheel -m 444 ./pcap.h /usr/local/include/pcap.h
install -o root -g wheel -m 444 ./pcap-bpf.h /usr/local/include/pcap-bpf.h
install -o root -g wheel -m 444 ./pcap-namedb.h
/usr/local/include/pcap-namedb.h
[ -d /usr/local/man/man3 ] || (mkdir -p /usr/local/man/man3; chmod 755
/usr/local/man/man3)
install -o root -g wheel -m 444 ./pcap.3 /usr/local/man/man3/pcap.3
===> Compressing manual pages for libpcap-0.9.4
===> Registering installation for libpcap-0.9.4
===> Returning to build of tcpdump-3.9.4
Error: shared library "pcap.2" does not exist
*** Error code 1

Stop in /usr/ports/net/tcpdump.

I took a look at the Makefile for net/tcpdump and saw this:

# TODO: Add strict sanity check that we're compiling against a
# version of libpcap with which this tcpdump release is compatible.
#
.if defined(TCPDUMP_OVERWRITE_BASE) || !defined(WITH_LIBPCAP_BASE)
LIB_DEPENDS= pcap.2:${PORTSDIR}/net/libpcap
.endif

I noticed this file was created when building libpcap-0.9.4:

/usr/ports/net/libpcap/work/libpcap-0.9.4/pcap.3

I also saw this on the system:

/usr/src/contrib/libpcap/pcap.3

So I changed tcpdump's Makefile like so:

LIB_DEPENDS= pcap.3:${PORTSDIR}/net/libpcap

I was then able to finish the installation. (I emailed the port maintainer asking if my fix made sense.) I ran Tcpdump:

orr:/usr/ports/net/tcpdump# /usr/local/sbin/tcpdump -V
tcpdump version 3.9.4
libpcap version 0.9.4
Usage: tcpdump [-aAdDeflLnNOpqRStuUvxX] [-c count] [ -C file_size ]
[ -E algo:secret ] [ -F file ] [ -i interface ] [ -M secret ]
[ -r file ] [ -s snaplen ] [ -T type ] [ -w file ]
[ -W filecount ] [ -y datalinktype ] [ -Z user ]
[ expression ]

Note this is not the version installed on the base system:

orr:/usr/ports/net/tcpdump# tcpdump -V
tcpdump version 3.8.3
libpcap version 0.8.3
Usage: tcpdump [-aAdDeflLnNOpqRStuUvxX] [-c count] [ -C file_size ]
[ -E algo:secret ] [ -F file ] [ -i interface ] [ -r file ]
[ -s snaplen ] [ -T type ] [ -w file ] [ -y datalinktype ]
[ expression ]

To look at the new man page for 3.9.4, I had to tell 'man' where to find the new man pages:

man -M /usr/local/man tcpdump

In the man page I saw the following two options:

-C Before writing a raw packet to a savefile, check whether the
file is currently larger than file_size and, if so, close the
current savefile and open a new one. Savefiles after the first
savefile will have the name specified with the -w flag, with a
number after it, starting at 1 and continuing upward. The units
of file_size are millions of bytes (1,000,000 bytes, not
1,048,576 bytes).
...edited...
-W Used in conjunction with the -C option, this will limit the num-
ber of files created to the specified number, and begin over-
writing files from the beginning, thus creating a 'rotating'
buffer. In addition, it will name the files with enough leading
0s to support the maximum number of files, allowing them to sort
correctly.

Awesome. Let's try it. Here I tell Tcpdump to save five 10-million-byte files using the -W 5 and -C 10 switches.

orr:/home/richard$ sudo /usr/local/sbin/tcpdump -n -i fxp0 -s 1515 -C 10 -W 5 -w /nsm/test1.lpc
tcpdump: listening on fxp0, link-type EN10MB (Ethernet), capture size 1515 bytes

If I watch the /nsm directory, I see these files being created. Here the first four files have already appeared:

-rw-r--r-- 1 root sguil 10000114 Dec 19 16:29 test1.lpc0
-rw-r--r-- 1 root sguil 10000018 Dec 19 16:29 test1.lpc1
-rw-r--r-- 1 root sguil 10000110 Dec 19 16:29 test1.lpc2
-rw-r--r-- 1 root sguil 10001244 Dec 19 16:30 test1.lpc3
-rw-r--r-- 1 root sguil 7913472 Dec 19 16:30 test1.lpc4

Soon the first three files are already overwritten, and the fourth file is being overwritten as I check on the /nsm directory again:

-rw-r--r-- 1 root sguil 10000264 Dec 19 16:30 test1.lpc0
-rw-r--r-- 1 root sguil 10001286 Dec 19 16:30 test1.lpc1
-rw-r--r-- 1 root sguil 10000858 Dec 19 16:30 test1.lpc2
-rw-r--r-- 1 root sguil 1245184 Dec 19 16:30 test1.lpc3
-rw-r--r-- 1 root sguil 10000214 Dec 19 16:30 test1.lpc4

So, this system works as advertised. Unfortunately, the file naming convention simply adds 0, 1, 2, 3, or 4 to the end of the specified file name of test1.lpc. This is not how Tethereal handles file naming, so I am not sure yet if Tcpdump's system will be suitable for my needs. I imagine that when capturing GB-sized files, the file timestamps may be enough to differentiate them.
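
FreeBSD's ls can at least display complete timestamps, which would help tell rotated files apart (a quick check against the same /nsm directory):

# with -l, the -T flag shows full time detail, including seconds and year
ls -lT /nsm/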

By the way, people always ask "Why don't you use Tcpdump's -s 0 option to automatically specify a snaplen?" Here's why:

orr:/home/richard$ sudo /usr/local/sbin/tcpdump -n -i fxp0 -s 0 -C 10 -W 5 -w /nsm/test1.lpc
tcpdump: listening on fxp0, link-type EN10MB (Ethernet), capture size 65535 bytes

65535 bytes? That's IP's theoretical maximum packet size. What's so good about that? On an Ethernet link no frame will come close to that, which is why I specify the snaplen explicitly with -s 1515. Letting -s 0 pick 65535 bytes seems ham-fisted.