Monday, 17 April 2006

Profiling Sensors with Bpfstat

In the TaoSecurity lab I have three physical boxes that perform monitoring duties. I wanted to see how each of them performed full content data collection.

Note: I do not consider what I am about to blog as any sort of thorough or comprehensive test. In fact, I expect some of you to flail about in anger that I didn't take into account your favorite testing methodologies!

I would be happy to hear constructive feedback. I am aware that anything resembling a test brings out some of the worst flame wars known to man. With those caveats aside, let's move on!

These are rough specifications for each system.


  • bourque: Celeron 633 MHz midtower with 320 MB RAM, 9541 MB Quantum Fireball HDD, 4310 MB Quantum Fireball HDD, Adaptec ANA-62044 PCI quad NIC

  • hacom: VIA Nehemiah 1 GHz small form factor PC with 512 MB RAM, 238 MB HDD, and three Intel Pro/1000 Gigabit adapters

  • shuttle: Intel PIV 3.2 GHz small form factor PC with 2 GB RAM, 2x74 GB HDDs, integrated Broadcom BCM5751 Gigabit Ethernet and Intel Pro/1000 MT Dual Gigabit adapter


Each sensor runs FreeBSD 6.0 with binary security updates applied.
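If you're replicating the setup, applying binary updates is probably as simple as the following. This is a sketch rather than my exact procedure; check freebsd-update(8) for the options appropriate to your release.

freebsd-update fetch
freebsd-update install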

How does one verify that a sensor is logging full content data appropriately? I decided to have each sensor listen to the two outputs of a traditional Fast Ethernet tap, provided by Net Optics. The tap watched a link between a FreeBSD FTP client and a Debian Linux FTP server. I had the FTP client download an 87 MB .zip file (my first Sguil image, in fact) from the server while the sensor watched.

I bonded the two interfaces monitoring the tapped link so that a single ngeth0 interface would see both sides of the FTP data transfer. I ran Tcpdump on the sensor like so:

tcpdump -n -i ngeth0 -s 1515 -w testX.lpc
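In case you're wondering how I bonded the interfaces, here is a rough sketch of the sort of netgraph script involved, assuming ng_eiface and ng_one2many nodes and using em1 and em2 as stand-ins for the NICs facing the tap. Verify the hook and message names against the ng_ether(4), ng_eiface(4), and ng_one2many(4) man pages before trusting it.

#!/bin/sh
# Load the required netgraph modules
kldload ng_ether
kldload ng_eiface
kldload ng_one2many

# Bring up the sniffing NICs with no IP address and no ARP
ifconfig em1 up -arp
ifconfig em2 up -arp

# Create the virtual ngeth0 interface (an ng_eiface node)
ngctl mkpeer . eiface hook ether

# Attach an ng_one2many node to ngeth0 and connect each
# physical NIC's lower hook to one of its "many" hooks
ngctl mkpeer ngeth0: one2many ether one
ngctl connect em1: ngeth0:ether lower many0
ngctl connect em2: ngeth0:ether lower many1

# Sniff promiscuously and stop ng_ether from rewriting source MACs
ngctl msg em1: setpromisc 1
ngctl msg em1: setautosrc 0
ngctl msg em2: setpromisc 1
ngctl msg em2: setautosrc 0

ifconfig ngeth0 up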

Before starting the FTP transfer, I started Bpfstat on the sensor to watch for packet drops.

Here's how the FTP transfer looked.

ftp> get sguil0-6-0p1_freebsd6-0_1024mb.zip
local: sguil0-6-0p1_freebsd6-0_1024mb.zip remote: sguil0-6-0p1_freebsd6-0_1024mb.zip
227 Entering Passive Mode (172,16,1,1,128,85)
150 Opening BINARY mode data connection for 'sguil0-6-0p1_freebsd6-0_1024mb.zip' (91706415 bytes).
100% |*************************************| 89557 KB 9.72 MB/s 00:00 ETA
226 Transfer complete.
91706415 bytes received in 00:08 (9.72 MB/s)

As you can see, I'm transferring at around 78 Mbps (9.72 MB/s x 8 bits per byte). This is a limitation of the hardware; I was able to run Iperf at over 90 Mbps, but in that case no data was being saved to disk.

After transferring the file via FTP from server to client, I used Tcpflow on the sensor to reassemble the FTP data stream carrying the 87 MB .zip file.
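I'm not going to reproduce the exact invocation, but it amounts to something like the following, where testX.lpc is the trace written by Tcpdump and 32853 is the passive FTP data port (128 * 256 + 85) negotiated in the transcript above:

tcpflow -r testX.lpc port 32853

Tcpflow writes each direction of the session to its own file, named after the endpoint addresses and ports.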

The original 87 MB file was 91706415 bytes. Every time I reassembled the FTP data session, I got files 91706415 bytes in size. I ran an MD5 hash of each reconstructed file, however, and found none of them matched the original 87 MB .zip. This meant I was dropping packets somewhere.
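The size and hash comparison is nothing fancy. With reconstructed.file standing in for whatever Tcpflow produced, it looks like this:

ls -l sguil0-6-0p1_freebsd6-0_1024mb.zip reconstructed.file
md5 sguil0-6-0p1_freebsd6-0_1024mb.zip reconstructed.file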

To identify the bottleneck, I decided to use Bpfstat, which reports per-process BPF statistics such as packets received, packets dropped when the BPF buffers fill, and the amount of data sitting in those buffers.

Here are the results when run on sensor bourque:

bourque:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
993 ngeth0 p--s- 11 0 11 0 0 tcpdump
993 ngeth0 p--s- 39 0 39 0 0 tcpdump
993 ngeth0 p--s- 21540 15475 21540 32740 31480 tcpdump
993 ngeth0 p--s- 75819 53392 75819 32740 32740 tcpdump
993 ngeth0 p--s- 95851 67142 95851 0 0 tcpdump

Wow, that's a lot of dropped packets: roughly 70 percent of what Tcpdump received. Here are the results for sensor hacom:

hacom:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
635 ngeth0 p--s- 41322 217 41322 32740 31480 tcpdump
635 ngeth0 p--s- 94843 420 94843 14124 0 tcpdump

That's a bit better, with less than half a percent of packets dropped. Let's look at shuttle, the most robust sensor available.

shuttle:/root# bpfstat -i 5 -I ngeth0
pid netif flags recv drop match sblen hblen command
689 ngeth0 p--s- 0 0 0 0 0 tcpdump
689 ngeth0 p--s- 7 0 7 0 0 tcpdump
689 ngeth0 p--s- 39 0 39 0 0 tcpdump
689 ngeth0 p--s- 23810 0 23810 17356 0 tcpdump
689 ngeth0 p--s- 77414 19 77414 15656 0 tcpdump
689 ngeth0 p--s- 95851 19 95851 0 0 tcpdump

That's excellent, but Bpfstat still reports dropping packets. Even those 19 dropped packets mean I will not be able to reconstruct the FTP data session exactly using this equipment.

None of this hardware is directly comparable, but you can see how a progression from a slower CPU, less RAM, and less respected NICs to a faster CPU, more RAM, and Intel NICs results in better performance. All of these systems used 32 bit, 33 MHz PCI buses for add-on cards, so I would expect PCI-X or PCI Express to improve performance.
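For context, the theoretical ceiling of that shared legacy PCI bus works out as follows, and every card on the bus competes for it:

32 bits x 33 MHz = 1056 Mbps, or roughly 133 MB/s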

If you're thinking that the bonded ngeth0 setup might have degraded performance compared to sniffing a single physical interface, that wasn't the case. Here is Bpfstat output for one of the transfers, when watching only the interface carrying data from FTP server to client.

shuttle:/root# bpfstat -i 5 -I em1
pid netif flags recv drop match sblen hblen command
803 em1 p--s- 0 0 0 0 0 tcpdump
803 em1 p--s- 14106 0 14106 16852 0 tcpdump
803 em1 p--s- 49470 35 49470 27576 0 tcpdump
803 em1 p--s- 63341 35 63341 0 0 tcpdump

It appears to have dropped more traffic (35 packets versus 19) than the bonded ngeth0 setup. I had similar results on the other boxes.
