Thursday, 28 September 2006

Preview: Hunting Security Bugs

Yesterday I received a copy of Hunting Security Bugs. One of this book's authors is Tom Gallagher, who posted thoughts on Microsoft's security initiatives.

This looks like a great book, especially as a companion to The Security Development Lifecycle, also by Microsoft authors.

A third book, The Practical Guide to Defect Prevention, arrives in the spring. This may be too developer-oriented for my needs, but I might take a look at it.

I am glad to see Microsoft sharing the knowledge it has gained through its ongoing security program.

You can look at my Amazon.com Wish List to track books I plan to read but do not yet own. My reading page shows books I own that I plan to read. The reading page also links to my recommended books lists.

Security Scruples Poll

Dark Reading is conducting a Security Scruples Poll. Some of the preliminary results are disturbing. I'll withhold commentary until the poll closes and the results are disclosed. Please consider taking the poll. It asks some interesting questions and takes about five minutes.

Wednesday, 27 September 2006

Review of Apache Security Books Posted

Amazon.com just posted my two reviews of books about Apache. The first is Apache Security by Ivan Ristic. Here is a link to the five-star review.
The second is Preventing Web Attacks with Apache by Ryan Barnett. Here is a link to the four-star review.

Both reviews share the same introduction.

I recently received copies of Apache Security (AS) by Ivan Ristic and Preventing Web Attacks with Apache (PWAWA) by Ryan Barnett. I read AS first, then PWAWA. Both are excellent books, but I expect potential readers want to know which is best for them. The following is a radical simplification, and I could honestly recommend readers buy either (or both) books. If you are more concerned with a methodical, comprehensive approach to securing Apache, choose AS. If you want more information on offensive aspects of Web security, choose PWAWA.

These are my 39th and 40th reviews of 2006. I should break my previous high reading mark of 42 books, accomplished in 2001.

Congratulations to Ivan for the acquisition of Thinking Stone by Breach Security.

Monday, 25 September 2006

Symantec Internet Security Threat Report Volume X

Symantec has posted (for free, no registration!) the latest Internet Security Threat Report. I'm very pleased to see that such a high-profile report uses threat and vulnerability terms properly, and features details on the methodology used to produce the report. Here's some of the Executive Summary.

In contrast to previously observed widespread, network-based attacks, attackers today tend to be more focused, often targeting client-side applications... The current threat landscape is populated by lower profile, more targeted attacks, attacks that propagate at a slower rate in order to avoid detection and thereby increase the likelihood of successful compromise.

Instead of exploiting vulnerabilities in servers, as traditional attacks often did, these threats tend to exploit vulnerabilities in client-side applications that require a degree of user interaction, such as word processing and spreadsheet programs.

A number of these have been zero-day vulnerabilities. These types of threats also attempt to escape detection in order to remain on host systems for longer periods so that they can steal information or provide remote access.


Do you see how important it is to differentiate between threats and vulnerabilities when the terms are used in the same sentence? Bravo Symantec.

This volume of the Internet Security Threat Report will offer an analysis and discussion of threat activity that took place between January 1 and June 30, 2006. This brief summary will offer a synopsis of the data and trends discussed in the main report. Symantec will continue to monitor and assess threat activity in order to best prepare consumers and enterprises for the complex Internet security issues to come.

How does Symantec "monitor and assess threat activity"? By watching, of course.

The Symantec™ Global Intelligence Network comprehensively tracks attack activity across the entire Internet. The Global Intelligence Network, which includes the Symantec DeepSight™ Threat Management System and Symantec™ Managed Security Services, consists of over 40,000 sensors monitoring network activity in over 180 countries. As well, Symantec gathers malicious code data along with spyware and adware reports from over 120 million client, server, and gateway systems that have deployed Symantec’s antivirus products.

They're not using counts of vulnerabilities announced on mailing lists. They're watching exploitation of their customer base.

Their Vulnerability Trend Highlights are fascinating:

  • Symantec documented 2,249 new vulnerabilities, up 18% over the second half of
    2005. This is the highest number ever recorded for a six-month period.

  • Web application vulnerabilities made up 69% of all vulnerabilities this period.

  • Mozilla browsers had the most vulnerabilities, 47, compared to 38 in Microsoft Internet Explorer.

  • In the first six months of 2006, 80% of vulnerabilities were considered easily exploitable, up from 79%.

  • Seventy-eight percent of easily exploitable vulnerabilities affected Web applications.

  • The window of exposure for enterprise vulnerabilities was 28 days.

  • Internet Explorer had an average window of exposure of nine days, the largest of any Web browser. Apple Safari averaged five days, followed by Opera with two days and Mozilla with one day.

  • In the first half of 2006, Sun operating systems had the highest average patch development time, with 89 days, followed by Hewlett Packard with 53 days, Apple with 37 days and Microsoft and Red Hat with 13 days.


I think it's interesting that Mozilla had more vulnerabilities than Internet Explorer, but a far smaller window of exposure.

I recommend reading the whole report, or at least the executive summary.

Review of The TCP/IP Guide Posted

Amazon.com just posted my four-star review of The TCP/IP Guide. From the review:

Right away I must state that I did not read "The TCP/IP Guide" (TTG) cover-to-cover. I doubt anyone will, which raises interesting issues. This review is based on the sections I did read and my comparisons with other protocol books.

Protocol books should be divided into two eras. The first is the "Stevens era" meaning those written around the time Richard Stevens' "TCP/IP Illustrated, Vol 1: The Protocols" was published. For six years (1994-2000) Stevens' book was clearly the best protocol book, and it taught legions of networking pros TCP/IP. The second is the "modern era," beginning in 2000 and continuing to today. TTG fits in this group.

I question the approach taken by TTG. The book contains extremely basic information (what is networking, why use layers, what is a protocol, etc.) and extremely obscure information (PPP Link Control Protocol Frame Types and Fields, SNMPv2 PDU Error Status Field Values, Interpretation of Standard Telnet NVT ASCII Control Codes, etc.). If TTG were an introductory book, it wouldn't need the obscure material. If TTG were a reference, it wouldn't need the introductory material.


At 1616 pages and nearly 5 pounds, we should be dropping these books out of B-2s!

Saturday, 23 September 2006

Net Optics Think Tank Tuesday in Fairfax, VA

Don't forget to attend the free Net Optics Think Tank on Tuesday, 26 September 2006 in Fairfax, VA. It looks like I will be speaking during lunch from 1215 to 1315. Please register. I expect to see a lot of cool Net Optics gear on display, along with insights from those who make products for enterprise network instrumentation.

Throughput Testing Through a Bridge

In my earlier posts I've discussed throughput testing. Now I'm going to introduce an inline system as a bridge. You could imagine that this system might be a firewall, or run Snort in inline mode. For the purposes of this post, however, we're just going to see what effect the bridge has on throughput between a client and server.

This is the new system. It's called cel600, and it's running the same GENERIC.POLLING kernel mentioned earlier.

FreeBSD 6.1-RELEASE-p6 #0: Sun Sep 17 17:09:24 EDT 2006
root@kbld.taosecurity.com:/usr/obj/usr/src/sys/GENERIC.POLLING
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel Celeron (598.19-MHz 686-class CPU)
Origin = "GenuineIntel" Id = 0x686 Stepping = 6
Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
real memory = 401260544 (382 MB)
avail memory = 383201280 (365 MB)

This system has two dual-port NICs. em0 and em1 are Gigabit fiber, and em2 and em3 are Gigabit copper.

cel600:/root# ifconfig em0
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=48<VLAN_MTU,POLLING>
inet6 fe80::204:23ff:feb1:7f22%em0 prefixlen 64 scopeid 0x1
ether 00:04:23:b1:7f:22
media: Ethernet autoselect (1000baseSX <full-duplex>)
status: active
cel600:/root# ifconfig em1
em1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=48<VLAN_MTU,POLLING>
inet6 fe80::204:23ff:feb1:7f23%em1 prefixlen 64 scopeid 0x2
ether 00:04:23:b1:7f:23
media: Ethernet autoselect (1000baseSX <full-duplex>)
status: active
cel600:/root# ifconfig em2
em2: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=48<VLAN_MTU,POLLING>
inet6 fe80::204:23ff:fec5:4e80%em2 prefixlen 64 scopeid 0x3
ether 00:04:23:c5:4e:80
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active
cel600:/root# ifconfig em3
em3: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=48<VLAN_MTU,POLLING>
inet6 fe80::204:23ff:fec5:4e81%em3 prefixlen 64 scopeid 0x4
ether 00:04:23:c5:4e:81
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active

I configure them in /etc/rc.conf this way:

ifconfig_em0="polling up"
ifconfig_em1="polling up"
ifconfig_em2="polling up"
ifconfig_em3="polling up"
cloned_interfaces="bridge0 bridge1"
ifconfig_bridge0="addm em0 addm em1 monitor up"
ifconfig_bridge1="addm em2 addm em3 monitor up"
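
For a quick test you don't need a reboot to apply this; if_bridge can also be configured interactively. Here is a minimal sketch of the equivalent commands (no output shown, and the monitor flag from rc.conf is omitted):

# create the bridge devices and add the ports as members
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm em1 up
ifconfig bridge1 create
ifconfig bridge1 addm em2 addm em3 up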

The end result is two bridge interfaces.

cel600:/root# ifconfig bridge0
bridge0: flags=48043<UP,BROADCAST,RUNNING,MULTICAST,MONITOR> mtu 1500
ether ac:de:48:e5:e7:69
priority 32768 hellotime 2 fwddelay 15 maxage 20
member: em1 flags=3<LEARNING,DISCOVER>
member: em0 flags=3<LEARNING,DISCOVER>
cel600:/root# ifconfig bridge1
bridge1: flags=48043<UP,BROADCAST,RUNNING,MULTICAST,MONITOR> mtu 1500
ether ac:de:48:0c:26:66
priority 32768 hellotime 2 fwddelay 15 maxage 20
member: em3 flags=3<LEARNING,DISCOVER>
member: em2 flags=3<LEARNING,DISCOVER>

Notice these two pseudo-interfaces are both in MONITOR mode. That was set automatically.

With the bridge in place, I can conduct throughput tests.

Here is the client's view.

asa633:/root# iperf -c 172.16.6.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.6.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 57355 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 55.9 MBytes 93.9 Mbits/sec
[ 3] 5.0-10.0 sec 51.6 MBytes 86.6 Mbits/sec
[ 3] 10.0-15.0 sec 72.3 MBytes 121 Mbits/sec
[ 3] 15.0-20.0 sec 54.6 MBytes 91.6 Mbits/sec
[ 3] 20.0-25.0 sec 61.4 MBytes 103 Mbits/sec
[ 3] 25.0-30.0 sec 75.4 MBytes 127 Mbits/sec
[ 3] 30.0-35.0 sec 60.2 MBytes 101 Mbits/sec
[ 3] 35.0-40.0 sec 47.8 MBytes 80.2 Mbits/sec
[ 3] 40.0-45.0 sec 74.7 MBytes 125 Mbits/sec
[ 3] 45.0-50.0 sec 59.0 MBytes 99.0 Mbits/sec
[ 3] 50.0-55.0 sec 54.0 MBytes 90.6 Mbits/sec
[ 3] 55.0-60.0 sec 76.8 MBytes 129 Mbits/sec
[ 3] 0.0-60.0 sec 744 MBytes 104 Mbits/sec

Here is the server's view.

poweredge:/root# iperf -s -B 172.16.6.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 57355
[ 4] 0.0-60.0 sec 744 MBytes 104 Mbits/sec

Compared to the straight-through tests, you can see the effect on throughput caused by the bridge.

[ 4] 0.0-60.0 sec 1.19 GBytes 170 Mbits/sec

Of interest during the test is the interrupt count on the bridge.

last pid: 728; load averages: 0.00, 0.09, 0.06 up 0+00:06:36 17:58:40
22 processes: 1 running, 21 sleeping
CPU states: 0.4% user, 0.0% nice, 0.4% system, 17.1% interrupt, 82.1% idle
Mem: 7572K Active, 4776K Inact, 16M Wired, 8912K Buf, 339M Free
Swap: 768M Total, 768M Free

Let's try the UDP test. First, the client view.

asa633:/root# iperf -c 172.16.6.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.6.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 51356 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 169 MBytes 284 Mbits/sec
[ 3] 5.0-10.0 sec 169 MBytes 284 Mbits/sec
[ 3] 10.0-15.0 sec 171 MBytes 287 Mbits/sec
[ 3] 15.0-20.0 sec 171 MBytes 287 Mbits/sec
[ 3] 20.0-25.0 sec 171 MBytes 287 Mbits/sec
[ 3] 25.0-30.0 sec 171 MBytes 287 Mbits/sec
[ 3] 30.0-35.0 sec 171 MBytes 287 Mbits/sec
[ 3] 35.0-40.0 sec 172 MBytes 288 Mbits/sec
[ 3] 40.0-45.0 sec 172 MBytes 288 Mbits/sec
[ 3] 45.0-50.0 sec 172 MBytes 288 Mbits/sec
[ 3] 50.0-55.0 sec 172 MBytes 288 Mbits/sec
[ 3] 0.0-60.0 sec 2.00 GBytes 287 Mbits/sec
[ 3] Sent 1463703 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 1.93 GBytes 276 Mbits/sec 0.014 ms 53386/1463702 (3.6%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Now the server view.

poweredge:/root# iperf -s -u -B 172.16.6.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.6.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 51356
[ 3] 0.0-60.0 sec 1.93 GBytes 276 Mbits/sec 0.014 ms 53386/1463702 (3.6%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Here's the result from the straight-through test.

[ 3] 0.0-60.0 sec 1.94 GBytes 277 Mbits/sec 0.056 ms 62312/1478219 (4.2%)

The results are almost identical.

Here is the bridge's interrupt count as shown in a top excerpt.

last pid: 751; load averages: 0.00, 0.03, 0.04 up 0+00:10:20 18:02:24
22 processes: 1 running, 21 sleeping
CPU states: 0.0% user, 0.0% nice, 0.4% system, 19.8% interrupt, 79.8% idle
Mem: 7564K Active, 4788K Inact, 16M Wired, 8928K Buf, 339M Free
Swap: 768M Total, 768M Free

With the Gigabit fiber tests done, let's look at Gigabit copper.

First, a TCP test as seen by the client.

asa633:/root# iperf -c 172.16.7.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.7.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 58824 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 76.3 MBytes 128 Mbits/sec
[ 3] 5.0-10.0 sec 76.3 MBytes 128 Mbits/sec
[ 3] 10.0-15.0 sec 76.8 MBytes 129 Mbits/sec
[ 3] 15.0-20.0 sec 76.6 MBytes 129 Mbits/sec
[ 3] 20.0-25.0 sec 76.8 MBytes 129 Mbits/sec
[ 3] 25.0-30.0 sec 75.4 MBytes 127 Mbits/sec
[ 3] 30.0-35.0 sec 76.3 MBytes 128 Mbits/sec
[ 3] 35.0-40.0 sec 76.1 MBytes 128 Mbits/sec
[ 3] 40.0-45.0 sec 76.5 MBytes 128 Mbits/sec
[ 3] 45.0-50.0 sec 75.4 MBytes 126 Mbits/sec
[ 3] 50.0-55.0 sec 76.7 MBytes 129 Mbits/sec
[ 3] 55.0-60.0 sec 76.4 MBytes 128 Mbits/sec
[ 3] 0.0-60.0 sec 916 MBytes 128 Mbits/sec

Here is the server's view.

poweredge:/root# iperf -s -B 172.16.7.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.7.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 58824
[ 4] 0.0-60.0 sec 916 MBytes 128 Mbits/sec

That is better than the result for fiber from above.

[ 4] 0.0-60.0 sec 744 MBytes 104 Mbits/sec

It's not as good as the result for straight-through copper.

[ 4] 0.0-60.0 sec 1.16 GBytes 166 Mbits/sec

The bridge interrupt count seemed slightly lower than during the fiber TCP tests.

last pid: 754; load averages: 0.00, 0.01, 0.02 up 0+00:13:48 18:05:52
22 processes: 1 running, 21 sleeping
CPU states: 0.0% user, 0.0% nice, 0.4% system, 16.7% interrupt, 82.9% idle
Mem: 7560K Active, 4792K Inact, 16M Wired, 8928K Buf, 339M Free
Swap: 768M Total, 768M Free

Finally, UDP copper tests. Here is the client view.

asa633:/root# iperf -c 172.16.7.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.7.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 62131 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 129 MBytes 217 Mbits/sec
[ 3] 5.0-10.0 sec 129 MBytes 217 Mbits/sec
[ 3] 10.0-15.0 sec 129 MBytes 217 Mbits/sec
[ 3] 15.0-20.0 sec 129 MBytes 217 Mbits/sec
[ 3] 20.0-25.0 sec 129 MBytes 217 Mbits/sec
[ 3] 25.0-30.0 sec 129 MBytes 216 Mbits/sec
[ 3] 30.0-35.0 sec 129 MBytes 216 Mbits/sec
[ 3] 35.0-40.0 sec 129 MBytes 216 Mbits/sec
[ 3] 40.0-45.0 sec 129 MBytes 216 Mbits/sec
[ 3] 45.0-50.0 sec 129 MBytes 216 Mbits/sec
[ 3] 50.0-55.0 sec 129 MBytes 216 Mbits/sec
[ 3] 0.0-60.0 sec 1.51 GBytes 216 Mbits/sec
[ 3] Sent 1103828 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 1.46 GBytes 209 Mbits/sec 0.047 ms 35057/1103827 (3.2%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Here is the server view.

poweredge:/root# iperf -s -u -B 172.16.7.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.7.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 62131
[ 3] 0.0-60.0 sec 1.46 GBytes 209 Mbits/sec 0.047 ms 35057/1103827 (3.2%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Let's compare that to the fiber UDP test from above.

[ 3] 0.0-60.0 sec 1.93 GBytes 276 Mbits/sec 0.014 ms 53386/1463702 (3.6%)

This time, the results are much worse than the UDP over fiber results.

When I tested UDP over crossover copper, this was the result.

[ 3] 0.0-60.0 sec 1.86 GBytes 267 Mbits/sec 0.024 ms 40962/1401730 (2.9%)

The top excerpt is about the same as the fiber UDP test.

last pid: 754; load averages: 0.01, 0.01, 0.01 up 0+00:16:21 18:08:25
22 processes: 1 running, 21 sleeping
CPU states: 0.0% user, 0.0% nice, 0.0% system, 17.1% interrupt, 82.9% idle
Mem: 7564K Active, 4788K Inact, 16M Wired, 8928K Buf, 339M Free
Swap: 768M Total, 768M Free

It's not really feasible to draw solid conclusions from these tests. They're basically good for getting a ballpark feel for the capabilities of a given architecture; you need to repeat them multiple times to gain confidence in the results.

If you want built-in repeatability and confidence testing, try Netperf.
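
As a sketch of what that looks like (assuming netperf and its netserver companion are installed on both hosts, e.g. from ports), a TCP test with built-in confidence intervals might be run like this:

# on poweredge: start the netperf server
netserver
# on asa633: 60-second TCP stream test; iterate between 3 and 10 times
# until results fall within a 5% interval at 99% confidence
netperf -H 172.16.6.2 -l 60 -t TCP_STREAM -i 10,3 -I 99,5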

With these results, however, I have some idea of what I can expect from this particular hardware setup, namely a bridge placed between a client sending data and a server receiving it.

  • TCP over fiber: about 104 Mbps

  • UDP over fiber: about 276 Mbps

  • TCP over copper: about 128 Mbps

  • UDP over copper: about 209 Mbps


Rounding down and acting conservatively, I would estimate this setup could handle somewhere around 100 Mbps (aggregate) over fiber and around 125 Mbps over copper. Note this says nothing about any software running on the bridge and its ability to do whatever function it is designed to perform. This is just a throughput estimate.

In my next related posts I'll introduce bypass switches and see how they influence this process.

I'll also rework the configuration into straight-through, bridged, and switched modes to test latency using ping.
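
A simple latency test along these lines should do for each mode; the 1472-byte payload (filling a 1500-byte packet) is my own choice, not something tested above:

# send 100 echo requests with 1472 bytes of data, then compare
# the min/avg/max round-trip times across the three configurations
ping -c 100 -s 1472 172.16.6.2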

FreeBSD Device Polling Results for Gigabit Copper

In my post FreeBSD Device Polling I ran my tests over Gigabit fiber connections. I thought I would repeat the tests for Gigabit copper, connected by normal straight-through cables. (One benefit of Gigabit copper Ethernet NICs is that there's no need for crossover cables.)

Although I booted my two test boxes, asa633 and poweredge, with kernels offering polling, neither interface had polling enabled by default. This is asa633's NIC:

em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::20e:cff:feba:e726%em1 prefixlen 64 scopeid 0x4
inet 172.16.7.1 netmask 0xffffff00 broadcast 172.16.7.255
ether 00:0e:0c:ba:e7:26
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active

This is poweredge's NIC.

em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::207:e9ff:fe11:a0a0%em1 prefixlen 64 scopeid 0x4
inet 172.16.7.2 netmask 0xffffff00 broadcast 172.16.7.255
ether 00:07:e9:11:a0:a0
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active

First I ran unidirectional TCP tests, from asa633 to poweredge, without polling.

asa633:/root# iperf -c 172.16.7.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.7.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 58672 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 90.2 MBytes 151 Mbits/sec
[ 3] 5.0-10.0 sec 91.1 MBytes 153 Mbits/sec
[ 3] 10.0-15.0 sec 90.0 MBytes 151 Mbits/sec
[ 3] 15.0-20.0 sec 91.2 MBytes 153 Mbits/sec
[ 3] 20.0-25.0 sec 89.8 MBytes 151 Mbits/sec
[ 3] 25.0-30.0 sec 90.9 MBytes 153 Mbits/sec
[ 3] 30.0-35.0 sec 91.7 MBytes 154 Mbits/sec
[ 3] 35.0-40.0 sec 92.0 MBytes 154 Mbits/sec
[ 3] 40.0-45.0 sec 89.9 MBytes 151 Mbits/sec
[ 3] 45.0-50.0 sec 90.1 MBytes 151 Mbits/sec
[ 3] 50.0-55.0 sec 90.4 MBytes 152 Mbits/sec
[ 3] 55.0-60.0 sec 91.0 MBytes 153 Mbits/sec
[ 3] 0.0-60.0 sec 1.06 GBytes 152 Mbits/sec

Here is what the server saw.

poweredge:/root# iperf -s -B 172.16.7.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.7.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 58672
[ 4] 0.0-60.0 sec 1.06 GBytes 152 Mbits/sec

Interrupt levels for both systems were similar to those seen during the Gigabit fiber tests.

Here is the change with polling enabled via 'ifconfig em1 polling'.
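
Before rerunning the test, a quick way to confirm polling took effect is to check the interface's options field, which should now list POLLING (a sketch):

# POLLING should appear alongside the other interface capabilities
ifconfig em1 | grep options

Now, the client's view.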

asa633:/root# iperf -c 172.16.7.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.7.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 52789 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 79.2 MBytes 133 Mbits/sec
[ 3] 5.0-10.0 sec 76.3 MBytes 128 Mbits/sec
[ 3] 10.0-15.0 sec 80.4 MBytes 135 Mbits/sec
[ 3] 15.0-20.0 sec 123 MBytes 207 Mbits/sec
[ 3] 20.0-25.0 sec 126 MBytes 212 Mbits/sec
[ 3] 25.0-30.0 sec 110 MBytes 185 Mbits/sec
[ 3] 30.0-35.0 sec 89.1 MBytes 149 Mbits/sec
[ 3] 35.0-40.0 sec 77.0 MBytes 129 Mbits/sec
[ 3] 40.0-45.0 sec 76.8 MBytes 129 Mbits/sec
[ 3] 45.0-50.0 sec 103 MBytes 172 Mbits/sec
[ 3] 50.0-55.0 sec 128 MBytes 215 Mbits/sec
[ 3] 55.0-60.0 sec 120 MBytes 201 Mbits/sec
[ 3] 0.0-60.0 sec 1.16 GBytes 166 Mbits/sec

Now the server.

poweredge:/root# iperf -s -B 172.16.7.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.7.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 52789
[ 4] 0.0-60.0 sec 1.16 GBytes 166 Mbits/sec

Polling didn't improve the situation much for TCP.

Here are the results for unidirectional UDP tests, without polling.

This is the client.

asa633:/root# iperf -c 172.16.7.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.7.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 60193 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 129 MBytes 217 Mbits/sec
[ 3] 5.0-10.0 sec 129 MBytes 217 Mbits/sec
[ 3] 10.0-15.0 sec 129 MBytes 217 Mbits/sec
[ 3] 15.0-20.0 sec 127 MBytes 212 Mbits/sec
[ 3] 20.0-25.0 sec 129 MBytes 217 Mbits/sec
[ 3] 25.0-30.0 sec 129 MBytes 217 Mbits/sec
[ 3] 30.0-35.0 sec 129 MBytes 217 Mbits/sec
[ 3] 35.0-40.0 sec 129 MBytes 217 Mbits/sec
[ 3] 40.0-45.0 sec 129 MBytes 217 Mbits/sec
[ 3] 45.0-50.0 sec 129 MBytes 216 Mbits/sec
[ 3] 50.0-55.0 sec 129 MBytes 216 Mbits/sec
[ 3] 0.0-60.0 sec 1.51 GBytes 216 Mbits/sec
[ 3] Sent 1102470 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 787 MBytes 110 Mbits/sec 0.042 ms 541153/1102469 (49%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Notice the huge drops. Here is the server.

poweredge:/root# iperf -s -u -B 172.16.7.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.7.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 60193
[ 3] 0.0-60.0 sec 787 MBytes 110 Mbits/sec 0.042 ms 541153/1102469 (49%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Here are results with polling enabled.

The client:

asa633:/root# iperf -c 172.16.7.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.7.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 53387 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 163 MBytes 274 Mbits/sec
[ 3] 5.0-10.0 sec 164 MBytes 275 Mbits/sec
[ 3] 10.0-15.0 sec 164 MBytes 275 Mbits/sec
[ 3] 15.0-20.0 sec 164 MBytes 275 Mbits/sec
[ 3] 20.0-25.0 sec 164 MBytes 275 Mbits/sec
[ 3] 25.0-30.0 sec 164 MBytes 275 Mbits/sec
[ 3] 30.0-35.0 sec 163 MBytes 274 Mbits/sec
[ 3] 35.0-40.0 sec 164 MBytes 275 Mbits/sec
[ 3] 40.0-45.0 sec 164 MBytes 275 Mbits/sec
[ 3] 45.0-50.0 sec 164 MBytes 275 Mbits/sec
[ 3] 50.0-55.0 sec 164 MBytes 275 Mbits/sec
[ 3] 0.0-60.0 sec 1.92 GBytes 275 Mbits/sec
[ 3] Sent 1401731 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 1.86 GBytes 267 Mbits/sec 0.023 ms 40962/1401730 (2.9%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Now the server.

poweredge:/root# iperf -s -u -B 172.16.7.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.7.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 53387
[ 3] 0.0-60.0 sec 1.86 GBytes 267 Mbits/sec 0.024 ms 40962/1401730 (2.9%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

As in the fiber tests, polling really helps with UDP performance.

Given that polling can be enabled and disabled at will via ifconfig, I would like to see 'options DEVICE_POLLING' added to the GENERIC FreeBSD kernel.
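
For reference, toggling it per interface is a one-liner in each direction (a sketch):

# enable polling on an interface
ifconfig em1 polling
# turn it back off
ifconfig em1 -polling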

Friday, 22 September 2006

The ZERT Evolution

In January during the WMF fiasco, I wrote The Power of Open Source. What we're now reading in Zero-Day Response Team Launches with Emergency IE Patch is the latest evolution of this idea. The Zeroday Emergency Response Team isn't a bunch of amateurs. These are some of the highest skilled security researchers and practitioners in the public arena. They are stepping up to meet a need not fulfilled by vendors, namely rapid response to security problems.

Why is this the case? Customers running closed operating systems and applications are stuck. They can't fix problems themselves, so they rely on their vendor. In fact, they are paying their vendor to perform the fixing service. To fund development of an alternative fix would be like paying for a fix twice.

ZERT is demonstrating that this model is broken. They are trying to respond as fast as possible to attacks. Because no one can be "ahead of the threat," reaction time is often key. ZERT can act faster than the vendor because ZERT operates in a freer environment:

Please keep in mind while the group performs extensive testing of any patches before releasing them, it is impossible for us to test our patches with each possible system configuration and in each usage scenario. We validate patches to the best of our ability, noting the environments in which the tests were performed and the test results.

So what shall it be? Wait and be owned, or turn to a third party? Perhaps we'll see a more rapid release of a use-at-your-own-risk patch from vendors, followed by a tested-for-stability patch. It's tough to believe that people without access to source code are developing fixes faster than the creators of the software!

Generating Multicast Traffic

If you're a protocol junkie like me, you probably enjoy investigating a variety of network traffic types. I don't encounter multicast traffic too often, so the following caught my eye.

I'm using Iperf for some simple testing, and I notice it has a multicast option. Here's how I used it.

In the following scenario, I have two hosts (cel433 and cel600) on the same segment. This is important because the router(s) in this test network are not configured to support multicast.

I set up cel433 as an Iperf server listening on multicast address 224.0.55.55.

cel433:/root# iperf -s -u -B 224.0.55.55 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 224.0.55.55
Joining multicast group 224.0.55.55
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)

Now I generate multicast traffic from cel600.

cel600:/root# iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 224.0.55.55, UDP port 5001
Sending 1470 byte datagrams
Setting multicast TTL to 32
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 10.1.10.3 port 51296 connected with 224.0.55.55 port 5001
[ 3] 0.0- 1.0 sec 129 KBytes 1.06 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec
[ 3] Sent 269 datagrams

Here is what cel433 sees:

------------------------------------------------------------
[ 3] local 224.0.55.55 port 5001 connected with 10.1.10.3 port 51296
[ 3] 0.0- 1.0 sec 128 KBytes 1.05 Mbits/sec 0.146 ms 0/ 89 (0%)
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec 0.100 ms 0/ 89 (0%)
[ 3] 2.0- 3.0 sec 128 KBytes 1.05 Mbits/sec 0.110 ms 0/ 89 (0%)
[ 3] 0.0- 3.0 sec 386 KBytes 1.05 Mbits/sec 0.098 ms 0/ 268 (0%)
[ 3] 0.0- 3.0 sec 1 datagrams received out-of-order

The traffic looks like this:

cel433:/root# tcpdump -n -i xl0 -s 1515 udp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on xl0, link-type EN10MB (Ethernet), capture size 1515 bytes
15:29:53.669508 IP 10.1.10.3.51296 > 224.0.55.55.5001: UDP, length 1470
15:29:53.680789 IP 10.1.10.3.51296 > 224.0.55.55.5001: UDP, length 1470
15:29:53.691934 IP 10.1.10.3.51296 > 224.0.55.55.5001: UDP, length 1470
...truncated...

This is a simple way to generate multicast traffic and ensure a member of the multicast group actually receives it.

Update: I forgot to show the IGMP messages one would see when starting a multicast listener.

This is the interface listening for multicast:

cel433:/root# ifconfig xl0
xl0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=9<RXCSUM,VLAN_MTU>
inet6 fe80::2c0:4fff:fe1c:102b%xl0 prefixlen 64 scopeid 0x6
inet 10.1.10.2 netmask 0xffffff00 broadcast 10.1.10.255
ether 00:c0:4f:1c:10:2b
media: Ethernet autoselect (100baseTX <full-duplex>)
status: active

Here are IGMP report and leave messages.

cel433:/root# tcpdump -nevv -i xl0 -s 1515 igmp
tcpdump: listening on xl0, link-type EN10MB (Ethernet), capture size 1515 bytes
06:28:40.887868 00:c0:4f:1c:10:2b > 01:00:5e:00:37:37, ethertype IPv4 (0x0800),
length 46: (tos 0x0, ttl 1, id 59915, offset 0, flags [none], proto: IGMP (2),
length: 32, options
( RA (148) len 4 )) 10.1.10.2 > 224.0.55.55: igmp v2 report 224.0.55.55

06:28:42.196233 00:c0:4f:1c:10:2b > 01:00:5e:00:00:02, ethertype IPv4 (0x0800),
length 46: (tos 0x0, ttl 1, id 59920, offset 0, flags [none], proto: IGMP (2),
length: 32, options
( RA (148) len 4 )) 10.1.10.2 > 224.0.0.2: igmp leave 224.0.55.55

I used the -e option to show the MAC addresses. Notice the destination MAC for these multicast packets.

06:31:21.467919 00:b0:d0:14:b2:11 > 01:00:5e:00:37:37, ethertype IPv4 (0x0800),
length 1512: (tos 0x0, ttl 32, id 1652, offset 0, flags [none], proto: UDP (17),
length: 1498)
10.1.10.3.58479 > 224.0.55.55.5001: [udp sum ok] UDP, length 1470

The 01:00:5e:00:37:37 MAC address is derived by placing the low-order 23 bits of the multicast IP address 224.0.55.55 into the low-order 23 bits of the IANA multicast MAC prefix 01:00:5e:00:00:00.
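
As a quick check of that mapping (a sketch in plain sh arithmetic): the last three octets of 224.0.55.55 are 0.55.55, the high bit of the first of those is masked off so only 23 bits remain, and the result is appended to 01:00:5e.

# 0 & 127 masks the top bit of the first carried octet; 55 decimal is 0x37
printf '01:00:5e:%02x:%02x:%02x\n' $((0 & 127)) 55 55

That prints 01:00:5e:00:37:37, matching the destination MAC above.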

FreeBSD Device Polling

Not all of us work with the latest, greatest hardware. If we use open source software, we often find ourselves running it on old hardware. I have a mix of equipment in my lab and I frequently see what I can do with it.

In this post I'd like to talk about some simple network performance measurement testing. Some of this is based on the book Network Performance Toolkit: Using Open Source Testing Tools. I don't presume that any of this is definitive, novel, or particularly helpful for all readers. I welcome constructive ideas for improvements.

For the purposes of this post, I'd like to get a sense of the network throughput between two hosts, asa633 and poweredge.

This is asa633's dmesg output:

FreeBSD 6.1-RELEASE-p6 #0: Wed Sep 20 20:02:56 EDT 2006
root@kbld.taosecurity.com:/usr/obj/usr/src/sys/GENERIC.SECURITY
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Intel Celeron (631.29-MHz 686-class CPU)
Origin = "GenuineIntel" Id = 0x686 Stepping = 6
Features=0x383f9ff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
real memory = 334233600 (318 MB)
avail memory = 317620224 (302 MB)

This is poweredge's dmesg output:

FreeBSD 6.1-RELEASE-p6 #0: Wed Sep 20 20:02:56 EDT 2006
root@kbld.taosecurity.com:/usr/obj/usr/src/sys/GENERIC.SECURITY
ACPI APIC Table: <DELL PE2300 >
Timecounter "i8254" frequency 1193182 Hz quality 0
CPU: Pentium III/Pentium III Xeon/Celeron (498.75-MHz 686-class CPU)
Origin = "GenuineIntel" Id = 0x673 Stepping = 3
Features=0x383fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
real memory = 536862720 (511 MB)
avail memory = 515993600 (492 MB)

Neither system has any tuning applied.

Each box has the following relevant interfaces.

asa633:/root# ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::204:23ff:feb1:64e2%em0 prefixlen 64 scopeid 0x3
inet 172.16.6.1 netmask 0xffffff00 broadcast 172.16.6.255
ether 00:04:23:b1:64:e2
media: Ethernet autoselect (1000baseSX <full-duplex>)
status: active
asa633:/root# ifconfig em1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::20e:cff:feba:e726%em1 prefixlen 64 scopeid 0x4
inet 172.16.7.1 netmask 0xffffff00 broadcast 172.16.7.255
ether 00:0e:0c:ba:e7:26
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active

poweredge:/root# ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::204:23ff:feab:964%em0 prefixlen 64 scopeid 0x2
inet 172.16.6.2 netmask 0xffffff00 broadcast 172.16.6.255
ether 00:04:23:ab:09:64
media: Ethernet autoselect (1000baseSX <full-duplex>)
status: active
poweredge:/root# ifconfig em1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=b<RXCSUM,TXCSUM,VLAN_MTU>
inet6 fe80::207:e9ff:fe11:a0a0%em1 prefixlen 64 scopeid 0x4
inet 172.16.7.2 netmask 0xffffff00 broadcast 172.16.7.255
ether 00:07:e9:11:a0:a0
media: Ethernet autoselect (1000baseTX <full-duplex>)
status: active

The 172.16.6.0/24 interfaces are connected directly via fiber. The 172.16.7.0/24 interfaces are connected directly via copper.

With this setup, let's use Iperf to transmit and receive traffic.

Poweredge runs the server, but let's show the client first.

asa633:/root# iperf -c 172.16.6.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.6.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 52453 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 82.1 MBytes 138 Mbits/sec
[ 3] 5.0-10.0 sec 83.4 MBytes 140 Mbits/sec
[ 3] 10.0-15.0 sec 83.6 MBytes 140 Mbits/sec
[ 3] 15.0-20.0 sec 83.6 MBytes 140 Mbits/sec
[ 3] 20.0-25.0 sec 83.5 MBytes 140 Mbits/sec
[ 3] 25.0-30.0 sec 84.2 MBytes 141 Mbits/sec
[ 3] 30.0-35.0 sec 85.4 MBytes 143 Mbits/sec
[ 3] 35.0-40.0 sec 85.7 MBytes 144 Mbits/sec
[ 3] 40.0-45.0 sec 86.8 MBytes 146 Mbits/sec
[ 3] 45.0-50.0 sec 88.8 MBytes 149 Mbits/sec
[ 3] 50.0-55.0 sec 90.6 MBytes 152 Mbits/sec
[ 3] 55.0-60.0 sec 91.6 MBytes 154 Mbits/sec
[ 3] 0.0-60.0 sec 1.01 GBytes 144 Mbits/sec

Here is the server's view.

poweredge:/root# iperf -s -B 172.16.6.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 52453
[ 4] 0.0-60.0 sec 1.01 GBytes 144 Mbits/sec

That's interesting. These boxes averaged 144 Mbps. While the tests were running I captured top output. First, the client asa633:

last pid: 840; load averages: 0.24, 0.10, 0.03 up 0+01:03:10 15:48:51
27 processes: 2 running, 25 sleeping
CPU states: 2.7% user, 0.0% nice, 47.1% system, 49.4% interrupt, 0.8% idle
Mem: 8876K Active, 5784K Inact, 17M Wired, 9040K Buf, 273M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
840 root 2 102 0 2768K 1696K RUN 0:04 0.00% iperf

Now the server, poweredge.

last pid: 716; load averages: 0.36, 0.12, 0.04 up 0+00:53:13 15:49:10
34 processes: 2 running, 32 sleeping
CPU states: 2.6% user, 0.0% nice, 39.0% system, 56.9% interrupt, 1.5% idle
Mem: 31M Active, 8768K Inact, 20M Wired, 12M Buf, 434M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
716 root 3 106 0 2956K 1792K RUN 0:07 0.00% iperf

Those seem like high interrupt counts. Before making changes to see if we can improve the situation, let's run Iperf in bidirectional mode. That sends traffic from the client to server and server to client simultaneously.

Here is the client's view.

asa633:/root# iperf -c 172.16.6.2 -d -t 60 -i 5
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 172.16.6.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 5] local 172.16.6.1 port 64827 connected with 172.16.6.2 port 5001
[ 4] local 172.16.6.1 port 5001 connected with 172.16.6.2 port 61729
[ 5] 0.0- 5.0 sec 33.8 MBytes 56.8 Mbits/sec
[ 4] 0.0- 5.0 sec 52.1 MBytes 87.5 Mbits/sec
[ 5] 5.0-10.0 sec 41.2 MBytes 69.2 Mbits/sec
[ 4] 5.0-10.0 sec 44.0 MBytes 73.8 Mbits/sec
[ 5] 10.0-15.0 sec 44.2 MBytes 74.2 Mbits/sec
[ 4] 10.0-15.0 sec 43.2 MBytes 72.5 Mbits/sec
[ 5] 15.0-20.0 sec 41.7 MBytes 70.0 Mbits/sec
[ 4] 15.0-20.0 sec 46.0 MBytes 77.1 Mbits/sec
[ 4] 20.0-25.0 sec 44.5 MBytes 74.7 Mbits/sec
[ 5] 20.0-25.0 sec 43.4 MBytes 72.8 Mbits/sec
[ 5] 25.0-30.0 sec 40.7 MBytes 68.3 Mbits/sec
[ 4] 25.0-30.0 sec 47.7 MBytes 80.0 Mbits/sec
[ 5] 30.0-35.0 sec 44.4 MBytes 74.6 Mbits/sec
[ 4] 30.0-35.0 sec 44.5 MBytes 74.7 Mbits/sec
[ 5] 35.0-40.0 sec 40.7 MBytes 68.3 Mbits/sec
[ 4] 35.0-40.0 sec 48.9 MBytes 82.1 Mbits/sec
[ 5] 40.0-45.0 sec 44.3 MBytes 74.3 Mbits/sec
[ 4] 40.0-45.0 sec 45.7 MBytes 76.6 Mbits/sec
[ 4] 45.0-50.0 sec 46.8 MBytes 78.5 Mbits/sec
[ 5] 45.0-50.0 sec 43.4 MBytes 72.8 Mbits/sec
[ 5] 50.0-55.0 sec 42.6 MBytes 71.6 Mbits/sec
[ 4] 50.0-55.0 sec 48.4 MBytes 81.2 Mbits/sec
[ 5] 55.0-60.0 sec 45.3 MBytes 75.9 Mbits/sec
[ 5] 0.0-60.0 sec 506 MBytes 70.7 Mbits/sec
[ 4] 55.0-60.0 sec 46.0 MBytes 77.2 Mbits/sec
[ 4] 0.0-60.0 sec 558 MBytes 78.0 Mbits/sec

Here is the server's view.

poweredge:/root# iperf -s -B 172.16.6.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
bind failed: Address already in use
[ 4] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 64827
------------------------------------------------------------
Client connecting to 172.16.6.1, TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 6] local 172.16.6.2 port 61729 connected with 172.16.6.1 port 5001
[ 6] 0.0-60.0 sec 558 MBytes 78.0 Mbits/sec
[ 4] 0.0-60.0 sec 506 MBytes 70.7 Mbits/sec

Throughput is about half of the previous result, which makes sense because we are sending data in both directions simultaneously.

Here is a snapshot of asa633's top output.

last pid: 868; load averages: 0.34, 0.16, 0.08 up 0+01:09:33 15:55:14
27 processes: 2 running, 25 sleeping
CPU states: 1.2% user, 0.0% nice, 43.0% system, 54.7% interrupt, 1.2% idle
Mem: 8916K Active, 5848K Inact, 18M Wired, 10M Buf, 272M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
852 root 2 101 0 3112K 1852K RUN 0:10 0.00% iperf

Here is poweredge.

last pid: 739; load averages: 0.49, 0.19, 0.10 up 0+00:59:47 15:55:44
34 processes: 2 running, 32 sleeping
CPU states: 1.9% user, 0.0% nice, 36.3% system, 61.8% interrupt, 0.0% idle
Mem: 31M Active, 8772K Inact, 20M Wired, 12M Buf, 434M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
722 root 3 103 0 3120K 1836K RUN 0:19 0.00% iperf
507 mysql 5 20 0 57280K 26256K kserel 0:07 0.00% mysqld

Again, high interrupts. Let's try a unidirectional UDP test.

asa633:/root# iperf -c 172.16.6.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.6.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 61919 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 131 MBytes 220 Mbits/sec
[ 3] 5.0-10.0 sec 132 MBytes 221 Mbits/sec
[ 3] 10.0-15.0 sec 131 MBytes 221 Mbits/sec
[ 3] 15.0-20.0 sec 131 MBytes 220 Mbits/sec
[ 3] 20.0-25.0 sec 131 MBytes 220 Mbits/sec
[ 3] 25.0-30.0 sec 131 MBytes 220 Mbits/sec
[ 3] 30.0-35.0 sec 131 MBytes 220 Mbits/sec
[ 3] 35.0-40.0 sec 132 MBytes 221 Mbits/sec
[ 3] 40.0-45.0 sec 132 MBytes 221 Mbits/sec
[ 3] 45.0-50.0 sec 132 MBytes 221 Mbits/sec
[ 3] 50.0-55.0 sec 132 MBytes 221 Mbits/sec
[ 3] 0.0-60.0 sec 1.54 GBytes 221 Mbits/sec
[ 3] Sent 1125481 datagrams
[ 3] Server Report:
[ 3] 0.0-60.3 sec 793 MBytes 110 Mbits/sec 15.711 ms 560027/1125479 (50%)
[ 3] 0.0-60.3 sec 1 datagrams received out-of-order

Here is the server's view.

poweredge:/root# iperf -s -u -B 172.16.6.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.6.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 61919
[ 3] 0.0-60.3 sec 793 MBytes 110 Mbits/sec 15.712 ms 560027/1125479 (50%)
[ 3] 0.0-60.3 sec 1 datagrams received out-of-order

Check out the interrupt levels. First, the client, which shows the iperf process working hard to generate packets.

last pid: 914; load averages: 0.64, 0.34, 0.18 up 0+01:20:43 16:06:24
27 processes: 2 running, 25 sleeping
CPU states: 5.4% user, 0.0% nice, 75.5% system, 19.1% interrupt, 0.0% idle
Mem: 8956K Active, 6288K Inact, 18M Wired, 10M Buf, 271M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
914 root 2 20 0 2764K 1752K ksesig 0:16 80.33% iperf

On the server, however, the interrupt level is high. Packets are being lost, as we saw in the server report earlier.

last pid: 767; load averages: 0.79, 0.42, 0.21 up 0+01:10:51 16:06:48
34 processes: 2 running, 32 sleeping
CPU states: 4.1% user, 0.0% nice, 35.2% system, 60.3% interrupt, 0.4% idle
Mem: 31M Active, 8776K Inact, 20M Wired, 12M Buf, 434M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
767 root 3 110 0 2944K 1760K RUN 0:17 0.00% iperf

Let's see if device polling improves any of these numbers.

Using the technique explained here, I create this kernel:

kbld:/root# cat /usr/src/sys/i386/conf/GENERIC.POLLING
include GENERIC
options DEVICE_POLLING
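
Building the kernel follows the standard FreeBSD process; here is a sketch of the commands on the build host (getting the resulting kernel onto asa633 and poweredge is a separate step not shown):

# compile the custom configuration; install it on the target with
# 'make installkernel KERNCONF=GENERIC.POLLING'
cd /usr/src
make buildkernel KERNCONF=GENERIC.POLLING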

Now I boot asa633 and poweredge using that kernel.

FreeBSD 6.1-RELEASE-p6 #0: Sun Sep 17 17:09:24 EDT 2006
root@kbld.taosecurity.com:/usr/obj/usr/src/sys/GENERIC.POLLING

Enabling polling gives access to a set of new sysctl knobs.

asa633:/root# sysctl -a | grep poll
kern.polling.burst: 5
kern.polling.burst_max: 150
kern.polling.each_burst: 5
kern.polling.idle_poll: 0
kern.polling.user_frac: 50
kern.polling.reg_frac: 20
kern.polling.short_ticks: 0
kern.polling.lost_polls: 0
kern.polling.pending_polls: 0
kern.polling.residual_burst: 0
kern.polling.handlers: 0
kern.polling.enable: 0
kern.polling.phase: 0
kern.polling.suspect: 0
kern.polling.stalled: 0
kern.polling.idlepoll_sleeping: 1
hw.nve_pollinterval: 0

You don't need to change the value of kern.polling.enable. In fact, doing so generates an error: "kern.polling.enable is deprecated. Use ifconfig(8)."

Instead, enable polling per interface with ifconfig.

asa633:/root# ifconfig em0 polling
asa633:/root# ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=4b<RXCSUM,TXCSUM,VLAN_MTU,POLLING>
inet6 fe80::204:23ff:feb1:64e2%em0 prefixlen 64 scopeid 0x3
inet 172.16.6.1 netmask 0xffffff00 broadcast 172.16.6.255
ether 00:04:23:b1:64:e2
media: Ethernet autoselect (1000baseSX <full-duplex>)
status: active

I enable polling on both boxes' em0 interfaces.
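
The command is the same on the second box; a sketch, along with the rc.conf form that would make it persistent across reboots:

# on poweredge
ifconfig em0 polling
# or, in /etc/rc.conf:
# ifconfig_em0="polling up"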

Here are the test results. First, the client.

asa633:/root# iperf -c 172.16.6.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.6.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 62829 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 90.5 MBytes 152 Mbits/sec
[ 3] 5.0-10.0 sec 128 MBytes 214 Mbits/sec
[ 3] 10.0-15.0 sec 125 MBytes 209 Mbits/sec
[ 3] 15.0-20.0 sec 105 MBytes 176 Mbits/sec
[ 3] 20.0-25.0 sec 83.7 MBytes 140 Mbits/sec
[ 3] 25.0-30.0 sec 76.7 MBytes 129 Mbits/sec
[ 3] 30.0-35.0 sec 78.1 MBytes 131 Mbits/sec
[ 3] 35.0-40.0 sec 121 MBytes 203 Mbits/sec
[ 3] 40.0-45.0 sec 126 MBytes 212 Mbits/sec
[ 3] 45.0-50.0 sec 115 MBytes 192 Mbits/sec
[ 3] 50.0-55.0 sec 91.9 MBytes 154 Mbits/sec
[ 3] 55.0-60.0 sec 77.9 MBytes 131 Mbits/sec
[ 3] 0.0-60.0 sec 1.19 GBytes 170 Mbits/sec

The server:

poweredge:/root# iperf -s -B 172.16.6.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 62829
[ 4] 0.0-60.0 sec 1.19 GBytes 170 Mbits/sec

Compare that to the previous test result without device polling.

[ 4] 0.0-60.0 sec 1.01 GBytes 144 Mbits/sec

We get better throughput here, but not the amazing improvement we'll see with UDP (later).

The interrupt counts are much better. Here's the client.

last pid: 693; load averages: 0.22, 0.07, 0.05 up 0+00:12:48 16:23:54
27 processes: 2 running, 25 sleeping
CPU states: 7.4% user, 0.0% nice, 59.1% system, 32.7% interrupt, 0.8% idle
Mem: 8928K Active, 5680K Inact, 17M Wired, 8928K Buf, 273M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
693 root 2 105 0 2768K 1684K RUN 0:08 0.00% iperf

Check out the server!

last pid: 633; load averages: 0.40, 0.16, 0.10 up 0+00:12:33 16:24:20
34 processes: 2 running, 32 sleeping
CPU states: 1.1% user, 0.0% nice, 27.0% system, 0.4% interrupt, 71.5% idle
Mem: 31M Active, 9168K Inact, 21M Wired, 13M Buf, 433M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
632 root 3 103 0 2956K 1824K RUN 0:12 0.00% iperf

That's amazing.

Let's try a dual test. The client:

asa633:/root# iperf -c 172.16.6.2 -d -t 60 -i 5
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 172.16.6.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 5] local 172.16.6.1 port 58192 connected with 172.16.6.2 port 5001
[ 4] local 172.16.6.1 port 5001 connected with 172.16.6.2 port 57738
[ 5] 0.0- 5.0 sec 54.8 MBytes 91.9 Mbits/sec
[ 4] 0.0- 5.0 sec 75.3 MBytes 126 Mbits/sec
[ 5] 5.0-10.0 sec 56.5 MBytes 94.8 Mbits/sec
[ 4] 5.0-10.0 sec 75.3 MBytes 126 Mbits/sec
[ 5] 10.0-15.0 sec 55.7 MBytes 93.4 Mbits/sec
[ 4] 10.0-15.0 sec 76.3 MBytes 128 Mbits/sec
[ 5] 15.0-20.0 sec 48.0 MBytes 80.5 Mbits/sec
[ 4] 15.0-20.0 sec 85.4 MBytes 143 Mbits/sec
[ 5] 20.0-25.0 sec 43.7 MBytes 73.3 Mbits/sec
[ 4] 20.0-25.0 sec 89.1 MBytes 150 Mbits/sec
[ 5] 25.0-30.0 sec 46.3 MBytes 77.7 Mbits/sec
[ 4] 25.0-30.0 sec 83.6 MBytes 140 Mbits/sec
[ 5] 30.0-35.0 sec 50.7 MBytes 85.1 Mbits/sec
[ 4] 30.0-35.0 sec 80.8 MBytes 136 Mbits/sec
[ 5] 35.0-40.0 sec 56.1 MBytes 94.2 Mbits/sec
[ 4] 35.0-40.0 sec 75.1 MBytes 126 Mbits/sec
[ 5] 40.0-45.0 sec 56.1 MBytes 94.2 Mbits/sec
[ 4] 40.0-45.0 sec 76.4 MBytes 128 Mbits/sec
[ 5] 45.0-50.0 sec 48.9 MBytes 82.0 Mbits/sec
[ 4] 45.0-50.0 sec 84.4 MBytes 142 Mbits/sec
[ 5] 50.0-55.0 sec 43.9 MBytes 73.6 Mbits/sec
[ 4] 50.0-55.0 sec 91.0 MBytes 153 Mbits/sec
[ 4] 0.0-60.0 sec 979 MBytes 137 Mbits/sec
[ 5] 55.0-60.0 sec 44.6 MBytes 74.8 Mbits/sec
[ 5] 0.0-60.0 sec 605 MBytes 84.6 Mbits/sec

The server:

poweredge:/root# iperf -s -B 172.16.6.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
bind failed: Address already in use
[ 4] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 58192
------------------------------------------------------------
Client connecting to 172.16.6.1, TCP port 5001
Binding to local address 172.16.6.2
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 6] local 172.16.6.2 port 57738 connected with 172.16.6.1 port 5001
[ 6] 0.0-60.0 sec 979 MBytes 137 Mbits/sec
[ 4] 0.0-60.0 sec 605 MBytes 84.6 Mbits/sec

Compare those results with their non-device polling counterparts.

[ 6] 0.0-60.0 sec 558 MBytes 78.0 Mbits/sec
[ 4] 0.0-60.0 sec 506 MBytes 70.7 Mbits/sec

Here's the client top excerpt:

last pid: 697; load averages: 0.21, 0.15, 0.09 up 0+00:16:02 16:27:08
27 processes: 2 running, 25 sleeping
CPU states: 5.1% user, 0.0% nice, 54.1% system, 40.9% interrupt, 0.0% idle
Mem: 9012K Active, 5688K Inact, 17M Wired, 8928K Buf, 272M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
697 root 2 108 0 3112K 1852K RUN 0:08 0.00% iperf

Server top excerpt:

last pid: 637; load averages: 0.43, 0.21, 0.12 up 0+00:15:44 16:27:31
34 processes: 2 running, 32 sleeping
CPU states: 5.2% user, 0.0% nice, 56.3% system, 6.7% interrupt, 31.7% idle
Mem: 31M Active, 9168K Inact, 21M Wired, 13M Buf, 433M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
637 root 3 105 0 3120K 1868K RUN 0:15 0.00% iperf



Let's try the UDP tests again with device polling enabled. Here's the client side.

asa633:/root# iperf -c 172.16.6.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.6.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.1 port 54045 connected with 172.16.6.2 port 5001
[ 3] 0.0- 5.0 sec 172 MBytes 289 Mbits/sec
[ 3] 5.0-10.0 sec 173 MBytes 290 Mbits/sec
[ 3] 10.0-15.0 sec 173 MBytes 290 Mbits/sec
[ 3] 15.0-20.0 sec 173 MBytes 290 Mbits/sec
[ 3] 20.0-25.0 sec 173 MBytes 290 Mbits/sec
[ 3] 25.0-30.0 sec 174 MBytes 291 Mbits/sec
[ 3] 30.0-35.0 sec 173 MBytes 291 Mbits/sec
[ 3] 35.0-40.0 sec 173 MBytes 291 Mbits/sec
[ 3] 40.0-45.0 sec 173 MBytes 291 Mbits/sec
[ 3] 45.0-50.0 sec 173 MBytes 291 Mbits/sec
[ 3] 50.0-55.0 sec 170 MBytes 284 Mbits/sec
[ 3] 0.0-60.0 sec 2.02 GBytes 290 Mbits/sec
[ 3] Sent 1478220 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 1.94 GBytes 277 Mbits/sec 0.056 ms 62312/1478219 (4.2%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Here's the server side.

poweredge:/root# iperf -s -u -B 172.16.6.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.6.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.6.2 port 5001 connected with 172.16.6.1 port 54045
[ 3] 0.0-60.0 sec 1.94 GBytes 277 Mbits/sec 0.056 ms 62312/1478219 (4.2%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Compare that to the results from the test without device polling.

[ 3] 0.0-60.3 sec 793 MBytes 110 Mbits/sec 15.712 ms 560027/1125479 (50%)

Because so few packets were dropped, throughput was much higher for UDP.

Here's the client top excerpt:

last pid: 705; load averages: 1.16, 0.59, 0.29 up 0+00:21:27 16:32:33
27 processes: 2 running, 25 sleeping
CPU states: 9.0% user, 0.0% nice, 80.5% system, 10.5% interrupt, 0.0% idle
Mem: 8952K Active, 5684K Inact, 17M Wired, 8928K Buf, 273M Free
Swap: 640M Total, 640M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
705 root 2 20 0 2764K 1752K ksesig 0:24 89.08% iperf

Server top excerpt:

last pid: 650; load averages: 0.48, 0.25, 0.16 up 0+00:21:01 16:32:48
34 processes: 2 running, 32 sleeping
CPU states: 7.9% user, 0.0% nice, 88.8% system, 0.4% interrupt, 3.0% idle
Mem: 31M Active, 9168K Inact, 21M Wired, 13M Buf, 433M Free
Swap: 1024M Total, 1024M Free

PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
650 root 3 120 0 2944K 1792K RUN 0:25 0.00% iperf

The incredibly low interrupt count explains why far fewer packets were dropped.

The only downside to device polling may be that your NIC might not support it. Check man 4 polling. This is one of the reasons I like to use Intel NICs -- they are bound to be supported and they perform well.
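
One related sanity check: after enabling polling on an interface, the kern.polling.handlers sysctl should count that interface's registered polling handler, so a value of zero suggests the driver isn't actually participating (a sketch):

# non-zero once at least one interface has polling enabled
sysctl kern.polling.handlers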

Nisley on Failure Analysis

Since I'm not a professional software developer, the only reason I pay attention to Dr. Dobb's Journal is Ed Nisley. I cited him earlier in Ed Nisley on Professional Engineering and Insights from Dr. Dobb's. The latest issue features Failure Analysis, Ed's look at NASA's documentation on mission failures. Ed writes:

[R]eviewing your projects to discover what you do worst can pay off, if only by discouraging dumb stunts.

What works for you also works for organizations, although few such reviews make it to the outside world. NASA, however, has done a remarkable job of analyzing its failures in public documents that can help the rest of us improve our techniques.


Documenting digital disasters has been a theme of this blog, although my request for readers to share their stories went largely unheeded. This is why I would like to see (and maybe create/lead) a National Digital Security Board.

Here are a few excerpts from Ed's article. I'm not going to summarize it; it takes about 5 minutes to read. These are the concepts I want to remember.

NASA defines the "root" cause of a mishap as [a]long a chain of events leading to a mishap, the first causal action or failure to act that could have been controlled systematically either by policy/practice/procedure or individual adherence to policy/practice/procedure.

The root causes of these mishaps (incorrect units, invalid inputs, inverted G-switches) seem obvious in retrospect. How could anyone have possibly made those mistakes?

In addition to the root cause, the MIB Reports also identify a "contributing" cause as [a] factor, event or circumstance which led directly or indirectly to the dominant root cause, or which contributed to the severity of the mishap.


The "chain of events" is symptomatic of disasters. A break in that chain prevents the disaster.

However, the MIB [Mishap Investigation Board] discovered that [t]he Software Interface Specification (SIS) was developed but not properly used in the small forces ground software development and testing. End-to-end testing ... did not appear to be accomplished. (emphasis added)

Lack of end-to-end testing appears to be a common theme with disasters.

Mars, the Death Planet for spacecraft, might not have been the right venue for NASA's then-new "Faster, Better, Cheaper" mission-planning process...

The Mars Program Independent Assessment Team (MPIAT) Report pointed out that overall project management decisions caused the cascading series of failed verifications and tests. One slide of their report showed the MCO and MPL project constraints: Schedule, cost, science requirements, and launch vehicle were established constraints and margins were inadequate. The only remaining variable was risk.

In this context, "Faster" means flying more missions, getting rid of "non-value-added" work, and reducing the cycle time by working smarter rather than harder. "Cheaper" has the obvious meaning: spending less to get the same result. The MCO [Mars Climate Orbiter] and MPL [Mars Polar Lander] missions together cost less than the previous (successful) Mars Pathfinder mission.

The term "Better" has an amorphous definition, which I believe is the fundamental problem. In general, management gets what it measures and, if something cannot be measured, management simply won't insist on getting it.

You can easily demonstrate that you're doing things faster, that you've eliminated "non-value-added" operations, and that you're spending less money than ever before. You cannot show that those decisions are better (or worse), because the only result that really matters is whether the mission actually returns science data. Regrettably, you can measure that aspect of "better" after the fact and, in space, there are no do-overs.
(emphasis added)

The last part is crucial. For digital security, the only result that really matters is whether you preserve confidentiality, integrity, and availability, usually by preventing and/or mitigating compromise. All the other stuff -- "percentage of systems certified and accredited," "percentage of systems with anti-virus applied," "percentage of systems with current patch levels" -- is absolutely secondary. In the Mars mission context, who cares if you build the spacecraft quicker, launch on time, and spend less money, if the vehicle crashes and the mission fails?

Thankfully NASA is taking steps to learn from its mistakes by investigating and documenting these disasters. It's time the digital security world learned something from these rocket scientists.

Using tap0 with Tcpreplay

This thread on the Wireshark mailing list brought up the issue of not being able to use Tcpreplay with the loopback interface on FreeBSD, e.g.:

orr:/root# tcpreplay -i lo0 /data/lpc/1.lpc
sending out lo0
processing file: /data/lpc/1.lpc
Unable to send packet: Address family not supported by protocol family

Here is an alternative: use tap0.

orr:/root# ifconfig tap0
ifconfig: interface tap0 does not exist
orr:/root# dd if=/dev/tap0 of=/dev/null bs=1500 &
[1] 9468
orr:/root# ifconfig tap0 up
orr:/root# ifconfig tap0
tap0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
inet6 fe80::2bd:1dff:fe2d:4d00%tap0 prefixlen 64 scopeid 0x5
ether 00:bd:1d:2d:4d:00
Opened by PID 9468
orr:/root# tcpreplay -i tap0 /data/lpc/1.lpc
sending out tap0
processing file: /data/lpc/1.lpc
^C
Actual: 71 packets (6860 bytes) sent in 6.15 seconds
Rated: 1115.0 bps, 0.01 Mbps/sec, 11.54 pps

In a second window, sniff with Tcpdump or whatever program you want:

orr:/root# tcpdump -n -i tap0 -s 1515
tcpdump: WARNING: tap0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap0, link-type EN10MB (Ethernet), capture size 1515 bytes
10:25:16.211443 00:0d:28:6c:f5:4f > 01:00:0c:cc:cc:cd sap aa ui/C
10:25:17.567563 IP 192.168.2.5.2882 > 10.20.2.19.22:
P 1293772727:1293772779(52) ack 478395919 win 64444
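
When finished, a cleanup sketch along these lines should work on my systems, although tap(4) behavior can vary by release: killing the background dd (job 1 above) closes /dev/tap0, after which the cloned interface can be destroyed.

orr:/root# kill %1
orr:/root# ifconfig tap0 destroy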

I discussed this in my first book and in my network security monitoring class.

Wednesday, 20 September 2006

Does SecureWorks-LURHQ Count as Consolidation?

I think it does. The managed network security services business is one arena where size is always a factor, and bigger is usually better. With more employees you have more analysts per shift. You have more customers, so you see more of the Internet. With enough customers your view of the Internet begins to resemble a statistically significant sample, from which you can make inferences about the health of the global network.

I thought this Dark Reading story on the merger (the new company will be called SecureWorks -- no more "how do I say LURHQ?") had an interesting quote:

But all of this doesn't mean IBM-ISS isn't on SecureWorks' radar: Prince says SecureWorks' main competitors on the enterprise side are Symantec, VeriSign, and "now IBM." On the commercial side, it will be local telcos and other service providers, he says.

Where is Counterpane? They must be desperate for a buyer. I expect to see more MSSPs combining to form Voltron as time progresses.

Multiple Kernels on FreeBSD

The following is a topic I would enjoy hearing more about. If you have helpful suggestions, please share them as a comment.

Two years ago I described my experiences with building a FreeBSD userland and kernel on one system and installing it on another. I found myself in the same situation recently, where I didn't want to sit around waiting for a couple of slow boxes to build their own custom kernels. I wanted to build the custom kernel on a fast box and use it on the slower boxes. I didn't want to replace the default kernel on any of the boxes. I wanted the new kernel(s) to be additional boot-time options.

This post gave me the answer I needed. Here's how I applied it.

I wanted to build a GENERIC-style kernel, but with security updates applied. First I installed cvsup-without-gui as a package. Next I created this /usr/local/etc/security-supfile file:

*default host=cvsup5.FreeBSD.org
*default base=/usr
*default prefix=/usr
*default release=cvs tag=RELENG_6_1
*default delete use-rel-suffix

*default compress

src-all

This would update my kernel sources and userland to the RELENG_6_1 security branch as of the time I ran cvsup (shown next).

kbld# cvsup -g -L 2 /usr/local/etc/security-supfile
Parsing supfile "/usr/local/etc/security-supfile"
Connecting to cvsup5.FreeBSD.org
Connected to cvsup5.FreeBSD.org
Server software version: SNAP_16_1h
Negotiating file attribute support
Exchanging collection information
Establishing multiplexed-mode data connection
Running
Updating collection src-all/cvs
Edit src/UPDATING
Add delta 1.416.2.22.2.3 2006.05.31.22.31.41 cperciva
Add delta 1.416.2.22.2.4 2006.06.14.15.59.27 cperciva
Add delta 1.416.2.22.2.5 2006.07.07.07.25.21 cperciva
Add delta 1.416.2.22.2.6 2006.08.23.22.02.25 cperciva
...edited...
Shutting down connection to server
Finished successfully

Next I created the file GENERIC.SECURITY in /usr/src/sys/i386/conf with the following:

include GENERIC

All that does is make GENERIC.SECURITY the same kernel as GENERIC, except built from the security-patched sources. At this point you might think I should just update the GENERIC kernel. I could do that, but as the later steps show, keeping a separately named kernel works best for my requirements.
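
One nice side effect of the include approach is that I could layer changes on top of GENERIC without copying the whole file. A hypothetical sketch, with example options I am not actually building here:

include GENERIC
options IPFIREWALL
options IPFIREWALL_VERBOSE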

Now I can build the kernel.

kbld# cd /usr/src
kbld# make buildkernel KERNCONF=GENERIC.SECURITY INSTKERNNAME=GENERIC.SECURITY
--------------------------------------------------------------
>>> Kernel build for GENERIC.SECURITY started on Wed Sep 20 19:54:46 EDT 2006
--------------------------------------------------------------
===> GENERIC.SECURITY
mkdir -p /usr/obj/usr/src/sys

--------------------------------------------------------------
>>> stage 1: configuring the kernel
--------------------------------------------------------------
...truncated...
--------------------------------------------------------------
>>> Kernel build for GENERIC.SECURITY completed on Wed Sep 20 20:12:42 EDT 2006
--------------------------------------------------------------

Next I installed the kernel.

kbld:/usr/src# make installkernel KERNCONF=GENERIC.SECURITY INSTKERNNAME=GENERIC.SECURITY
--------------------------------------------------------------
>>> Installing kernel
--------------------------------------------------------------
...edited...
kldxref /boot/GENERIC.SECURITY
kbld:/usr/src#

That's it. I make sure host kbld is exporting the appropriate directories via NFS by creating this /etc/exports file:

/usr -alldirs

That's too loose, but it is sufficient for my test network.
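
If I wanted to tighten it, a sketch of a more restrictive exports(5) line might export read-only and only to the lab network (assuming 192.168.2.0/24 here):

/usr -alldirs -ro -network 192.168.2.0 -mask 255.255.255.0

Read-only should suffice, since the remote installkernel only reads from /usr/src and /usr/obj and writes to its own local /boot.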

Now I move from the kernel builder to a slow system where I would like to make GENERIC.SECURITY available. 192.168.2.103 is kbld, where the new kernel is waiting.

asa633:/root# mount_nfs 192.168.2.103:/usr/src /usr/src
asa633:/root# mount_nfs 192.168.2.103:/usr/obj /usr/obj
asa633:/root# mount
/dev/ad0s1a on / (ufs, local)
devfs on /dev (devfs, local)
/dev/ad0s1f on /home (ufs, local, soft-updates)
/dev/ad1s1d on /nsm (ufs, local, soft-updates)
/dev/ad0s1g on /tmp (ufs, local, soft-updates)
/dev/ad0s1d on /usr (ufs, local, soft-updates)
/dev/ad0s1e on /var (ufs, local, soft-updates)
192.168.2.103:/usr/src on /usr/src (nfs)
192.168.2.103:/usr/obj on /usr/obj (nfs)
asa633:/usr/src# make installkernel KERNCONF=GENERIC.SECURITY INSTKERNNAME=GENERIC.SECURITY
--------------------------------------------------------------
>>> Installing kernel
--------------------------------------------------------------
...edited...
kldxref /boot/GENERIC.SECURITY

How do I get this GENERIC.SECURITY kernel to boot? If I were at the console at boot time, I could enter 'boot GENERIC.SECURITY' at the loader prompt. Since I am remote, I edit /boot/loader.conf to say

kernel=GENERIC.SECURITY
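
Since rebooting a remote box into an untested kernel carries some risk, another sketch worth considering is nextboot(8), which boots the named kernel once and then falls back to the default on the following boot:

asa633:/root# nextboot -k GENERIC.SECURITY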

Now I reboot. Afterwards, I see the new kernel is running:

asa633:/root# uname -a
FreeBSD asa633.taosecurity.com 6.1-RELEASE-p6 FreeBSD 6.1-RELEASE-p6 #0:
Wed Sep 20 20:02:56 EDT 2006
root@kbld.taosecurity.com:/usr/obj/usr/src/sys/GENERIC.SECURITY i386

Pretty easy. If I want to boot the default kernel, I remove the entry in /boot/loader.conf.

For example, asa633 is usually running the kernel provided by Colin Percival's FreeBSD-Update code:

asa633:/root# uname -a
FreeBSD asa633.taosecurity.com 6.1-SECURITY FreeBSD 6.1-SECURITY #0:
Mon Aug 28 05:21:08 UTC 2006
root@builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC i386

FreeBSD-Update, in fact, very nicely takes care of the latest security problems with Gzip:

asa633:/root# freebsd-update fetch
Fetching updates signature...
Fetching updates...
Fetching hash list signature...
Fetching hash list...
Examining local system...
Fetching updates...
/usr/bin/gunzip...
/usr/bin/gzcat...
/usr/bin/gzip...
/usr/bin/zcat...
Updates fetched

To install these updates, run: '/usr/local/sbin/freebsd-update install'
asa633:/root# freebsd-update install
Backing up /usr/bin/gunzip...
Installing new /usr/bin/gunzip...
Backing up /usr/bin/gzcat...
Recreating hard link from /usr/bin/gunzip to /usr/bin/gzcat...
Backing up /usr/bin/gzip...
Recreating hard link from /usr/bin/gunzip to /usr/bin/gzip...
Backing up /usr/bin/zcat...
Recreating hard link from /usr/bin/gunzip to /usr/bin/zcat...

Easy!

Changing Definitions of Network Security Monitoring

I first defined Network Security Monitoring in print through my contribution to the February 2003 book Hacking Exposed, 4th Edition. Prior to that I defined NSM in a December 2002 SearchSecurity Webcast. NSM probably became more widely recognized through my first book, where I repeated the same definition by writing "Network security monitoring is the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions."

I emphasized the role of indications and warning (I&W) because my Air Force intelligence background involved training specifically in that discipline. I recommend reading the last link above for additional insight into this approach.

Today, however, I reviewed some Department of Defense documentation that made me take a second look at my NSM definition. (You might say this proves I am not a slave to my prior writings. Then again, you won't ever hear me say a threat and a vulnerability are the same!)

I&W is defined as those intelligence activities intended to detect and report time-sensitive intelligence information on foreign developments that could involve a threat to the United States or allied and/or coalition military, political, or economic interests or to US citizens abroad. It includes forewarning of enemy actions or intentions; the imminence of hostilities; insurgency; nuclear/nonnuclear attack on the United States, its overseas forces, or allied and/or coalition nations; hostile reactions to US reconnaissance activities; terrorists' attacks; and other similar events. Also called I&W. See also information; intelligence.

Note the heavy emphasis on gaining intelligence on threats, namely their capabilities and intentions.

While reading a DoD document, I came across the term attack sensing and warning (AS&W), with which I was only vaguely familiar. AS&W is defined as the detection, correlation, identification and characterization of cyber attacks across a large spectrum coupled with the notification to command and decision makers so that an appropriate response can be developed. Attack sensing and warning also includes attack/intrusion related intelligence collection tasking and dissemination; limited immediate response recommendations; and limited potential impact assessments.

I have a feeling that AS&W might be derived from Army operations. A friend who was previously part of 1st Information Operations Command worked that unit's AS&W mission.

Looking at the AS&W definition, it seems more appropriate within the context of NSM than I&W. I haven't decided how I'll define NSM in my next book or major paper, but I will keep AS&W at the forefront of my thoughts.

Differentiating Among Assessment Services

Tate Hansen of Clear Net Security provides a great methodology for differentiating among vulnerability assessment and related network security services. Check out his flow chart and then see how your own provider compares.

Review of IPv6 Essentials Posted

Amazon.com just posted my five star review of IPv6 Essentials, 2nd Ed by Sylvia Hagen. From the review:

I read and reviewed IPv6 Network Administration (INA) in August 2005 and Running IPv6 (RI) in January 2006. I gave those books 5 stars, so I had high expectations for "IPv6 Essentials, 2nd Ed" (IE2E). INA and RI are very hands-on, implementation-specific books. IE2E is more concerned with explaining protocols and IPv6 features. In this respect, IE2E is the perfect complement to INA and RI.

My full review mentions IPv6 critiques by Daniel Bernstein and Todd Underwood. I intend to take a closer look at SEcure Neighbor Discovery (SEND) (RFC 3971) and Cryptographically Generated Addresses (CGA) (RFC 3972) after reading about attacks upon stateless autoconfiguration and duplicate address detection, which appear in IPv6 Neighbor Discovery (ND) Trust Models and Threats (RFC 3756). Authentication for DHCP Messages (RFC 3118) can also be a concern, thanks to DHCPv6 Reconfigure Messages.

I also plan to read (.pdf) and watch (Google Video) Van Hauser's Attacking the IPv6 Protocol Suite, nicely summarized here.

Tuesday, 19 September 2006

SANS Network IPS Testing Webcast

I'm listening to a SANS Webcast on Trustworthy IPS Testing and Certification. Jack Walsh from the Network Intrusion Prevention section of ICSA Labs spoke for about 45 minutes on his testing system. Jack spent a decent amount of time discussing the Network IPS Corporate Certification Testing Criteria (.pdf) and vulnerabilities set (.xls). The vulnerabilities set was just updated a week ago, after being criticized in July.

At present only three products are ICSA Labs certified, according to the ICSA Web site and this press release. ICSA Labs certification is a pass/fail endeavor; there are no grades.

ICSA does not release the name of the companies whose products fail. Looking at the members of the NIPS Product Developers Consortium, you can make some guesses about who participated.

Vendors pay for testing. They do so by paying for a year-long testing period, during which time they will receive at least one "full battery" of testing. Tests are rerun when the vulnerability set is updated or when the attacks used to exploit vulnerabilities change. Although ICSA Labs publishes the vulnerabilities they test, they do not say specifically how they exploit the vulnerabilities. Jack said they do use Metasploit, Core Impact, and home-grown programs. ICSA Labs relies on running real captured network traffic through a NIPS, during which they inject captured attack traffic.

I found the Webcast informative. I was surprised that Jack was so insistent that NIPS provide "mitigation" for denial of service attacks. I don't consider that an essential element of NIPS activity.

Looking at the vulnerability set, it appears to be dominated by "traditional" vulnerabilities, namely weaknesses in services running on servers. You will not see application-layer vulnerabilities like cross-site scripting, for example.

A competitor to ICSA Labs is NSS, who just announced their NSS Group IPS Testing Methodology V4.0 (060731) (.pdf) and a Certified IPS Products list.

How the FCC Handles Radio Denial of Service

I am a licensed Amateur Radio operator, but I'm about as active as packet radio. Today, though, I read how the Federal Communications Commission handles those who interfere with radio transmissions.

It was a day a lot of radio amateurs in Southern California had been waiting for a long time. On September 18, US District Court Judge R. Gary Klausner sentenced convicted radio jammer Jack Gerritsen, now 70, to seven years imprisonment and imposed $15,225 in fines on six counts -- one a felony -- that included transmitting without a license and willful and malicious interference with radio transmissions. Before sentencing, Gerritsen apologized to the federal government, the FCC and the local Amateur Radio community, which had endured the brunt of Gerritsen's on-air tirades and outright jamming.

Wow -- seven years in prison with a felony conviction. No wonder my Dad used to warn me about broadcasting without a license.

Suggestions for Testing Bypass Switches

I've acquired a number of bypass devices for testing in the TaoSecurity labs. I'd like to know what, if anything, you would like to learn about these devices. In other words, how would you like me to test them?

The devices in question include the following.

Shore Micro SM-2400 Programmable Bypass Switch: This device has TX copper connectors and may support Gigabit Ethernet.

Optical Bypass Switch with Heartbeat: This device has SX fiber connectors and supports Gigabit Ethernet.

10/100/1000 Bypass Switch with Heartbeat: This device has TX copper connectors and supports Gigabit Ethernet.

Interface Masters Niagara 2295RJ: This device has TX copper connectors and supports Gigabit Ethernet. I find it interesting that it does not require a power supply, but I wonder how it supports a heartbeat without power?

Niagara 2282: This is an internal NIC that acts as a bypass switch. It has SX fiber connectors and supports Gigabit Ethernet.

Niagara 2280: This is an internal NIC that acts as a bypass switch. It has SX fiber connectors and supports Gigabit Ethernet. I don't see functional differences between this NIC and the previous, but that is a preliminary assessment.

So those are the devices. This is how I intend to deploy them for testing.

traffic generator transmitter NIC
          |
bypass switch inbound NIC
bypass switch monitor NIC 1 --> sensor NIC 1
bypass switch monitor NIC 2 --> sensor NIC 2
bypass switch outbound NIC
          |
traffic generator receiver NIC

For the internal devices, I will have the internal NIC in the sensor feeding a second NIC in the same sensor.

At the moment my main goals are to fully understand how each device works, feature-wise. I plan to do some limited testing this week with the equipment on hand. Next week I plan to use commercial load generators to stress the devices.
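
As a simple first check, I will probably run something like the earlier iperf tests through the bypass path, with the traffic generator transmitter and receiver acting as client and server, and compare throughput with the monitoring segment in place and in bypass. A sketch of the sort of commands I have in mind (the host prompts and address are placeholders):

receiver# iperf -s
transmitter# iperf -c 172.16.6.2 -t 60 -i 5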

Let me know as a comment on TaoSecurity Blog or email to richard [at] taosecurity.com if you have ideas regarding what I should do with these systems.

Teaching Possibilities in Australia

I've been invited to speak at the AusCERT Asia Pacific Information Technology Security Conference in Gold Coast, Australia. The conference takes place Sunday 20 May - Friday 25 May 2007.

I haven't decided if I will accept yet. I'd like to know if any TaoSecurity Blog readers in Australia, New Zealand, or nearby areas would be interested in attending a two (or maybe more) day class either directly before or after my presentation date (which is unknown right now).

I would need a location to host the training, in exchange for which I would provide two free seats for the hosting organization.

Is anyone interested in attending and/or hosting such a class? Please email training [at] taosecurity.com. I have to accept or decline the AusCERT invitation next week.

I am open to suggestions regarding the location of the class (if the Gold Coast is too remote) and the content of the class (Network Security Operations, TCP/IP Weapons School, etc.). Sydney is a possibility since I will fly through SYD on my way to and from BNE. Thank you.

Monday, 18 September 2006

Insider Threat Study

I received a copy of a study announced by ArcSight and conducted by the Ponemon Institute. I mention this for two reasons. One, it highlights issues regarding the meaning of security terms. Two, the content is worth a look.

First, the email I received bore the subject "Are Executives the Cause of Insider Threats?". I wondered if the study examined whether executives were the parties with the intentions and capabilities to exploit weaknesses in assets. That's what a threat is, and a study that implied executives (and not corporate minions or IT staff) were the real problem would be noteworthy in its own right.

Near the beginning of the report I read the following:

The survey was sponsored by ArcSight, an enterprise security management company, and queried 461 respondents who are employed in corporate IT departments within U.S.-based organizations.

For purposes of this survey, we define the "insider threat" as the misuse or destruction of sensitive or confidential information, as well as IT equipment that houses this data, by employees, contractors and others.


They're actually talking about attacks caused by insiders, not "insider threats." Working with their language, an insider threat would be "those who misuse or destroy sensitive or confidential information, as well as IT equipment that houses this data."

The report continues:

"Insider threats occur because of human error such as mistakes, negligence, reckless behavior, and sometimes even corporate sabotage."

Not really. Insider threats take advantage of vulnerabilities caused by mistakes and negligence. Insider threats employ reckless behavior (if not truly intending to cause harm) or corporate sabotage (if intending to cause harm) as attack methods.

Our survey sought to answer the following three questions.

1. What are the root causes of insider threats and how do information security practitioners respond to this pervasive IT and business risk?


They actually mean "what are the root causes of vulnerabilities that are exploited by insider threats, and how do infosec practitioners mitigate risks?" To truly address root causes of insider threats, one would analyze the motivations of the threats themselves, like greed, malice, etc.

2. What technologies, practices and procedures are employed by organizations to reduce or mitigate insider-related risks?

That's great. "Risks" is used appropriately.

3. What are the issues, challenges and possible impediments to effectively detecting and preventing insider threats?

I would say "detecting and preventing attacks by insider threats."

The following are the most salient findings in our study: Data breaches go unreported. While we seem to be inundated with reports of data breaches, we may not know the full extent of the problem. More than 78% of respondents said that there has been at least one and possibly more unreported insider-related security breaches within their company.

Wow, that's a lot. Let's look for evidence in the report.

Table 11 reports that over 78% of respondents know of an insider-related security incident that was not publicly disclosed.



Notice Table 11 asks "Do you know of an insider-related incident in your organization (or any other organization in your industry) which was not disclosed to the public or to law enforcement?" (emphasis added)

That 78% figure doesn't mean that "more than 78% of respondents said that there has been at least one and possibly more unreported insider-related security breaches within their company" at all! In fact, there could be zero unreported breaches in the surveyed companies, and all respondents answering "yes" could be pointing to the same incident at someone else's company.

This idea is backed up by the following finding:

Table 7 shows that over 59% of respondents believe that insider-related problems are more likely to occur outside of their departments or organizational units.

So almost 60% of respondents think problems are likely to happen someplace else. That reminds me of surveys that say parents think schools in general are poor, but the school their child attends is fine.

While I think there is some interesting data in the survey report, I would keep my analysis in mind while reading it.