Saturday, 23 September 2006

FreeBSD Device Polling Results for Gigabit Copper

In my post FreeBSD Device Polling I ran my tests over Gigabit fiber connections. I thought I would repeat the tests for Gigabit copper, connected by normal straight-through cables. (One benefit of Gigabit copper Ethernet NICs is that there's no need for crossover cables, since 1000BASE-T negotiates the crossover automatically.)

Although I booted my two test boxes, asa633 and poweredge, with kernels offering polling, neither interface had polling enabled by default. This is asa633's NIC:

em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=b<RXCSUM,TXCSUM,VLAN_MTU>
        inet6 fe80::20e:cff:feba:e726%em1 prefixlen 64 scopeid 0x4
        inet 172.16.7.1 netmask 0xffffff00 broadcast 172.16.7.255
        ether 00:0e:0c:ba:e7:26
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active

This is poweredge's NIC:

em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        options=b<RXCSUM,TXCSUM,VLAN_MTU>
        inet6 fe80::207:e9ff:fe11:a0a0%em1 prefixlen 64 scopeid 0x4
        inet 172.16.7.2 netmask 0xffffff00 broadcast 172.16.7.255
        ether 00:07:e9:11:a0:a0
        media: Ethernet autoselect (1000baseTX <full-duplex>)
        status: active
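
As a reminder, the ifconfig toggle only works if the kernel was built with polling support. Here is a minimal sketch of what that takes, assuming the conventions documented in polling(4):

# kernel configuration additions (rebuild and reboot required)
options DEVICE_POLLING
options HZ=1000                # polling(4) recommends a higher clock rate

# runtime control, per interface
ifconfig em1 polling           # enable polling on em1
ifconfig em1 -polling          # revert to interrupt-driven operation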

First I ran unidirectional TCP tests, from asa633 to poweredge, without polling.

asa633:/root# iperf -c 172.16.7.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.7.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 58672 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 90.2 MBytes 151 Mbits/sec
[ 3] 5.0-10.0 sec 91.1 MBytes 153 Mbits/sec
[ 3] 10.0-15.0 sec 90.0 MBytes 151 Mbits/sec
[ 3] 15.0-20.0 sec 91.2 MBytes 153 Mbits/sec
[ 3] 20.0-25.0 sec 89.8 MBytes 151 Mbits/sec
[ 3] 25.0-30.0 sec 90.9 MBytes 153 Mbits/sec
[ 3] 30.0-35.0 sec 91.7 MBytes 154 Mbits/sec
[ 3] 35.0-40.0 sec 92.0 MBytes 154 Mbits/sec
[ 3] 40.0-45.0 sec 89.9 MBytes 151 Mbits/sec
[ 3] 45.0-50.0 sec 90.1 MBytes 151 Mbits/sec
[ 3] 50.0-55.0 sec 90.4 MBytes 152 Mbits/sec
[ 3] 55.0-60.0 sec 91.0 MBytes 153 Mbits/sec
[ 3] 0.0-60.0 sec 1.06 GBytes 152 Mbits/sec

Here is what the server saw.

poweredge:/root# iperf -s -B 172.16.7.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.7.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 58672
[ 4] 0.0-60.0 sec 1.06 GBytes 152 Mbits/sec

Interrupt levels for both systems were similar to those seen in the Gigabit fiber tests.
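
If you want to watch the interrupt load yourself while a test runs, the standard FreeBSD tools are enough; for example:

vmstat -i          # cumulative interrupt counts and rates per device
systat -vmstat 1   # live display, updated every second, including interrupt rates
top -S             # system threads, e.g. CPU time consumed by the em1 interrupt handler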

Here is the change with polling enabled via 'ifconfig em1 polling'. First, the client.

asa633:/root# iperf -c 172.16.7.2 -t 60 -i 5
------------------------------------------------------------
Client connecting to 172.16.7.2, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 52789 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 79.2 MBytes 133 Mbits/sec
[ 3] 5.0-10.0 sec 76.3 MBytes 128 Mbits/sec
[ 3] 10.0-15.0 sec 80.4 MBytes 135 Mbits/sec
[ 3] 15.0-20.0 sec 123 MBytes 207 Mbits/sec
[ 3] 20.0-25.0 sec 126 MBytes 212 Mbits/sec
[ 3] 25.0-30.0 sec 110 MBytes 185 Mbits/sec
[ 3] 30.0-35.0 sec 89.1 MBytes 149 Mbits/sec
[ 3] 35.0-40.0 sec 77.0 MBytes 129 Mbits/sec
[ 3] 40.0-45.0 sec 76.8 MBytes 129 Mbits/sec
[ 3] 45.0-50.0 sec 103 MBytes 172 Mbits/sec
[ 3] 50.0-55.0 sec 128 MBytes 215 Mbits/sec
[ 3] 55.0-60.0 sec 120 MBytes 201 Mbits/sec
[ 3] 0.0-60.0 sec 1.16 GBytes 166 Mbits/sec

Now the server.

poweredge:/root# iperf -s -B 172.16.7.2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 172.16.7.2
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 52789
[ 4] 0.0-60.0 sec 1.16 GBytes 166 Mbits/sec

Polling didn't improve the situation much for TCP: overall throughput rose only from 152 to 166 Mbits/sec, and the per-interval numbers became far more erratic, swinging between 128 and 215 Mbits/sec.
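
One variable worth noting in these TCP numbers is the small default socket buffers (32.5 KByte on the client, 64.0 KByte on the server). If you want to rule them out, iperf's -w option requests larger windows on both ends; a sketch:

iperf -s -B 172.16.7.2 -w 256k           # server with a 256 KByte window
iperf -c 172.16.7.2 -w 256k -t 60 -i 5   # client to match

Whether the requested size actually takes effect also depends on the kernel's socket buffer ceiling, kern.ipc.maxsockbuf.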

Here are the results for unidirectional UDP tests, without polling.

This is the client.

asa633:/root# iperf -c 172.16.7.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.7.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 60193 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 129 MBytes 217 Mbits/sec
[ 3] 5.0-10.0 sec 129 MBytes 217 Mbits/sec
[ 3] 10.0-15.0 sec 129 MBytes 217 Mbits/sec
[ 3] 15.0-20.0 sec 127 MBytes 212 Mbits/sec
[ 3] 20.0-25.0 sec 129 MBytes 217 Mbits/sec
[ 3] 25.0-30.0 sec 129 MBytes 217 Mbits/sec
[ 3] 30.0-35.0 sec 129 MBytes 217 Mbits/sec
[ 3] 35.0-40.0 sec 129 MBytes 217 Mbits/sec
[ 3] 40.0-45.0 sec 129 MBytes 217 Mbits/sec
[ 3] 45.0-50.0 sec 129 MBytes 216 Mbits/sec
[ 3] 50.0-55.0 sec 129 MBytes 216 Mbits/sec
[ 3] 0.0-60.0 sec 1.51 GBytes 216 Mbits/sec
[ 3] Sent 1102470 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 787 MBytes 110 Mbits/sec 0.042 ms 541153/1102469 (49%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Notice the huge loss: 541153 of 1102469 datagrams, about 49 percent, never arrived. Here is the server.

poweredge:/root# iperf -s -u -B 172.16.7.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.7.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 60193
[ 3] 0.0-60.0 sec 787 MBytes 110 Mbits/sec 0.042 ms 541153/1102469 (49%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order
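
Some of that loss may happen in the server's socket buffer rather than on the wire; 41.1 KByte holds only about 28 of these 1470-byte datagrams. iperf's -w option sizes UDP socket buffers too, so a larger receive buffer is an easy experiment:

iperf -s -u -B 172.16.7.2 -w 256k        # server with a 256 KByte UDP receive buffer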

Here are results with polling enabled.

The client:

asa633:/root# iperf -c 172.16.7.2 -u -t 60 -i 5 -b 500M
------------------------------------------------------------
Client connecting to 172.16.7.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.1 port 53387 connected with 172.16.7.2 port 5001
[ 3] 0.0- 5.0 sec 163 MBytes 274 Mbits/sec
[ 3] 5.0-10.0 sec 164 MBytes 275 Mbits/sec
[ 3] 10.0-15.0 sec 164 MBytes 275 Mbits/sec
[ 3] 15.0-20.0 sec 164 MBytes 275 Mbits/sec
[ 3] 20.0-25.0 sec 164 MBytes 275 Mbits/sec
[ 3] 25.0-30.0 sec 164 MBytes 275 Mbits/sec
[ 3] 30.0-35.0 sec 163 MBytes 274 Mbits/sec
[ 3] 35.0-40.0 sec 164 MBytes 275 Mbits/sec
[ 3] 40.0-45.0 sec 164 MBytes 275 Mbits/sec
[ 3] 45.0-50.0 sec 164 MBytes 275 Mbits/sec
[ 3] 50.0-55.0 sec 164 MBytes 275 Mbits/sec
[ 3] 0.0-60.0 sec 1.92 GBytes 275 Mbits/sec
[ 3] Sent 1401731 datagrams
[ 3] Server Report:
[ 3] 0.0-60.0 sec 1.86 GBytes 267 Mbits/sec 0.023 ms 40962/1401730 (2.9%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

Now the server.

poweredge:/root# iperf -s -u -B 172.16.7.2
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 172.16.7.2
Receiving 1470 byte datagrams
UDP buffer size: 41.1 KByte (default)
------------------------------------------------------------
[ 3] local 172.16.7.2 port 5001 connected with 172.16.7.1 port 53387
[ 3] 0.0-60.0 sec 1.86 GBytes 267 Mbits/sec 0.024 ms 40962/1401730 (2.9%)
[ 3] 0.0-60.0 sec 1 datagrams received out-of-order

As with the fiber tests, polling really helps UDP performance: loss fell from 49% to 2.9%, and received throughput rose from 110 to 267 Mbits/sec.

Given that polling can be enabled and disabled at will via ifconfig, I would like to see 'options DEVICE_POLLING' added to the GENERIC FreeBSD kernel.
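
In the meantime, polling's behavior can be tuned through the kern.polling sysctl tree described in polling(4). A sketch of a few knobs (the values here are illustrative, and defaults vary by release):

sysctl kern.polling.user_frac=50    # percent of CPU cycles reserved for userland work
sysctl kern.polling.burst_max=150   # upper bound on packets processed per poll
sysctl kern.polling.idle_poll=1     # keep polling devices even when the system is idle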
