Friday, 30 June 2006

Ten Days Left for Cheaper USENIX Security Registration

Those of you who read the Atom or RSS feeds for this blog have been missing my personalized USENIX Security 2006 banner ad, visible to my Blogger readers. In fact, some of you might have no idea that I, Richard Bejtlich, write these words, thanks to the various people who copy and reproduce my blog postings without regard to my authorship!

In any case, there are ten days left for early registration for USENIX Security in Vancouver, BC. I will teach a brand-new, two-day course called TCP/IP Weapons School (TWS) on 31 July and 1 August 2006.

This will be a fun course. Let me make your expectations perfectly clear, however: the primary purpose of this course is to teach TCP/IP and packet-level analysis. The intended audience is junior and intermediate security personnel. We will work our way up the TCP/IP stack over the two-day course, using security tools at each layer to provide sample traffic for analysis.

If you walk up to me in class and say "I know all of these tools. This isn't cool," I will boot you from class! This is not an "uber-l33t-h@x0r-t00lz" course. Still, I am trying to add tools from off the beaten path to keep things interesting.

I will probably create a FreeBSD VM with all or most of the tools I use in the slides. Students will be free to try those tools, although I may omit the layer 2 attacks. I do not wish to see MAC spoofing, flooding, and so forth disrupting the USENIX network. I plan to provide all of the traces analyzed in class, however. You will want to be sure your laptop is running Ethereal/Wireshark so you can follow along.
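If you want to check your laptop before class, being able to read a capture file from the command line is a quick sanity test; the trace name below is only a placeholder:

$ tcpdump -n -r sample.lpc | head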

Assuming the class goes well, I hope to offer it elsewhere -- including to private groups.

Signs of Desperation from Duronio Defense Team

It sounds to me like the Duronio defense team has nothing left in its tank, so it's attacking Keith Jones directly. The latest reporting, UBS Trial: Defense Suggests Witness Altered Evidence, shows how ridiculous the defense team sounds:

"So when you talked about putting pieces of the puzzle together, you were missing three-quarters of the pieces for the [central file server] alone?"" [defense attorney] Adams asked.

"The puzzle pieces I had to put together formed the picture I needed," Jones replied. "If the puzzle was of a boat, then I had enough pieces to form the picture of the boat."

Adams countered, "But you might not see all the other boats around it."

Jones replied, "But the second boat won't get rid of the first boat. It's simple mathematics that when you add data, you don't subtract data. There was nothing in that data set that could remove the data I already had."


It sounds like Keith has more testifying in store for next week. Stay tuned.

Slides from FIRST 2006 Posted

Today I spoke briefly at the 18th Annual FIRST Conference in Baltimore, MD. Thanks to those who waited to see me fill the very last speaking slot on the very last day of the conference, before an extended holiday weekend. A few of you asked for my slides, so here they are -- The Network-Centric Incident Response and Forensics Imperative.

Tuning Snort Article in Sys Admin Magazine

Keep an eye on your local newsstands or mailbox for the August 2006 issue of Sys Admin magazine. They published an article I wrote titled Tuning Snort. I describe simple steps one should take with Snort to reduce the number of unwanted alerts. I used a beta of Snort 2.6.0 when writing the article a few months ago.

Thursday, 29 June 2006

Jones Withstands Defense Attorneys

I've been covering the Duronio trial in which my friend Keith Jones is testifying as the government's star forensic witness. Today's story describes how Keith explained his findings while being attacked by defense attorneys. This excerpt is priceless:

At one point, [defense attorney] Adams laid out a scenario in which someone could have created a backdoor in the UBS system, and then deleted it before a backup was done to capture it. When he asked Jones if he, personally, could do such a thing, Jones replied, "I could do a lot of things. That's why I'm hired to do the investigation."

Bamm! Nice response Jones.

It has been crucial to the prosecution's case that Jones is not a self-proclaimed "hacker." This report shows how the defense pursued Karl Kasper, aka "John Tan," ex-@Stake, ex-L0pht "hacker," for signing official documents as "John Tan" instead of using his real name. UBS hired @Stake to perform forensics before bringing Foundstone onto the case, thereby getting Keith involved.

All the wanna-be hacker kiddies should remember that grown-ups don't trust the opinions of "hackers" in courts of law.

Incidentally, I don't think Keith is a CISSP; at least he is not listed in the organization's member directory.

Update: Keith told me he is a CISSP. He must be a stealth one like me.

Binary Upgrade of FreeBSD 6.0 to 6.1

Several months ago I posted how I used Colin Percival's freebsd-update program to perform a binary upgrade from FreeBSD 5.4 to 6.0 remotely over SSH. Thanks to Colin's latest work, I was able to successfully perform a binary upgrade from FreeBSD 6.0 to 6.1 remotely over SSH.

hacom:/root/upgrade# uname -a
FreeBSD hacom.taosecurity.com 6.0-SECURITY FreeBSD 6.0-SECURITY #0:
Tue Apr 18 08:56:09 UTC 2006
root@builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC i386

hacom:/root# fetch http://www.daemonology.net/freebsd-upgrade-6.0-to-6.1/upgrade-6.0-to-6.1.tgz
upgrade-6.0-to-6.1.tgz 0% of 4706 kB
hacom:/root# sha256 upgrade-6.0-to-6.1.tgz
SHA256 (upgrade-6.0-to-6.1.tgz) = 29075fc5711e0b20d879c69d12bbe5414c1c56d597c8116da7acc0d291116d2f
hacom:/root# tar -xzvf upgrade-6.0-to-6.1.tgz
x upgrade
x upgrade/upgrade.sh
x upgrade/6.1-index
x upgrade/6.0-index
hacom:/root# cd upgrade
hacom:/root/upgrade# ./upgrade.sh
Examining system... done.

The following components of FreeBSD seem to be installed:
kernel|generic world|base world|dict world|doc world|manpages

The following components of FreeBSD do not seem to be installed:
kernel|smp src|base src|bin src|contrib src|crypto src|etc src|games
src|gnu src|include src|krb5 src|libexec src|lib src|release src|rescue
src|sbin src|secure src|share src|sys src|tools src|ubin src|usbin
world|catpages world|games world|info world|proflibs

Does this look reasonable (y/n)? y

Examining system (this will take a bit longer)... done.

The following files from FreeBSD 6.0 have been modified since they were
installed, but will be deleted or overwritten by new versions:
/.cshrc /root/.cshrc /usr/share/man/whatis

The following files from FreeBSD 6.0 have been modified since they were
installed, and will not be touched:
/etc/hosts /etc/manpath.config /etc/master.passwd /etc/motd /etc/passwd
/etc/pwd.db /etc/shells /etc/spwd.db /etc/ttys /var/db/locate.database
/var/log/sendmail.st

The following files from FreeBSD 6.0 have been modified since they were
installed, and the changes in FreeBSD 6.1 will be merged into the
existing files:
/etc/group

Does this look reasonable (y/n)? y

Preparing to fetch files... done.
Fetching 1729 patches....10....20....30....40....edited...1720.... done.
Applying patches... done.
Fetching 433 files....10....20....30....40....50....60...edited...done.
Decompressing and verifying... done.
Attempting to automatically merge configuration files... done.

The following changes, which occurred between FreeBSD 6.0 and FreeBSD
6.1, have been merged into /etc/group:
--- merge/old/etc/group Thu Jun 29 07:03:59 2006
+++ merge/new/etc/group Thu Jun 29 07:04:00 2006
@@ -41,5 +41,6 @@
student8:*:1012:
student9:*:1013:
student10:*:1014:
student11:*:1015:
richard:*:1016:
+audit:*:77:
Does this look reasonable (y/n)? y

Installing new kernel into /boot/GENERIC... done.
Moving /boot/kernel to /boot/kernel.old... done.
Moving /boot/GENERIC to /boot/kernel... done.
Removing schg flag from existing files... done.
Installing new non-kernel files... done.
Removing left-over files from FreeBSD 6.0... done.
To start running FreeBSD 6.1, reboot.
hacom:/root/upgrade# reboot

hacom# freebsd-update fetch
Fetching updates signature...
Fetching updates...
Fetching hash list signature...
Fetching hash list...
Examining local system...
Fetching updates...
/boot/kernel/smbfs.ko...
/usr/libexec/sendmail/sendmail...
/usr/sbin/ypserv...
Updates fetched

To install these updates, run: '/usr/local/sbin/freebsd-update install'
hacom# freebsd-update install
Backing up /boot/kernel/smbfs.ko...
Installing new /boot/kernel/smbfs.ko...
Backing up /usr/libexec/sendmail/sendmail...
Installing new /usr/libexec/sendmail/sendmail...
Backing up /usr/sbin/ypserv...
Installing new /usr/sbin/ypserv...
hacom# reboot

hacom:/home/richard$ uname -a
FreeBSD hacom.taosecurity.com 6.1-RELEASE FreeBSD 6.1-RELEASE #0:
Sun May 7 04:32:43 UTC 2006
root@opus.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386

I like it. Easy, fast, no compiling and it works. Kudos to Colin!

Tuesday, 27 June 2006

Great Firewall of China Uses TCP Resets

This blog post about the Great Firewall of China by Cambridge University researchers is fascinating:

It turns out [caveat: in the specific cases we’ve closely examined, YMMV] that the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection — and obey. Hence the censorship occurs.

So China is censoring its citizens using ten-year-old technology. How long before they upgrade?
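Because the subsidiary machines never block the original packets, endpoints that simply discard the injected resets can keep the connection alive. A minimal sketch of that idea on a FreeBSD host using ipfw (the rule number and interface are assumptions, and dropping every inbound RST also interferes with legitimate connection teardown):

ipfw add 1000 deny tcp from any to me in via em0 tcpflags rst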

Update: Tom Ptacek shows this story is old news. Great historical insights Tom!

Jones Connects with Jury

Keith Jones is connecting with his jury, according to the latest Information Security article on the Duronio trial:

Jones, trying to explain the program to the jury, said to think of a Looney Tunes cartoon where there's an alarm clock attached to a bundle of dynamite. The alarm clock is the trigger, he told the laughing jury, while the dynamite and resulting explosion make up the payload.

This excerpt tells me two facts. (1) Jones is using terminology the jury can understand. (2) The jury is listening to him. I'm looking forward to reading about the defense's cross-examination, which should be happening now.

Know Your Tools

In the network forensics portion of my Network Security Operations class I cover a variety of reasons to validate that one's tools operate as expected. I encountered another example of this today while capturing network traffic from a wireless adapter.

I explained several months ago how I use the ndis0 interface with a Linksys WPC54G adapter. This is a wrapper for the Windows driver packaged with the NIC.
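For readers who have not set this up, the rough recipe on FreeBSD 6.x is to feed the vendor's .inf and .sys files to ndisgen(8), which wraps them in a loadable kernel module. The driver file names below are placeholders, and ndisgen prompts interactively along the way:

$ sudo ndisgen bcmwl5.inf bcmwl5.sys
$ sudo kldload ./bcmwl5_sys.ko
$ ifconfig ndis0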

Here I am pinging another wireless host.

$ ping -c 3 192.168.2.31
PING 192.168.2.31 (192.168.2.31): 56 data bytes
64 bytes from 192.168.2.31: icmp_seq=0 ttl=128 time=71.342 ms
64 bytes from 192.168.2.31: icmp_seq=1 ttl=128 time=95.017 ms
64 bytes from 192.168.2.31: icmp_seq=2 ttl=128 time=15.499 ms

--- 192.168.2.31 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss

No problems, right? Now I start Tcpdump in another window, and ping again. First, the ping results.

$ ping -c 3 192.168.2.31
PING 192.168.2.31 (192.168.2.31): 56 data bytes
64 bytes from 192.168.2.31: icmp_seq=0 ttl=128 time=44.392 ms
64 bytes from 192.168.2.31: icmp_seq=0 ttl=128 time=45.865 ms (DUP!)
64 bytes from 192.168.2.31: icmp_seq=1 ttl=128 time=66.001 ms
64 bytes from 192.168.2.31: icmp_seq=1 ttl=128 time=66.273 ms (DUP!)
64 bytes from 192.168.2.31: icmp_seq=2 ttl=128 time=88.457 ms

--- 192.168.2.31 ping statistics ---
3 packets transmitted, 3 packets received, +2 duplicates, 0% packet loss
round-trip min/avg/max/stddev = 44.392/62.198/88.457/16.152 ms

What? Why the dupes? Here is what Tcpdump saw:

$ sudo tcpdump -n -i ndis0 -s 1515 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ndis0, link-type EN10MB (Ethernet), capture size 1515 bytes
09:37:26.226020 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 0, length 64
09:37:26.268487 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 0, length 64
09:37:26.270302 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 0, length 64
09:37:26.271772 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 0, length 64
09:37:27.227215 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 1, length 64
09:37:27.292627 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 1, length 64
09:37:27.293116 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 1, length 64
09:37:27.293409 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 1, length 64
09:37:28.228061 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 2, length 64
09:37:28.316227 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 45571, seq 2, length 64
09:37:28.316428 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 2, length 64
09:37:28.316718 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 45571, seq 2, length 64
^C
12 packets captured
38 packets received by filter
0 packets dropped by kernel

I sniffed traffic on 192.168.2.31, and that box neither saw nor sent duplicates.

I had no idea what was happening. Then I remembered a recent Undeadly.org story about compromising Windows systems through their wireless drivers. I realized my ndis0 interface is just a wrapper for the potentially lousy Windows driver shipped with the wireless NIC.

I had a second idea: perhaps Tcpdump should not be in promiscuous mode when capturing wireless traffic. I've run into related issues on Windows XP, where Ethereal/Wireshark recommends disabling promiscuous mode for wireless captures. Let's see what happens if I ping again while sniffing with -p.

$ ping -c 3 192.168.2.31
PING 192.168.2.31 (192.168.2.31): 56 data bytes
64 bytes from 192.168.2.31: icmp_seq=0 ttl=128 time=447.891 ms
64 bytes from 192.168.2.31: icmp_seq=1 ttl=128 time=105.004 ms
64 bytes from 192.168.2.31: icmp_seq=2 ttl=128 time=22.260 ms

--- 192.168.2.31 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 22.260/191.718/447.891/184.264 ms

Looks good. Here's Tcpdump's view.

$ sudo tcpdump -n -i ndis0 -s 1515 -p icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ndis0, link-type EN10MB (Ethernet), capture size 1515 bytes
09:42:00.415428 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 49411, seq 0, length 64
09:42:00.863206 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 49411, seq 0, length 64
09:42:01.416462 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 49411, seq 1, length 64
09:42:01.521373 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 49411, seq 1, length 64
09:42:02.417306 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 49411, seq 2, length 64
09:42:02.439481 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 49411, seq 2, length 64
^C
6 packets captured
38 packets received by filter
0 packets dropped by kernel

There it is. So, if I don't want to see duplicate traffic, I should disable promiscuous mode.

There's one more wrinkle, though. If I ping a wired host from this wireless host, I don't see duplicates.

$ ping -c 3 192.168.2.12
PING 192.168.2.12 (192.168.2.12): 56 data bytes
64 bytes from 192.168.2.12: icmp_seq=0 ttl=64 time=4.044 ms
64 bytes from 192.168.2.12: icmp_seq=1 ttl=64 time=1.060 ms
64 bytes from 192.168.2.12: icmp_seq=2 ttl=64 time=0.987 ms

--- 192.168.2.12 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.987/2.030/4.044/1.424 ms

Now Tcpdump's view:

$ sudo tcpdump -n -i ndis0 -s 1515 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ndis0, link-type EN10MB (Ethernet), capture size 1515 bytes
09:44:27.934368 IP 192.168.2.5 > 192.168.2.12: ICMP echo request, id 53763, seq 0, length 64
09:44:27.938290 IP 192.168.2.12 > 192.168.2.5: ICMP echo reply, id 53763, seq 0, length 64
09:44:28.934994 IP 192.168.2.5 > 192.168.2.12: ICMP echo request, id 53763, seq 1, length 64
09:44:28.935969 IP 192.168.2.12 > 192.168.2.5: ICMP echo reply, id 53763, seq 1, length 64
09:44:29.935846 IP 192.168.2.5 > 192.168.2.12: ICMP echo request, id 53763, seq 2, length 64
09:44:29.936732 IP 192.168.2.12 > 192.168.2.5: ICMP echo reply, id 53763, seq 2, length 64
^C
6 packets captured
10 packets received by filter
0 packets dropped by kernel

Weird.

As one last test, I captured traffic using the native wi0 driver and an older 802.11b SMC NIC. Here I ping while sniffing in promiscuous mode:

$ ping -c 3 192.168.2.31
PING 192.168.2.31 (192.168.2.31): 56 data bytes
64 bytes from 192.168.2.31: icmp_seq=0 ttl=128 time=95.359 ms
64 bytes from 192.168.2.31: icmp_seq=1 ttl=128 time=16.461 ms
64 bytes from 192.168.2.31: icmp_seq=2 ttl=128 time=39.406 ms

--- 192.168.2.31 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 16.461/50.409/95.359/33.136 ms

No problem. Tcpdump's view:

$ sudo tcpdump -n -i wi0 -s 1515 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wi0, link-type EN10MB (Ethernet), capture size 1515 bytes
09:46:57.049509 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 62723, seq 0, length 64
09:46:57.144750 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 62723, seq 0, length 64
09:46:58.050287 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 62723, seq 1, length 64
09:46:58.066660 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 62723, seq 1, length 64
09:46:59.051137 IP 192.168.2.5 > 192.168.2.31: ICMP echo request, id 62723, seq 2, length 64
09:46:59.090452 IP 192.168.2.31 > 192.168.2.5: ICMP echo reply, id 62723, seq 2, length 64
^C
6 packets captured
181 packets received by filter
0 packets dropped by kernel

The issue must be the NIC driver.

This affects the captures I posted when I tested SinFP. Those duplicates must have been introduced by my NIC driver, Gomor.
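One simple way to validate a capture setup is to record the same traffic with and without promiscuous mode and compare the results; the file names here are arbitrary:

$ sudo tcpdump -n -i ndis0 -s 1515 -w promisc.lpc icmp
$ sudo tcpdump -n -p -i ndis0 -s 1515 -w nopromisc.lpc icmp
$ tcpdump -n -r promisc.lpc | wc -l
$ tcpdump -n -r nopromisc.lpc | wc -l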

The bottom line is you have to know your tools.

Monday, 26 June 2006

Details on Freenode Incident

If you're looking for details on the Freenode incident, check out Regular Ramblings. This single Slashdot post claims Ettercap was involved. I was online at the time as well.

Cluelessness at Harvard Law Review

Articles like Immunizing the Internet, or: How I Learned To Stop Worrying and Love the Worm (.pdf) in the June 2006 (link will work shortly) Harvard Law Review make me embarrassed to be a Harvard graduate. This is the central argument:

[C]omputer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack -- one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.

Apparently Harvard lawyers do not take economics classes. If they did (or paid attention) they would know of Frédéric Bastiat's parable of the broken window. The story demonstrates that crime, warfare, and other destructive behavior do not benefit society, since they shift resources from productive behavior towards repair, recovery, and other defensive activities.

The HLR article continues:

Cybercrime is also different from other crime because it is amenable to innovative law enforcement approaches that exploit its unique underlying psychology. The objective of a bank robbery is to obtain money. Terrorists usually wish to maximize damage. Cybercrime, however, often provides no financial gain; many cyberattacks seem to originate from a desire for fame and attention or fun and challenge. Hackers often cause little to no permanent damage to the systems they successfully penetrate. This is true even of many high-profile cyber-attacks, in which damage initially appears to be widespread.

Wow, was this article published in 1996 or 2006? "No financial gain?" "Little to no permanent damage?" Welcome to the modern world, HLR. What would you consider permanent damage -- loss of life? Everything else can be repaired, even blasts by 2,000 pound bombs. Money spent on incident response and recovery, future lost revenue from decreased customer trust, insurance payments, spending on infrastructure -- all of this could be avoided in a world without "beneficial cybercrime."

Am I being too harsh? I don't think so. This is Harvard we're talking about, not Bunker Hill Community College.

Update: HLR should read Meet the Hackers.

Saturday, 24 June 2006

This Is No Jokey

This book cover always elicits a laugh.



The idea that "hacking" is for "dummies" always bothered me. Is that all it takes to 0wn a system? Even a dummy could do it? Yes, that is a real book, with a second edition en route.

Today, I see this.



As we used to say when teaching at Foundstone, "this is no jokey." Are they kidding me? Who is the dummy here -- the person who is writing the rootkits or the person who buys this real book expecting to remove a rootkit? It's definitely not the former. For the latter, maybe the removal section is just this advice:

  1. Reformat hard drive.

  2. Reinstall from trusted media.

  3. Repeat as necessary.


Honestly, the number of people who could even try to recover from a real rootkit installation is in the dozens. Who is supposed to buy this new book? What is really in it? I don't plan to review it -- my reading list is already a mile deep and my wish list is almost as high.

Got My Mac Mini

I may have waited seventeen months, but I bought a used PowerPC G4 Mac Mini through eBay. I'm running the Debian PowerPC port on it. Why? It's so darn simple. Download and burn the .iso, boot the Mac Mini. Easy. I couldn't do that with FreeBSD. The only wrinkle I encountered involved trying to manually create the partition table. I repeatedly received an error (which I have since forgotten), so I let Debian create the partition for me. Here is what it set up:

macmini:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/hda3 72G 3.9G 65G 6% /
tmpfs 252M 0 252M 0% /dev/shm
macmini:~# fdisk -l /dev/hda
/dev/hda
# type name length base ( size ) system
/dev/hda1 Apple_partition_map Apple 63 @ 1 ( 31.5k) Partition map
/dev/hda2 Apple_Bootstrap untitled 1954 @ 64 (977.0k) NewWorld bootblock
/dev/hda3 Apple_UNIX_SVR2 untitled 153281251 @ 2018 ( 73.1G) Linux native
/dev/hda4 Apple_UNIX_SVR2 swap 3018219 @ 153283269 ( 1.4G) Linux swap

Block size=512, Number of Blocks=156301488
DeviceType=0x0, DeviceId=0x0

Why the PowerPC and not the Intel model? Diversity. Diversity equals survivability on the Internet. I buy Dr. Dan Geer's argument, and I want this box to survive the next target-of-opportunity worm or unstructured threat. I realize a structured threat will find G4 assembly programming a slightly higher obstacle than Intel assembly, but that might buy me some time. In any case, I was able to retire a much larger, noisier, slower, and electricity-hungry HP Visualize B2000 workstation by buying the Mini -- without changing how I do business. The PA-RISC box ran Debian too. Beautiful.

New Review of Extrusion Detection Posted



Tony Stevenson wrote a very thorough review of my newest book, Extrusion Detection: Security Monitoring for Internal Intrusions. Tony really seems to understand this book, unlike the author of a recent review for Information Security magazine who completely missed the point of Extrusion. Tony writes in his review in Windows IT Library:

While it is true that his latest book can be read in isolation from the previous one, I agree with Bejtlich when he says, "in many ways, Extrusion Detection is an attempt to extend The Tao to the addressing of internal threats."

By reading both books, and by rigorously applying the strategies that are described within them, it becomes possible to significantly increase the odds in your favor of not having your company's systems violated, either from an external threat or from an internally generated attack.

Friday, 23 June 2006

A Real Logic Bomb

Logic bomb is a term often used in the media, despite the fact that almost all reporters (there are notable exceptions) have no clue what it means. Well, now we can look at a real one, thanks to forensics work by Keith Jones. He found a real logic bomb while doing forensics on the United States v. Duronio case. I worked the very beginning of this case while Keith and I were both at Foundstone. My small part involved trying to figure out how to restore images of AIX machines from tape. I even bought an AIX box on eBay for experimentation.

You can read about Keith's testimony in this Information Week article. This is the "logic bomb" Keith recovered:



One of the neat aspects of this case is its age: over four years. The media are abuzz with stories of "insider threats," but this has been a problem for a very long time. Congratulations to Keith for testifying on such an important case. If the jury has a clue, the defendant doesn't have a chance.

Update: This story specifically examines the code in question.

Wednesday, 21 June 2006

Sguil Makes 2006 Top 100 Security Tools List

Fyodor of Nmap fame has posted the results of his 2006 survey of security tools at his new site SecTools.org. On page 4 you'll find Sguil listed as number 85 out of 100. Unfortunately, BASE beat out Sguil at number 82. Another personal regret is seeing Argus listed after BASE at number 83. The next time Fyodor asks for survey participation, I will have to respond!

Although the top 100 results are useful, some of the sub-categorization makes little sense. Sguil is listed in the Traffic Monitoring Tools subsection, along with Solar Winds and Nagios (?!?). The Intrusion Detection category lists BASE but not Sguil, along with Fragroute and Fragrouter (?!?). Bizarre.

Regardless, I recommend security pros familiarize themselves with all of the tools in the top 100. It makes for great discussions during job interviews, either as the employer or prospective employee.

Tuesday, 20 June 2006

Three Weeks Left for Early USENIX Registration

Three weeks remain for early registration for USENIX Security in Vancouver, BC. I will teach a brand-new, two-day course called TCP/IP Weapons School (TWS) on 31 July and 1 August 2006. Early registration ends 10 July.

Are you a junior security analyst or an administrator who wants to learn more about TCP/IP? Are you afraid to be bored in routine TCP/IP classes? TWS is the class you need to take! TWS is an excellent introduction to TCP/IP for those who are not ready for my Network Security Operations (NSO) class.

I have no plans at the moment to publicly teach TWS anywhere else in 2006. If you might want a private class, please contact us via training at taosecurity dot com. I've updated my services brochure (.pdf) to reflect the latest course offerings, in case you need something nice to read.

Monday, 19 June 2006

Bejtlich Cited in Information Security Magazine

I had forgotten about these comments, but Mike Mimoso was kind enough to cite me in his article Today's Attackers Can Find the Needle:

"What hackers are realizing is that there are so many ways to get information out of an enterprise. As people get wise to them, hackers are adapting," says Richard Bejtlich, a former captain for the Air Force CERT and founder of consultancy TaoSecurity. He cautions businesses to focus on egress filtering as a means to monitor packets that leave your network. "Pay attention to what is leaving your company," Bejtlich says.

Help with Site Redesigns

I built the existing TaoSecurity.com and Bejtlich.net Web sites with Nvu. I would like to redesign both sites, but I am not sure how to proceed. I approached one company and they told me they design sites using Wordpress. Another uses Joomla. I am not comfortable using PHP given some of the recent security problems I've seen. I'm not sure I want/need a database on the back end either.

I have a feeling that I could use a nice style sheet from Open Source Web Design and continue to use Nvu to generate static HTML. Does anyone have any comments on this?

IA Newsletter Article Posted

The Defense Technical Information Center houses a group called the Information Assurance Technology Analysis Center. IATAC publishes the IA Newsletter. I recently learned that an article I wrote, Network Security Monitoring: Beyond Intrusion Detection, was published in Volume 8, No. 4 (.pdf). I wrote it as a response to an earlier article called The Future of Network Intrusion Detection in Volume 7, No. 3 (.pdf). This earlier article preached the common idea that intrusion prevention systems are the future of network intrusion detection. Read my article for an alternative opinion.

Saturday, 17 June 2006

Three Pre-Reviews

Three generous publishers sent me three books to review this week. The first is Osborne's Hacking Exposed: Web Applications, 2nd Ed by Joel Scambray, Mike Shema, and Caleb Sima. I reviewed the first edition four years ago and loved it. The first edition was 386 pages, and the second is 520. Although each book has 13 chapters, only a few have the same name. I expect the involvement of a new co-author and many contributors has made this book relevant and worth reading.

The second is No Starch's Nagios: System and Network Monitoring by Wolfgang Barth. I am looking forward to reading this book. I have never seriously tried to get Nagios working, but I plan to try while reading this book. System and network monitoring is a perfect complement to network security monitoring.

The third book was unexpected, but welcome. It's Syngress' Winternals Defragmentation, Recovery, and Administration Field Guide by a slew of authors. I wasn't planning to read this book because I do not use any commercial Winternals tools. However, I do use the free Sysinternals Windows tools. Many popular tools are covered in this new book.

Now that my first public Network Security Operations class has successfully concluded, I plan to find time again to read and review books.

Tuesday, 13 June 2006

Holy Cow, I'm Going to SANS

I just signed up to attend the SANS Log Management Summit, 12-14 July 2006 in Washington, DC. I think this is a great opportunity to hear some real users and experts talk about log management. Given that it's located near me, I decided I could afford to pay my own way to this conference. Is anyone else attending? If yes, register by tomorrow for the cheapest rates.

Friday, 09 June 2006

Why Discard Your Brand?

Sometimes you have to make the best of a bad situation, with no warning. Good-bye Ethereal, hello Wireshark. Gerald Combs, original author and primary Ethereal developer, left his job at Network Integration Services, Inc. and joined CACE Technologies. Unfortunately, NIS owns the Ethereal trademark, and Mr. Combs wasn't able to take it with him. He also lost administrative rights to the servers hosting Ethereal.com, so he can't post news of the name change there. So, nearly eight years after the first public release, Ethereal is dead. Long live Wireshark -- especially with 1.0 expected very soon.

Certification & Accreditation Re-vitalization

Thanks to the newest SANS NewsBites (link will work shortly), I learned of the Certification & Accreditation Re-vitalization Initiative launched by the Chief Information Officer from the office of the Director of National Intelligence. According to this letter from retired Maj Gen Dale Meyerrose, the C&A process is too costly and slow, due to "widely divergent standards and controls, the lack of a robust set of automated tools and reliance upon manual review." He wants to "move from a posture of risk aversion to one of risk management, from a concept of information security at all costs to one of getting the right information to the right people at the right time with some reasonable assurance of timeliness, accuracy, authenticity, security, and a host of other attributes."

That all sounds well and good, but it misses the key problem with C&A -- it doesn't prevent intrusions. It may be seen as a necessary condition for "securing" a system (which is not really possible anyway), but it is in no way sufficient. The forum set up to foster discussion of this initiative contains an insightful thought: Why do we have C&A at all? It's unfortunate that Gen Meyerrose didn't acknowledge that C&A doesn't provide much in the way of "security" at all, but that would mean admitting that .gov and .mil have spent billions to no end. Woops.

Thursday, 08 June 2006

Dan Geer on Converging Physical and Digital Security

Dan Geer published an interesting article in the May/June 2006 issue of IEEE Security & Privacy. He questions the utility of converging physical and digital security "within a common reporting structure." In brief:

This observer says convergence is a mirage. The reason is time. Everything about digital security has time constants that are three orders of magnitude different from the time constants of physical security: break into my computer in 500 milliseconds but into my house in 5 to 10 minutes...

That is true, but the value of compromising a system doesn't necessarily come from just getting a root shell. This is especially true when organized crime, corporate espionage, and foreign intelligence activities are involved. Achieving the goals of each of those groups usually takes more than a few minutes, with the first taking the least time and the last the most. Nevertheless, Dan is probably still right. What he says later is even more compelling:

Human-scale time and rate constants underlie the law enforcement model of security. The crime happens and the wheels of detection, analysis, pursuit, apprehension, jurisprudence, and, perhaps, penal servitude... law enforcement generally has all the time in the world, and its opponent, the criminal, thus must commit the perfect crime to cleanly profit from that crime.

In the digital world, crime must be prevented; once committed, it's likely never ameliorable -- data is never unexposed, for example. It's not the criminal who must commit the perfect crime but rather the defender who must commit the perfect defense.

Time is the reason.

Consequently, the physical world strategies of law enforcement are of limited value in the digital sphere. Law enforcement officials (or the military) are not our natural allies or even mentors.


At first I accepted this argument. Then I thought more carefully about it. Time has nothing to do with this argument. Preventing crime is the key. The analog world example makes it sound acceptable that a crime has occurred. The digital world example makes it sound unacceptable that a crime has occurred -- "data is never unexposed, for example." Well, death is never reversed if a murder is committed. For horrible crimes like murder, the analog world is no different from the digital one: "crime must be prevented; once committed, it's likely never ameliorable."

Geer doesn't see this, but he reaches a conclusion for the digital world that is already happening in the analog:

[The] only answer is preemption. Preemption requires intelligence. Intelligence requires surveillance. If, as digital security people, we have any natural allies or even mentors, they're to be found in the intelligence model of security, not the law enforcement model where this talk of "convergence" has itself converged.

And there we are -- London's Cameras:

British authorities have sought to reassure the public that no effort will be spared to prevent further atrocities. For that promise to become a reality, however, London needs to move more from after-the-event analysis to before-the-event anticipation.

Intelligence is one way to prevent risks from occurring, to the extent that intelligence can identify threats and direct counter-threat activities. Removing vulnerabilities is another way to prevent risks from occurring, but that is far more difficult in most circumstances.

Tracking Exploits

I received a link to this press release today. Unlike many press releases, this one contained interesting news. It reported that a new security company called Exploit Prevention Labs (XPL) just released their first Exploit Prevalence Survey™, which ranks five client-side exploits used to compromise Web surfers. This seems similar to US-CERT Current Activity, although that report jumbles together many different news items and doesn't name specific exploits. According to the press release:

The results of the monthly Exploit Prevalence Survey are derived from automated reports by users of Exploit Prevention Labs’ SocketShield anti-exploit software (free trial download at http://www.explabs.com), who have agreed to have their SocketShield installations report all suspected exploit attempts back to the researchers at Exploit Prevention Labs.

This reminds me of Microsoft's Strider HoneyMonkey project, which uses bots to crawl the Web looking for malicious sites. XPL instead relies on real users visiting the same sites.

In any case, I look forward to the next report from XPL and I hope they apply some sort of rigor to their analysis. I wonder if the sites they visit ever end up in one of the popular blacklists? Also, where do you download exploits as they are released, now that FrSIRT VNS costs money?

Answering Penetration Testing Questions


Some of you have written regarding my post on penetration testing. One of you sent the following questions, which I thought I should answer here. Please note that penetration testing is not currently a TaoSecurity service offering, so I'm not trying to be controversial in order to attract business.

  • What do you feel is the most efficient way to determine the scope of a pen test that is appropriate for a given enterprise? Prior to hiring any pen testers, an enterprise should conduct an asset assessment to identify, classify, and prioritize their information resources. The NSA-IAM includes this process. I would then task the pen testers with gaining access to the most sensitive information, as determined by the asset assessment. Per my previous goal (Time for a pen testing team of [low/high] skill with [internal/external] access to obtain unauthorized [unstealthy/stealthy] access to a specified asset using [public/custom] tools and [complete/zero] target knowledge.) one must decide the other variables before hiring a pen testing team.

  • What do you feel is the most efficient way to determine which pen tester(s) to use? First, you must trust the team. You must have confidence (and legal assurances) they will follow the rules you set for them, properly handle sensitive information they collect, and not use information they collect for non-professional purposes. Second, you must select a team that can meet the objectives you set. They should have the knowledge and tools necessary to mirror the threat you expect to face. I will write more on this later. Third, I would rely on referrals and check all references a team provides.

  • Do you feel there is any significant value in having multiple third parties perform a pen test? This issue reminds me of the rules requiring changing of financial auditors on a periodic basis. I believe it is a good idea to conduct annual pen tests, with one team in year one and a second team in year two. At the very least you will have two experiences to draw upon when deciding who should return for year three.

  • Have you had any significant positive/negative experiences with specific pen testers? I once monitored a client who hired a "pen tester" to assess the client's network. One weekend while monitoring this client, I saw someone using a cable modem run Nmap against my client. The next Monday my client wanted to know why I hadn't reported seeing the "pen test". I told my client I didn't consider a Nmap scan to be a "pen test". I soon learned the client had paid something like $5000 for that scan. Buyer beware!

  • Do you have any additional recommendations as to how to choose a pen tester? Just today I came across what looks like the industry's "first objective technical grading system for hackers and penetration testers" -- at least according to SensePost. This is really exciting, I think. They describe their Combat Grading system this way: Participants are tasked to capture the flag in a series of exercises carefully designed to test the depth and the breadth of their skill in various diverse aspects of computer hacking. Around 15 exercises are completed over the course of two days, after which each participant is awarded a grade reflecting their scores and relative skill levels in each of the areas tested. Each exercise is completely technical in nature. This sounds very promising.

  • Do you have any literature that you can recommend in regard to pen
    testing?
    I have a few books nearby, namely Penetration Testing and Network Defense (not read yet) and Hack I.T. (liked it, but 4 years old). The main Hacking Exposed series discusses vulnerability assessment, which gets you halfway through a pen test.


If I had the time and money I would consider attending SensePost training, which looks very well organized and stratified. The classes are offered at Black Hat Training, which as usual seems very expensive. Good, but expensive.

Tuesday, 06 June 2006

Notes from Techno Security 2006

Today I spoke at three Techno Security 2006 events. I started the day discussing basic and advanced topics in enterprise network instrumentation. I ended the day on a panel discussion with Russ Rogers, Marcus Ranum, and Johnny Long, moderated by Ron Gula. My wife and daughter and I also shared lunch with Kevin Mandia and Julie Darmstadt, both of whom I worked with at Foundstone.

This was my second Techno Security conference. I want to record a few thoughts from this conference, especially after hearing Marcus speak yesterday and after joining today's panel discussion.

Yesterday Marcus noted that the security industry is just like the diet industry. People who want to lose weight know they should eat less, eat good food, and exercise regularly. Instead, they constantly seek the latest dieting fad, pill, plan, or program -- and wonder why they don't get the results they want!

Marcus spent some time discussing money spent on security. He says we are "spending rocket science dollars but getting faith healer results." He quoted a March 2005 document by Peter Kuper (.pdf) analyzing the security vendor scene. Kuper claims that the 700 companies estimated to exist in 2005 will compete for $16 billion in revenues in 2008. That's an average of $22,857,143 per company -- not enough to sustain most players. When the three "big boys" -- Symantec, Cisco, and McAfee -- are removed, that leaves only $11.5 billion for the remaining 697 companies, or only $16,499,283 per company; that's even worse. Kuper and Marcus believe all security companies are going to end up being owned by Symantec, Cisco, McAfee, or Microsoft, or will go out of business.
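The per-company averages are simple division; the $4.5 billion share for the "big boys" is implied by the $11.5 billion remainder, and bc truncates where the article rounds up:

$ echo "16000000000 / 700" | bc
22857142
$ echo "(16000000000 - 4500000000) / 697" | bc
16499282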

Finally, I've been following the SecurityMetrics.org mailing list thread caused by Donn Parker's article and my blog posts. I've discussed the risk equation both in this blog and in my books, so you may wonder why I even mention it if I feel that measuring risk is basically worthless. The answer is simple. The risk equation is like the OSI model. In practical applications, both are worthless. No one runs OSI protocols, but everyone talks about "layer 3," "layer 4," and so on. So, the terms are helpful, but the implementation fails.

(By implementation, I mean no one runs OSI protocols like CLNP. IS-IS might be an exception, although exceptionally rare.) [Note to self: prepare for deluge of posts saying "We run IS-IS!", even though I've never seen it.]

Sunday, 04 June 2006

Follow-Up to Donn Parker Story

My earlier post is being debated on the private Security Metrics mailing list. I posted the following tonight:


Chris Walsh wrote:

> Alrighty.
>
> It's time for a Marines vs. Air Force slapdown!

I should have anticipated that someone on this list would read my blog!

I do not agree with all of Donn's points, and I state in my post some
of his ideas are weak. I would prefer Donn defend himself in person.

However, I am going to stand by this statement:

"As security professionals I agree we are trying to reduce risk, but
trying to measure it is a waste of time."

I agree with Donn that a risk measurement approach has not made us
more secure. That does not mean nothing can be measured. It also
does not mean that measurements are worthless.

Removing the double negatives, I am saying that some things can be
measured, and measurements can be worthwhile.

Rather than spending resources measuring risk, I would prefer to see
measurements like the following:

1. Time for a pen testing team of [low/high] skill with
[external/internal] access to obtain unauthorized access to a
specified asset using [public/custom] tools and [zero/complete] target
knowledge.

Note this measurement contains variables affecting the time to
successfully compromise the asset.

2. Time for a target's intrusion detection team to identify said
intruder (pen tester), and escalate incident details to the incident
response team.

3. Time for a target's incident response team to contain and remove
said intruder, and reconstitute the asset.

These are the operational sorts of problems that matter in the real
world. These are only three small ideas -- not a comprehensive
approach to the problem set.

Sincerely,

Richard

PS: Go Air Force. :)

Nessus 3.0.3 on FreeBSD

Several times last year I talked about using Nessus on FreeBSD. Last night I finally got a chance to install and try Nessus 3.0.3 on FreeBSD. Here's how I did it.

First I downloaded Nessus 3.0.3 as a package for FreeBSD 6.x (called Nessus-3.0.3-fbsd6.tbz). I added the package:

orr:/root# pkg_add -v Nessus-3.0.3-fbsd6.tbz
Requested space: 16570324 bytes, free space: 4394956800 bytes in /var/tmp/instmp.YdVsPF
Running pre-install for Nessus-3.0.3..
extract: Package name is Nessus-3.0.3
extract: CWD to /usr/local
extract: /usr/local/nessus/lib/nessus/plugins/synscan.nes
extract: /usr/local/nessus/lib/nessus/plugins/12planet_chat_server_path_disclosure.nasl
...edited...
extract: /usr/local/nessus/bin/nasl
extract: /usr/local/nessus/bin/nessus
extract: /usr/local/nessus/bin/nessus-fetch
extract: /usr/local/nessus/bin/nessus-bug-report-generator
extract: /usr/local/nessus/bin/nessus-mkcert-client
extract: /usr/local/nessus/bin/nessus-mkrand
extract: /usr/local/nessus/sbin/nessus-add-first-user
extract: /usr/local/nessus/sbin/nessus-check-signature
extract: /usr/local/nessus/sbin/nessus-adduser
extract: /usr/local/nessus/sbin/nessus-chpasswd
extract: /usr/local/nessus/sbin/nessus-rmuser
extract: /usr/local/nessus/sbin/nessus-mkcert
extract: /usr/local/nessus/sbin/nessus-update-plugins
extract: /usr/local/nessus/sbin/nessusd
extract: /usr/local/nessus/var/nessus/nessus-services
extract: /usr/local/nessus/var/nessus/nessus_org.pem
extract: /usr/local/etc/rc.d/nessusd.sh
extract: CWD to .
Running mtree for Nessus-3.0.3..
mtree -U -f +MTREE_DIRS -d -e -p /usr/local >/dev/null
Running post-install for Nessus-3.0.3..
Running post-install for Nessus-3.0.3..
nessusd (Nessus) 3.0.3. for FreeBSD
(C) 1998 - 2006 Tenable Network Security, Inc.

Processing the Nessus plugins...
[##################################################]

All plugins loaded

- Please run /usr/local/nessus/sbin/nessus-add-first-user to add an admin user
- Register your Nessus scanner at http://www.nessus.org/register/ to obtain
all the newest plugins
- You can start nessusd by typing /usr/local/etc/rc.d/nessusd.sh start
Attempting to record package into /var/db/pkg/Nessus-3.0.3..
Package Nessus-3.0.3 registered in /var/db/pkg/Nessus-3.0.3

Next I added a user:

orr:/root# /usr/local/nessus/sbin/nessus-add-first-user
Using /var/tmp as a temporary file holder

Add a new nessusd user
----------------------


Login : bejnessus
Authentication (pass/cert) [pass] :
Login password :
Login password (again) :

User rules
----------
nessusd has a rules system which allows you to restrict the hosts
that bejnessus has the right to test. For instance, you may want
him to be able to scan his own host only.

Please see the nessus-adduser(8) man page for the rules syntax

Enter the rules for this user, and hit ctrl-D once you are done :
(the user can have an empty rules set)

Login : bejnessus
Password : ***********
DN :
Rules :

Is that ok ? (y/n) [y] y
user added.
Thank you. You can now start Nessus by typing :
/usr/local/nessus/sbin/nessusd -D

Next I registered using the code emailed to me:

orr:/root# /usr/local/nessus/bin/nessus-fetch --register codegoeshere
Your activation code has been registered properly - thank you.
Now fetching the newest plugin set from plugins.nessus.org...
Your Nessus installation is now up-to-date.
If auto_update is set to 'yes' in nessusd.conf, Nessus will
update the plugins by itself.

Finally I started the Nessus daemon.

orr:/root# /usr/local/etc/rc.d/nessusd.sh start
Nessus
orr:/root# sockstat -4
USER COMMAND PID FD PROTO LOCAL ADDRESS FOREIGN ADDRESS
root nessusd 13116 4 tcp4 *:1241 *:*
root sendmail 434 4 tcp4 127.0.0.1:25 *:*
root sshd 428 4 tcp4 *:22 *:*
root syslogd 312 6 udp4 *:514 *:*

When I finished I removed the executable bit from the nessusd.sh script so it would not execute on boot. This is because I don't need it on boot, especially since it takes over a minute to load all the plugins.

orr:/root# chmod -x /usr/local/etc/rc.d/nessusd.sh

To start nessusd when the execute bit is not set, I do the following:

orr:/root# sh /usr/local/etc/rc.d/nessusd.sh start
Nessus

Note the default /usr/local/nessus/etc/nessus/nessusd.conf contains the following:

# Automatic plugins updates - if enabled and Nessus is registered, then
# fetch the newest plugins from plugins.nessus.org automatically
auto_update = yes
# Number of hours to wait between two updates
auto_update_delay = 24

I changed this to say

auto_update = no

because I prefer to update the plugins manually.

orr:/root# /usr/local/nessus/sbin/nessus-update-plugins

Nessus now provides a separate GUI client called NessusClient. I tried to install it this way:

orr:/usr/local/src# tar -xzvf NessusClient-1.0.0.RC5.tar.gz
x NessusClient-1.0.0.RC5/
x NessusClient-1.0.0.RC5/.root-dir
...edited...
x NessusClient-1.0.0.RC5/TODO
x NessusClient-1.0.0.RC5/VERSION
orr:/usr/local/src# cd NessusClient-1.0.0.RC5
orr:/usr/local/src/NessusClient-1.0.0.RC5# ./configure
creating cache ./config.cache
checking host system type... i386-unknown-freebsd6.0
...edited...
creating doc/NessusClient.1
creating include/config.h
orr:/root/NessusClient-1.0.0.RC5# make
...edited...
prefs_scope_tree.o(.text+0x434): In function `scopetree_rename':
prefs_dialog/prefs_scope_tree.c:179: undefined reference to `prefs_context_update'
prefs_scope_tree.o(.text+0x9c6): In function `scopetree_delete':
prefs_dialog/prefs_scope_tree.c:376: undefined reference to `prefs_context_update'
prefs_scope_tree.o(.text+0xab6):prefs_dialog/prefs_scope_tree.c:415: undefined reference to
`prefs_context_update'
prefs_scope_tree.o(.text+0xc65):prefs_dialog/prefs_scope_tree.c:500: more undefined references to
`prefs_context_update' follow
*** Error code 1

Stop in /usr/local/src/NessusClient-1.0.0.RC5/nessus.
*** Error code 1

Stop in /usr/local/src/NessusClient-1.0.0.RC5.

Rats. Luckily I found this post which suggested a fix using Gmake. After starting with a fresh extraction of NessusClient-1.0.0.RC5, I ran ./configure, gmake, and gmake install. Everything worked.
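For reference, the sequence that worked, run from the fresh extraction:

orr:/usr/local/src/NessusClient-1.0.0.RC5# ./configure
orr:/usr/local/src/NessusClient-1.0.0.RC5# gmake
orr:/usr/local/src/NessusClient-1.0.0.RC5# gmake install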

/usr/bin/install -c -m 755 /root/NessusClient-1.0.0.RC5/bin/NessusClient /usr/local/bin
test -d /usr/local/bin || /usr/bin/install -c -d -m 755 /usr/local/bin
/usr/bin/install -c -m 755 nessusclient-mkcert /usr/local/bin
/usr/bin/install -c -m 755 ssl/nessus-mkrand /usr/local/bin
installing man pages ...
/usr/bin/install -c -c -m 0444 doc/NessusClient.1 /usr/local/man/man1/NessusClient.1
/usr/bin/install -c -c -m 0444 doc/nessusclient-mkcert.1
/usr/local/man/man1/nessusclient-mkcert.1
/usr/bin/install -c -c -m 0444 doc/nessus-mkrand.1 /usr/local/man/man1/nessus-mkrand.1

I could now start the client:

orr:/home/richard$ NessusClient



I selected File -> Scan Assistant to create a "demo" Task, with "demo" scope, and "localhost" as target.

I then was prompted for my username and password to connect to the nessusd server.



Once connected, Nessus began scanning localhost.



When done I had a report.



These are the basics of running Nessus 3.0.3 with NessusClient on FreeBSD. I used the defaults for everything to get my results. An alternative would be to use Nessus 2.2.8, which is in the ports tree.
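If you prefer the ports tree, the 2.2.x version installs the usual way, assuming an up-to-date ports tree:

orr:/root# cd /usr/ports/security/nessus
orr:/usr/ports/security/nessus# make install clean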

For more information, consider attending Nessus Training by Tenable Network Security.

Friday, 02 June 2006

Excellent Articles in Newest NWC

I wanted to briefly mention three great articles in the newest Network Computing magazine:

All three are free and fairly informative. I hear a lot of buzz about leasing hardware and software. Are you turning to leasing instead of buying? If so, what are you leasing, and why?

Risk-Based Security is the Emperor's New Clothes

Donn Parker published an excellent article in the latest issue of The ISSA Journal titled Making the Case for Replacing Risk-Based Security. This article carried a curious disclaimer I had not seen in other articles:

This article contains the opinions of the author, which are not necessarily the opinions of the ISSA or the ISSA Journal.

I knew immediately I needed to read this article. It starts with a wonderful observation:

What are we doing wrong? Is the lack of support for adequate security linked to our risk-based approach to security? Why can't we make a successful case to management to increase the support for information security to meet the needs? Part of the answer is that management deals with risk every day, and it is too easy for them to accept security risk rather than reducing it by increasing security that is inconvenient and interferes with business.

I would argue that management decides to "accept security risk" because they cannot envisage the consequences of security incidents. I've written about this before.

However, Donn Parker's core argument is the following:

CISOs have tried to justify spending resources on security by claiming that they can manage and reduce security risks by assessing, reporting, and controlling them. They try to measure the benefits of information security "scientifically" based on risk reduction. This doesn't work... I propose that intangible risk management and risk-based security must be replaced with practical, doable security management with the new objectives of due diligence, compliance consistency, and enablement.

I agree. Here is a perfect example of the problem:

One CISO told me [Parker] that he performs risk assessment backwards. He says that he already knows what he needs to do for the next five years to develop adequate security. So he creates some risk numbers that support his contention. Then he works backwards to create types of loss incidents, frequencies, and impacts that produce those numbers. He then refines the input and output to make it all seem plausible. I suggested that his efforts are unethical since his input data and calculations are all fake. He was offended and said that I didn't understand. The numbers are understood by top management to be a convenient way to express the CISO's expert opinion of security needs.

This is my question: what makes these shenanigans possible? Remember the risk equation (Risk = Threat X Vulnerability X Asset Value) and consider these assertions:

  • Hardly anyone can assess threats.

  • Few can identify vulnerabilities comprehensively.

  • Some can measure asset value.


As a result, there is an incredible amount of "play" in the variables of the risk equation. Therefore, you can make the results anything you want -- just as the example CISO shows.
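To see how much play that allows, pick ranges for each variable and multiply the extremes; the figures below are invented purely for illustration:

$ echo "scale=2; 0.1 * 0.1 * 10000" | bc      # low threat, low vulnerability, $10K asset
100.00
$ echo "scale=2; 0.9 * 0.9 * 1000000" | bc    # high threat, high vulnerability, $1M asset
810000.00

Same "methodology," yet the results differ by a factor of more than 8,000.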

It is tough enough to assign values to threats and vulnerabilities, even if time froze. In the real world, threats are constantly evolving and growing in number, while new vulnerabilities appear in both old and new software and assets on a daily basis. A network that looked like it held a low risk of compromise on Monday could be completely prone to disaster on Tuesday when a major new vulnerability is found in a core application.

Parker's alternative includes the following:

Due diligence: We can show management the results of our threat and vulnerability analysis (using examples and scenarios) by giving examples of the existence of the vulnerabilities and solutions that others have employed (not including estimated intangible probabilities and impacts). Then we can show them easily researched benchmark comparisons of the state of their security relative to other well-run enterprises and especially their competitors under similar circumstances. We then show them what would have to be done to adopt good practices and safeguards to assure that they are within the range of the other enterprises.

Bottom line: be as good as the next guy.

Compliance: We are finding that the growing body of security compliance legislation such as SOX, GLBA, and HIPAA and the associated personal and corporate liability of managers is rapidly becoming a strong and dominant security motivation...(The current legislation is poorly written and has a sledgehammer effect as written by unknowing legislative assistants but will probably improve with experience, as has computer crime legislation.)

Bottom line: compliance has turned out to be the major incentive I've seen for security initiatives. I am getting incident response consulting work because clients do not want to go to jail for failing to disclose breaches.

Enablement: It is easily shown in products and services planning that security is required for obvious and competitive purposes and from case studies, such as the Microsoft experience of being forced by market and government pressures to build security into their products after the fact.

Bottom line: this is the weakest argument of the three, and maybe why it is last. Microsoft may be feeling the heat, but it took five years and the situation is still rough. Oracle is now under fire, but how long will it take for them to take security seriously? And so on.

I think Donn Parker is making the right point here. He is saying the Emperor has no clothes, and the legions of security firms providing "risk assessments" are not happy. Of course they're not -- they can deliver a product that has no bearing on reality and receive money for it! That's consequence-free consulting. Try doing that in an incident response scenario where failure to do your job means the intruder remains embedded in a client's infrastructure.

As security professionals I agree we are trying to reduce risk, but trying to measure it is a waste of time. I am sad to think organizations spend hundreds of thousands of dollars on pricey risk assessments and hardly any money on real inspection of network traffic for signs of intrusions. The sorts of measurements I recommend are performance-based, as I learned in the military. We determine how good we are by drilling and exercising capabilities, preferably against a simulated enemy. We don't write formulas guestimating our defense posture.

This is not the last I have to say on this issue, but I hope to be boarding a flight soon. I commend The ISSA Journal for publishing an article that undermines a pillar of their approach to security. I bet (ISC)2 will also love Donn's approach. :)