Thursday, 30 June 2005

Feds Adopt IPv6 by June 2008?

I read OMB: IPv6 by June 2008 today, which says:

"The federal government will transition to IP Version 6 (IPv6) by June 2008, said Karen Evans, the Office of Management and Budget’s administrator of e-government and information technology.

'Once the network backbones are ready, the applications and other elements will follow,' she said today while testifying before the House Government Reform Committee."

Riiight. Be prepared to see this slip to, oh, maybe never. The Federal government is also supposed to be securing its systems, and its report card is still lousy. Agencies have also known since Homeland Security Presidential Directive 12 (Aug 04) that they needed to implement smart cards, but problems are anticipated in meeting the deadlines.

Marcus Ranum pointed me towards a talk by Bill Cheswick (.ppt) on IPv6 transition realities. It mentions several problems that might prevent IPv6 adoption, like unreasonable demands on routers, hosts which can pick a new IPv6 address for every connection, and other issues.

Bleeding Snort Spyware Listening Post Initial Results

I mentioned a few new projects at Bleeding Snort two weeks ago. Some initial results of the Spyware Listening Post are posted. Check it out -- it's about one page of information.

Wednesday, 29 June 2005

Nvu 1.0 Released

Anyone who's visited TaoSecurity.com or Bejtlich.net has probably stared in awe at the wonder of the Web composition skills inherent in each site. No? Well, I think they are an improvement over the 1996-era, made-with-vi HTML I created by hand. I used a program called Nvu (pronounced "N-view") to lay out the tables for each site. Yesterday Nvu 1.0 was released, and today the www/nvu FreeBSD port was updated.

I found Nvu doesn't produce perfect HTML, so I might use a program like Tidy to clean up the pages. The latest version of Tidy is also in the ports tree as www/tidy-devel. One day I will hire a Web developer to create modern Web pages for each site.
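
For example, a cleanup pass might look like the sketch below. I'm assuming the www/tidy-devel port installs the binary as tidy; the -i flag indents the output and -m modifies the file in place.

tidy -i -m index.html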

"IDS Is Dead" Prophet Misunderstands "Sniffing"

Many of you will remember quotes from two years ago by Gartner analyst John Pescatore, such as this one in Infoworld:

"We think IDS is dead. It’s failed to provide enterprise value," Pescatore says.

Now this security expert has written more words of wisdom in response to an apparent increase in reconnaissance against port 445 TCP. In More Port 445 Activity Could Mean Security Trouble, Pescatore writes:

"An apparent increase in scanning activity may signal an impending malicious-code attack exploiting a critical Windows vulnerability."

Fair enough -- but check out this gem from the next page:

"The apparent increase in 'sniffing' on Port 445 is a serious concern for enterprise security managers, because it may indicate an impending mass malicious-code attack."

Since when is remote reconnaissance considered "sniffing"? Sniffing is a term reserved for inspecting traffic either on the wire or passed via RF. The word implies having a degree of access to an enterprise completely unrelated to conducting port scans.

Of course, drones at Computerworld repeated the misuse of terms by saying:

"An increase in sniffing activity on a communications port associated with a software vulnerability disclosed by Microsoft Corp. this month may be the signal of an impending attack designed to exploit the flaw, according to an alert from Gartner Inc."

Regular blog readers know I am sensitive to the misuse of security terms, since it degrades communication and adds to the general level of confusion. I do not know what motivated an outfit like Gartner to apply "sniffing" to the scanning activity in question.

Initial Thoughts on Visible Ops

I just finished listening to a Webcast offered by Tripwire titled Security Compliance: Revving Up for Regs with a Unified Strategy. To be honest, I don't think the presenters used their time appropriately, and I think the material was not conveyed very well. I listened, however, because I have learned of a book by Tripwire co-founder Gene Kim called Visible Ops. Visible Ops is a four-step methodology to implement the IT Infrastructure Library (ITIL). Tripwire describes ITIL as a framework "for assuring effective, verifiable, repeatable IT change and system configuration management processes."

The Visible Ops four-step process is:

  1. Electrify the fence and modify first response.

  2. Catch & release and find fragile artifacts.

  3. Establish repeatable builds.

  4. Establish a repeatable build library.


This Computerworld article from last year provides a good explanation and introduction to these ideas.

The Visible Ops authors donated the results of their research to the Information Technology Process Institute (ITPI).

More information on Visible Ops is available through Tripwire. Thank you to Ron Gula for informing me of Visible Ops. Ron has a white paper explaining how his company's products help customers implement this framework and thereby improve their security and performance.

During the Webcast I was reminded of the new ISO/IEC 17799:2005 standard just released. Related information is posted at ISO 17799 News. I also heard that NIST 800-53 includes a mapping of its guidelines against the new ISO 17799, DoD Instruction 8500.2 (.pdf), DCID 6/3, GAO Federal Information Systems Controls Audit Manual (FISCAM, .pdf), and NIST 800-26.

To hear the NIST perspective on these standards, straight from Dr. Ron Ross himself, check out his recent presentation (.ppt) to my local ISSA chapter.

IPFW Rules on VPN CFG

I already published the IPFW rules I'm using to defend my sensors, so I figured I would add the IPFW rules I'm using on my VPN concentrator / firewall / gateway (CFG). I relied heavily on the FreeBSD Handbook examples, as the placement of certain sections is crucial when the CFG is also a NAT box.

In these rules, interface xl0 is the interface facing the "Internet" while fxp0 faces a private internal network. Host bourque is a remote sensor with IP 192.168.2.10; the VPN CFG itself uses 192.168.2.7.

Since this entire setup exists in a lab, the 192.168.2.0/24 addresses are considered "public" addresses.

#!/bin/sh

pub="xl0"
pri="fxp0"
cmd="ipfw -q add "
ks="keep-state"
skip="skipto 500"
vpncfg_ip="192.168.2.7"
bourque_ip="192.168.2.10"
nameserver="192.168.2.1"
ok_tcp_out="22,80"
ok_udp_out="53,123"

ipfw -q -f flush

$cmd 002 allow all from any to any via $pri
$cmd 003 allow all from any to any via lo0

$cmd 100 divert natd ip from any to any in via $pub
$cmd 101 check-state

# Authorized outbound traffic
$cmd 120 $skip udp from any to any $ok_udp_out out via $pub $ks
$cmd 121 $skip tcp from any to any $ok_tcp_out out via $pub setup $ks
$cmd 122 $skip icmp from any to any out via $pub $ks
# ISAKMP
$cmd 123 allow udp from me to $bourque_ip 500 out via $pub $ks
# IPSec ESP
$cmd 124 allow esp from me to $bourque_ip out via $pub $ks
# ICMP
$cmd 125 allow icmp from me to any out via $pub $ks

# Deny all inbound traffic from non-routable reserved address spaces
#$cmd 300 deny all from 192.168.0.0/16 to any in via $pub #RFC 1918 private IP
#$cmd 301 deny all from 172.16.0.0/12 to any in via $pub #RFC 1918 private IP
#$cmd 302 deny all from 10.0.0.0/8 to any in via $pub #RFC 1918 private IP
#$cmd 303 deny all from 127.0.0.0/8 to any in via $pub #loopback
#$cmd 304 deny all from 0.0.0.0/8 to any in via $pub #'this' network
#$cmd 305 deny all from 169.254.0.0/16 to any in via $pub #DHCP auto-config
#$cmd 306 deny all from 192.0.2.0/24 to any in via $pub #reserved for docs
#$cmd 308 deny all from 224.0.0.0/3 to any in via $pub #Class D & E multicast

# Authorized inbound traffic
$cmd 400 allow tcp from 192.168.2.5 to me 22 in via $pub setup $ks
# ISAKMP
$cmd 401 allow udp from $bourque_ip any to me 500 in via $pub $ks
# IPSec ESP
$cmd 402 allow esp from $bourque_ip any to me in via $pub $ks
# ICMP
$cmd 403 allow icmp from $bourque_ip any to me in via $pub $ks

$cmd 450 deny log ip from any to any

# This is skipto location for outbound stateful rules
$cmd 500 divert natd ip from any to any out via $pub
$cmd 510 allow ip from any to any

For now I've disabled the RFC 1918 address blocking section, as it's not an issue where this VPN CFG is located.
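
To put the script into service, the CFG also needs IPFW, natd, and IP forwarding enabled in /etc/rc.conf. A minimal sketch follows; the firewall_script path is just an example of where the rules above might be saved.

gateway_enable="YES"
firewall_enable="YES"
firewall_script="/etc/ipfw.rules"
natd_enable="YES"
natd_interface="xl0"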

Tuesday, 28 June 2005

Forwarding Nameserver with BIND 9

I know all of the djbdns fans will attack me, but I set up a forwarding nameserver with the BIND 9.3.1 version packaged with FreeBSD 5.4. I did give djbdns the old college try using the ports tree, but I had trouble getting daemontools and svscan working in the time I allotted for the project. I was able to get BIND working strictly as a forwarding server using the following steps.

First I created a rndc.key file using rndc-confgen.

janney:/etc/namedb# rndc-confgen -a
wrote key file "/etc/namedb/rndc.key"

I created a /etc/namedb/rndc.conf file and copied the contents of /etc/namedb/rndc.key into rndc.conf, along with the entries shown below:

options {
default-server localhost;
default-key "rndc-key";
};

server localhost {
key "rndc-key";
};

key "rndc-key" {
algorithm hmac-md5;
secret "OBSCURED";
};

I then modified /etc/namedb/named.conf in the following ways.

listen-on { 127.0.0.1; 192.168.3.7;};

forward only;

forwarders {
192.168.2.1;
};

The first line tells BIND where to listen. The second tells BIND to only forward DNS requests. The third line tells BIND where to forward requests.
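
Put together, the forwarding-related portion of named.conf looks roughly like the sketch below. The directory line reflects the stock FreeBSD layout; adjust to taste.

options {
        directory       "/etc/namedb";
        listen-on       { 127.0.0.1; 192.168.3.7; };
        forward only;
        forwarders {
                192.168.2.1;
        };
};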

So what's the purpose of this setup? I am running BIND on a central system to which various remote sensors connect. All of them will be configured to ask DNS requests of this central system through an IPSec tunnel. None will make DNS requests on the client networks. This reduces the traffic caused by the sensor on the client network.

I had trouble setting up BIND using the configuration I outlined before. Specifically, BIND did not recognize the controls directive:

janney:/etc/namedb# named -g
28-Jun-2005 17:07:57.969 starting BIND 9.3.1 -g
28-Jun-2005 17:07:57.970 found 2 CPUs, using 2 worker threads
28-Jun-2005 17:07:57.986 loading configuration from '/etc/namedb/named.conf'
28-Jun-2005 17:07:57.987 /etc/namedb/named.conf:27: unknown option 'controls'
28-Jun-2005 17:07:57.991 loading configuration: failure
28-Jun-2005 17:07:57.991 exiting (due to fatal error)

I have no idea why this happened. Once I removed the controls section, everything worked. This is what I used for controls:

controls {
inet 127.0.0.1 allow { localhost; } keys { rndc-key; };
};

Comments on why this failed are appreciated.

Portsnap and Squid

At BSDCan this year I listened to Kris Kennaway describe the FreeBSD package cluster (.pdf). He said he uses a caching Web proxy to optimize retrieval of source code when building packages. This makes an incredible amount of sense. Why download the same archive repeatedly from a remote site when you can download it once, and transparently let other clients retrieve the archive from the Web cache?

I decided I needed to emulate this sort of environment for several of my FreeBSD systems. I use Colin Percival's excellent portsnap to keep my FreeBSD ports tree up-to-date. If one of my systems retrieves the necessary updates through a Web cache, the other systems can get the same files from the Web cache. That saves Colin bandwidth and me time.

I set up Squid using the www/squid port. The only changes I made to the /usr/local/etc/squid/squid.conf file are listed below.

http_port 192.168.3.7:3128
icp_port 0
acl our_networks src 10.1.0.0/16 192.168.3.0/24
http_access allow our_networks

The first line tells Squid to listen on the internal private interface of a dual-homed VPN concentrator / firewall / gateway (CFG). I didn't want the external interface of the CFG offering a Web proxy to the world. I set the standard Squid port, 3128 TCP. The second line shuts down the ICP service on port 3130 UDP, since I don't use it. Line three sets up an ACL and defines the networks I allow to talk to the Squid proxy. I tell Squid to allow the IP addresses of remote systems connecting via IPSec tunnel (addressed in 10.1.0.0/16 space), and systems on the internal network provided by the VPN CFG (addressed in 192.168.3.0/24 space). The last line enables the ACL.

Before Squid can operate, I tell it to build its cache directories by running 'squid -z' as root.

Finally I edit my /etc/rc.conf file with this entry:

squid_enable="YES"

With this line added, I can use the /usr/local/etc/rc.d/squid.sh shell script to 'start' or 'stop' or 'restart' Squid. Prior to this I did not know that I needed to modify /etc/rc.conf before the squid.sh script would work.
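
To start Squid and confirm it is listening only where I expect, something like the following works; sockstat's -l flag limits the output to listening sockets.

/usr/local/etc/rc.d/squid.sh start
sockstat -4 -l | grep 3128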

Once Squid was running and listening on port 3128 TCP, I had a system (192.168.3.12) use the proxy through portsnap. First I set the environment variable which tells fetch to use a proxy.

setenv http_proxy 192.168.3.7:3128

When I ran 'portsnap fetch', I could see the program working through Squid to get the files it needed. Here is an excerpt from the /usr/local/squid/logs/access.log file.

1119988542.131 142 192.168.3.12 TCP_MISS/200 668 GET http://portsnap.daemonology.net/t/
001b134d2c8210e43d2cad5072c8c78a6d21a576c7e14f46e751ba7b5d2474c7
- DIRECT/72.21.59.250 text/plain


This is a TCP_MISS because this request is new to Squid.

Later I ran portsnap on two other systems, 192.168.3.11 and 10.1.2.1, and saw different Squid results.

1119988702.146 95 192.168.3.11 TCP_MEM_HIT/200 677 GET http://portsnap.daemonology.net/t/
001b134d2c8210e43d2cad5072c8c78a6d21a576c7e14f46e751ba7b5d2474c7
- NONE/- text/plain

1119989557.080 99 10.1.2.1 TCP_MEM_HIT/200 678 GET http://portsnap.daemonology.net/t/
001b134d2c8210e43d2cad5072c8c78a6d21a576c7e14f46e751ba7b5d2474c7
- NONE/- text/plain

In both cases, Squid provided the requested files from its cache and didn't need to fetch them again from daemonology.net.

I plan to use this system to let remote sensors perform all of their updates through a central location, my VPN CFG system. That one box will run the Squid proxy and retrieve all necessary files once.
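
To make the proxy setting persistent on each sensor, the same variable can live in a shell startup file instead of being set by hand. A sketch, using the variable fetch honored above:

# csh/tcsh (e.g. in ~/.cshrc)
setenv http_proxy 192.168.3.7:3128

# sh/bash (e.g. in ~/.profile)
http_proxy=192.168.3.7:3128; export http_proxy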

Monday, 27 June 2005

Simple IPFW Rules to Defend Sensors

I'm considering deploying the following rule set on a new batch of network security monitoring sensors running the FreeBSD IPFW firewall. I'm using the IPSec tunnel scenario I outlined earlier to carry packets between the sensor and a VPN concentrator / firewall / gateway (VPN CFG) running FreeBSD.

My goal is to limit who the sensor can talk to, and to limit who the sensor accepts connections from. In this case, I'm telling the sensor to speak only with the VPN CFG and a specified DNS server. I leave the option of adding additional permitted systems, such as a trusted host that is allowed to SSH directly to the sensor for maintenance purposes.

Here is the rule set I plan to run on the sensors. 192.168.2.10 is the sensor management IP. 192.168.2.7 is the VPN CFG management IP. 192.168.2.1 is the nameserver.

#!/bin/sh

int="fxp0"
cmd="ipfw -q add "
mgt_ip="192.168.2.10"
vpncfg_ip="192.168.2.7"
nameserver="192.168.2.1"

ipfw -q -f flush

$cmd 00500 check-state

# Allow connections initiated by remote systems

# SSH from specified hosts
$cmd 01000 allow tcp from $vpncfg_ip any to $mgt_ip 22 in via $int keep-state

# ISAKMP
$cmd 01100 allow udp from $vpncfg_ip any to $mgt_ip 500 in via $int keep-state

# IPSec ESP
$cmd 01200 allow esp from $vpncfg_ip to $mgt_ip in via $int keep-state

# ICMP
$cmd 01300 allow icmp from $vpncfg_ip to $mgt_ip in via $int keep-state

# Allow connections initiated by local system

# SSH to VPNCFG
$cmd 02000 allow tcp from $mgt_ip any to $vpncfg_ip 22 out via $int keep-state

# ISAKMP
$cmd 02100 allow udp from $mgt_ip any to $vpncfg_ip 500 out via $int keep-state

# IPSec ESP
$cmd 02200 allow esp from $mgt_ip any to $vpncfg_ip any out via $int keep-state

# ICMP
$cmd 02300 allow icmp from $mgt_ip any to $vpncfg_ip out via $int keep-state

# DNS resolution
$cmd 02400 allow udp from $mgt_ip any to $nameserver 53 out via $int keep-state

# Default deny all
$cmd 03000 deny log all from any to any
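
Once the script runs, a quick way to confirm the rules loaded as intended is 'ipfw list'; adding -a also shows per-rule packet and byte counters, which helps verify that the expected rules are actually matching traffic.

ipfw list
ipfw -a list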

Does anyone have any comments?

Nessus Registered Feed for Consultants

Yesterday I described my experience registering with Tenable Network Security to access their Registered Feed. I said "security consultants using Nessus must pay an annual $1200 fee to access the Direct Feed. Free use of the Tenable plugins is only allowed on one's own network."

The first part was correct, but the second part was not. It turns out that Tenable approves use of the Registered Feed (with the seven-day plugin lag) if the consultant signs Tenable's commercial agreement. I downloaded, signed, and faxed the document to Tenable. I just received back a copy signed by Tenable. This means I can now use the Registered Feed plugins to scan networks I do not own.

If I want the most current plugins (without the seven day lag) I should still sign up for a Direct Feed and pay $1,200 per year. My original interest in using Nessus involved quick assessments as part of incident response remediation activities. The Registered Feed is sufficient in my mind for that purpose. Should a client contract me to perform a thorough vulnerability assessment, I plan to pay Tenable the $1,200 needed to access their Direct Feed.

Thanks to Ron Gula, who read my earlier blog entry and offered clarification on the licensing issues.

Sunday, 26 June 2005

Trying Nessus Registered Feed

I described installing Nessus earlier, and last year I talked about the new Nessus license system. Since I was installing Nessus on a server strictly for scanning my own lab network, I decided to see what was involved in obtaining the Tenable Network Security Registered Feed.

When I first installed Nessus, I received this warning:

Loading the plugins... 204 (out of 2225)
------------------------------------------------------------------------------
You are running a version of Nessus which is not configured to receive
a full plugin feed. As a result, your security audits might produce incomplete
results.

To obtain a full plugin feed, you need to register your Nessus scanner
at the following URL :

http://www.nessus.org/register/

I manually checked the contents of the /usr/local/lib/nessus/plugins directory just after installing the security/nessus-plugins FreeBSD package to count the number of NASL scripts. There were indeed 2225.

Next I ran /usr/local/sbin/nessus-update-plugins on my Nessus server to see if it would retrieve any additional plugins, without registering. It did.

nessus-update-plugins -v
x ./
x ./12planet_chat_server_xss.nasl
x ./3com_nbx_voip_netset_detection.nasl
x ./3com_switches.nasl
x ./404_path_disclosure.nasl
x ./4553.nasl
...edited...
x ./zyxel_http_pwd.nasl
x ./zyxel_pwd.nasl
ls /usr/local/lib/nessus/plugins | wc -l
2301

I registered for the Registered Feed and made note of this provision of the license:

"This Agreement permits you to use the Plugins to detect vulnerabilities only on your system or network. If you intend to use the Plugins to detect vulnerabilities on the systems or networks belonging to third parties (eg: if you are a consultant or a Managed Security Services Provider) then click here for the consultants and MSSPs license agreement."

A look at the consultant and MSSP license on the referenced page revealed a section important to me:

"Tenable grants to you a...license...(i) to download the Plugins made available to you through the Registered Plugin Feed during the term of this Agreement and (ii) to use the Plugins in conjunction with Registered Scanners obtained directly from www.nessus.org or www.tenablesecurity.com to detect vulnerabilities only on your system or network or on the system or network of a third party for which you perform scanning services, auditing services, incident response servers, vulnerability assessment services or other security consulting services. You may only use the Plugins in conjunction with the number of Registered Scanners for which you have obtained directly from www.nessus.org or www.tenablesecurity.com and paid the applicable annual subscription fee."

This means that security consultants using Nessus must pay an annual $1200 fee to access the Direct Feed. Free use of the Tenable plugins is only allowed on one's own network. The rationale behind this approach was explained in this nessus mailing list thread from January 2005. Anyone with questions about that should read the FAQ.

After I registered I received a code via email. I ran nessus-fetch to activate my account and then ran the update script.

janney:/root# nessus-fetch --register codegoeshere
Your activation code has been registered properly - thank you.
janney:/root# nessus-update-plugins -v
x ./
x ./04webserver.nasl
x ./12planet_chat_server_path_disclosure.nasl
x ./12planet_chat_server_plaintext_password.nasl
...edited...
x ./zyxel_http_pwd.nasl
x ./zyxel_pwd.nasl
janney:/root# ls /usr/local/lib/nessus/plugins | wc -l
8164

That's quite a difference! Should any clients approach me to perform vulnerability assessment services, I will order the Direct Feed if I plan to use Nessus.

Trying Snort VRT Rules and Oinkmaster

Last week I finally registered with Snort.org to gain access to the rules created by the Sourcefire VRT. The process was really simple, especially now that security/oinkmaster is in the FreeBSD ports tree. I describe the experience from the perspective of running Sguil, but the general concepts apply to anyone using Snort.

After registering with Snort.org, logging in, and clicking the "Get Code" button at the bottom of the User Preferences page, I added the code to my oinkmaster.conf file.

url = http://www.snort.org/pub-bin/oinkmaster.cgi/codegoeshere/
snortrules-snapshot-2.3.tar.gz

Then I ran Oinkmaster in the /nsm/rules/testing directory on my Sguild server.

allison:/root# oinkmaster -v -o /nsm/rules/testing
Loading /usr/local/etc/oinkmaster.conf
Adding file to ignore list: local.rules.
Adding file to ignore list: deleted.rules.
Adding file to ignore list: snort.conf.
Found gzip binary in /usr/bin
Found tar binary in /usr/bin
Downloading file from http://www.snort.org/pub-bin/oinkmaster.cgi/codegoeshere/
snortrules-snapshot-2.3.tar.gz...
--18:45:57-- http://www.snort.org/pub-bin/oinkmaster.cgi/codegoeshere/
snortrules-snapshot-2.3.tar.gz
=> `/tmp/oinkmaster.5846XLP3r9/url.s8OALJAggP/snortrules.tar.gz'
Resolving www.snort.org... done.
Connecting to www.snort.org[199.107.65.177]:80... connected.
HTTP request sent, awaiting response... 200 OK
...edited...
18:46:00 (500.29 KB/s) - `/tmp/oinkmaster.5846XLP3r9/url.s8OALJAggP/
snortrules.tar.gz' saved [766903]

Archive successfully downloaded, unpacking... done.
Setting up rules structures... done.
Processing downloaded rules...
disabled 0, enabled 0, modified 0, total=3166
Setting up rules structures... done.
Comparing new files to the old ones... done.
Updating rules... done.

[***] Results from Oinkmaster started 20050626 18:46:25 [***]
...truncated...

I noticed the following added to the rules files, like x11.rules.

-> Added to x11.rules (17):
# Copyright 2001-2005 Sourcefire, Inc. All Rights Reserved
#
# This file may contain proprietary rules that were created, tested and
# certified by Sourcefire, Inc. (the "VRT Certified Rules") as well as
# rules that were created by Sourcefire and other third parties and
# distributed under the GNU General Public License (the "GPL Rules"). The
# VRT Certified Rules contained in this file are the property of
# Sourcefire, Inc. Copyright 2005 Sourcefire, Inc. All Rights Reserved.
# The GPL Rules created by Sourcefire, Inc. are the property of
# Sourcefire, Inc. Copyright 2002-2005 Sourcefire, Inc. All Rights
# Reserved. All other GPL Rules are owned and copyrighted by their
# respective owners (please see www.snort.org/contributors for a list of
# owners and their respective copyrights). In order to determine what
# rules are VRT Certified Rules or GPL Rules, please refer to the VRT
# Certified Rules License Agreement.

The old copyrights are gone.

-> Removed from x11.rules (2):
# (C) Copyright 2001-2004, Martin Roesch, Brian Caswell, et al.
# All rights reserved.

Now that the rules in /nsm/rules/testing are updated, I perform a quick sanity check to see if they work with my snort.conf and version of Snort.

snort -T -c /usr/local/etc/snort.conf
Running in IDS mode

Initializing Network Interface xl0

--== Initializing Snort ==--
Initializing Output Plugins!
Decoding Ethernet on interface xl0
Initializing Preprocessors!
Initializing Plug-ins!
Parsing Rules file /usr/local/etc/snort.conf

+++++++++++++++++++++++++++++++++++++++++++++++++++
Initializing rule chains...
...edited...
2699 Snort rules read...
2699 Option Chains linked into 193 Chain Headers
0 Dynamic rules
+++++++++++++++++++++++++++++++++++++++++++++++++++
...edited...
--== Initialization Complete ==--

,,_ -*> Snort! <*-
o" )~ Version 2.3.3 (Build 14)
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
(C) Copyright 1998-2004 Sourcefire Inc., et al.


Snort sucessfully loaded all rules and checked all rule chains!
...edited...
Snort exiting

Now that I know Snort will run with the new rules, I copy them to the directories on the Sguil server corresponding to the rules used on a sensor. I also copy them to the sensor itself after creating an archive of the new rules.

Once I unpack the new rules on the sensor, I try running 'snort -T' again to double-check the validity of the rules. If the rules pass (and they should, being a copy of what I just validated), I shut down the old Snort process and start a new one.
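
The copy-and-reload sequence looks something like this sketch. The /nsm/rules/testing directory and snort.conf path are the ones used above; the sensor name and its rules directory are only examples.

allison:/root# tar -czf newrules.tar.gz -C /nsm/rules/testing .
allison:/root# scp newrules.tar.gz bourque:/tmp

bourque:/root# tar -xzf /tmp/newrules.tar.gz -C /usr/local/etc/snort/rules
bourque:/root# snort -T -c /usr/local/etc/snort.conf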

Nessus on FreeBSD

I'm rebuilding my laptop, and I needed to install Nessus. I prefer to install FreeBSD applications using pre-built packages whenever possible. I tried adding the nessus-2.2.4_1.tbz package but got this error when I started the nessus client.

Ooops ...
This nessus version has no gui support. You need to give nessus the
arguments SERVER PORT LOGIN TRG RESULT as explained in more detail
using the --help option.

The package built by the FreeBSD cluster does not include GTK. If the system on which the package is built does not have GTK installed, Nessus will only support the CLI. GTK is not listed as a build or run dependency:

janney:/usr/ports/security/nessus$ make pretty-print-build-depends-list
This port requires package(s) "" to build.
janney:/usr/ports/security/nessus$ make pretty-print-run-depends-list
This port requires package(s) "" to run.

Here is an example of a port that has build and run dependencies:

janney:/usr/ports/security/nessus$ cd ../nessus-plugins
janney:/usr/ports/security/nessus-plugins$ make pretty-print-build-depends-list
This port requires package(s) "nessus-2.2.4_1 nessus-libnasl-2.2.4
nessus-libraries-2.2.4" to build.
janney:/usr/ports/security/nessus-plugins$ make pretty-print-run-depends-list
This port requires package(s) "nessus-2.2.4_1 nessus-libnasl-2.2.4
nessus-libraries-2.2.4 nmap-3.81 pcre-5.0" to run.

I installed gtk20 using the pre-built package before trying to install Nessus again.

janney:/home/richard$ pkg_info | grep gtk
gtk-2.6.7 Gimp Toolkit for X11 GUI (current stable version)

Now that GTK was installed, I was able to install Nessus using the security/nessus port. I used the 'make package' option. When I was done I had the following in my /usr/ports/security/nessus directory.

-rw-r--r-- 1 root wheel 318432 Jun 26 07:30 nessus-gtk2-2.2.4_1.tbz

This version of Nessus is not the same as the alternative without GTK support.

-rw-r--r-- 1 root wheel 213634 Jun 22 21:17 nessus-2.2.4_1.tbz

It looks like if you want Nessus with GTK, you'll have to build it yourself using the ports tree, on a system with GTK already installed.
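
In other words, a build on such a system amounts to something like this sketch, using the gtk20 package installed earlier:

janney# pkg_add -r gtk20
janney# cd /usr/ports/security/nessus
janney# make package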

Friday, 24 June 2005

Three Pre-Reviews

I promise to start reading and reviewing books again, once my independent work schedule permits it. Until then, I would like to let you know about three new books I received.

The first is one I specifically requested, and I think it is important reading for anyone developing security and networking appliances. Network Systems Design Using Network Processors: Intel 2XXX Version by Douglas E. Comer, published by Prentice Hall, looks like the definitive work on the Intel IXP2xxx network processor. Computer professionals will see fewer security and networking appliances built on commodity platforms as network processors and related hardware offload certain functions.

I also received Windows Server 2003 Security: A Technical Reference by Roberta Bragg, published by Addison-Wesley. This is a really hefty book that appears to exceed the material in her earlier Hardening Windows Systems. I guarantee I will not read this book cover-to-cover (it's 1176 pages!) but it should be a helpful reference. I found it funny that the first sentence on the back cover, right next to the Windows logo, reads as follows:

"If you're a working Windows administrator, security is your #1 challenge."

The last book is Protect Your Windows Network: From Perimeter to Data, by Jesper Johansson and Steve Riley, published by Addison-Wesley. This book appears to cover both host and network infrastructure security, but from a Windows perspective. Author Johansson is Microsoft's Senior Program Manager for Security Policy and author Riley is Senior Program Manager in Microsoft's Security Business and Technology unit.

Thoughts on Security Degrees

Since our CISSP discussion has been thought-provoking, I imagine this might be interesting too. Last night I taught a lesson on network security monitoring to a graduate level forensics class at George Washington University. Earlier this week my friend Kevin Mandia asked me to step in when he was unavailable to teach. I spent 2 1/2 hours describing NSM theory, techniques, and tools, and concluded with a Sguil demo.

I do not have any formal degree involving computer security. I have considered pursuing an advanced degree. It would be incredible to work with Vern Paxson, for example. I am not sure how useful another degree would be for me, at this point.

Computer security practitioners are often self-taught. This morning while perusing The Economist I came across the ultimate story of a successful self-taught technician. Those in the medical community may know the story that "Professor Christiaan Barnard performed the first human heart transplant." I learned in The Economist that Hamilton Naki, a self-trained man who held no degree, performed half of the operation.

According to The Guardian, Mr. Naki led a team that spent 48 hours removing the donor's heart, and then placed it in Dr. Barnard's hands. Mr. Naki learned to transplant organs by watching, then doing. He surpassed the technical skill of the trained physicians at his hospital, and Dr. Barnard enlisted his help for the ground-breaking 1967 transplant operation.

A search for "Naki" at the South African hospital Web site that speaks glowingly of Dr. Barnard yields zero hits. It seems the same secrecy that kept Mr. Naki from receiving any credit inside his native country still persists, at least at the hospital where he worked for nearly 40 years on minimal pay and with no formal recognition.

What do you think about security degrees? Can you recommend any programs?

Update: It turns out that Hamilton Naki did not work with Dr. Barnard on the first human heart transplant. The 16 July 2005 issue of The Economist states:

"A source close to Mr Naki once asked him where he was when he first heard about the transplant. He replied that he had heard of it on the radio. Later, he apparently changed his story...

[H]is role was gradually embellished in post-apartheid, black-ruled South Africa. By the end, he himself came to believe it."

That's a shame.

Contrabandwidth

I read a short article by Kate Palmer in Foreign Policy magazine about evading country-imposed Internet filters. Ms. Palmer writes:

"According to the OpenNet Initiative (ONI), a research organization devoted to tracking blocked Web sites, black market access to filtered pages in Saudi Arabia runs anywhere from $26 to $67 per Web site."

Good grief! Can't these people get a shell account with OpenSSH and proxy their Web requests? I see a market opportunity here.
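
For anyone unfamiliar with the technique, OpenSSH's -D option turns the client into a local SOCKS proxy; the browser is then pointed at that local port, and Web requests exit from the remote shell host over the encrypted session. A sketch (account and host are hypothetical):

ssh -D 1080 user@shellhost.example.com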

CardSystems Breach Follow-up

Anyone looking for additional details on the CardSystems Solutions intrusion may find Bruce Schneier's blog good reading. He notes that CardSystems was apparently not in compliance with Payment Card Industry (PCI) security guidelines, although on National Public Radio CardSystems' CEO said his company was in compliance. Phil Hollows has written multiple blog entries on the breach, one of which correctly points out that compliance with an audit does not equal security.

Thursday, 23 June 2005

Thesis Cites Tao

I was happy to hear that Bjarte Malmedal's thesis for his Master of Science in Information Security cites network security monitoring theory from my first book The Tao of Network Security Monitoring: Beyond Intrusion Detection. Bjarte cites my work to justify why a single packet inspection and collection tool or system does not sufficiently provide security awareness. His thesis, Using Netflows for Slow Port Scan Detection, argues that Argus session records can be used to detect stealthy reconnaissance. (Thanks to Jeffrey 'jf' Lim for correcting my earlier version of this story.) This is not particularly new, as Tom Ptacek points out. I think my first book makes the same point. I just thought it was cool to see my work cited elsewhere. :)

Bleeding Snort Starts snort.conf Collection

I read an announcement yesterday that the Bleeding Snort project has started recommending snort.conf files. I posted the following comment at Bleeding Snort:


Hello,

I think this sample snort.conf project is a great idea.

One concern I have is the general reliance on output_database to insert Snort alerts into databases. output log_unified and output alert_unified have been available for around four years, but many snort.conf files and configuration guides still insist on using output database.

For example, the snort.conf addition that I recommend in my Sguil installation guide uses

output log_unified: filename snort.log, limit 128

Not using Barnyard can be a real performance killer. If the Snort process and the database are on separate systems, especially across Internet space, Snort will definitely drop packets as it tries to insert alerts.

Thank you,

Richard


I do not understand why people insist on deploying Snort without Barnyard, FLoP, the recently resurrected Mudpit, or another output spool reader. When Snort processes a packet, and needs to insert an alert into the database, Snort blocks while processing the insert. Snort is not multi-threaded. If your database inserts are slower than the ability of Snort to keep up with packet processing, you will drop packets. If your Snort process and DB are different boxes, and the link goes down, Snort will have major problems.

Barnyard and other spool readers make a huge difference. Snort writes its output to disk. Barnyard reads the output and takes care of the inserts to the DB.

Decoupling that process allows Snort to run as fast as possible, and the system becomes more tolerant of delays or breaks in the line between the sensor and DB.
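
For those who have not used it, Barnyard is pointed at the unified output spool and at a config file holding the database settings, roughly as sketched below. The paths are only examples; the waldo file records how far Barnyard has read into the spool.

barnyard -c /usr/local/etc/barnyard.conf -d /nsm/sensor1 -f snort.log -w /nsm/sensor1/waldo.file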

Only in late 2004 did the SHADOW Snort distribution make Barnyard the default output processing system. This guide still avoids Barnyard. I won't name any other installation guides that rely on output_database, but there are plenty of others out there.

Wednesday, 22 June 2005

Marcus Ranum Interview at SecurityFocus

I'd like to thank Federico Biancuzzi for interviewing Marcus Ranum at SecurityFocus. The interview is brilliant in my opinion. Unfortunately, I learned of the interview through an ignorant Slashdot story that completely missed the points Marcus makes in the article. Can anyone recommend an alternative to Slashdot that has fewer idiotic stories but still keeps up with current technology events?

Anyway, here is my favorite excerpt:

"Do you see any new, interesting, or promising path for network security?

Nope! I see very little that's new and even less that's interesting. The truth is that most of the problems in network security were fairly well-understood by the late 1980's. What's happening is that the same ideas keep cropping up over and over again in different forms. For example, how many times are we going to re-invent the idea of signature-based detection? Anti-virus, Intrusion detection, Intrusion Prevention, Deep Packet Inspection - they all do the same thing: try to enumerate all the bad things that can happen to a computer. It makes more sense to try to enumerate the good things that a computer should be allowed to do.

I believe we're making zero progress in computer security, and have been making zero progress for quite some time."

I highly recommend everyone read and ponder this interview.

(IN)SECURE Magazine Online

Mirko Zorz of Help Net Security was kind enough to notify me of his organization's new online magazine, (IN)SECURE. Based on perusing the first two issues, this looks like a fairly professional-quality magazine. I found a mix of strategic and technical advice in both issues, with short book reviews, software deployment discussions, configuration guidance, and other security information. Check it out -- it's free!

Tuesday, 21 June 2005

CISSP: Any Value?

A few of you wrote me about this post by Thomas Ptacek in response to my recent CISSP exam post. Tom has one of the best minds in the security business, and I value his opinions. Here are my thoughts on the CISSP and an answer to Tom's blog. (I did not realize Tom has despised the CISSP for so long!)

On page 406 of my first book I wrote:

"I believe the most valuable certification is the Certified Information Systems Security Professional (CISSP). I don't endorse the CISSP certification as a way to measure managerial skills, and in no way does it pretend to reflect technical competence. Rather, the essential but overlooked feature of the CISSP certification is its Code of Ethics...

This Code of Ethics distinguishes the CISSP from most other certifications. It moves security professionals who hold CISSP certification closer to attaining the true status of 'professionals.'"

In my book I compared the CISSP Code of Ethics to the National Society of Professional Engineers (NSPE) Code of Ethics for Engineers, which I first wrote about two years ago.

The second point of the NSPE code is "Perform services only in areas of their competence." This is similar to the following CISSP code excerpt:

"Provide diligent and competent service to principals."

My book made this comment:

"I find the second point especially relevant to security professionals. How often are we called upon to implement technologies or policies with which we are only marginally proficient? While practicing computer security does not yet bear the same burden as building bridges or skyscrapers, network engineers will soon face responsibilities similar to physical engineers."

Given this background, from where does the CISSP's value, if any, derive? I believe the answer lies in the values one wants to measure. First, the CISSP and other "professional" certifications are not designed to convey information about the holder to other practitioners. Rather, certifications are supposed to convey information to less informed parties who wish to hire or trust the holder. The hiring party believes that the certifying party (like ISC2) has taken steps to ensure the certification holder meets the institution's standards.

Second, I would argue the CISSP is not, or at least should not, be designed or used to test technical competence. Certifications like the CCNA are purely technical, and I believe they do a good job testing technical competence. The CCNA has no code of ethics. I severely doubt the ability of anyone without hands-on Cisco experience to cram for the CCNA and pass. Even many of those who attend a boot camp with little or no previous hands-on experience usually fail.

Third, there is nothing wrong with stating what would seem obvious. Tom reduces his argument against the CISSP Code of Ethics to the title of his blog entry: "Don't Be Evil." I agree, and I do not see the problem with expanding on that idea as the CISSP's Code of Ethics does.

So, what is wrong with the CISSP? I previously posted thoughts on credible certifications as described by Peter Stephenson and Peter Denning. Here are Stephenson's criteria, with my assessment of the CISSP. Keep in mind I think the CISSP should be a certification reflecting security principles, not technical details.

  • It is based upon an accepted common body of knowledge that is well understood, published and consistent with the objectives of the community applying it. No. The CISSP CBK looks barely acceptable on the surface, but in practice it fails miserably to reflect issues security professionals actually handle.

  • It requires ongoing training and updating on new developments in the field. Partially. The CISSP CPE requirements ensure holders need to receive training prior to renewal, but I am not sure this equals exposure to new developments. If you attend Tom's Black Hat talk, you get 16 Continuing Professional Education (CPE) credits! :)

  • There is an examination (the exception is grandfathering, where extensive experience may be substituted). Yes.

  • Experience is required. Yes. Experience is required for the CISSP, mainly in response to this 2002 story of a 17-year-old receiving his CISSP.

  • Grandfathering is limited to a brief period at the time of the founding of the certification. I am not sure why this matters, other than Stephenson needed to justify his involvement in the CIFI forensics certification.

  • It is recognised in the applicable field. Well, the CISSP is certainly recognized. Unfortunately it is often mis-recognized as a technical cert, when it should be strictly a symbol of adherence to professional conduct.

  • It is provided by an organization or association operating in the interests of the community, usually non-profit, not a training company open to independent peer review. Partially. I began to worry when I saw ISC2 offer $2500 review seminars, and now they have the Official (ISC)2 Guide to the CISSP Exam, pictured above. I am not convinced this element matters that much anyway, as I think Cisco's certification program is excellent.


I think the root of the problem is the concept that the CISSP somehow measures technical competence. The CISSP in no way measures technical skills. Rather, it should measure knowledge of security principles. It does not meet that goal, either. At this point we are left with a certification that only provides a code of ethics. That brings us back to my original point.

From a practical point of view, I obtained my CISSP four years ago to help get past corporate human resources departments that screen resumes. Back then I had two choices when looking for employment. I could either work through a friend who knew my skills, or I could submit a resume to a company with an HR department. Rather than rely completely on the former, I decided to keep the latter as an option. Getting through HR departments usually required a CISSP certification.

Does this mean I will renew my CISSP when it expires? I am not sure. If I see improvements in the certification, such that it reflects security principles, I may. If it continues to fail in that respect, I probably will not.

What are your plans? Will you pursue the CISSP? Why or why not?

Friday, 17 June 2005

CardSystems Solutions Intrusion Exposes 40 Million Credit Cards

I am stunned by the scale of this story, and I expect to hear it get worse. Yesterday MasterCard International issued a statement that said

"MasterCard International reported today that it is notifying its member financial institutions of a breach of payment card data, which potentially exposed more than 40 million cards of all brands to fraud, of which approximately 13.9 million are MasterCard-branded cards.

MasterCard International's team of security experts identified that the breach occurred at Tucson-based CardSystems Solutions, Inc., a third-party processor of payment card data."

This AP story mentions "the security breach involves a computer virus that captured customer data for the purpose of fraud" and MasterCard "did not know how a virus-like computer script that captured customer data got into CardSystems' network, which MasterCard said was infiltrated by an unauthorized individual."

The same AP story reports that CardSystems did not expect MasterCard to report the news:

"'We were absolutely blindsided by a press release by the association,' CardSystems' chief financial officer, Michael A. Brady, told The Associated Press when reached on his cell phone."

CardSystems' own press release implies they identified the fraud by saying the following:

"CardSystems Solutions, Inc., identified a potential security incident on Sunday, May 22nd. On Monday, May 23rd, CardSystems contacted the Federal Bureau of Investigation. Subsequently, the VISA and MasterCard Card Associations were notified to alert them of a possible security incident."

While researching this event, I found a story from over two years ago that sounds very similar:

"Information was stolen from more than 2.2 million MasterCard International accounts and approximately 3.4 million Visa USA cardholder accounts, according to those companies.

The theft occurred when the system of a company that processes credit card transactions for merchants was broken into.

Neither Visa nor MasterCard would identify the company that was hacked, nor would they provide information on how the theft occurred, citing security concerns."

I imagine MasterCard learned from that event and decided to go public now as a form of damage control.

I agree with this comment in the latter part of the MasterCard press release:

"While Congress continues to consider data breach notification standards, MasterCard urges them to enact wider application of Gramm-Leach-Bliley, the act that includes provisions to protect consumers' personal financial information held by financial institutions.

Currently, GLBA only applies to financial institutions providing services to consumers, including MasterCard. MasterCard urges Congress to extend that application to also include any entity, such as third party processors, that stores consumer financial information, regardless of whether or not they interact directly with consumers."

(ISC)2 Affiliated Local Interest Groups

As soon as I complained about the ISC2 CISSP survey yesterday, I received an email from (ISC)2 about their new Affiliated Local Interest Group pilot program. Mark Wilson, president of my local ISSA-NoVA chapter, mentioned that our group will be one of the few invited to the ALIG program. We do not yet know what this really means, but I will keep you informed.

Thursday, 16 June 2005

Encrypted Laptop Hard Drives

Yesterday someone asked me what I thought about encrypted laptop hard drives. I believe he was referring to this recent Seagate press release. The new Seagate Momentus Full Disk Encryption (FDE) product should ship this winter and will provide OS-independent disk encryption. This Extreme Tech article references technology by the 4C Entity to encrypt the drive.

(ISC)2 Conducting CISSP Exam Survey

Last month I reported a friend's experiences with the CISSP exam. This week I received an email from (ISC)2 regarding a survey of the CISSP exam. It reads in part:

"(ISC)2 would like to extend to you the opportunity to provide key input into the content of the CISSP® examination. With assistance from Schroeder Measurement Technologies, Inc., (ISC)2’s services entity,(ISC)2 is conducting a CISSP job analysis study through an online survey. The purpose of the job analysis study is to ensure the currency of future CISSP examinations.

As a CISSP certificate holder, we are asking you to participate in the survey. *Your responses are valued and essential*. We ask that you set aside 20 to 30 minutes of your time no later than Thursday, July 14, 2005 to complete the online survey."

Once I started taking the survey, I saw these guidelines.

"A comprehensive list of important job tasks performed by an Information Systems Security Professional is presented on the following pages. Please provide your ratings to the tasks in relation to the practice of Information Systems Security Professionals at your work site."

I was initially excited by the prospect of ISC2 using survey results to revamp the terrible CISSP exam... until I started looking at the survey. Here are a few screen captures. To the right of each item are radio buttons saying "Not Performed, Of No Importance, Of Little Importance, Moderately Important, Very Important, Extremely Important."



This first section presumably asks if these technologies are important. Is this the way an exam should be written? The next screen shot is even worse.



What am I supposed to do here, say a Value Added Network (VAN) is "Moderately Important" while a hub is "Of Little Importance"?

I looked at one more section, shown below, before giving up.



This survey is a disaster. The CISSP certification should be about security principles. ISC2 should take a look at a wonderful book like Ross Anderson's Security Engineering to figure out what matters. Asking me about hubs or CHAP or the PSTN is foolish. Whatever results ISC2 thinks it gets from this survey will not improve the certification. Again, the only value CISSP retains is its Code of Ethics.

Gartner Survey Ranks Threats

I found the article Corporates focus on basics for IT security defences by John Leyden to be interesting. He reports on a survey presented by Gartner at their recent IT Security Summit. Gartner's survey found that IT staff ranked threats as follows:

1. Viruses and Worms
2. Outside Hacking or Cracking
3. Identity Theft and Phishing
4. Spyware
5. Denial of Service
6. Spam
7. Wireless and Mobile Device Viruses
8. Insider Threats
9. Zero Day Threats
10. Social Engineering
11. Cyber-Terrorism

I am disappointed to see social engineering ranked so low. I am glad cyber-terrorism is dead last. I am surprised to see outside hacking ranked so high, even though I agree it should be a top three priority.

Here is the list I would create (if I had to call these "threats"; many of these are not "threats"). I rank these "problems" or issues using a mixture of likelihood and damage inflicted. I do not agree with all the categories presented, but here is my best assessment.

1. Viruses and Worms
2. Outside Hacking or Cracking
3. Spyware
4. Denial of Service
5. Insider Threats
6. Identity Theft and Phishing
7. Social Engineering
8. Zero Day Threats
9. Spam
10. Wireless and Mobile Device Viruses
11. Cyber-Terrorism

Also according to John, "More than half the respondents said they preferred buying 'best-of-breed' products from multiple technology providers while a third of those quizzed preferred integrated security suites, a preference catered for by a growing list of firms selling integrated security appliances."

By the way, I contacted Gartner about covering the summit for this blog and they completely ignored me. Thanks guys! So much for "new media" and the "blogosphere."

FreeBSD Post-Installation Tasks

Last night I installed FreeBSD 5.4 on my Dell PowerEdge 2300 server. Immediately following the installation, these are the tasks I performed. These are the same post-installation tasks I perform, in the same order, on every FreeBSD system I build.

1. When I install FreeBSD, I create a user and give him the /bin/sh shell. I used Linux before I used FreeBSD, and I remain more familiar with bash. Therefore, I install the most recent bash package available. I do this using the PACKAGESITE environment variable. Notice how pkg_add satisfies dependencies automatically.

$ su -
Password:
janney# setenv PACKAGESITE
ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/Latest/
janney# pkg_add -r bash
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
Latest/bash.tbz... Done.
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/libiconv-1.9.2_1.tbz... Done.
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/gettext-0.14.4_1.tbz... Done.
janney# rehash

I need the rehash command so root's shell can find bash, or any newly installed program. I now use chsh to change my user's shell from /bin/sh to /usr/local/bin/bash. Thanks to erson from Sweden for the tip!

$ chsh -s /usr/local/bin/bash
Password:
chsh: user information updated

Now I install freebsd-update to facilitate fixing any kernel and OS security vulnerabilities.

janney# pkg_add -r freebsd-update
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
Latest/freebsd-update.tbz... Done.
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/bsdiff-4.2.tbz... Done.
janney# rehash
janney# cp /usr/local/etc/freebsd-update.conf.sample /usr/local/etc/freebsd-update.conf
janney# mkdir /usr/local/freebsd-update
janney# freebsd-update fetch
Fetching public key...
Fetching updates signature...
Fetching updates...
Fetching hash list signature...
Fetching hash list...
Examining local system...
Fetching updates...
/usr/bin/gunzip...
/usr/bin/gzcat...
/usr/bin/gzip...
/usr/bin/zcat...
/usr/include/machine/cpufunc.h...
/usr/sbin/tcpdump...
Updates fetched

To install these updates, run: '/usr/local/sbin/freebsd-update install'

janney# freebsd-update install
Backing up /usr/bin/gunzip...
Installing new /usr/bin/gunzip...
Backing up /usr/bin/gzcat...
Recreating hard link from /usr/bin/gunzip to /usr/bin/gzcat...
Backing up /usr/bin/gzip...
Recreating hard link from /usr/bin/gunzip to /usr/bin/gzip...
Backing up /usr/bin/zcat...
Recreating hard link from /usr/bin/gunzip to /usr/bin/zcat...
Backing up /usr/include/machine/cpufunc.h...
Installing new /usr/include/machine/cpufunc.h...
Backing up /usr/sbin/tcpdump...
Installing new /usr/sbin/tcpdump...

All of these updates affected the userland. No changes to the kernel were made. If kernel changes were involved, I would have to reboot to have them take effect.

I continue with portaudit. This program checks installed packages for security vulnerabilities. portaudit compares the installed packages against a database it downloads.

janney# pkg_add -r portaudit
Fetching ftp://ftp.freebsd.org/pub/FreeBSD/ports/i386/packages-5.4-release/
Latest/portaudit.tbz... Done.

===> To check your installed ports for known vulnerabilities now, do:

/usr/local/sbin/portaudit -Fda

janney# rehash
janney# portaudit -Fda
auditfile.tbz 100% of 25 kB 79 kBps
New database installed.
Database created: Thu Jun 16 09:10:15 EDT 2005
0 problem(s) in your installed packages found.

Next I install portsnap to update my ports tree. I don't install the ports tree on systems I build to be appliances. On general purpose servers, however, I like having the ports tree available. A current ports tree is needed if you want to use portupgrade (described later) to assess and update installed packages.

janney# pkg_add -r portsnap
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
Latest/portsnap.tbz... Done.
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/freebsd-sha256-20050310.tbz... Done.
janney# rehash
janney# cp /usr/local/etc/portsnap.conf.sample /usr/local/etc/portsnap.conf
janney# portsnap fetch
Fetching public key... done.
Fetching snapshot tag... done.
Fetching snapshot metadata... done.
Fetching snapshot generated at Wed Jun 15 20:51:48 EDT 2005:
2cae03da4bde1d1eb260ce3e6eb237f014d930245442fe100% of 34 MB 469 kBps 00m00s
Extracting snapshot... done.
Verifying snapshot integrity...
Fetching snapshot tag... done.
Fetching snapshot metadata... done.
Updating from Wed Jun 15 20:51:48 EDT 2005 to Thu Jun 16 06:39:30 EDT 2005.
Fetching 4 metadata patches... done.
Applying metadata patches... done.
Fetching 0 metadata files... done.
Fetching 33 patches.....10....20....30. done.
Applying patches... done.
Fetching 5 new ports or files... done.
janney# portsnap extract
/usr/ports/.cvsignore
/usr/ports/CHANGES
/usr/ports/LEGAL
/usr/ports/MOVED
/usr/ports/Makefile
/usr/ports/Mk/bsd.autotools.mk
/usr/ports/Mk/bsd.emacs.mk
/usr/ports/Mk/bsd.gcc.mk
...edited...
Building new INDEX files... done.

Next I install portupgrade. This is the best way I've found to keep packages up-to-date.

janney# pkg_add -r portupgrade
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
Latest/portupgrade.tbz... Done.
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/ruby-1.8.2_3.tbz... Done.
...edited...
Fetching ftp://ftp2.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/
All/ruby18-bdb1-0.2.2.tbz... Done.

I run portversion to quickly see what packages need updating. I will take care of that later.

janney:/root# rehash
janney:/root# portversion -v -l "<"
[Rebuilding the pkgdb in /var/db/pkg ... - 32 packages foun.................... done]
[Updating the portsdb in /usr/ports ... - 13089 port entries found
.........1000.........2000.........3000.........4000.........5000.........6000........
.7000.........8000.........9000.........10000.........11000.........12000.........
13000 ..... done]
expat-1.95.8 < needs updating (port has 1.95.8_3)
pkgconfig-0.15.0_1 < needs updating (port has 0.17.2)
png-1.2.8_1 < needs updating (port has 1.2.8_2)
portupgrade-20041226_3 < needs updating (port has 20041226_4)
xorg-server-6.8.2 < needs updating (port has 6.8.2_2)
xterm-200_2 < needs updating (port has 202)
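
For reference, when I do take care of those later, the cleanup is a single command. This is just a sketch, with the flags as I understand them: -a upgrades all outdated ports, while -r and -R also rebuild the ports that depend on, or are required by, the ones being upgraded.

janney# portupgrade -arR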

I edit root's .cshrc as follows to change the prompt.

# set prompt = "`/bin/hostname -s`# "
set prompt = "%m:%/# "

The prompt will now look like this.

janney:/root#

I make a similar edit to the prompt in the .profile file for my user's bash shell.

PS1='`hostname -s`:$PWD$ '; export PS1

The prompt will now look like this.

janney:/home/richard$

Finally I run the sockstat command to see if there are any listening services for which I cannot account. This box runs NFS by design, so there are more listening services than usual.

janney# sockstat -4
USER     COMMAND   PID    FD  PROTO  LOCAL ADDRESS      FOREIGN ADDRESS
richard  sshd      56174  5   tcp4   192.168.2.7:22     192.168.2.5:55803
root     sshd      56171  5   tcp4   192.168.2.7:22     192.168.2.5:55803
root     sendmail  408    4   tcp4   127.0.0.1:25       *:*
root     sshd      402    4   tcp4   *:22               *:*
root     nfsd      326    3   tcp4   *:2049             *:*
root     mountd    324    4   udp4   *:782              *:*
root     mountd    324    5   tcp4   *:797              *:*
root     rpcbind   257    9   udp4   *:111              *:*
root     rpcbind   257    10  udp4   *:686              *:*
root     rpcbind   257    11  tcp4   *:111              *:*
root     syslogd   244    6   udp4   *:514              *:*
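
If sockstat had shown something I could not account for, the fix would usually be an rc.conf knob rather than anything exotic. As a sketch only -- this box intentionally runs NFS and sendmail, so these lines are purely illustrative:

# /etc/rc.conf entries to disable services I did not intend to run
sendmail_enable="NONE"     # no sendmail listeners at all
rpcbind_enable="NO"        # only appropriate if NFS is not needed
nfs_server_enable="NO"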

If I need to recompile the kernel, I take that step next; on most systems I do not have to.
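
When a rebuild is needed, the standard sequence from /usr/src is short. This is only a sketch; it assumes the system sources are installed, and JANNEY is a hypothetical custom kernel configuration name.

janney# cd /usr/src
janney# make buildkernel KERNCONF=JANNEY
janney# make installkernel KERNCONF=JANNEY
janney# shutdown -r now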

From here I begin adding packages and other customizations to make this system perform its specific role.

Wednesday, 15 June 2005

Bleeding Snort Innovations

Several interesting projects are taking shape at Bleeding Snort, described as "the aggregation point for Snort signatures and research." The spyware Blackhole DNS project collects domain names identified with spyware and provides a hosts file pointing to localhost for each. Matt Jonkman now wants to extend the idea to create the Spyware Listening Post.

Rather than have a domain like 1000funnyvideos.com point to localhost (127.0.0.1), the Spyware Listening Post proposes resolving the host to an IP address operated by the SLP project. The SLP will measure the requests to gather intelligence on spyware. This is an interesting idea and I look forward to seeing how it develops.
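
For those who have not seen the hosts file approach, the Blackhole DNS entries look roughly like this (the second domain is hypothetical):

127.0.0.1    1000funnyvideos.com
127.0.0.1    some-spyware-tracker.example

Under the SLP proposal, the left column would instead hold an address operated by the project, so each lookup and subsequent request becomes measurable.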

Bleeding Snort also houses the Snort Test Suite. Nothing appears to have been released, but it would be cool to see them coordinate with Turbo Snort Rules.

Finally, I found a funny thread in the bleeding-sigs mailing list. Essentially a commercial vendor complained about a change in the Bleeding Snort rule set:

"These new SSH signatures brought down all of our customer's Snort installations because that SSH_PORTS variable is not in the default snort.conf file."

Why did that happen?

"The AWCC [the vendor's product] now downloads signatures from bleeding-edge automatically, I'm sure there are other tools that do the same."

Good grief -- what a poor design decision. A commercial vendor retrieves and runs rules on a customer-deployed system "automatically?" How difficult is it to perform even a basic test of the rules to ensure they don't break something, before deploying on production boxes? That's embarrassing. Consider this minor breakage a lesson in good engineering, as Mike Poor confirms.
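
For what it's worth, the fix on the customer side is a single line in snort.conf -- assuming, as a sketch, that SSH listens only on the standard port:

var SSH_PORTS 22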

Tuesday, 14 June 2005

OpenSolaris Lives

The OpenSolaris Project is alive. Ashlee Vance provides the most intelligent summary of the project that I've read. Something cool you can do immediately is browse the source using a Web front-end to CVS. This is really useful if you want to understand how the OS is assembled. A common criticism of this release is the lack of a downloadable .iso or similar distribution. You must start with Solaris Express: Community Release, Build 16 or newer, then follow the release notes. This is not as user-friendly as the new Fedora Core 4 release announced today. (Note on the announcement -- those Fedora/Red Hat people are so witty!)

I plan to try each once I get some free time.

HTTP Request Smuggling

You may have seen this on Slashdot, but Garth Somerville sent me this link to a paper titled HTTP Request Smuggling (HRS) by Watchfire. You may remember Watchfire as the company that bought Web application security vendor Sanctum. Essentially HRS relies on sending conflicting values or malformed input in HTTP headers. Just as we saw years ago with IDSs, bad results happen when one product interprets commands one way and another product sees the world in a different way. I was pleased to see that the Squid proxy server had already addressed any problems back in April in two advisories.
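
As a rough illustration of the conflicting-header idea, consider a request carrying two Content-Length values. The hosts below are hypothetical and the byte counts merely illustrative; whether a given proxy/server pair mishandles the conflict depends entirely on the products involved.

# one device honors the first Content-Length, another honors the second,
# so they disagree about where the next request begins
printf 'POST /index.html HTTP/1.1\r\nHost: target.example\r\nContent-Length: 0\r\nContent-Length: 29\r\n\r\nGET /poison.html HTTP/1.0\r\n\r\n' \
    | nc proxy.example 80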

The answer is strict HTTP parsing, but rest assured many products will continue to let malformed protocols pass. This is another case where a small set of commands or input should be allowed, and everything else should be denied. The Intrusion Prevention System (IPS) model of "deny some, allow everything else" will fail here.

Comments on Israeli Intellectual Property Theft Stories

Thanks to Jason Anderson of Lancope for making me aware of a large case of intellectual property theft in Israel. This 29 May story explains how Israeli programmer Michael Haephrati was hired to create Trojan Horses for private investigation companies. Those PI firms then deployed the programs against target companies via "email attachments." The PIs sold what they found to competitors of the targets. For more details, I recommend Richard Stiennon's blog.

I found a detail in this story very interesting:

"The Trojan sent images and documents to FTP servers in Israel, Germany and the US, court documents reveal."

Regular blog readers know what that means. Any victim company practicing Network Security Monitoring could have complete records of the FTP traffic that carried documents or files stolen by the Trojan Horses. NSM practitioners would know when the activity started, what systems were victims, and when the last outbound connection took place. Depending on the form of the FTP transfers and the capture of full content data, NSM pros might even know exactly what was stolen.
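
As a trivial sketch of that kind of visibility, even a single sensor recording the outbound FTP control channels would have produced a durable record. The interface name and internal network below are hypothetical.

# record outbound FTP control-channel traffic leaving the internal net
tcpdump -n -i em0 -w outbound-ftp.pcap 'src net 192.168.2.0/24 and tcp dst port 21'

The control channel alone captures the STOR commands and file names; capturing the data channels as well is what would let an analyst recover the stolen content itself.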

Those running a defensible network might have deployed FTP proxies that carry all outbound FTP traffic. Such a proxy would have logged all of the files carried outbound. Of course the file names might have nothing to do with the documents stolen from hard drives, but a record of illegal activity would still exist.

I consider watching outbound activity to be practicing extrusion detection. Supposedly, stopping outbound activity is called extrusion prevention, and I already see vendors using these terms. Richard Stiennon prefers the term "intellectual property protection" (IPP). I think IPP is a form of extrusion detection or prevention, but the term assumes that what is being sent outbound has intellectual property value. For example, I would like to see outbound botnet command-and-control traffic, even if the botnet owner never touches any sensitive files on my internal victim systems.

Sunday, 12 June 2005

Bejtlich at Techno Security Conference

If you're in Myrtle Beach, SC for the 2005 Techno Security Conference, stop by and say hello. I should be at the 3:00 pm Monday book signing, and I will be speaking on behalf of Tenable Security at 7:00 pm Monday. I hope to squeeze in a Monday afternoon visit to managed security vendor LURHQ while I am here as well.

This is my first Techno Security Conference, but I don't plan to see any talks other than those by Ron Gula and Marcus Ranum tomorrow morning. The conference organizers told me this is the 7th such event, and they have over 1,000 attendees. The vendor exhibits and the program seem very centered on host-based forensics. It seems that nearly everyone associates the word "forensics" with host-based evidence, with few exceptions.

I am sensitive to this situation as I devote several chapters in my new book Extrusion Detection to network-centric incident response and forensics. I intend for these chapters to supplement existing excellent works that take a traditional host-centric view of both disciplines. I am also acutely aware of network-centric IR and forensics as I continue to improve my new Network Security Operations class.

So why do I consider network IR and forensics to be important? In my experience, investigators quite often don't know where to begin the IR or forensics process. Security staff have indicators that their enterprise is compromised, but they are not sure where to look. To compound a bad situation, consider the consequences of poking around potentially compromised hosts. Not only are you potentially alerting the intruder to your investigation; you are also potentially damaging or destroying important host-based evidence.

Therefore, I like to start with network evidence when conducting IR and forensics, and use network-based evidence (NBE) to learn where I should focus my host-based IR and forensics work. Analyzing NBE never touches sensitive victim hosts, and NBE can often be captured without revealing the collection process to the intruder.

Thursday, 09 June 2005

Multiple New Pre-Reviews

I've been too busy to read as I transition to being an independent consultant. Once I have a few business and related issues on track, I will begin scheduling time for reading again. I have a huge reading list as usual. A few books not on the list, but which merit attention, include the following.

Last month one of the books I pre-reviewed was a Windows title by O'Reilly. Here is another: Learning Windows Server 2003 by Jonathan Hassell. This book looks like it will help me with the Windows Server 2003 Trial Software I mentioned last month. Jonathan's book looks very thorough and I hope to get to it in a reasonable amount of time.

O'Reilly sent me 802.11 Wireless Networks: The Definitive Guide, 2nd Ed by Matthew Gast. Given the quality and quantity of Matthew's wireless blogging for O'Reilly, I think this book should be great. It even covers 802.1X, which is supported through hostapd and wpa_supplicant on FreeBSD 6.0.

Syngress sent Network+: Study Guide & Practice Exams: Exam N10-003 by Robert J. Shimonski, Laura E. Hunter, and Norris L. Johnson. This is a hefty tome with the information needed to pass the CompTIA Network+ exam. I don't anticipate ever needing this certification, but I hear it is a good entry-level networking certification. True?

Finally, Syngress also mailed Cisco PIX Firewalls by Charles Riley, Umer Khan, and Michael Sweeney. This book is revised to address Cisco PIX Security Appliance Software Version 7.0. I have limited PIX experience so this is another book that should be a helpful reference.

Wednesday, 08 June 2005

Article on IPS Evaluations

Thanks to Ronaldo Vasconcellos for pointing me towards What to ask when evaluating intrusion prevention systems. This is an interview with Bob Walder of the NSS Group. I agree with the conclusion of the article:

"I can't stress enough the need for a thorough bake-off in your own network. It's likely to be very different from a test lab environment and may throw up some very interesting challenges for the vendors."

I provided input to an IPS test done by a partner company. I would be happy to conduct thorough IDS, IPS, or firewall testing for your environment too! Send an email to richard at taosecurity dot com if you are interested.

Tuesday, 07 June 2005

FreeBSD Ports Tree Breaks 13,000 Ports, and Other FreeBSD News

This week the FreeBSD ports tree broke the 13,000 mark. The tree has added about 2,000 ports per year for the past four years. This graph shows the number of ports added per year since 1995. Just six months ago I blogged about passing the 12,000 mark.

For those of you not familiar with the FreeBSD ports tree, it's a set of files and directories bundled with FreeBSD that allows easy software installation from source code. The ports tree is a wrapper of sorts, built by FreeBSD users, that makes the tweaks needed to get source code to compile and install on FreeBSD. The ports tree also resolves dependencies automatically, and it can be kept up-to-date very easily. For those not wishing to install from source, packages are available; I use packages almost exclusively on my laptop, for example.
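
As a concrete sketch of the two routes, using portaudit from the earlier post as the example (and assuming the port lives under security/portaudit), building from source via the ports tree looks like this:

janney# cd /usr/ports/security/portaudit
janney# make install clean

The precompiled package route is the one-line 'pkg_add -r portaudit' shown earlier.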

Speaking of the ports tree, there are several notable additions. First is John Curry's SANCP, usually bundled with Sguil. SANCP is a session data probe which can work alone or with Sguil. Thanks to Paul Schmehl for the new port. Expect to see Sguil ports from Paul in the near future.

A second useful new port is Nick Rogness' snort-inline. This is a must for anyone running snort-inline on FreeBSD. It uses snort-2.3.0-RC1.tar.gz, which has support for FreeBSD. My testing shows that code works while later mainline Snort versions do not work properly with FreeBSD. Expect to see full integration of FreeBSD support in future Snort distributions.

Finally from FreeBSD land, we have word of release plans for FreeBSD 6.0. We are told we may see FreeBSD 6.0 RELEASE on 15 August. I would expect to see the new release in early to mid-September, given past form. Here is an important excerpt from Scott Long's post:

"[T]he plan is for 6.0 to be a modest replacement for 5.x. We do plan on a 5.5 release in September to tie up the branch and help people move to 6.0/6.1, but 6.x is truly just a much improved 5.x at this point.

For those with bosses who are fainting at the thought of there being a 7-CURRENT around the corner and 5-STABLE coming to a close, please keep in mind that migrating from 5.x to 6.x is trivial and is worthwhile. However, we need to do the branch now so that we can keep things like SMPVFS under control and produce a high-quality series of releases with it.

For those who have already adopted 5.x and cannot spend the time/money to migrate again, RELENG_5 will still have secteam support into at least 2007 (going by their normal formula), and I expect there to be normal feature and bug-fix commits to it for at least another year from now."

I've been running FreeBSD 5.x since early 2003, but the 5.x tree only became stable with 5.3 RELEASE last fall. It looks like the 5.x tree is acting like the 3.x tree did between successful 2.x and 4.x versions. Is FreeBSD starting to resemble Star Trek movies?

If you'd like to test the latest 6-CURRENT snapshots, you can download them.

Testing New Rules with TurboSnortRules.org

On Sunday I wrote about TurboSnortRules.org. Today I saw a post to snort-users asking if anyone had rules to detect W32.Mytob.DL@mm. One response recommended checking Bleeding Snort new rules. Looking there I found WORM_Mytob rules in a Web-browsable CVS format. Very nice.

I read the first rule and decided to see what TurboSnortRules.org had to say. I submitted the first rule after removing the classtype field, as TSR doesn't support it. Here was the response after a few minutes of waiting.

[Screenshot of the TurboSnortRules.org result omitted.]

This looks like a good rule from a speed perspective; it is slightly faster than the average RME for most of the stock Snort rule sets.

VigilantMinds Customer Security Systems Manager Brian Dinello sent an email in response to my first story on TSR. As I learn what I can share about upcoming project developments, I will post word here.

Monday, 06 June 2005

DIY Security with Open Source

This morning I received word of a new SANS Webcast titled What Works in Intrusion Detection Systems. The introductory paragraph for the announcement starts with these two sentences:

"The days of do-it-yourself security using free software have passed. There is broad understanding among CIOs and CISOs that an effective cyber security program cannot be implemented without commercial technology and services."

As you might expect I strongly disagree with this claim. I was disappointed to see these sentiments expressed in an announcement about IDS sponsored by Sourcefire! The introduction appears to be standard SANS boilerplate, however. You can see the same paragraph in the SANS What Works in Intrusion Prevention: Using Multi-Function Low-Cost Appliances and What Works in Business Transaction Integrity Monitoring announcements, among others.

I find it sad that SANS would advocate this anti-open source stance. I never saw SANS teach commercial products at my first SANS conference in 1999, nor at the first SANSFIRE track I attended in 2001, nor in the intrusion detection tracks I attended in 2000 and taught in 2002 and 2003.

I believe there are places inside the enterprise where open source may not be as well suited or as capable as proprietary software. Some people cannot live without Microsoft Active Directory; mounting directories over NFS isn't quite the same as using Microsoft's protocols. In some security applications proprietary solutions are more full-featured -- CORE IMPACT comes to mind. However, I believe most small to medium, and even many large, enterprises could operate securely using open source tools.

In fact, many proprietary products exist only because they need to compensate for deficiencies in other commercial software. For example, products like anti-virus, which are a requirement on Microsoft Windows, are a band-aid on top of a broken configuration and deployment model. I see absolutely no need to run anti-virus on UNIX desktops.

Who agrees or disagrees? Who is using a majority of open source tools to secure their enterprise? Who absolutely couldn't live without one or more commercial applications? If you need those proprietary apps, why? Is support the main issue? Thank you.

Sunday, 05 June 2005

Test Your Snort Rules at TurboSnortRules.org

I missed the announcement in the Bleeding Snort forums last month of TurboSnortRules.org, a project supported by security vendor VigilantMinds. The idea is to submit a custom rule to see how it stacks up against other Snort rules in terms of "Relative Measure of Efficiency" (RME). The chart on the TurboSnortRules.org site shows various RMEs for different Snort rule sets.

The important point is to notice how a rule like BACKDOOR WinCrash 1.0 Server Active is considered "very slow" (probably due to PCRE matches), with an RME over 4 on Snort 2.2.0, compared to something like (IPS) ::MS-SQL Worm propagation attempt, with an RME around 1.4 on Snort 2.2.0. There's also a performance Wiki with speed tips.
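
To make the contrast concrete, here are two hypothetical rules of the sort one might submit: a plain content match and a roughly equivalent PCRE match. The message strings, patterns, and SIDs are made up for illustration; based on the chart, I would expect the second to earn a noticeably worse RME.

alert tcp $EXTERNAL_NET any -> $HOME_NET 445 (msg:"EXAMPLE plain content match"; content:"|90 90 90 90|"; sid:1000001; rev:1;)
alert tcp $EXTERNAL_NET any -> $HOME_NET 445 (msg:"EXAMPLE pcre match"; pcre:"/\x90{4,}/"; sid:1000002; rev:1;)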

I think sites like this are a great idea, and I thank VigilantMinds for helping Snort users understand the speed effects of the rules they write. I don't really care how accurate it is at this point -- it's great just to know that a rule you write is much slower or much faster than an average rule for a particular Snort rule set.

Friday, 03 June 2005

New Bejtlich.net Launched

Here's a quick note for anyone who cares -- content no longer at TaoSecurity.com has been modified and moved to Bejtlich.net. There are kinks to iron out at both sites, but I should have them fixed during the next week.

Counterfeiters Kill Subway Stamps

I had no idea losers were selling Subway stamps on eBay or just plain counterfeiting them. Now Subway will end the program. This is a good example of reacting to a changing threat environment. When the stamp program was started in the 1980s, I imagine the majority of the users were honest and the technology to mass-produce look-alike stamps wasn't accessible to most people. Throw in high-quality printers and unscrupulous employees who steal and sell stamps, and we end up with the current situation. Perhaps Subway will institute some sort of electronic rewards card to replace the stamp system?