Thursday, 30 November 2006

Thoughts on Vista

To mark the launch of Microsoft Windows Vista, CSO Online asked me to write this article. The editor titled it "Security In Microsoft Vista? It Could Happen." I think I took a balanced approach. Let me know what you think. I was pleased to see my FreeBSD reference survived the editor's review!

Tuesday, 28 November 2006

FreeBSD 7.0 Snapshot with SCTP

I've been busy playing with various protocols in preparation for TCP/IP Weapons School in about two weeks. Recently I saw this post by Randall Stewart indicating that Stream Control Transmission Protocol (SCTP) had been added to FreeBSD CURRENT. I poked around in src/sys/netinet/ and found various SCTP files dated 3 Nov 06.

Rather than update a FreeBSD 6.x system to 7.0, I decided to look for the latest FreeBSD snapshot. Sure enough, I found the latest 7.0 snapshot was dated 6 Nov 06. I downloaded the first .iso and installed it into a VMware Server VM. The kernel was compiled on 5 Nov 06:

$ uname -a
FreeBSD freebsd70snap.taosecurity.com 7.0-CURRENT-200611
FreeBSD 7.0-CURRENT-200611 #0: Sun Nov 5 19:31:17 UTC 2006
root@almeida.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386

I found the SCTP files I was looking for, too.

$ cd /usr/src/sys/netinet
$ ls -al *sctp*
-rw-r--r-- 1 root wheel 11869 Nov 3 10:23 sctp.h
-rw-r--r-- 1 root wheel 83862 Nov 3 14:48 sctp_asconf.c
-rw-r--r-- 1 root wheel 2884 Nov 3 10:23 sctp_asconf.h
-rw-r--r-- 1 root wheel 62791 Nov 3 10:23 sctp_auth.c
-rw-r--r-- 1 root wheel 9440 Nov 3 10:23 sctp_auth.h
-rw-r--r-- 1 root wheel 58467 Nov 3 10:23 sctp_bsd_addr.c
-rw-r--r-- 1 root wheel 2370 Nov 3 10:23 sctp_bsd_addr.h
-rw-r--r-- 1 root wheel 30071 Nov 3 10:23 sctp_constants.h
-rw-r--r-- 1 root wheel 39292 Nov 4 03:45 sctp_crc32.c
-rw-r--r-- 1 root wheel 2149 Nov 3 10:23 sctp_crc32.h
-rw-r--r-- 1 root wheel 14856 Nov 3 10:23 sctp_header.h
-rw-r--r-- 1 root wheel 163684 Nov 3 10:23 sctp_indata.c
-rw-r--r-- 1 root wheel 3965 Nov 3 10:23 sctp_indata.h
-rw-r--r-- 1 root wheel 140398 Nov 4 03:19 sctp_input.c
-rw-r--r-- 1 root wheel 2301 Nov 3 10:23 sctp_input.h
-rw-r--r-- 1 root wheel 12179 Nov 3 12:21 sctp_lock_bsd.h
-rw-r--r-- 1 root wheel 2474 Nov 3 12:21 sctp_os.h
-rw-r--r-- 1 root wheel 2882 Nov 3 12:21 sctp_os_bsd.h
-rw-r--r-- 1 root wheel 261210 Nov 3 10:23 sctp_output.c
-rw-r--r-- 1 root wheel 5216 Nov 3 10:23 sctp_output.h
-rw-r--r-- 1 root wheel 149450 Nov 4 00:39 sctp_pcb.c
-rw-r--r-- 1 root wheel 15352 Nov 3 10:23 sctp_pcb.h
-rw-r--r-- 1 root wheel 7221 Nov 3 10:23 sctp_peeloff.c
-rw-r--r-- 1 root wheel 2158 Nov 3 10:23 sctp_peeloff.h
-rw-r--r-- 1 root wheel 28138 Nov 3 10:23 sctp_structs.h
-rw-r--r-- 1 root wheel 48751 Nov 4 03:19 sctp_timer.c
-rw-r--r-- 1 root wheel 3311 Nov 3 10:23 sctp_timer.h
-rw-r--r-- 1 root wheel 25951 Nov 3 10:23 sctp_uio.h
-rw-r--r-- 1 root wheel 128287 Nov 3 18:04 sctp_usrreq.c
-rw-r--r-- 1 root wheel 15869 Nov 3 10:23 sctp_var.h
-rw-r--r-- 1 root wheel 146141 Nov 3 18:04 sctputil.c
-rw-r--r-- 1 root wheel 9301 Nov 3 10:23 sctputil.h

However, the GENERIC kernel does not contain support for SCTP. It must be compiled in, which I did using the following method (based on my earlier post).

freebsd70snap# pwd
/usr/src/sys/i386/conf

freebsd70snap# cat SCTP
include GENERIC
options SCTP

freebsd70snap# cd /usr/src
freebsd70snap# make buildkernel KERNCONF=SCTP INSTKERNNAME=SCTP
freebsd70snap# make installkernel KERNCONF=SCTP INSTKERNNAME=SCTP

freebsd70snap# echo "kernel=SCTP" > /boot/loader.conf
freebsd70snap# cat /boot/loader.conf
kernel=SCTP

freebsd70snap# reboot

After reboot the new kernel was running.

$ uname -a
FreeBSD freebsd70snap.taosecurity.com 7.0-CURRENT-200611
FreeBSD 7.0-CURRENT-200611 #0: Tue Nov 28 22:09:44 EST 2006
root@freebsd70snap.taosecurity.com:/usr/obj/usr/src/sys/SCTP i386
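
To confirm the new kernel actually exposes SCTP, a quick look at the sysctl tree should suffice. This is only a sketch: I'm assuming the SCTP code registers its tunables under net.inet.sctp (the names in the source suggest as much), and that a kernel built without the option would simply report an unknown oid.

$ sysctl net.inet.sctp | head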

The next step is to try to get something working with SCTP. More on that later, hopefully!

Friday, 24 November 2006

Digital Security Lessons from Ice Hockey

I'm struck by the amount of attention we seem to be paying to discovering vulnerabilities and writing exploits. I call this "offensive" work, in the sense that the fruits of such labor can be used to attack and compromise targets. This work can be justified as a defensive activity if we accept the full disclosure argument that truly bad guys already know about these and similar vulnerabilities, or that so-called responsible disclosure motivates vendors to fix their software. This post isn't about the disclosure debate, however. Instead, I'm wondering what this means for those of us who don't do offensive work, either due to lack of skills or opportunity/responsibility.

It occurred to me today that we are witnessing the sort of change that happened to the National Hockey League in the late 1960s and early 1970s. During that time the player pictured at left, Bobby Orr, changed the game of ice hockey forever. For those of you unfamiliar with hockey, teams field six players: one goalie, who guards the net; two defensemen, who try to stop opposing players; and three forwards (one center and two wings), who try to score goals.

Prior to Orr, defensemen almost never took offensive roles. (Forwards didn't pay much attention to defense, either. Only in 1978 did the Selke Trophy, for best defensive forward, start being awarded.) When Orr began playing, he wasn't satisfied to control the puck in his defensive end and then hand it off to one of his forwards. He jumped into the play, sometimes carrying the puck end-to-end, finishing by scoring himself. Twice in his ten-year career he even led the league in scoring -- outscoring the forwards. He didn't neglect his defensive duties, either. He was named the league's best defenseman eight years straight.

What does this mean for digital security? It's easy to identify the forwards in our game. They discover and write exploits. Some of them can play defense, while others cannot. Many of us are traditional defensemen. We know how to impede the opposing team, and we know enough offense to understand how the enemy forwards operate. A few of us are goalies. Aside from clearing the zone or maybe making a solid pass to a forward, goalies have near-zero ability to score goals. (Yes, I remember Ron Hextall.) That's the nature of their position -- they can't skate to the other end of the ice!

Anyone who plays a sport will probably recognize the term "well-rounded." Being well-rounded means knowledge and capability in offense and defense. I think it applies very well to ice hockey and basketball, less so to soccer, somewhat well to baseball, and not at all to football. I see well-roundedness as the proper trait for the general security practitioner, i.e., the sort of person who expects to work in a variety of roles during a career. This is the ice hockey model.

I do not recommend following what might be called the [American] football model. Football players are exceptionally specialized and usually ineffective when told to play out of position. (Could you imagine the kicker playing on the defensive line, or the center as a wide receiver?)

Returning to the hockey model, remember that there are three positions, with varying degrees of offensive and defensive responsibilities. Goalies focus almost exclusively on defense, but they try to make smart plays that lead to break-outs. Defensemen concentrate on defense but should contribute offensively where possible. Forwards concentrate on offense, but help the defensemen as well. How does this model apply to my position in digital security? I consider myself a defenseman, but I'm trying to develop my offensive skills. (At the very least, better knowledge of offensive tools and techniques helps me better defend against them.) I have no interest in being a goalie. Being a forward would be exciting, but I'm not sure I'll have an opportunity or job responsibility to fully develop those skills.

I suppose it's even possible to become a coach or trainer (like skating guru Laura Stamm). You don't have to actually play the game, but you quickly become irrelevant if you lose touch with the game.

Does the extreme specialization of the football model apply? I think it may for large consultancies (or perhaps for the security market as a whole). In a large consultancy, you can be the "Web app guy" or the "incident response gal" and make a living. Outside of that environment, perhaps at a general security job for a company, you're expected to be good at almost everything.

I've written before that it's unreasonable to be good at everything, despite the unrealistic desire of CIOs to hire so-called "multitalented specialists." I recommend choosing to be a goalie, defenseman, forward, or coach/trainer. Be solid in your core responsibilities, but remember Bobby Orr's example.

How do you fit into my hockey model?

Another Prereview

Recently I posted thoughts on a few security books on my shelf. Today I received an absolutely gigantic new book called The Art of Software Security Assessment: Identifying and Avoiding Software Vulnerabilities by Mark Dowd, John McDonald, and Justin Schuh. This is a 1200-page book on discovering vulnerabilities in all sorts of software. I plan to read it along with similar books over the next month or so.

Books on how to break software in order to make it better seem to be the hottest titles on the market. This is exactly the sort of book I would expect most vendors to dislike, although titles like Hunting Security Bugs, published by Microsoft Press, show that some vendors realize that if they don't test their software first, some attacker in Bucharest will do it for them.

Wednesday, 22 November 2006

Three Seven-Book Lists for Novice, Intermediate, Advanced Readers

I continue to receive feedback and questions on my No Shortcuts post. One of you prompted me to write three new Amazon.com Lists, organized thus:

For the civilians out there, that's novice, intermediate, and advanced. :) I listed seven books for each category to keep things manageable. One of the problems I encountered with the advanced list, especially, is that coding becomes a big part of the equation when one starts to consider "advanced" topics. I tried including "placeholder" books to give you the idea that you need coding background to make good use of a book like Unix Network Programming, Volume 1: The Sockets Networking API, 3rd Ed.

Please let me know if you find these lists helpful. Please remember that reading these 21 books in order will not take you from newbie to guru. Rather, these are books I think will help at each stage of your progression. I am also not claiming to be a guru by having selected seven advanced books. For example, I need to get more acquainted with coding in order to branch out into other areas of digital security.

Pre-reviews and Comments

Several publishers have sent me new books recently, and I have one comment to make about an older book. I'll start with books that look good, but which I don't plan to read. The first is Linux Administration Handbook, 2nd Ed by Evi Nemeth, Garth Snyder, and Trent R. Hein. There's no doubt this is a great general-purpose system administration book for Linux. I gave the 3rd edition of the Unix version three stars almost five years ago (and I'm hoping a 4th edition comes to fruition).

The Linux book describes Red Hat Enterprise, Fedora Core, SuSE, Debian, and Ubuntu. If the book covered Slackware and Gentoo instead of SuSE, I think it would have been perfect. I'm guessing RHEL is close enough to Fedora, and Debian to Ubuntu, to allow extra coverage of more divergent distros like Slackware and Gentoo? I plan to use this book as a reference, but I don't plan to read and review it. I suggest you buy it if you're looking for a comprehensive Linux reference that doesn't waste time with installation screenshots or descriptions of how to use KDE and Gnome.

Another book I like but which I don't plan to read is Network Security Tools by Nitesh Dhanjani and Justin Clarke. This is an older book (April 2005), but I only recently rediscovered it. This book reminds me of Building Open Source Network Security Tools by Mike Schiffman, which I liked. NST describes how to write Nessus and Nikto plug-ins, dissectors and plug-ins for Ettercap, and how to extend Hydra and Nmap. There's a chapter on Metasploit, but it is somewhat overtaken by events because the 3.x framework uses Ruby instead of Perl. NST also explains how to extend PMD, how to build your own Web, SQL, and exploit scanner, and how to write tools with Libpcap (0.8.3) and Libnet (1.1.2.1).

NST is a great book, but it requires a good knowledge of C and a desire to work with these tools in a development capacity. I don't possess the requisite coding skills, but I may turn to this book in the future if I want to learn more about extending these tools.

Next is Network Security Hacks, 2nd Ed by Andrew Lockhart. I liked the 1st Ed, which I read and reviewed in June 2004. Since I see my review of the 1st Ed on the Amazon.com page for the 2nd Ed, I won't be able to submit a review for this book. The 2nd Ed looks about 50% longer than the 1st Ed.

I was also pleased to see the discussion of Sguil had been updated for Sguil 0.6.1. However, Sguil's integration of SANCP for session data collection was ignored. After being a Sguil advocate for almost four years, writing books and articles (some of which are freely available), I am puzzled that some people who choose to write about Sguil still don't grasp the significance of the data we collect. This recent Daily Dave thread was depressing. People really collect full content data in production on busy networks? Shocking!

The first book in this post that I plan to read and review is The Art of Software Security Testing: Identifying Software Security Flaws by Chris Wysopal, Lucas Nelson, Dino Dai Zovi, and Elfriede Dustin. This book is less than 300 pages but it looks very interesting. I plan to review it with a set of books on finding bugs and vulnerabilities. It's encouraging to see these sorts of titles appearing, written for software developers and not for hacker wanna-bes.

The next book is WarDriving and Wireless Penetration Testing by Chris Hurley and friends. This is another team-written book, a type that tends to scare me when published by Syngress. I wasn't too impressed by the earlier WarDriving book (reviewed here), but I plan to give this new one a try. I'm really looking forward to Wi-Foo II next year.

The last book is Network Security Assessment by Steve Manzuik and friends. This is another "team book," but it looks good. I'm surprised anyone is talking about vulnerability management these days. That's so 2002! (Please recognize I'm joking.)

Remember, you can see books that I'm waiting to acquire by checking my Amazon.com Wish List. If you're a publisher, please keep in mind I restrict my reading to books on that list. Under extraordinary circumstances I might read something else, but I generally focus on books that address a specific interest. Thank you.

Tuesday, 21 November 2006

No Shortcuts to Security Knowledge

Today I received a curious email. At first I thought it was spam, since the subject line was "RE: Help!", and I don't send emails with that subject line. Here is an excerpt:

I cannot afford nor have the time to take a full collage course on the topic of network security but I would like to be as knowlageable about it as yourself and be able to protect my computer and others regarding this matter. If I was willing to pay you would you take the time to teach me what you know and/or point me in the direction I would need to learn what you know about network security? Please advise what course I would need to take to accomplish your skill of network security?

It seems to me this question seeks some sort of "hidden truth" that I might possess, and hopes to acquire it in record time. The reality is that there are no shortcuts to learning a topic as complex as digital security. I have been professionally involved with this topic for almost ten years, yet I consider myself halfway to the level of skill and proficiency I would prefer to possess. In another ten years I'll probably still be halfway there, since the threats and vulnerabilities and assets will have continued to evolve!

If you want to "know what I know," a good place to start is by reading one or more of my books. I recommend starting with Tao, then continuing with Extrusion and finishing with Forensics. Chapter 13 from Tao explicitly addresses the issue of security analyst training and development.

My company research page lists over a dozen documents I've written, and this blog is a record of almost four years of thoughts on digital security.

For books outside of my own, my list of the top ten books of the last ten years contains some of the best books on digital security. My reading page shows books I recommend in five categories. I also show the books waiting to be read on my shelf, but I wouldn't consider an appearance there to be an endorsement unless I offer a favorable Amazon.com review. Please note my recommended lists do not include books from 2006 (and maybe 2005), but I plan to write a "best of" list at the end of this year. I'll update the recommendation lists if I have time.

In addition to reading, I highly recommend becoming familiar with the majority of the security tools listed by Fyodor. It also helps to specialize (at least in the beginning) in one of the five categories I show on my reading page.

I tend to split my time between Weapons and Tactics and Telecommunications, although I plan to continue developing my Scripting and Programming skills. I do some System Administration by building and operating network sensors and supporting systems (like databases), but I am not the sort of sys admin who supports users. I try to stay out of devoted Management and Policy work, although I try not to be ignorant.

I could probably say a lot more on this topic, but the bottom line is that there are no shortcuts to security knowledge. I hope this free post has been helpful.

Monday, 20 November 2006

Security, A Human Problem

I don't play Second Life or any video games these days. If I had the time I would play Civ IV. Nevertheless, virtual worlds like SL are becoming increasingly interesting, as demonstrated by today's attack of the killer rings (pictured at left), also known as a "grey goo" attack.

This comment in the accompanying Slashdot post explains that it's possible for a rogue user to exploit vulnerabilities in Second Life and introduce code that performs a sort of denial of service attack on the game. The attack occurs when game participants decide to interact with the gold rings shown in the thumbnail from this site. It's similar to human penetration testers leaving USB tokens or CD-ROMs at a physical world place of business and waiting for unsuspecting employees to see what's on them.

This story illustrates two points. First, it demonstrates that client-side attacks remain a human problem and less of a technical problem. Second, I expect at some point these virtual worlds will need security consultants, just like the physical world. I wonder if someone could write a countermeasure at the individual player level for these sorts of attacks?

Update: Here's a YouTube video.

Friday, 17 November 2006

Further Thoughts on SANS Top 20

It seems my earlier post Comments on SANS Top 20 struck a few nerves, e.g. this one and others.

One comment I'm hearing is that the latest Top 20 isn't "just opinion." Let's blast that idea out of the water. Sorry if my "cranky hat" is on and I sound like Marcus Ranum today, but Marcus would probably agree with me.

First, I had no idea the latest "Top 20" was going to be called the "SANS Top-20 Internet Security Attack Targets" until I saw it posted on the Web. If that isn't a sign that the entire process was arbitrary, I don't know what is. How can anyone decide what to include in a document if the focus of the document isn't determined until the end?

Second, I love this comment:

Worse still, Richard misses the forest completely when he says that “… it’s called an ‘attack targets’ document, since there’s nothing inherently ‘vulnerable’ about …”. It doesn’t really matter if it’s a weakness, action item, vulnerability or attack. If it’s something you should know about, it belongs in there. Like phishing, like webappsec, and so on. Don’t play semantics when people are at risk. That’s the job of cigarette and oil companies.

This shows me the latest Top 20 is just a "bad stuff" document. I can generate my own bad stuff list.

Top 5 Bad Things You Should Worry About

  • Global warming

  • Lung cancer

  • Terrorists

  • Not wearing seat belts

  • Fair skin on sunny days


I'm not just trying to make a point with a silly case; there's a real thought here. How many of these are threats? How many are vulnerabilities? (Is the difference explicit?) How many can you influence? How many are outside your control? How did they end up on this list? Does the ranking make any difference? Can we compare this list in 2006 with a future list in 2007?

Consider the last point for a minute. If the SANS Top 20 were metric-based, and consisted of a consistent class of items (say vulnerabilities), it might be possible to compare the lists from year to year. You might be able to delve deeper and learn that a class of vulnerabilities has slipped or disappeared from the list because developers are producing better code, or admins are configuring products better, or perhaps threats are exploiting other vectors.

With these insights, we could shift effort and resources away from ineffective methods and focus our attention on tools or techniques that work. Instead, we're given a list of 20 categories of "something you should know about." How is that actionable? Is anyone going to make any decisions based on what's in the Top 20? I doubt it.

Third, I'm sure many of you will contact me to say "don't complain, do something better." Well, people already are. If you want to read something valuable, pay attention to the Symantec Internet Security Threat Report. I am hardly a Symantec stooge; I like their approach.

I will point out that OWASP is trying to work in the right direction, but their single category ("Web Applications") is one of 20 items on the SANS list.

I realize everyone is trying to do something for the good of the community. Everyone is a volunteer. My issue is that the proper focus and rigor would result in a document with far more value.

Thursday, 16 November 2006

Another Reason for Privileged User Monitoring

No sooner had I written about a CEO gone bad than I read this: Ex-IT Chief Busted for Hacking:

Stevan Hoffacker, formerly director of IT and VP of technology for Source Media, was arrested at his home yesterday on charges of breaking into the email system that he once managed.

According to the FBI and the U.S. Attorney for the Southern District of New York, Hoffacker hacked into his former company's messaging server, eavesdropped on top executives' emails about employees' job status, and then warned the employees that they were about to lose their positions.


I doubt there's any real "hacking" involved here. Hoffacker probably retained access or leveraged knowledge of configuration errors to access these systems.

The FBI did not say exactly how Hoffacker broke into the mail system, but it noted that the former IT exec had access to the passwords for the email accounts of other Source Media employees.

Of course, if Hoffacker was an "ex-IT chief," he wasn't a "true insider." He was an "ex-insider," who should have had all of the (hopefully) nonexistent access granted to an outsider. True, he had knowledge of the systems not possessed by the typical outsider, but just because I created systems for previous employers doesn't mean I can waltz onto their networks now.

Although I am bringing up the "insider threat" again, please don't forget that you probably have external intruders from all over the globe inside your organization now. While privileged user monitoring and insider threat deterrence, detection, and ejection are important, keep in mind the parties who are already abusing your corporate assets.

Bejtlich on Tenable Webinar Friday 10 AM EST

In less than 12 hours I will be speaking on the next Tenable Webinar. Please register here. Ron Gula wrote the foreword for my book and he always has something interesting to say about digital security. I expect he will have some good questions for me!

Bejtlich Amazon Book Review RSS Feed

This is a brief note to let you know that Amazon.com is now publishing an RSS feed of my book reviews. I'm not sure exactly how new this is, but I've been looking for it.

I have a stack of books about exploit development and security tools that I hope to review as a group before the end of the year. I'm currently at 52 books reviewed for the year, and adding those 7 would make 59. I have several books on miscellaneous topics waiting as well, so we might see 60 reviews or more by year's end.

Now it would be cool to see Amazon.com publish RSS feeds for reviews of specific books, so I could keep track of customer feedback on titles of interest.

Update: It looks like you can also subscribe to lists like my Wish List, which is cool.

Common Security Mistakes

I received an email asking me to name common enterprise security mistakes and how to avoid them. If I'm going to provide free advice via email, I'd rather just post my thoughts here. This is my answer:


  1. Failure to maintain a complete physical asset inventory

  2. Failure to maintain a complete logical connectivity and data flow diagram

  3. Failure to maintain a complete digital asset/intellectual property inventory

  4. Failure to maintain digital situational awareness

  5. Failure to prepare for incidents


The first three items revolve around knowing your environment. If you don't know what houses your data (item 1), how that data is transported (item 2), and what data you are trying to protect (item 3), you have little chance of success.

Once you know your environment, you should learn who is trying to exploit your vulnerabilities to steal, corrupt, or deny access to your data (item 4). Security incidents will occur, so you should have policies, tools, techniques, and trained and exercised personnel ready to respond (item 5).
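
As a trivial (and admittedly crude) illustration of item 1, even a simple ping sweep can surface hosts that never made it into the official inventory. This is only a sketch with a made-up 192.168.1.0/24 range; a real inventory program needs much more than a port scanner, but comparing scan results against the asset list is a cheap way to find surprises.

$ nmap -sP -oG live-hosts.txt 192.168.1.0/24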

Wednesday, 15 November 2006

Five Blog Posts You Should Read

I found the following five posts to be very interesting. You might too:

The first four are more conceptual, dealing with the need to collapse security measures around data instead of hosts. The fifth is a report of an incident with some decent details.

Comments on SANS Top 20

You may have seen that the latest SANS Top 20 was released yesterday. You may also notice I am listed as one of several dozen "experts" (cough) who "helped create" the list. Based on last year's list, I thought I might join the development process for the latest Top 20. Maybe instead of complaining once the list was published, I could try to influence the process from inside?

First let me say that project lead Rohit Dhamankar did a good job considering the nature of the task. He even made a last-minute effort to solicit my feedback, and some of my comments altered the categories you now see in the Top 20. I thank him for that.

As far as the nature of the list goes, it's important to realize that it's based on a bunch of people's opinions. There is no analysis of past vulnerability trends or conclusions based on real data, like the Vulnerability Type Distribution I mentioned earlier. At the point where I realized people were just going to write up their thoughts on various problems (Internet Explorer, Mac OS X, etc.) I left the project. Rohit emailed me early this week, but I was formally done in early October.

If you think a bunch of people's opinions is worthwhile, then you may find the Top 20 useful. I think the majority of the Top 20's utility, such as it is, derives from name recognition. If that can help influence your organization's management, then I guess it is helpful.

At the very least, the newest Top 20 is a very informative document with plenty of references. I would expect most security practitioners to understand or at least recognize everything on the list. I don't think the list is as "actionable" as the original Top 10, which listed specific vulnerabilities (e.g., "RDS security hole in IIS," CVE-1999-1011) that you needed to patch now.

The latest Top 20 has hundreds of CVE entries, and as such is more of a meta-description of Internet targets. In that respect I like the fact it's called an "attack targets" document, since there's nothing inherently "vulnerable" about, say, Mac OS X. Instead, Mac OS X is being attacked.

What do you think of the new list?

Friday, 10 November 2006

SCTP and OpenBSM in FreeBSD

Here are two quick notes on my favorite operating system. First, support for Stream Control Transmission Protocol (SCTP) has been added to FreeBSD CURRENT (i.e., 7.x). SCTP is a layer 4 alternative to TCP or UDP. I saw it mentioned in the final issue of Cisco's Packet magazine, in the context of NetFlow, specifically the new Flexible NetFlow. When I get a chance to test SCTP, it will probably be with that technology.

Second, Federico Biancuzzi conducted an excellent interview with Robert Watson regarding OpenBSM and FreeBSD. This is incorporated into the upcoming FreeBSD 6.2, which I expect to see in early December.

Thursday, 09 November 2006

Gvinum on FreeBSD

Two years ago I documented how I used Vinum on FreeBSD. Since then Vinum has been replaced by Gvinum, although it's not always clear when you should use either term. The Handbook documentation isn't easy to understand, either. Luckily I combined my old notes with this helpful tutorial to accomplish my goal.

I wanted to take two separate partitions, /nsm1 on one disk and /nsm2 on a second disk, and make them look like a single /nsm partition. I had already been using /nsm1, but I was prepared to lose that data since it was only for test purposes thus far. This is what the df command produced.

cel433:/root# df -m
Filesystem 1M-blocks Used Avail Capacity Mounted on
/dev/ad0s1a 495 36 419 8% /
devfs 0 0 0 100% /dev
/dev/ad0s1f 989 0 910 0% /home
/dev/ad0s1h 10553 8655 1053 89% /nsm1
/dev/ad1s1d 18491 0 17012 0% /nsm2
/dev/ad0s1g 989 25 884 3% /tmp
/dev/ad0s1d 1978 328 1492 18% /usr
/dev/ad0s1e 2973 25 2710 1% /var

Here's bsdlabel output.

cel433:/root# bsdlabel /dev/ad0s1
# /dev/ad0s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 1048576 0 4.2BSD 2048 16384 8
b: 1048576 1048576 swap
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 4194304 2097152 4.2BSD 2048 16384 28552
e: 6291456 6291456 4.2BSD 2048 16384 28552
f: 2097152 12582912 4.2BSD 2048 16384 28552
g: 2097152 14680064 4.2BSD 2048 16384 28552
h: 22325057 16777216 4.2BSD 2048 16384 28552
cel433:/root# bsdlabel /dev/ad1s1
# /dev/ad1s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 39102273 0 4.2BSD 2048 16384 28552

So, I need to combine /dev/ad0s1h and /dev/ad1s1d into one bigger virtual disk.

First I made sure both /nsm1 and /nsm2 were unmounted. Next I edited each disk's bsdlabel using 'bsdlabel -e'. These were the results.

cel433:/root# bsdlabel /dev/ad0s1
# /dev/ad0s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
a: 1048576 0 4.2BSD 2048 16384 8
b: 1048576 1048576 swap
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 4194304 2097152 4.2BSD 2048 16384 28552
e: 6291456 6291456 4.2BSD 2048 16384 28552
f: 2097152 12582912 4.2BSD 2048 16384 28552
g: 2097152 14680064 4.2BSD 2048 16384 28552
h: 22325057 16777216 vinum
cel433:/root# bsdlabel /dev/ad1s1
# /dev/ad1s1:
8 partitions:
# size offset fstype [fsize bsize bps/cpg]
c: 39102273 0 unused 0 0 # "raw" part, don't edit
d: 39102273 0 vinum

That's an example where you use the term 'vinum' even though the implementation is Gvinum.

Then I created /etc/gvinum.conf, which describes the single big /nsm volume I wanted to create. I used the drive size numbers from the df -m output shown earlier.

cel433:/root# cat /etc/gvinum.conf
drive drive1 device /dev/ad0s1h
drive drive2 device /dev/ad1s1d
volume nsm
plex org concat
sd length 10553m drive drive1
sd length 18491m drive drive2

Now I loaded the Gvinum kernel module and invoked gvinum:

cel433:/root# kldload geom_vinum
cel433:/root# kldstat
Id Refs Address Size Name
1 4 0xc0400000 691a48 kernel
2 1 0xc0a92000 58554 acpi.ko
3 1 0xc1d2c000 10000 geom_vinum.ko
cel433:/root# gvinum create /etc/gvinum.conf
2 drives:
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)

1 volume:
V nsm State: up Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: up Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s1 State: up D: drive2 Size: 18 GB
S nsm.p0.s0 State: up D: drive1 Size: 10 GB

That's good news. Time to prepare /dev/gvinum/nsm for data.

cel433:/root# newfs /dev/gvinum/nsm
/dev/gvinum/nsm: 29044.0MB (59482112 sectors) block size 16384, fragment size 2048
using 159 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, 3010976,
3387328, 3763680, 4140032, 4516384, 4892736, 5269088, 5645440, 6021792,
6398144, 6774496, 7150848, 7527200, 7903552, 8279904, 8656256, 9032608,
9408960, 9785312, 10161664, 10538016, 10914368, 11290720, 11667072, 12043424,
...truncated...

Finally I created a /nsm mount point and mounted the new drive.

cel433:/root# mkdir /nsm
cel433:/root# mount /dev/gvinum/nsm /nsm
cel433:/root# df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad0s1a 496M 36M 420M 8% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad0s1f 989M 74K 910M 0% /home
/dev/ad0s1g 989M 26M 885M 3% /tmp
/dev/ad0s1d 1.9G 328M 1.5G 18% /usr
/dev/ad0s1e 2.9G 25M 2.6G 1% /var
/dev/gvinum/nsm 27G 4.0K 25G 0% /nsm

To enable Gvinum at boot, I added the following to /boot/loader.conf:

geom_vinum_load="YES"

I also added this entry to /etc/fstab:

/dev/gvinum/nsm /nsm ufs rw 2 2

Unfortunately, after a reboot, I had problems with the new /nsm:

Nov 9 15:52:38 cel433 kernel: GEOM_VINUM: subdisk nsm.p0.s1 state change: down
-> stale
Nov 9 15:52:38 cel433 kernel: GEOM_VINUM: subdisk nsm.p0.s0 state change: down
-> stale
Nov 9 15:52:47 cel433 kernel: g_vfs_done():gvinum/nsm[READ(offset=65536, length
=8192)]error = 6
Nov 9 15:52:56 cel433 kernel: g_vfs_done():gvinum/nsm[READ(offset=65536, length
=8192)]error = 6

When I tried to mount /nsm I got this error:

mount: /dev/gvinum/nsm: Device not configured

Gvinum didn't look happy:

cel433:/root# gvinum list
2 drives:
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)

1 volume:
V nsm State: down Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: down Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s0 State: stale D: drive1 Size: 10 GB
S nsm.p0.s1 State: stale D: drive2 Size: 18 GB

Thankfully I found this post which solved the problem.

cel433:/root# gvinum start nsm
2 drives:
D drive1 State: up /dev/ad0s1h A: 347/10900 MB (3%)
D drive2 State: up /dev/ad1s1 A: 601/19092 MB (3%)

1 volume:
V nsm State: up Plexes: 1 Size: 28 GB

1 plex:
P nsm.p0 C State: up Subdisks: 2 Size: 28 GB

2 subdisks:
S nsm.p0.s0 State: up D: drive1 Size: 10 GB
S nsm.p0.s1 State: up D: drive2 Size: 18 GB

Then I was able to access /nsm:

cel433:/root# mount /nsm
cel433:/root# ls -al /nsm
total 6
drwxr-xr-x 3 root wheel 512 Nov 9 15:28 .
drwxr-xr-x 23 root wheel 512 Nov 9 15:29 ..
drwxrwxr-x 2 root operator 512 Nov 9 15:28 .snap
cel433:/root# df -h /nsm
Filesystem Size Used Avail Capacity Mounted on
/dev/gvinum/nsm 27G 4.0K 25G 0% /nsm

This process survived a reboot, so I am all set now.
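
If you try this yourself, a post-reboot check along the following lines is worth doing before trusting the volume with real data. These are just the same commands used above, repeated as a sanity check.

cel433:/root# gvinum list
cel433:/root# df -h /nsm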

Tuesday, 07 November 2006

ISSA NoVA Meeting Next Thursday

The next ISSA NoVA meeting will take place 1730 Thursday 16 Nov 06 at Oracle Corp in Reston, VA. Marcus Sachs will be the guest speaker. Please RSVP as soon as possible.

Unfortunately a new NoVA Snort Users Group decided to ignore this meeting of 100+ security practitioners by scheduling their first meeting at exactly the same time. Hopefully the NoVA SUG will take a look at its surroundings before scheduling future events, or at least respond to posts about their group.

Who Needs CISSP for Ethics?

Last year I discussed the value of the CISSP with respect to its code of ethics. Today while renewing my ISSA membership, I was presented with the following:

The primary goal of the Information Systems Security Association, Inc. (ISSA) is to promote practices that will ensure the confidentiality, integrity, and availability of organizational information resources. To achieve this goal, members of the Association must reflect the highest standards of ethical conduct. Therefore, ISSA has established the following Code of Ethics and requires its observance as a prerequisite for continued membership and affiliation with the Association.

As an applicant for membership and as a member of ISSA, I have in the past and will in the future:

* Perform all professional activities and duties in accordance with all applicable laws and the highest ethical principles;
* Promote generally accepted information security current best practices and standards;
* Maintain appropriate confidentiality of proprietary or otherwise sensitive information encountered in the course of professional activities;
* Discharge professional responsibilities with diligence and honesty;
* Refrain from any activities which might constitute a conflict of interest or otherwise damage the reputation of employers, the information security profession, or the Association; and
* Not intentionally injure or impugn the professional reputation or practice of colleagues, clients, or employers.

Please check the box indicating you have read the above statement and agree to its principles:


It looks to me like ISSA has the ethics bases covered. If I agree to that statement, I get as much value, as far as ethics goes, as I would from being a CISSP.

Unfortunately, misdirected efforts like DoD 8570.1 attach significance to the CISSP out of all proportion to its worth.

Registration Deadlines for TaoSecurity Training

This is a reminder for those interested in attending one or more of the training classes I'm conducting in December. These will be the last public classes for several months. I have consulting and private classes occupying my time in Q107, although I'll have some public work in Q207.

Thank you. If you have any questions email training [at] taosecurity [dot] com.

Bejtlich Cited in Sourcefire IPO Story

Bill Brenner published this quote in his story Sourcefire IPO could fuel Snort, users say:

The infrastructure to support Snort isn't cheap and Sourcefire isn't flush with cash, said Richard Bejtlich, founder of the Washington, D.C.-based consultancy Tao Security. "The money to keep Snort thriving has to come from somewhere, and an IPO could give Snort more legs," he said.

I based this thought on the following from Sourcefire's S-1, listed under Risks Related to Our Business:

We have incurred operating losses each year since our inception in 2001. Our net loss was approximately $10.5 million for the year ended December 31, 2004, $5.5 million for the year ended December 31, 2005 and $2.9 million for the nine months ended September 30, 2006. Our accumulated deficit as of September 30, 2006 is approximately $40.3 million.

It looks like Sourcefire's losses are narrowing, which points to future profitability. My point is that development of Snort and associated software (RNA, etc.) takes significant resources. While it might not be that difficult to fork Snort and maintain its code base, adding significant features and developing complex rules would be extremely tough for a noncommercial enterprise to sustain.

When Laws Aren't Enough

CIO Magazine published The Global State of Information Security 2006. The story contained what I consider to be some fairly disappointing results.

Complacency, it seems, abounds. A large proportion of security execs admitted they're not in compliance with regulations that specifically dictate security measures their organization must undertake or risk stiff sanctions, up to and including prison time for executives. Some of these regulations—such as California's security breach law, the Health Insurance Portability and Accountability Act (HIPAA), and non-U.S. laws such as the European Union Data Privacy Directive—have been around for years...

The information security discipline still suffers from the fundamental problem of making a business value case for security. Security is still viewed and calculated as a cost, not as something that could add strategic value and therefore translate into revenue or even savings.
(emphasis added)

No one spends money on insurance because it "adds strategic value." At best security spending can produce "savings," i.e. avoid losses.

Perhaps the problem is ignorant management?

From 2003 to 2005, the percentage of survey respondents saying they had fewer than 10 negative information security incidents in the past year remained steady. But this year, we included the option to answer that you do not know how many negative security incidents occurred. This year, nearly one-third of respondents admitted that they do not know how many breaches or unauthorized access events occurred within their organizations.

To a certain extent, that's understandable. Attacks can be hard to identify, and networks can be extensive. What's less comprehensible is that a significant portion of respondents said they have not installed some of the most rudimentary network safeguards. Only one-third of respondents have put in place patch management tools or monitor user activity. Less than half use intrusion detection software or monitor log files (the two best methods organizations can employ to detect breaches) and even fewer use intrusion prevention tools. Surprisingly, more than 20 percent of respondents don't even have a network firewall.


Let's assume these managers are not being brutally honest, i.e., they are not recognizing that it can be impossible to know of every incident. Instead, I assume they are admitting they just don't have the tools and tactics to measure incidents. That's disappointing.

There is some hope in certain industries.

Companies in the financial services sector—banks, insurance companies, investment firms—are more likely to employ a CSO than other industries. Security budgets in the financial sector are typically a bigger slice of the IT budget as a whole and increase at a faster rate than in other sectors. That may be because financial services companies are more likely to link security policies and spending to business processes. These companies are proactive, instituting formal information security processes such as log file monitoring and periodic penetration tests. More of their employees follow company security policies. Not surprising, financial services companies also have deployed more information security technology gadgets, such as intrusion detection and encryption tools, and identity management solutions.

It's obvious, therefore, that financial services organizations are far more likely—almost twice as likely, in fact—to have an overall strategic security plan in place. Consequently, they reported fewer financial losses, less network downtime and fewer incidents of stolen private information than any other vertical.

The reason for all this is also obvious. The product in the financial services industry is money, and money is the prime target of cybercriminals, including organized crime, insiders and even terrorists. Protecting the money is the industry's most critical concern. The past few years have seen a sharp increase in cybercrime (phishing, identity theft, extortion and spyware, to name a few). Anytime a security executive can demonstrate to top executives that investing in security can protect and increase shareholder value, he will be more likely to convince the boardroom to make that investment and make security a strategic part of the organization.

Financial services companies are more likely than enterprises in other industries to use ROI to measure the effectiveness of security investments (29 percent versus an average of 25 percent), and they also are more likely to use potential impact on revenue to justify investments (36 percent versus an average of 27 percent). These arguments work. More financial services companies saw a double-digit increase in their 2006 security budgets than those in any other sector.

Regulation plays a part too. The financial industry must adhere to the most stringent information security laws, and therefore it leads other industries in following proven, strategic information security practices.


I'd like to provide a slightly different interpretation. Financial services companies are used to dealing with threats as well as protecting assets. Everyone has assets to protect, but not until recently has everyone been within the reach of threats. Your risk is zero if you face no threats, no matter how vulnerable you are or how important your assets.

Sunday, 05 November 2006

Review of Hack the Stack

Amazon.com just posted my three star review of Hack the Stack by Michael Gregg, et al. From the review:

I teach a course called "TCP/IP Weapons School" that involves walking students up the OSI model. We look at network traces generated by tools and techniques to defeat security measures. When I saw "Hack the Stack" (HTS) I thought it might make a good resource for my class, since HTS seemed to advocate a similar approach. Unfortunately, technical errors, shoddy production, internal repetition and poor organization, and a lack of original material make me question the value of HTS...

Overall, I think there is room for a book like HTS. It's too bad this one did not deliver what I was expecting. I do appreciate the authors citing my network security monitoring methodology on p 232.

Friday, 03 November 2006

Real Insider Threats

Just the other day I read the following in Cliff Berg's book High-Assurance Design:

Roles should be narrowly defined so that a single role does not have permission for many different functions, at least not without secure traceability.

The CTO of a Fortune 100 financial services company once bragged to me over dinner that if he wanted to, he had the ability to secretly divert a billion dollars from his firm, erase all traces of his actions, and disappear before it was discovered.

Clearly, the principles of separation of duties and compartmentalization were not being practiced within his organization.


Now I read the following in VARBusiness:

Federal law enforcement officials Tuesday arrested the well-known CEO of White Plains, N.Y.-based MSP provider Compulinx on charges of stealing the identities of his employees in order to secure fraudulent loans, lines of credit and credit cards, according to an eight-count indictment unsealed by the U.S. Attorney's office in White Plains.

Terrence D. Chalk, 44, of White Plains was arraigned in federal court in White Plains, along with his nephew, Damon T. Chalk, 35, after an FBI investigation turned up the curious lending and spending habits. The pair are charged with submitting some $1 million worth of credit applications using the names and personal information -- names, addresses and social security numbers -- of some of Compulinx's 50 employees...

Terrence Chalk is also charged with racking up more than $100,000 in unauthorized credit card charges. If convicted, he faces 165 years in prison and $5.5 million in fines, prosecutors say. Damon faces a maximum sentence of 35 years imprisonment and $1.25 million in fines.


These are exactly the problems I mentioned earlier. Both cases make me sick. In the former, the Fortune 100 CTO knew his organization was broken but he thought it was a joke. In the latter, someone in a position of authority abused his access and ruined the financial lives of his employees.

This is a great example of the need to implement proper corporate governance by not centralizing the roles of CEO, President, and Chairman of the Board in a single person. Furthermore, none of those people should have access to the data abused by Mr Chalk. That level of access should stop at the VP for Human Resources.

Obviously the smallest of companies (mine included) can't separate certain duties because there are too many roles for too few people! However, organizations with 100 or more employees should certainly be taking steps to limit the access all employees have -- including the CEO.

This includes system and security administrators. According to surveys like those conducted by Dark Reading, a certain percentage of those with privileged access are abusing their power.

I often hear that system administrators should be responsible for securing their systems. I believe that sys admins should configure their systems as securely as possible, but outside parties (auditors, independent security staffs) should be responsible for auditing system activity and ensuring operation in compliance with security policies.

At some point we will also be able to remove the ability of system administrators to access sensitive data, perhaps using role-based access control (RBAC). There is no need for a sys admin who maintains a platform housing Social Security numbers and the like to be able to read those records. It will not be popular for current sys admins to relinquish their "godlike" powers, but it will result in more secure operations. Sys admins are data custodians; they are not data owners.

Thursday, 02 November 2006

Air Force Cyberspace Command

According to Air Force Link, 8th Air Force will become the new Air Force Cyberspace Command. This appears to be the next step following the creation of an Air Force Network Operations Command structure in August. That came on the heels of the Air Force Information Warfare Center being redesignated as the Air Force Information Operations Center. That was a result of the Air Force Tactical Fighter Weapons Center being redesignated as the Air Force Warfare Center. In a related move, the former 67th Information Operations Wing is now the 67th Network Warfare Wing. Follow all that?

It also appears the Air Force is centralizing control of network operations and security centers, according to this article:

All Air Force network operations security centers, which were previously decentralized among the major commands, will consolidate under the 67th with the stand-up of two integrated network operations and security centers, or I-NOSCs, located at Langley AFB, Va., and at Peterson AFB, Colo.

Apparently the former AFCERT, now the Air Force Network Operations and Security Center Network Security Division (AFNOSC NSD) in San Antonio, TX, is adding 191 MacAulay-Brown contractors.

For some higher level insights into these changes, the latest version of AFI 33-115v1: Network Operations (.pdf) might be interesting.

Returning to the creation of the new Cyberspace Command -- remember the Air Force was once part of the United States Army. I see no reason why the United States should maintain independent services that fight on land, sea, air, and space, but have cyber forces scattered throughout the other services. (You might make the counter-argument that each service maintains its own "air forces," but these support their parent service.)

I think within my lifetime we will see an independent Cyber Force to centralize information warfare capabilities alongside the Army, Navy, Air Force, and Marines. If it happens within the next 10 years, I think Col Greg Rattray might be in charge. (Yes, I'm assuming he continues to be promoted!) Before that happens, I'd like to see the new Cyberspace Command sponsor a new Air Force Specialty Code (AFSC) for information warriors. The current Intel or Comm paradigm isn't suitable.

Reviews of Six Software Security Books

Amazon.com just posted my six new reviews on books about software security. The first is Software Security by Gary McGraw. This was my favorite of the six because it was the most logically organized. Here is a link to the five star review.

The second is Security Development Lifecycle by Microsoft's Michael Howard and Steve Lipner. I thought it was neat to read about Microsoft's software development practices with respect to security. Just don't expect the CD-ROM training videos to keep you awake. Here is a link to the four star review.

The third is Writing Secure Code, 2nd Ed by Microsoft's Michael Howard and David LeBlanc. This is probably the definitive book on writing secure code for Windows, although the terminology gives me pains. Here is a link to the four star review.

The fourth is 19 Deadly Sins of Software Security by Michael Howard, David LeBlanc, and John Viega. This book is a stripped down version of other secure coding books, but it has some cool insights on topics like SSL. Here is a link to the four star review.

The fifth is High-Assurance Design by Cliff Berg. Java and object-oriented developers will like the second half of this book; I preferred the first half. Here is a link to the four star review.

The last book is Security Patterns by Markus Schumacher, et al. This book presents a framework that we might see more of in the future. Here is a link to the four star review.

All six reviews share this common introduction, since I read and reviewed them as a set:


I read six books on software security recently, namely "Writing Secure Code, 2nd Ed" by Michael Howard and David LeBlanc; "19 Deadly Sins of Software Security" by Michael Howard, David LeBlanc, and John Viega; "Software Security" by Gary McGraw; "The Security Development Lifecycle" by Michael Howard and Steve Lipner; "High-Assurance Design" by Cliff Berg; and "Security Patterns" by Markus Schumacher, et al. Each book takes a different approach to the software security problem, although the first two focus on coding bugs and flaws; the second two examine development processes; and the last two discuss practices or patterns for improved design and implementation. My favorite of the six is Gary McGraw's, thanks to his clear thinking and logical analysis. The other five are still noteworthy books. All six will contribute to the production of more secure software.


Sometime this month I plan to review a set of books about vulnerability discovery and writing exploits. You'll see those titles on my reading list.