Monday, 29 September 2008

Wanted: Incident Handler with Mentoring Skills

Previously I posted Wanted: Incident Handler with Reverse Engineering/Malware Analysis Skills. That article noted our GE Careers job posting (843369). We received several great candidates with reverse engineering and malware skills, but none in Cincinnati. Therefore, I am shuffling the positions a bit. The RE/malware person does not need to reside in Cincinnati, but now I need a different incident handler definitely located in Cincinnati.

The incident handler in Cincinnati should meet the following requirements.

  1. Strong incident handling skills. I want this person to be able to speak authoritatively and confidently when dealing with internal business partners. (This is not a job supporting external customers.)

  2. Strong mentoring skills. This candidate will interact daily with our Command Center personnel. The Command Center will be the 24x7 component of our Incident Response Center. This incident handler will need to be a mentor and coach for the Command Center analysts, although not their manager.

  3. Be an ambassador. This incident handler will be our in-person representative to two crucial groups: our Infrastructure businesses and our local IT staff. I need a candidate who represents our interests well and collaborates with partner organizations in a professional manner.

  4. Intermediate host forensics skills. We need a person who has traditional host-centric forensic experience.

  5. Introductory-to-intermediate log analysis skills. We need a person who can support others on the team who do log analysis. Experience with or intense willingness to learn Splunk is crucial.


To reiterate, this is a GE employee position in Cincinnati. Please apply if you believe you fit the bill. Thank you.

Saturday, 27 September 2008

Snort Report 19 Posted

My 19th Snort Report titled Using SnortSP and Snort 2.8.2 has been posted. From the article:

Solution provider takeaway: Solution providers will learn how to set up two Snort 3.0 beta components -- the Snort Security Platform (SnortSP) and the Snort 2.8.2 detection engine on the SnortSP.

In the last Snort Report, I discussed the architectural basics of Snort 3.0. The new Snort system consists of the Snort Security Platform (SnortSP) plus an assortment of engines. SnortSP is a foundation that provides traffic-inspection functions, like packet acquisition, traffic decoding, flow management and fragment reassembly. Each engine runs as a module on SnortSP. The first available module is a port of Snort 2.8.2 specifically for running on top of SnortSP.


I can never tell when SearchSecurity will post these articles... this one is dated 5 Sep but I just noticed it online.

Why Blog?

Recently a group of managers at work asked me to explain why I blog. This is a very good question, because the answer might not be intuitively obvious. Perhaps by sharing my rationale here, I might encourage others to blog as well.

  1. Blogging organizes thoughts. Recently I nodded in agreement when I heard a prolific author explain why he writes. He said the primary purpose for writing his latest book was to organize his thoughts on a certain topic. Writing an entire book is too much for most of us, but consolidating your ideas into a coherent statement is usually sufficient.

  2. Blogging captures and shares thoughts. Once your thoughts are recorded in electronic form, you can refer to them and point others to them. If I am asked for an opinion, I can often point to a previous blog post. If the question is interesting enough, I might write a new post. That satisfies this reason and the previous one.

  3. Blogging facilitates public self-expression. This is a positive aspect of the modern Web, if approached responsibly. Many social networking sites contain information people would not want to preserve for all time, but a carefully nurtured blog can establish a positive presence on the Web. If you blog on certain topics that interest me, I am going to recognize you if you contact me.

  4. Blogging establishes communities. The vast majority of the blogs I read are professionally-oriented (i.e., digital security). I follow blogs of people handling the same sorts of problems I do. I often meet other bloggers at conferences and can easily speak with them, because I've followed their thoughts for months or years. Book authors share a similar trait, although books are a much less fluid medium.

  5. Blogging can contribute original knowledge faster than any other medium. Blogging is just about the easiest way to contribute knowledge to the global community that I can imagine. It costs nothing, requires only literacy, is easily searchable, and can encourage feedback when comments are supported.

Why do you blog? And if you don't, why not?

Is Experience the Only Teacher in Security?

Another reader asked me this question, so I thought I might share it with you:

I'm really struggling with... how to communicate risk and adequate controls to the business managers at my employer... To put it bluntly, this is the first time the company has really looked at it [security] at all and they don't really want to deal with it. They have to because of the business we are in though... So while I've got a blazing good example of what doesn't work, I still don't know what does.

What are some good resources that you have found in communicating security (or other) risks to business? Are there books, blogs or authors that you would recommend?


I've written about this problem in the past, in posts like Disaster Stories Help Envisage Risks and Analog Security is Threat-Centric. I'll be speaking about this problem in my SANS Forensics Summit keynote next month, with the theme of "speaking truth to power."

Throughout my career, I've found few managers care about security until they've personally experienced a digital train wreck. Until a manager has had some responsibility for explaining an incident to his or her superiors, the manager has no real frame of reference to understand security. For me, this is a strength of the incident response community. We are absolutely the closest parties to ground truth in security, because we are the ones who manage security failures. The only other party as close to the problem is the adversary, and he/she isn't going to share thoughts on the issue.

Therefore, I recommend planning your security improvements, whatever they may be, then waiting for the right moment. Of course you can tell management that you have concerns, but don't be surprised when they ignore you. When a digital train wreck happens in your enterprise, step forward with your plan and say "I have an answer." In most intrusions managers want someone to tell them everything will be ok, especially when it's wrapped in a documented course of action. Be the person with the plan and you'll have greater success moving your security projects forward.

Does anyone else have suggestions for this blog reader?

Friday, 26 September 2008

Security vs IT at Computerworld

A long-time blog reader pointed me towards the Computerworld article Making enemies, but needing allies. I must absolutely emphasize that this story is not about me, nor does it reflect issues I have. However, my blog reader specifically asked me to find out whether any of you share this problem and, if so, how you handle it.

Our fledgling security organization is starting to run into some significant relationship challenges. As we're beginning to build our information security program from scratch, we're causing some friction.

In my company, information security is part of the IT department, but like several other IT disciplines, it reports directly to the CIO. As a result, the infosec and IT support teams are peers, a relationship as uneasy as that of siblings. Over the past couple of weeks, tensions between our teams have been rising sharply...

As we try to bring security to an acceptable level, we are introducing new policies and standards that are being met with hostility by the IT support teams. They will have to perform some of the remediation we have identified, such as patching and updating devices, cleaning up firewall rules and implementing redundant systems. So, basically we are telling them what to do -- which they interpret as telling them how to do their jobs. And they don't like that.


Does this situation resonate with any of you, and if so, how did you deal with it?

VizSec and RAID Wrap-Up

Last week I attended VizSec 2008 and RAID 2008. I'd like to share a few thoughts about each event.

I applaud the conference organizers for scheduling these conferences in the same city, back-to-back. That decision undoubtedly improved attendance and helped justify my trip. Thank you to John Goodall for inviting me to join the VizSec program committee.

I enjoyed the VizSec keynote by Treemap inventor Ben Shneiderman. I liked attending a non-security talk that had security implications. Sometimes I focus so strictly on security issues that I miss the wider computing field and opportunities to see what non-security peers are developing.

I must admit that I did not pay as much attention to the series of speakers that followed Prof Shneiderman as I would have liked. Taking advantage of the site's wireless network, I was connected to work the entire day doing incident handling. I did manage to speak with Raffy Marty during lunch, which was (as always) enlightening.

One theme I noticed at VizSec was the limitation of tools and techniques to handle large data sets. Some people attributed this to the Prefuse visualization toolkit used by many tools. Several attendees said they turn to visualization approaches because their manual analysis methods fail for large data sets. They don't need visualization tools which also croak when analyzing more than several hundred thousand records.

I also noticed that much of the visualization work for security tends to focus on IP addresses and ports. That is fine if you are limited to analyzing NetFlow records or other session data, but most of the excitement these days lives in log files, URLs, or layer 7 content. Perhaps just when the researchers have figured out a great way to show who is talking to whom, it won't matter much anymore. Clients will all be talking to the cloud, and the action will be within the cloud -- beyond the inspection of most clients.

One presentation I really liked was Improving Attack Graph Visualization through Data Reduction and Attack Grouping (.pdf) by John Homer, Xinming Ou, Ashok Varikuti and Miles McQueen. I thought their paper addressed a really practical problem, namely reducing the number of attack paths to those most likely (and logically) used by an intruder. I believe the speaker was unnecessarily criticized by several participants. I could see this approach being used in operational networks to help security staff make defensive and detective decisions.

At the end of the day I participated in a poster session by virtue of being a co-author of Towards Zero-Day Attack Detection through Intelligent Icon Visualization of MDL Model Proximity with Scott Evans, Stephen Markham, Jeremy Impson and Eric Steinbrecher. Scott and Stephen work at GE Research, and I plan to collaborate with them for our internal security analysis.

Following VizSec I attended two days of RAID, the 11th Recent Advances in Intrusion Detection conference. Five years ago I participated in the 6th RAID conference and posted my thoughts. In that post I noted comments by Richard Stiennon, made months after his 2003 declaration that IDS was "dead":

"Gateways and firewalls are finally plugging the holes... we are winning the arms race with hackers... the IDS is at the end of life."

I found those comments funny on their own, and even funnier in light of the recent story Intrusion-prevention systems still not used full throttle: survey:

Network-based intrusion-prevention systems are in-line devices intended to detect and block a wide variety of attacks, but the equipment still is often used more like an intrusion-detection system to passively monitor traffic, new research shows...

[Richard] Stiennon -- who created some controversy five years ago while a Gartner analyst when he declared IDSs "dead" -- says this Infonetics survey gives him fuel to fan the flames of criticism once again.

“IDS should be dead because it’s still a failed technology,” Stiennon says, expressing the view that simply logging alerts about attacks is almost always a pointless exercise. “IPS equipment should be doing more to block attacks.”


The fundamental problem was, is, and will continue to be, the following:

If you can detect an attack with 100% accuracy, of course you should try to prevent it. If you can't, what else is left? Detection.

I continue to consider so-called "intrusion detection systems" to really be attack indication systems. It's important to try to prevent what you can, but to also have a system to let you know when something bad might be happening. This subject is worthy of a whole chapter in a new book, so I'll have to wait to write that argument.

Overall, I felt that a lot of the RAID talks were divorced from operational reality. Several attendees addressed this subject with questions. Too many researchers appear to be working on subjects that would never see the light of day in real networks.

Saturday, 20 September 2008

CERIAS to CAE: We're Not a Lemon

Every so often we discuss topics like starting out in digital security on this blog. Formal education is one method, and one approach is the Centers of Academic Excellence in Information Assurance Education program. This program reports "93 Centers across 37 states and the District of Columbia." At first glance it is tough to see a downside to this program.

This is why I was surprised to read Centers of Academic... Adequacy, a recent post by Dr Gene Spafford. The core argument appears in this excerpt:

[W]e do not believe it is possible to have 94 (most recent count) Centers of Excellence in this field. After the coming year, we would not be surprised if the number grew to over 100, and that is beyond silly. There may be at most a dozen centers of real excellence, and pretending that the ability to offer some courses and stock a small library collection means “excellence” isn’t candid.

The program at this size is actually a Centers of Adequacy program. That isn’t intended to be pejorative — it is simply a statement about the size of the program and the nature of the requirements.

Some observers and colleagues outside the field have looked at the list of schools and made the observation that there is a huge disparity among the capabilities, student quality, resources and faculties of some of those schools. Thus, they have concluded, if those schools are all equivalent as “excellent” in cyber security, then that means that the good ones can’t be very good ("excellent" means defining the best, after all). So, we have actually had pundits conclude that cyber security & privacy studies can’t be much of a discipline. That is a disservice to the field as a whole.

Instead of actually designating excellence, the CAE program has become an ersatz certification program...
(emphasis added)

Therefore:

[W]e did not renew the certifications, and we dropped out of the CAE program when our certification expired earlier this year.

Wow, that is striking. CERIAS decided to remove itself from the "Centers of Academic Excellence" program for the reasons cited, plus several more listed in the blog. That's like me deciding to not renew my CISSP on moral grounds... except I did renew late last year when my employer requested the renewal and paid for it. CERIAS drew a real line in the sand and said "no thanks" to the government.

Do Spaf's comments remind you of the market for lemons?

There are good security programs and defective security programs ("lemons"). The prospective student of a security program does not know beforehand whether it is a good program or a lemon. So the student's best guess for a given program is that the program is of average quality; accordingly, he/she will be willing to pay for it only the price of a program of known average quality.

This means that the owner of a good security program will be unable to get a high enough tuition to make offering that program worthwhile. Therefore, owners of good programs will not place their programs in the CAE system. The withdrawal of good programs reduces the average quality of programs on the market, causing students to revise downward their expectations for any given program. This, in turn, motivates the owners of moderately good programs not to participate in CAE, and so on. The result is that a market in which there is asymmetrical information with respect to quality shows characteristics similar to those described by Gresham's Law: the bad drives out the good...
(That's the latest Wikipedia entry modified to discuss the issue at hand.)

The question now becomes: will any other university not renew their CAE status? Furthermore, will any of us decide not to renew our CISSP? I already decided not to renew my CCNA and CIFI certs. I let the CCNA lapse because it just isn't important for what I do. I let the CIFI lapse because the organization behind it collapsed following the tragic passing of its founder.

Friday, 19 September 2008

Cost of Intellectual Property Theft

I liked the following excerpt from Tim Wilson's story Experts: US Is Not Prepared to Handle Cyber Attacks:

If the bad guys launched a coordinated cyber attack on the United States tomorrow, neither government nor industry would be able to stop it, experts warned legislators yesterday.

At a hearing held by the House Permanent Select Committee on Intelligence, cyber defense experts testified that government agencies are insufficiently coordinated to handle an attack, and that efforts to build a defense have not adequately addressed issues in the private sector...

[Paul] Kurtz [a partner with Good Harbor Consulting and a member of the Center for Strategic and International Studies's (CSIS) Commission on Cybersecurity] registered concerns about the theft of intellectual property from U.S. companies, which he said is occurring at a rate of $200 billion a year. "American industry and government are spending billions of dollars to develop new products and technology that are being stolen at little to no cost by our adversaries," he said. "Nothing is off limits -- pharmaceuticals, biotech, IT, engine design, weapons design."


Why spend money on research and development if you can steal the product from someone else? The long-term foundation of this country's power is economic, not military. When our competitiveness is systematically eroded by foreign nation-states, action must be taken. A year ago I wrote US Needs Cyber NORAD:

We often hear that the private sector should protect itself, since the private sector owns most of the country's critical infrastructure. Using the same reasoning, I guess that's the reason why Ford defends the airspace over Dearborn, MI; Google protects Mountain View, CA, and so on.

Industry needs help, and we need it now.

Tuesday, 16 September 2008

On Breakership

Last week Mark Curphey asked Are You a Builder or a Breaker. Even today at RAID 2008, the issue of learning or teaching offensive techniques ("breakership") was mentioned. I addressed the same issue a few months ago in Response to Is Vulnerability Research Ethical.

Mark channels the building architecture theme by mentioning Frank Lloyd Wright. I recommend reading my previous post for comprehensive thoughts, but I'd like to add one other component. Two years ago I wrote Digital Security Lessons from Ice Hockey, where I made a case for defenders to develop offensive skills in order to be "well-rounded." Why is that? Turning to the building architecture idea Mark mentioned, why don't classical architects learn "offense," i.e., why aren't they "well-rounded"?

It turns out that classical architects do learn some "offense," except they concentrate on the natural physics of their space rather than on what an intelligent adversary might do. In other words, architects learn about various forces and the limits of their building materials, but usually not how to design a building that could withstand a Tomahawk Land Attack Missile (TLAM). Of course a small number of people do learn how to design structures that can withstand TLAMs, but most architects do not.

Digital architects are waking up to the fact that they face the equivalent of digital TLAMs constantly. Any system that is connected to the Internet, or that could be connected to the Internet one day, is vulnerable to digital TLAMs. Therefore, digital architects need to know how these weapons work so they can better build their systems.

It turns out that classical architects must also learn something about intelligent adversaries, especially as the terrorism threat occupies greater mindshare and drives building codes. Mindshare can be transitory but building codes are persistent. Even if we build mindshare or attention to security issues in the digital space, we still lack a "building code." That means we will probably remain vulnerable.

Monday, 08 September 2008

Wanted: Incident Handler with Reverse Engineering/Malware Analysis Skills

I am looking for an incident handler with reverse engineering and malware analysis skills to join a new security organization we are building within General Electric. We are hiring several people, so the generic job description appears on our GE Careers site under job number 843369. This is a GE employee position with great benefits and career prospects.

For this specific role, I am looking for the following qualities:

  1. Strong incident handling skills. I want this person to be able to speak authoritatively and confidently when dealing with internal business partners. (This is not a job supporting external customers.) If you are a great RE but are not comfortable doing generic incident handling, please do not apply.

  2. Intermediate-to-advanced reverse engineering and malware analysis skills. I am looking for someone who can tear apart malicious code that we encounter, determine how it works, and what we can do to resist and detect it.

  3. Intermediate coding skills. The ability to meet short-term operational tool development needs to support incident detection and response is a huge plus.

  4. Introductory-to-intermediate assessment skills. This would be a secondary task, but any assessment work you've done would be helpful.

  5. Willing to work in Cincinnati. GE has major NOC/SOC/data center infrastructure in Cincinnati, and I need to locate this subject matter expert in that city. In the event I find the perfect person who cannot work in Cincinnati, we can discuss alternative arrangements.


This is a demanding yet exciting role, and I figured multiple people who read this blog might be interested. If you are a serious candidate and have questions, please email taosecurity at gmail dot com. Thank you.

Bejtlich to Judge NYU-Poly CSAW Forensics Challenge

Dr. Nasir Memon was kind enough to ask me to be a judge at the Forensics Challenge component of the 5th annual Cyber Security Awareness Week, held by the Information Systems and Internet Security Lab within the Polytechnic Institute of New York University. NYU-Poly's ISIS lab is an NSF-funded lab and an NSA-designated Center of Excellence that provides a focus for multidisciplinary research and hands-on education in emerging areas of information security.

Anyone can participate in the challenge, which ISIS designed. (I have no knowledge of it, so I am considered "impartial.") Review the instructions on the Forensics Challenge Web site, and be sure to submit your analysis no later than Thursday 25 September 2008. I will not be able to attend the awards ceremony on 14 October, since I will be speaking at the SANS Forensics event that day. However, I will help judge the submissions.

Saturday, 06 September 2008

Internal Security Staff Matters

I read Gunter Ollmann's post in the IBM ISS blog with interest today. Gunter is "Director Security Strategy, IBM Internet Security Systems," so he is undoubtedly pro-outsourcing. Here is his argument:

[S]ecurity doesn’t come cheap. While individual security technologies get cheaper as they commoditize, the constant influx of new threats drives the need for new classes of protection and new locations to deploy them...

If you were to examine a typical organizations IT security budget, you’d probably see that the majority of spend isn’t in new appliances or software license renewals, instead it’ll lie in the departments staffing costs...

This is at odds with the way most organizations normally deal with specialized and professional skill requirements... Just about every organization I deal with (including some of the biggest international companies) relies upon external agencies to provide these specialist services and consultancy – as and when required – it’s more cost effective that way.

With that in mind, why are organizations building up their own highly-trained (and expensive) specialist internal security teams? Granted, some of the security technologies being deployed by organizations are relatively complex, but do they really require a Masters degree and CISSP certified experts to babysit them full-time...

Nowadays you can tap in an incredibly broad range of expertise – ranging from hard-core security researchers capable of helping you evaluate the security of new products you’re thinking of buying and deploying throughout your enterprise, through to 24x7 security sentinels; so knowledgeable about the security product you’ve deployed that they’re capable of guaranteeing protection with money-back SLA’s...

Organizations should take a closer look at their security budgets and evaluate whether they’re getting the right value out of their internal teams and whether their skills investment meets the daily need of the business.
(emphasis added)

By highlighting the focus on "security products," you can probably predict my response to Gunter's post. Sure, you can hire experts who may (or may not) be cheaper than internal staff, and they may be smarter about individual products or even defensive tactics, but they are poor with respect to the most critical aspect of modern security: business knowledge. It does not matter if you are the world's greatest packet monkey if you 1) don't know what matters to a business; 2) don't know business systems; 3) don't know what is normal for a business... do I need to continue?

This is the biggest challenge I see for consultants, having been one and having hired them. It's easier to hire a consultant to help configure a security product than it is to figure out if that product is even needed, which to buy, how to get approval and business buy-in, how to support it operationally, and a dozen other decisions.

I agree that certain specialized tasks merit outside support. That list changes from organization to organization. However, beware arguments like Gunter's.

The Analyzer Charged Again

I read a name I hadn't seen in years today when I read Kim Zetter's story Israeli Hacker Known as "The Analyzer" Suspected of Hacking Again:

Canadian authorities have announced the arrest of a 29-year-old Israeli named Ehud Tenenbaum whom they believe is the notorious hacker known as "The Analyzer" who, as a teenager in 1998, hacked into unclassified computer systems belonging to NASA, the Pentagon, the Israeli parliament and others.

Tenenbaum and three Canadians were arrested for allegedly hacking the computer system of a Calgary-based financial services company and inflating the value on several pre-paid debit card accounts before withdrawing about CDN $1.8 million (about U.S. $1.7 million) from ATMs in Canada and other countries. The arrests followed a months-long investigation by Canadian police and the U.S. Secret Service.


The Analyzer was the "mastermind" behind Solar Sunrise, one of the original "so easy a Caveman could do it" intrusions -- back in 1998. Solar Sunrise was huge and it was one of several very rude awakenings I remember while serving in the Air Force that decade.

Seeing The Analyzer back in law enforcement custody reminds me of the post I made about Max Ray Butler and somewhat of my post Intruders Selling Security Software. It's all about trust.

Bejtlich Keynote at 1st ACM Workshop on Network Data Anonymization

Brian Trammell and Bill Yurcik were kind enough to ask me to deliver the keynote at the 1st ACM Workshop on Network Data Anonymization (NDA 2008). The one-day event takes place 31 October 2008 at George Mason University in northern VA. My talk will discuss the trials and tribulations of OpenPacket.org, and changes planned for the project.

Request for Feedback on Deny by Default

A friend of mine is developing digital defense strategies at his company. He is interested in your commentary and any relevant experiences you can share. He is moving from a "deny bad, allow everything else" policy to an "allow good, deny everything else" policy.

By policy I mean a general approach to most if not all defensive strategies. On the network, define which machines should communicate, and deny everything else. On the host, define what applications should run, and deny everything else. In the browser, define what sites can be visited, and deny everything else. That's the central concept, although expansions are welcome.
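
To make the idea concrete, here is a minimal sketch in Python of what an "allow good, deny everything else" check might look like for network flows. The addresses, ports, and function name are hypothetical and purely illustrative; a real implementation would live in firewall, proxy, or application-control policy rather than in code like this.

```python
# Minimal default-deny sketch: only flows explicitly on the allowlist pass.
# The addresses, ports, and names below are hypothetical examples.

ALLOWED_FLOWS = {
    ("10.1.1.5", "10.2.2.10", 1433),  # app server -> database
    ("10.1.1.5", "10.3.3.20", 443),   # app server -> internal web service
}

def is_permitted(src_ip, dst_ip, dst_port):
    """Allow only flows on the list; everything else is denied by default."""
    return (src_ip, dst_ip, dst_port) in ALLOWED_FLOWS

print(is_permitted("10.1.1.5", "10.2.2.10", 1433))  # True: explicitly allowed
print(is_permitted("10.1.1.5", "192.0.2.7", 80))    # False: not listed, so denied
```

The same pattern applies to application allowlists on hosts and site allowlists in browsers: enumerate the known good, and let the default handle everything else.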

My friend would like to know if anyone in industry is already following this strategy, and to what degree. If you can name your organization all the better (even if privately to me, or to him once the appropriate introductions are made). Thank you.

Bejtlich Keynote at SANS Forensics Summit

Rob Lee was kind enough to ask me to deliver the keynote on the second day of the SANS WhatWorks in Incident Response and Forensic Solutions Summit. The two-day event takes place 13-14 October 2008 at Caesars Palace in Las Vegas, NV. The conference agenda looks great, with training classes available before and after the summit. The tuition fee is $1,595 if paid by 10 Sep or $1,845 thereafter. I am very much looking forward to attending this event.

Rob also pointed out the new SANS Computer Forensics and E-discovery Community and SANS Forensics Blog.

Friday, 05 September 2008

Microsoft Network Monitor 3.2 Beta for Tracking Traffic Origination

I'm always looking for a tool that maps the traffic to or from a host to the process receiving or sending it. Today I noticed that Microsoft Network Monitor offers a beta that appears to have this functionality, according to this Netmon blog post. I visited the Netmon site on Microsoft Connect (registration required) to download beta 3.2. I ran two live capture tests to see what Netmon 3.2 beta would report.



As you can see in this first screen capture, the vast majority of traffic is considered "unknown." I tried using ping.exe in a cmd.exe terminal. I tried using ftp.exe in the same cmd.exe terminal. I used Firefox to watch a YouTube video, and I used Microsoft Media Player to view some video. It seemed that the more time an activity occupied, the more likely Netmon would associate it with the right process. For example, downloading a FreeBSD .iso through Firefox was associated with Firefox, but visiting most Web sites was not.



I tried a second session in which I updated Adobe Acrobat Reader, launched Skype, and took a few other actions. Again the vast majority of traffic was "unknown," although I could tell much of it was caused by launching Skype.

Does anyone else use this program and get different results? Incidentally I took these actions as Administrator to ensure I didn't run into any permissions problems, but it doesn't seem to have made a difference here.

Do you have a program to map traffic to generating processes, live?
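
For comparison, here is a rough Python sketch using the third-party psutil library to snapshot open sockets and the processes that own them. This only shows socket ownership at a point in time, not per-packet attribution the way Netmon attempts, and on Windows it generally needs to run as Administrator to see every process.

```python
# Snapshot current sockets and the processes that own them (requires psutil).
# This is a point-in-time view of socket ownership, not per-packet attribution.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.pid is None:
        continue  # kernel-owned or unattributable connection
    try:
        proc_name = psutil.Process(conn.pid).name()
    except psutil.NoSuchProcess:
        continue  # process exited between enumeration and lookup
    local = f"{conn.laddr.ip}:{conn.laddr.port}" if conn.laddr else "-"
    remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "-"
    print(f"{proc_name:<20} {conn.status:<12} {local:<22} -> {remote}")
```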

Tuesday, 02 September 2008

Schneier Agrees: Security ROI is "Mostly Bunk"

I know a lot more people pay attention to Bruce Schneier than they do to me, so I was thrilled to read his story on Security ROI (also in CSO Magazine):

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It's become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It's a good idea in theory, but it's mostly bunk in practice.

Before I get into the details, there's one point I have to make. "ROI" as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.

But as anyone who has lived through a company's vicious end-of-year budget-slashing exercises knows, when you're trying to make your numbers, cutting costs is the same as increasing revenues. So while security can't produce ROI, loss prevention most certainly affects a company's bottom line.


I am really honored to see Bruce's blog post link to three of my previous posts on the subject too.

Enterprise Users Should Not Be Records Managers

I found J. Timothy Sprehe's FCW article Seeking the records decider interesting. The whole article is worth reading, and it's short, but I'll post some excerpts to get the point across:

Like everyone else — including NARA — GAO assumes and accepts that employees will decide whether e-mail messages are federal records. It is fundamentally wrong to lodge decision-making for records management at the desktop PC level. It means the agency has as many records managers as it has e-mail users — a patent absurdity.

Managing e-mail at the desktop level is failing everywhere...

Records management works best when it happens in the background in a way that is transparent to employees...

Conventional wisdom says the technology for making e-mail management decisions at the software or server level is not yet mature. In my judgment, that mindset demonstrates a lack of imagination and an unwillingness to tackle old questions in new ways...

The Air Force is moving even further with the implementation of its enterprise information management strategy. Using proven commercial products, the Air Force is investing heavily in automated metadata extraction for all information objects, including e-mail messages, and populating an enterprisewide metadata registry. Air Force officials believe they can construct a rules engine that will use the detailed metadata to automate records management decisions, including retention and disposition schedules. Desktop PC users will see none of that.

Another beauty of the Air Force strategy is that it holds the promise of supplying an enterprisewide solution for e-discovery, which involves providing electronic documents for evidence in legal cases...

Agencies will never train their senior officials — let alone every rank-and-file user — to make well-informed decisions about e-mail records management. Why not accept that fact and experiment with new approaches that really work?


I agree with that sentiment. What's better, an automated system whose rules can be explained, tested, and agreed upon, or a policy that relies on interpretation and implementation by users?
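
As a hypothetical illustration of what "rules that can be explained, tested, and agreed upon" might look like, consider a tiny metadata-driven retention function in Python. The field names and retention categories are invented for this sketch and do not describe the Air Force's actual scheme.

```python
# Hypothetical metadata-driven retention rule: the decision is made from
# message metadata alone, with no action required from the end user.
from datetime import date, timedelta

def retention_decision(msg):
    """Return a retention category for an e-mail message based on its metadata."""
    if msg.get("record_series") == "contract":
        return "retain-7-years"
    if msg.get("sender_org") == "legal":
        return "retain-as-record"
    if date.today() - msg["received"] > timedelta(days=180):
        return "eligible-for-disposition"
    return "retain-default"

print(retention_decision({"sender_org": "legal", "received": date(2008, 9, 1)}))
```

Because the rules live in one place, they can be reviewed, unit-tested, and changed centrally, which is exactly what per-user interpretation of policy cannot offer.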

This article reinforces one of the great recent security insights of our time, by Nitesh Dhanjani:

The job of information security is to make it harder for people to do wrong things.

Automatic background patch installation, automatic background backups and archiving, and related unobtrusive yet effective measures are the way forward. Users neither care nor are equipped to defend themselves, and they really shouldn't have to worry about being security experts.

Can anyone comment on the Air Force's approach?

Monday, 01 September 2008

Standards for System Administration

My favorite article from the August ;login: magazine is online: "Standard Deviations" of the Average System Administrator (.pdf) by Alva Couch. I'd like to highlight some excerpts:

System administrators have a surprising amount in common with electricians. Both professions require intensive training. Both professions are plagued by amateurs who believe (erroneously) that they can do a good job as a professional. Both professions are based upon a shared body of knowledge.

But electricians can call upon several resources that system administrators lack. Electricians have a legally mandated mentorship/apprenticeship program for training novices. They have a well-defined and generally-accepted profession of job grades, from apprentice to journeyman to master. They advance in grade partly through legally mandated apprenticeship and partly through legally mandated certifications. These certifications test for knowledge of a set of standards for practice—again, mandated by law. The regulations are almost universally accepted as essential to assuring quality workmanship, function, and safety.

In short, one electrician can leave a job and another can take over with minimal trouble and without any communications between the two, and one can be sure that the work will be completed in the same way and to the same standard. Can any two system administrators, working for different employers, be interchanged in such a fashion?

At present, system administrators are at a critical juncture. We have functioned largely as individuals and individualists, and we greatly value our independence. But the choices we make as individuals affect the profession as a whole. I think it is time for each of us to act for the good of the profession, and perhaps to sacrifice some of that independence for what promises to be a greater good. This will be a difficult sacrifice for some, and the benefits may be intangible and long-term rather than immediate. But I think it is time now for us to change the rules.

From standards for distributions (e.g., the Linux Standard Base) to standards for procedures (e.g., those upon which Microsoft Certified Engineers are tested), I believe that — although standards may annoy us as individuals — standards for our profession (and certification to those standards) help build respect for system administration as a profession. Compliance with standards gives us a new and objective way to measure the quality of management at a site. Standards not only make the task easier but also enforce desirable qualities of the work environment and help to justify appropriate practices to management. Adoption of standards also has a profound effect upon our ability to certify system administrators and even changes the meaning and form of such a certification.

Is a system administrator accorded the same respect as an electrician? I think the answer is an emphatic “no,” at least for those electricians who hold a master’s license. There are two factors that engender respect for a master electrician: legally mandated standards linked closely to legally mandated apprenticeship and certification.


I think the whole article is worth reading, but those are the key points. Now, I'm sure many of us have electrician horror stories. I know someone (not me) who was unlicensed and therefore had to hire an electrician to wire an addition to his house. The "electrician" did such a poor job that this person then rewired everything to code himself rather than bother with the electrician again. I don't think that's the norm, but I wonder if there is any research that might support Dr. Couch's statement that one electrician can leave a job and another can take over with minimal trouble and without any communications between the two, and one can be sure that the work will be completed in the same way and to the same standard?

Still, I guarantee that most every system administrator handles boxes differently. Even within the same company, I find systems horrendously maintained. I once assumed control of a set of "Linux appliances" built and operated by a managed security service provider. They were all built for the same purposes, but ran a variety of Linux kernels with different applications, versions, and configurations. These were all operated by the same small MSSP!

Perhaps one of the worst examples of our lack of standardization involves network diagrams. Sites like Rate My Network Diagram will make you laugh and cry. I usually cry because I took four years of architecture training in high school. We did most everything by hand (it was the late 1980s), to include learning how to write the various "architectural fonts" we were expected to use. (We did start learning how to operate AutoCAD on the Apple IIGS just before graduation!) The point, however, was that all of our diagrams looked similar, if not the same. This standardization allows one architect to review and build using another's plans without wondering what the various lines and icons mean.

Incidentally, I know about Cisco's icons. I'm talking about a standard way to use such icons, not standardized icons themselves. That's only one step.

Don't get me started on standard terminology... Yes, the image on the left depicts my feelings about the maturity of our industry. It's still early days, so I hope we decide to professionalize during my working lifetime.

NetworkMiner

Thanks to the great Toolsmith article by Russ McRee, I decided to try Eric Hjelmvik's NetworkMiner, a Windows-based network forensic tool.

You might think that Wireshark is the only tool you need for network forensics, but I maintain that Wireshark (while a great tool) is best used for packet-by-packet analysis. 95% of network forensics investigations are mostly concerned with the application layer data passed during a transaction, not the value of the initial sequence number sent in a SYN segment.
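
As a rough illustration of that application-layer focus (and not of how NetworkMiner itself works), the following Python sketch uses the third-party Scapy library to pull HTTP request and status lines out of a capture file; the filename is a placeholder.

```python
# Print HTTP request/status lines found in TCP payloads of a capture file.
# Requires scapy; "capture.pcap" is a placeholder filename.
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Crude check for an HTTP request or status line at the start of the segment.
        if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"HTTP/")):
            print(payload.split(b"\r\n", 1)[0].decode("ascii", errors="replace"))
```

A per-packet view like this misses content split across segments, which is exactly why tools that reassemble streams and extract content are so valuable for forensics.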

I intend to keep an eye on NetworkMiner because it's free and very easy to use. It would be great to see functionality in NetworkMiner merged into Wireshark. For example, I don't see any reason to implement feature requests for parsing any protocol that Wireshark already supports (which is basically every protocol that matters). NetworkMiner should focus on content extraction and perhaps leverage Wireshark where it can.