Sunday, March 30, 2008

Wireshark 1.0.0 Released

I'd like to congratulate the Wireshark team for releasing Wireshark 1.0.0. As the news item
says, it's been nearly 10 years in the making. I started using Ethereal in 1999 at the AFCERT with data collected from our ASIM sensors.

It's a great time for network security monitoring right now! With Sguil 0.7.0 released, there's a lot of attention from high-level players. It's cool.

Friday, March 28, 2008

Practical Data Analysis and Reporting with BIRT

A friend of mine from my days at Ball Aerospace named John Ward wrote a book titled Practical Data Analysis and Reporting with BIRT. John was responsible for writing the reports we provided to customers of our network security monitoring service. He used that experience as a reason to learn more about BIRT, the Business Intelligence and Reporting Tools Eclipse-based reporting system.

If you have any interest in using an open source product to create reports, check out Practical Data Analysis and Reporting with BIRT. I think you can get moving with BIRT using this book faster than you can with the longer titles from AWL. John's blog contains many posts on using BIRT to design and create reports as well.

Wednesday, March 26, 2008

Two Studies on Security Spending

I would like to note two articles on security spending. I learned of the first by listening to the audio edition of The Economist, specifically Anti-terrorist spending: Feel safer now?. The article summarizes a report (Transnational Terrorism, [.pdf]) by The Copenhagen Consensus, a think tank that analyzes government spending. The Economist says:

The authors of the study calculate that worldwide spending on homeland security has risen since 2001 by between $65 billion (if security is narrowly defined) and over $200 billion a year (if one includes the Iraq and Afghan wars). But in either case the benefits are far smaller.

Terrorism, the authors say, has a comparatively small impact on economic activity, reducing GDP in affected countries by perhaps $17 billion in 2005. So although the number of terrorist attacks has fallen, and fewer people have been injured, the imputed economic benefits are limited — just a tenth of the costs.

That does not necessarily mean the extra spending was wasted. The number of attacks might have been even higher. In 2007 Britain's prime minister, Gordon Brown, said his country had disrupted 15 al-Qaeda plots since 2001. Yet so big is counter-terrorism spending and so limited is terrorism's economic impact that, even if 30 attacks like the London bombings of July 2005 were prevented each year, the benefits would still be lower than the costs. The authors conclude that spending is high because it is an insurance policy against a truly devastating operation such as a dirty bomb...

There were fewer terrorist attacks, they say, but the balance of costs and benefits is still poor—between five and eight cents of benefit for every dollar spent. But international co-operation to disrupt terrorist finances would be cost-effective, they think, producing $5-15 of benefits for each $1.


I am not here to debate the politics of the event, and if I get any comments about that I'll just delete them. Rather, I find the effort to perform a cost-benefit analysis to be interesting. I strongly prefer a cost-benefit approach (such as the one recommended, but not fully realizable, in Managing Cyber-Security Resources) to so-called "return on security investment." It's fascinating to see a debate about whether spending is justified if "nothing bad happens." If nothing bad happens, was the money wasted or was it effective?

A second study is available via SecureWorks, titled Forrester Total Economic Impact™ of SecureWorks’ SIEM Service. Ok, this is a vendor pitch, but I thought the approach taken by the Forrester researchers to quantify the benefit of security operations could at least be a template for others.

In December 2007, SecureWorks commissioned Forrester Consulting to examine the total economic impact and potential return on investment (ROI) that enterprises might realize from deploying SecureWorks’ Security Information and Event Management (SIEM) Service...

Pacific Gas and Electric Company (PG&E), one of the largest natural gas and electric utilities in the United States, uses SecureWorks’ SIEM Service at the monitoring level for more than 90 systems in its network. In in-depth interviews with PG&E, Forrester found that the organization achieved comprehensive, enterprise-level security monitoring at a lower cost than the alternative of implementing and maintaining an in-house 24x7 Security Operating Center (SOC) and SIEM solution. PG&E also achieved a lower risk of loss due to security breaches, and was better able to track security performance for audits and reporting, thus building credibility for their security program within the organization and with clients. Forrester calculated that PG&E achieved a return on investment (ROI) of 193%, with a nearly immediate payback period.


Ugh, yes, I detest "ROI," but check out the whitepaper to see how they justified the security program. You can download it without giving away your life's details.
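
For readers who only grudgingly speak this language: as a rough sketch, Forrester-style TEI reports express ROI as net benefits over costs. The dollar figures below are invented for illustration and do not come from the whitepaper; only the formula is the point.

  # Rough sketch of what a "193% ROI" claim means arithmetically.
  # The dollar amounts are invented; only the formula matters.
  costs = 100_000      # present value of program costs
  benefits = 293_000   # present value of risk-adjusted benefits
  roi = (benefits - costs) / costs
  print(f"ROI = {roi:.0%}")   # prints "ROI = 193%"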

Sguil 0.7.0 Released

...and there was much rejoicing. Sguil 0.7.0 is now available for download. Sguil is an open source interface to statistical, alert, session, and full content data written by Bamm Visscher. A great way to quickly see the differences between 0.6.1 and 0.7.0 is to visit the NSM Wiki Sguil Overview and check out the diagrams near the bottom of the page. I've been using Sguil 0.7.0 from CVS for several weeks in production and it's working well. I plan to create a new virtual machine with Sguil 0.7.0 on FreeBSD 7.0. Shortly you will also be able to buy a copy of the new BSD Magazine featuring my article Sguil 0.7.0 on FreeBSD 7.0. Check out the release announcement for more details.

Sunday, March 23, 2008

Implementing Enterprise Visibility by Leading Change

I've been advocating increased digital situational awareness via network security monitoring and related enterprise visibility initiatives for several years. Recently I read a Harvard Business Review case study called Leading Change: Why Transformation Efforts Fail by John P. Kotter. His eight-stage process for creating major change includes:

  1. Establish a sense of urgency.

  2. Create a guiding coalition.

  3. Develop a vision and strategy.

  4. Communicate the change vision.

  5. Empower broad-based action.

  6. Generate short-term wins.

  7. Consolidate gains and produce more change.

  8. Anchor new approaches in the culture.


Failure to follow these eight steps often results in failed change efforts. Kotter notes for item 1 that the goal is to make the status quo seem more dangerous than launching into the unknown... When is the urgency rate high enough? [T]he answer is when about 75% of a company's management is honestly convinced that business-as-usual is totally unacceptable. Consider that level of commitment when trying to rally support for improved digital security!

For item 3, Kotter advises if you can't communicate the vision to someone in five minutes or less and get a reaction that signifies both understanding and interest you are not yet done with this phase of the transformation process. "Botnet, C&C channel, rootkit, Trojan, what??"

For item 4, Kotter says transformation is impossible unless... people are willing to help, often to the point of making short-term sacrifices. "You mean I have to schedule an outage window to deploy that network tap so you can observe traffic?"

For item 5, Kotter counsels communication is never sufficient by itself. Renewal also requires removal of obstacles. "We're sorry, we just don't have enough space in our data center for your equipment!"

For item 6, Kotter states Real transformation takes time, and a renewal effort risks losing momentum if there are no short-term goals to meet and celebrate. Most people won't go on the long march unless they see compelling evidence within 12 to 24 months that the journey is producing expected results. Without short-term wins, too many people give up or actively join the ranks of those people who have been resisting change. I think that is a compelling point; find something useful, fast.

For item 8, Kotter writes change sticks when it becomes "the way we do things around here." For me this means Building Visibility In. For example, no new network link is deployed without a network tap. No new application is activated without a logging mechanism enabled and logs being sent to a central collection point. It is possible to enforce this behavior via mandate and procedure, but it is preferable for the need for these activities to be recognized as essential to success.

If you want to read the whole case study it appears in several forms online thanks to Google.

E-discovery Is an Information Lifecycle Management Problem, Not a Security Problem

The more I learn about e-discovery, the less I think it's a security problem. The vast majority of e-discovery issues are pure Information Lifecycle Management (ILM) concerns. The one area where I think security has a role is countering the subject's utilization of anti-forensics and counter-forensics (defined previously as attacking evidence and attacking tools, respectively).

I was reminded of this opinion while reading Find What You're Looking For? in Information Security magazine. Take a look at these Evidence Sources, for example.



Given the data sources depicted in the figure, why should information security have anything to do with e-discovery? I'll answer that question: history and tradition. In the "old days," internal investigations primarily meant imaging hard drives, reviewing content for disgusting images or incriminating documents, and producing them for management. Only the security team had the necessary expertise for this exercise. Today, in the age of thinner clients, centralized storage, remote outsourced backup, and so on, we need to image hard drives less and less. Those who support the IT infrastructure should be responsible for e-discovery. In fact, I've seen a lot of attention to e-discovery in the storage press (One year after FRCP, struggles continue with e-discovery, How to purchase an e-discovery tool, and so on). I think this is appropriate.

Note this is totally different from intrusion investigations. Analyzing what an intruder did (insider or outsider) is not the same as producing documents for opposing counsel, a regulatory agency, or another party. E-discovery is not about investigating violations of CIA -- it's a document production exercise.

I liked the following figure in the Information Security article.



It's probably easy to see where your organization falls on this continuum.

I think it's time to push the e-discovery issue to where it belongs -- with the data managers or at least the legal team. As the number of true security mandates increases the load of the security team, I suggest sending work where it should be done, not where it might traditionally have been done. (I could say the same thing about backup, by the way. Wait, isn't that an availability issue? No -- availability is a security responsibility when it is at risk due to attack, not because someone's hard drive died.)

Finally, I'd like to reproduce part of the article that is not online but which is very important in my opinion.

Spare the White Gloves: Electronic Evidence does not need to be handled with excessive care.

Organizations need to debunk a chain-of-custody myth that perseveres in security circles: that evidence must be handled with white gloves, plastic bags and forceps (metaphorically speaking). In other words, the assumption that electronically stored information (ESI) must have extreme tamper-proofing and virtuous handling procedures and be pure as the driven snow for presentation in court simply isn't true.

Enterprises are not law enforcement and the cases they are usually involved in are not criminal ones. ESI comprises business records, and as long as it is stored in accordance with policy and as part of the normal IT operation in support of the business, then it is adequate for e-discovery purposes.

The US Federal Rules of Evidence state that just because data can be manipulated doesn't mean it can't be used. Rather, an enterprise simply must show that methods used to collect and store the information are essentially trustworthy. Although prudent integrity protections should be employed -- such as access controls and logs of the actions of administrators who can delete or modify information -- an elaborate digital signature infrastructure or cryptographic checksums is unlikely to be required.

This is a worthwhile matter to discuss with a legal team. Consider the email records of Microsoft senior executives that were used as part of multibillion-dollar antitrust investigations. There were no intricate antitampering mechanisms for the ESI in that case, yet the evidence stood and few cases have stakes so high.


This reflects my own opinion too. You don't want to act irresponsibly, but you don't have to approach every event like it's a criminal case and you're the investigating detective.

Justifying Digital Security via 10-K Risk Factors

I'm a shareholder in Ball Corporation, thanks to the compensation plan I joined as an employee many years ago. Last week I received the company 10-K in the mail. I thought about my last reference to the Form 10-K in my post CIO Magazine 20 Minute Miracles and Real Risks. I wondered whether any of the Risk Factors in the 10-K could be used to justify a digital security program.

Let's look at each of them. If you're not familiar with Ball, it's mainly a manufacturer of packaging products, although one segment is an aerospace company (where I worked).

  1. The loss of a key customer could have a significant negative impact on our sales... [Our] [c]ontracts are terminable under certain circumstances, such as our failure to meet quality or volume requirements... The primary customers for our aerospace segment are U.S. government agencies or their prime contractors... Our contracts with these customers are subject to several risks, including funding cuts and delays, technical uncertainties, budget changes, competitive activity and changes in scope. For this risk factor, a digital attack upon the manufacturing process could cause customers to turn elsewhere. Should a defense contractor lose faith in Ball's security measures, it may source defense products and services elsewhere.

  2. We face competitive risks from many sources that may negatively impact our profitability... Our current or potential competitors may offer products at a lower price or products that are deemed superior to ours. There is no clear link to digital security here, as this risk factor is fairly vague itself.

  3. We are subject to competition from alternative products, which could result in lower profits and reduced cash flows. There is no clear link to digital security here either.

  4. We have a narrow product range, and our business would suffer if usage of our products decreased. Same.

  5. Our business, financial condition and results of operations are subject to risks resulting from increased international operations... This sizeable scope of international operations may lead to more volatile financial results... Reasons for this include, but are not limited to, the following: 1) political and economic instability in foreign markets; 2) foreign governments’ restrictive trade policies; 3) the imposition of duties, taxes or government royalties; 4) foreign exchange rate risks; 5) difficulties in enforcement of contractual obligations and intellectual property rights; and 6) the geographic, language and cultural differences between personnel in different areas of the world. This item could have also listed vulnerability to economic espionage by hiring foreign nationals in overseas plants.

  6. We are exposed to exchange rate fluctuations. This is purely a business concern.

  7. Our business, operating results and financial condition are subject to particular risks in certain regions of the world... We may experience an operating loss in one or more regions of the world... Moreover, overcapacity, which often leads to lower prices, exists in a number of regions. The economic espionage aspect could fit here as well.

  8. If we fail to retain key management and personnel, we may be unable to implement our key objectives. Poor personnel management increases the likelihood of insider attacks, and poor handling of terminated personnel could result in IP loss.

  9. Decreases in our ability to apply new technology and know-how may affect our competitiveness. This is the closest we get to seeing technology mentioned as a business risk. Here it is failure to use technology, not protect data manipulated by technology.

  10. Bad weather and climate changes may result in lower sales. This is purely a business worry.

  11. We are vulnerable to fluctuations in the supply and price of raw materials. Same.

  12. Prolonged work stoppages at plants with union employees could jeopardize our financial position. The disgruntled insider is a possibility here, along with digital activism via DoS or defacement or even phishing.

  13. Our business is subject to substantial environmental remediation and compliance costs. This is mainly an environmental issue, although Ball is subject to various laws with digital security implications.

  14. There can be no assurance that any acquisition, including the U.S. Can and Alcan businesses, will be successfully integrated into the acquiring company. Acquisitions have historically been problematic for IT and security. An acquisition could be compromised or be an easy conduit for compromise.

  15. If we were required to write down all or part of our goodwill, our net earnings and net worth could be materially adversely affected. Business only.

  16. If the investments in Ball's pension plans do not perform as expected, we may have to contribute additional amounts to the plans, which would otherwise be available to cover operating expenses. Same.

  17. Our significant debt level could adversely affect our financial health and prevent us from fulfilling our obligations under the notes issued pursuant to our bond indentures. Same.

  18. We will require a significant amount of cash to service our debt and fund other investment opportunities. Our ability to generate cash depends on many factors beyond our control. Same.

  19. We are subject to U.S. generally accepted accounting principles (U.S. GAAP), under which we are often required to make changes in our accounting and reported results. Same.


Overall, the great majority of these risks that business people really care about do not have much to do with digital security. However, several of them do and several could. "Alignment" of IT with business objectives is an often-cited mantra. Perhaps digital security could try aligning itself with the risk factors in the company 10-K?

Wednesday, March 19, 2008

Ten Themes from Recent Conferences

I blogged recently about various conferences I've attended. I considered what I had seen and found ten themes to describe the state of affairs and some general strategies for digital defense. Your enterprise has to be of a certain size and complexity for these items to hold true. For example, I do not expect item one to hold true for my lab network since the user base, number of assets, and nature of the assets are so small. Furthermore, I heavily instrument the lab (that's the purpose of it), so I am less likely to suffer item one. Still, organizations that use their network for business purposes (i.e., the network is not an end unto itself) will probably find common ground in these themes.

  1. Permanent compromise is the norm, so accept it. I used to think digital defense was a cycle involving resist -> detect -> respond -> recover. Between recover and the next attack there would be a period where the enterprise could be considered "clean." I've learned now that all enterprises remain "dirty" to some degree, unless massive and cost-prohibitive resources are directed at the problem.

  2. We cannot stop intruders, only raise their costs. Enterprises stay dirty because we cannot stop intruders, but we can make their lives more difficult. I've heard of some organizations trying to raise the $ per MB that the adversary must spend in order to exfiltrate/degrade/deny information.

  3. Anyone of sufficient size and asset value is being targeted. If you are sufficiently "interesting" but you don't think you are being attacked and compromised, you're not looking closely enough.

  4. Less Enterprise Protection, more Enterprise Defense. We need to think less in terms of raising our arms to block our face while digitally boxing, and more in terms of side-stepping, ducking and weaving, counter-punching, and other dynamic defenses.

  5. Less Prevention, more Detection, Response, Disruption. One of my laws from my books is Prevention eventually fails. Your best bet is to identify intrusions and rapidly contain and frustrate the intruder. You have to balance information gathering against active responses, but most organizations cannot justify what are essentially intel gathering operations against the adversary.

  6. Less Vulnerability Management, more System Integrity Analysis. Vulnerability management is still important, but it's an input metric. We need more output metrics, like SIA. Are all the defenses we institute doing anything useful? SIA can provide some answers.

  7. Less Totality, more Sampling. In security, something is better than nothing. Instead of worrying about determining the trustworthiness of every machine in production, devise statistically valid sample sizes, conduct SIA, tactical traffic assessment, and other evaluation techniques, and extrapolate to the general population. (A back-of-the-envelope sample-size sketch follows this list.)

  8. Less Blacklisting, more Whitelisting. Organizations are waking up to the fact that there is no way to enumerate bad and allow everything else, but it is possible to enumerate good and deny everything else.

  9. Use Infrequency/Rarity to our advantage. If your organization adopts something like the FDCC on your PCs and whitelists applications, the environment will be fairly homogeneous. Many organizations are deciding to make the trade-off between diversity/survivability and homogeneity/susceptibility in favor of homogeneity. If you're going down that path, why not spend extra attention on anything that deviates from your core load? Chances are it's unauthorized and potentially malicious.

  10. Use Blue and Red Teams to measure and validate. I've written about this a lot in my blog but I'm seeing other organizations adopt the same stance.
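
As a minimal sketch of theme 7, here is one way to size such a sample. It assumes a 95% confidence level, a plus-or-minus 5% margin of error, and the conservative p = 0.5; the numbers and function name are illustrative, not a prescription.

  import math

  def sample_size(population, z=1.96, margin=0.05, p=0.5):
      # Cochran's formula for an effectively infinite population
      n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
      # Finite population correction for a known host count
      return math.ceil(n0 / (1 + (n0 - 1) / population))

  print(sample_size(10000))  # roughly 370 hosts to assess, not all 10,000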


Have you adopted any themes based on your work or conference attendance?

Tuesday, March 18, 2008

CIO Magazine 20 Minute Miracles and Real Risks

I liked CIO Magazine's article 20 Things You Can Do In 20 Minutes to Be More Successful at Work by Stephanie Overby. Several excerpts follow.

  • Grab the annual 10-K reports that your top competitors have filed with the Securities and Exchange Commission and read the section called "Management's Discussion and Analysis." That's where the CEO (through corporate lawyers) describes what happened to the company in the past year, good and bad.

    By scanning that material, you can immediately get a better understanding of the competition.

  • Sit down right now and reschedule all your internal IT meetings for just 20 minutes...

    "There's only about 15 minutes to 30 minutes of true productivity in most meetings, even though meetings are typically set up for an hour," says Michael Hites, CIO of New Mexico State University, who once placed a 30-minute limit on all meetings. "The idea is that it forces you and your meeting buddies to prepare and focus." Hites found that shorter meetings were more effective and left more time to actually accomplish things.

    If you like that idea, consider this even more sweeping suggestion from Direct Energy CIO Kumud Kalia: Cancel all recurring meetings with your subordinate staff. "Ask them to come to you with major issues, not every little decision," Kalia advises.

  • Take your own company's 10-K and pay attention to the bad stuff that happened in the past year. Think about how technology affects such events, then figure out what you can do about them. For example, in its latest 10-K, Owens Corning, the $6.5 billion maker of construction materials, talks about how the decline in U.S. home building hurt sales. Could better business intelligence have predicted how steeply new construction would fall and have helped Owens prepare?

    Think also about how IT can mitigate the scary possibilities cited in the "risk factors" section.

  • Ask yourself if you're working toward something or just working.

  • [S]end an e-mail to your staff to encourage them to pick up on something new. And tell them they are expected to spend one day a month learning. Make it an official day on everyone's calendar...

    One no-cost way to do this is to encourage participation in computer user group meetings and industry associations.


Speaking of 10-K forms, I looked at the latest from Owens Corning, specifically the Risk Factors section. It reminded me that creating a "Chief Risk Officer" out of the ranks of the information security staff is generally a bad idea. Why? All of the risks that businesses care about have little to do with information or security. Here's what Owens Corning cites:

  • Downturns in residential and commercial construction activity or general business conditions could materially negatively impact our business and results of operations.

  • Our cost-reduction projects may not result in anticipated savings in operating costs.

  • Adverse weather conditions and the level of severe storms could materially negatively impact our results of operations.

  • We may be exposed to increases in costs of energy, materials and transportation and reductions in availability of materials and transportation, which could reduce our margins and harm our results of operations.

  • Our hedging activities to address energy price fluctuations may not be successful in offsetting future increases in those costs or may reduce or eliminate the benefits of any decreases in those costs.

  • And the list continues...


Do you see what I mean? At the top levels of business, risk is all about business. It has little or nothing to do with anything we in the information security space manage on a day-to-day basis. I'm fine with that. My major role is to protect my company, our users, and to the extent possible, our customers and peers from digital threats... without them worrying about it. My company makes money, and I try to keep us safe.

If you do aspire to be a CRO, work for a financial or insurance firm, get an MBA, and lead a business line after being a security person. The companies popularly cited as having CROs are all insurance and financial in nature. These industries internalize risk via financial calculations and models on a daily basis, but those are risks involving capital, not data.

The Data Center in a Switch

We all know how security has been baked into virtualization projects from day 0. Ok, enough joking. Given our history with virtualization I'm a little scared when I read stories like Dawn of the App Aware Network that show switches becoming giant VM servers. If you didn't think of your routers and switches already as computers, you won't be able to ignore it once they are running such complex applications. I am looking forward to seeing who manages these beasts: network team or server team? Who will get blamed for poor performance? I love how these products are supposed to solve problems when the end result could be greater conflict within the IT department. I guess it won't matter when company IT departments aren't running these devices at all, since IT will be a service offered by outsourced providers.

Monday, March 17, 2008

Black Hat DC 2008 Wrap-Up

Better late than never, I suppose. I taught TCP/IP Weapons School at Black Hat DC 2008 last month, and I also attended two days of briefings (many available in the archives).

The briefings began with Jerry Dixon from Team Cymru, which appears to now offer commercial services related to large-scale Internet monitoring and infrastructure issues. Jerry noted several problems hampering security efforts, including lack of a dedicated security operations team (CIRT) and lack of network cognizance. I really like the idea of "cognizance," since one word is always better than the two-word version -- "situational awareness." Jerry thought the Federal government's plan to reduce network gateways and monitor traffic at those points made sense.

The image at right is a small snapshot of Team Cymru's Internet Malicious Activity Map. I think visualizations like this are interesting. I was glad to see my class A dark.

Special Agent Andy Fried from the US Treasury Department spoke about his work countering attacks against his agency. He explained that it's impossible to stop everyone, so you have to rely on "aggressive identification and shutdown" of compromised systems.

Chuck Willis from MANDIANT discussed using Cross-Site Request Forgery to create "false evidence" on a person's computer. He said CSRF is usually a problem for server admins, not people browsing the Web. The idea is to force clients to silently visit incriminating Web sites, thereby adding entries to their browser history, Web cache, and so on. As a simple example he showed (live) how to add a movie to someone's Netflix basket without their involvement. Chuck described how various encoding methods (decimal, dword, hex, octal) can obfuscate URLs, thereby frustrating simple forensic analysis. Including unguessable parameters when designing Web apps is one way to counter CSRF.
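
As a minimal sketch of the encoding trick Chuck described, the same IPv4 address can appear in a URL in several numeric forms. The address below is a documentation example, and whether a particular browser accepts each form varies; the point is simply why naive string matching on URLs can fail during forensic review.

  import ipaddress

  ip = ipaddress.IPv4Address("192.0.2.10")      # example address only
  dword = int(ip)
  print(f"http://{dword}/")                     # decimal dword: http://3221225994/
  print(f"http://{dword:#x}/")                  # hexadecimal:   http://0xc000020a/
  print("http://" + ".".join(f"0{o:o}" for o in ip.packed) + "/")  # per-octet octal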

Oliver Friedrichs from Symantec previewed some material from his upcoming book Crimeware, some of which is described in this post.

Nitesh Dhanjani and Billy Rios presented work that exposed many phishers as relative newbies who are open about their activities and easy to spot once you know where to look ("fullz", vip-dumps, etc.). I'd like to mention that I love Nitesh's statement in Social Engineering Social Networking Services: A LinkedIn Example:

The job of information security is to make it harder for people to do wrong things.

Nathan McFeters and Rob Carter talked about issues with protocol handlers, or URIs that link to applications like "aim://". They showed how these URIs can be invoked via XSS and how many of them suffer from buffer overflow vulnerabilities.

I missed Tiller Beauchamp's talk on Re-Tracer, which uses DTrace for reverse engineering. At the same time Chris Tarnovsky from Flylogic Engineering was destroying "security devices" like USB tokens and related "secure chip" technologies. He showed how most vendors' security claims are completely bogus. I was astounded by what he could do with several thousand dollars of used equipment, stepping through single instructions on a chip and dumping memory. Brian Chess and Jacob West explained how to instrument code using dynamic taint propagation.

The presentation on Cisco router forensics by Felix Lindner (FX) was awesome -- probably my favorite talk. He discussed TCL backdoors, patched IOS images on the Web, lawful intercept enabled without router administrators' knowledge, and other cool IOS tricks. Most interesting was his description of configuring routers to "write core" and uploading the resulting file to an FTP server for router integrity analysis. His company provides a free service to analyze router dumps. I hope he commercializes it so I can add it to my operations.

David Dagon from Georgia Tech and Chris Davis from Damballa talked about botnets. They described using IP IDs (hello TCP/IP Weapons School) to estimate botnet size. They referenced the 15th Annual Network & Distributed System Security Symposium Proceedings for related work.

Sinan Eren from Immunity described how his team conducts "information operations," which is not DoD IO but systematic, stealthy, long-term compromise for red teaming purposes. His methodology in the case at hand was as follows:

  1. Attack the anti-virus/spam filter on the target company's mail transfer agent.

  2. Hook the AV to grab copies of all email. (Feeling good about that AV scanner now? Hey, it's defense in depth! Add more, you're secure! Not only does it not work 2/3 of the time, it's an avenue to be compromised! Argh.)

  3. Analyze email to understand the target.

  4. Inject forged email into ongoing thread between target and customer. Include malicious attachment.

  5. From target's computer, exploit DNS MSRPC vulnerability in target's PDC.

  6. Grab hashes, exploit other hosts. Find files of interest.

  7. Identify special network segmented from current network but accessed via USB drive.

  8. Modify USBDumper to acquire files when drive is moved from first network to special network.

  9. All interesting data transferred via Immunity's "PINK" C&C channel.


PINK is a proxy-aware, HTTP-based C&C channel that reads and writes to blog sites after conducting Google searches for highly specific text. The bot and master communicate via blog posts and comments. PINK was installed as an Explorer shell extension, which doesn't require admin privileges.

Sinan concluded by recommending we invest in human capital, not security products. Agreed!

Sunday, March 16, 2008

Thoughts from Several Conferences

Over the last several months I've accumulated several pages of notes after attending a variety of conferences. I thought I would present a few cogent points here. As with most of my posts, I record thoughts for future reference. If you'd rather not read a collection of ideas, please tune in later.

I attended the 28 Nov 07 meeting of the Infragard Nation's Capital chapter. I found the talk by Waters Edge Consulting CEO Jeffrey Ritter to be interesting. Mr. Ritter is a lawyer and self-proclaimed "pirate" who works for the defendant by attacking every aspect of the adversary's case. As more lawyers become "cyber-savvy" I expect to encounter more of his type. Mr. Ritter offered three rules of defense.

  1. That which is unrecorded did not occur.

  2. That which is undocumented does not exist.

  3. That which is unaudited is vulnerable.


He also said "Litigation isn't about the truth... it's about getting money." He offered three questions to be asked of any evidence.

  1. Is it relevant?

  2. Is it real?

  3. Is it admissible?


Mr. Ritter mentioned three ediscovery-related sites, namely the Electronic Discovery Reference Model, the International Research on Permanent Authentic Records in Electronic Systems (InterPARES) project, and the Sedona Conference.

On 10-11 Dec 07 I attended several sessions of the Intelligence Support Systems for Lawful Interception, Criminal Investigations, Intelligence Gathering and Information Sharing Conference and Expo. I spoke at the May 07 event and attended an earlier conference in 2006. There is really nothing else like ISS World as far as I'm concerned. It's basically all about lawful intercept (LI). ISS World is heavily attended by police and vendors used to tapping phone lines, now confused by tapping IP traffic.

I thought comments by Alessandro Guida of ATIS Systems and Klaus Mochalski of IPOQUE were helpful. They noted that "traffic decoding," or representing traffic as closely as possible to what the user actually manipulated, in a form friendly to investigators, is the big problem with LI today. They noted the difference between protocols and applications, since HTTP can be used for Web traffic, file transfers, mobile multimedia (all their terms), and so on. They said the four steps for traffic decoding are 1) classifying traffic; 2) correlating sessions; 3) extracting information; and 4) presenting content. They believe that "LI is becoming a data retention issue," because the volume of IP traffic manipulated by any end user is vastly increasing.

Dana Sugarman from Verint either stated the following or caused me to react with the following observations. "Security" typically focuses defense against many threats attacking many assets. LE, in contrast, focuses surveillance on a specific target, or perhaps several targets (a target being a potential criminal). Intelligence operations can focus on large numbers of threats or specific parties.

A few other themes arose at ISS World. "Application-specific lawful intercept" is the Holy Grail, meaning recording only the data necessary to render content useful to the investigator. Some judges are rejecting the idea that it is necessary or proper to monitor a suspect wherever he goes, rather than focusing on a method of communication (like a home telephone). Finally, most of the LI guys I met are former telecom people who seem to be reinventing the wheel. They are facing all of the issues we encountered with intrusion detection systems in the late 1990s. It would be amusing if it weren't sad too.

Finally on 25 Feb 08 I attended one day of the Institute for Applied Network Security 7th Annual Mid-Atlantic Information Security Forum. I went to the event to see specific people, including Angela Orebaugh, Ron Ritchey, Rocky DeStefano, Aaron Turner, Nick Selby, and Marty Roesch. I thought Phil Gardner's six themes were thought-provoking:

  1. Businesses will be, or already are, eliminating corporate computing assets in favor of personal computing assets. This is the "university model" I've blogged about previously, meaning universities have been coping with student-provided endpoints on "corporate" networks for years.

  2. Information and physical security continues to converge.

  3. Risk of all forms is converging.

  4. NAC is a failure; "what does it even mean?" asks Phil.

  5. Data Leakage Protection is "stopping stupid, period." (I heard this repeatedly. Leakage is accidental and can possibly be stopped. Loss is intentional and cannot be reliably stopped.)

  6. Middle management who exist to manage techies are losing their jobs. In the end only executives and the techies themselves will be left.


At the talk on NIST by Orebaugh and Ritchey I pitched in vain my desire to see greater use of red teaming and time-based security. I think they thought I spoke in Greek, or was crazy. They would like to see NIST documents used to create a common security vocabulary. For the sake of the community I may try to adopt the definitions in NIST's Glossary of Key Information Security Terms (.pdf).

Rocky DeStefano and Brandon Dunlap talked about SIM. Their three recommendations were:

  1. After deploying the SIM, disable all built-in rules.

  2. Write rules specific to your organization, using the built-in rules as samples.

  3. Have experts review the resulting output.


Corollaries of these rules are:

  • Deploying a SIM requires understanding your network to begin with. You can't deploy a SIM and expect to use it to learn how your network works.

  • You can't use a SIM to reduce security staffing. Your staffing requirements will definitely increase once you begin to discover suspicious and malicious activity.

  • You can't expect tier one analysts to be sufficient once a SIM is deployed. They still need to escalate to tier two and three analysts.


I liked John Schlichting's case study. It made me wonder why we bother blocking anything but specific IPs outbound. All we've done by restricting outbound protocols is force everything to be SSL-encrypted HTTPS traffic. Wonderful!

Saturday, March 15, 2008

How Many Burning Homes

I mentioned the idea of host integrity assessment in my post Controls Are Not the Solution to Our Problem. The idea is to sample live devices (laptops, desktops, servers, routers, switches -- anything that runs a network-enabled operating system) to see if they are trustworthy. (They may be trusted, but that does not make them trustworthy.)

I described how I might determine trustworthiness, or integrity, in Three Capabilities, Three Companies. I'd like to expand on these thoughts with five metrics. Before showing the security metrics, I'd like to introduce an analogy.

Imagine a city with an understaffed, under-resourced, and possibly unappreciated fire department. The FD would like to prevent fires, but it spends most of its time responding to fires. How should city leadership decide how to staff and resource the FD? (There is no way to eliminate fires, at least no way that could ever be financed using any foreseeable resources. Even if people lived in concrete cells with no furnishings, they would probably figure out a way to light each other or the ground on fire!)

In this situation, one might argue that one way to judge the peril of the situation is the ability of the FD to "manage the fires." In other words, perhaps there is some number of burning homes that can be maintained while the FD responds, contains, and extinguishes fires. If the FD is large enough the number of fires can be rapidly decreased such that the time to extinguish is very small. If the FD is too small, then eventually the whole city burns because the fires overwhelm the FD's ability to respond, contain, and extinguish.

The question becomes what is the "right" number? You could think in terms of the following metrics.

  1. Number of burning homes at any sampled time. The higher this number, the more likely the fire will spread.

  2. Average length of time any home is burning. Again, the higher this number, the more likely the fire will spread.

  3. Average time from detection to response. This measures how fast the FD arrives on site.

  4. Average time from response to recovery. This measures how effective the FD is fighting fires.

  5. Average property value of burning homes. One would be less concerned if the burning homes are abandoned or condemned, and more concerned if they are inhabited.


I do not consider the number of arsonists here. That is relevant but it brings into question the role of the police to deter, investigate, apprehend, prosecute, and incarcerate threats. The FD cannot fight arsonists directly.

Now let's turn to digital security. While it's easy to spot a fire, identifying a "burning" (i.e., compromised) computer can be more difficult. If we could do that via host integrity assessment, we could imagine the following metrics (a rough calculation sketch follows the list).

  1. Number of compromised computers at any sampled time. This should be based on a statistically valid sample.

  2. Average length of time any computer is compromised. Answering this question requires a forensic investigation to identify the point in time where the intrusion is most likely to have happened.

  3. Average time from detection to response. This measures the effectiveness of the intrusion detection program.

  4. Average time from response to recovery. This measures the effectiveness of the IRT and provisioning personnel.

  5. Average asset value of compromised computers. Again, a lot of owned low-value assets might not be a big problem.
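
As a minimal sketch of how these five numbers might be computed, consider the toy records below. The field layout, dates, and dollar values are invented for illustration; real inputs would come from host integrity assessments and incident response tickets.

  from datetime import datetime as dt
  from statistics import mean

  # (estimated compromise, detection, response, recovery, asset value in $)
  incidents = [
      (dt(2008, 1, 5), dt(2008, 2, 1), dt(2008, 2, 2), dt(2008, 2, 5), 50000),
      (dt(2008, 2, 10), dt(2008, 2, 12), dt(2008, 2, 12), dt(2008, 2, 20), 5000),
  ]
  sampled_hosts = 370   # size of the statistically valid sample

  days = lambda a, b: (b - a).days
  print("1. compromised at sampling time:", len(incidents), "of", sampled_hosts)
  print("2. avg days compromised:", mean(days(c, rec) for c, d, r, rec, v in incidents))
  print("3. avg days detection to response:", mean(days(d, r) for c, d, r, rec, v in incidents))
  print("4. avg days response to recovery:", mean(days(r, rec) for c, d, r, rec, v in incidents))
  print("5. avg asset value of compromised hosts:", mean(v for c, d, r, rec, v in incidents))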


So what do you do with these numbers? First, I recommend just collecting them. Second, take them to business owners and ask if the situation is acceptable. For example:

  • Is it acceptable to have 25% of a business's computers compromised? 50%? 10%? 5%?

  • Is it ok for them to be owned for 6 months? 1 day? 2 years?

  • Is it ok for us to take 6 months to notice? 2 hours? 2 days?

  • Is it ok for us to take 1 week to recover? 1 day? 1 month?

  • Is it ok for us to be suffering compromise on development servers? Call center PCs? Human resources databases?


Note on arsonists: you should be able to tell that "arsonists" are intruders. Since most companies can't reduce threats directly, IRTs are in exactly the same position as the FD.

Note on prevention: you can extend the fire analogy to other areas. Fire resistance is like the time required for a red team to penetrate a target. Applying fire retardants is like blue teams taking countermeasures upon discovering vulnerabilities.

Finally, with these answers we can make decisions to change the metrics. For example, a firefighter could say "increase my staff by two people per shift, and buy this new fire engine, and I can change the metrics this way." In the digital realm, a security analyst could say "increase my staff by two people per shift, and buy this new sensor grid, and I can change the metrics this way."

You could also try to influence the prevention side by saying "change all antivirus software from vendor A to vendor B, and change all local users from administrators to unprivileged users" and then see if the metrics change.

The manager is now in a position where spending influences metrics, and the failure to spend could result in an unacceptable answer to the question "How many burning homes?"

Friday, March 14, 2008

Reactions to Latest Schneier Thoughts on Security Industry

The March 2008 Information Security Magazine features an article titled Consolidation: Plague or Progress, where Bruce Schneier continues his Face-Off series with one of my Three Wise Men, Marcus Ranum. Marcus echoes the point I made in my review of Geekonomics concerning the merits of open source projects:

Most of us have had a product suddenly go extinct--to be followed shortly by a sales call from the vendor that fired the fatal shot--in spite of the fact that we depended on it and paid 20 percent annual maintenance...

To me, it's the best argument for do-it-yourself or integrating open source technologies into your product choices. Remember: the big argument that's levied against open source is "Who is going to maintain it?" That argument stacks up pretty neatly against, "Is this product going to exist tomorrow?"


I liked that thought, but I became more interested in Bruce's counterpoint on security industry consolidation. This echoed what I reported last year in Response to Bruce Schneier Wired Story. This month Bruce says:

Honestly, no one wants to buy IT security. People want to buy whatever they want--connectivity, a Web presence, email, networked applications, whatever--and they want it to be secure. That they're forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.

It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they're selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.

The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details...

IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.

This is the future of IT, and when that happens we're going to start to see a type of consolidation we haven't seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies.


I think Bruce has nailed this argument. Now he is saying "the need to buy security will disappear" not because "the IT products we purchased [will be] secure out of the box" -- what he said last year -- but because "IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it." This sounds like the Does IT Matter? argument of a few years ago, and I think Nick Carr and Bruce Schneier are right here.

What does this mean for security professionals? I think it means we will end up working for more service providers (like Bruce with Counterpane at BT) and fewer "normal" companies. Bruce wrote "the security industry will disappear as a consumer category, and will instead market to the IT industry," which means we security people will tend to either work for those who provide IT goods and services or we will work for small specialized companies that cater to the IT goods and services providers.

Bruce ends his article by saying

If I were Symantec and McAfee, I would be preparing myself for a buyer.

I think he is right again. These security companies will end up part of Cisco, Microsoft, Google, IBM, or a telecom. I doubt we will have large "security vendors" in the future.

A subtle point not made in this article is the idea that security folks who work for the CTO or CIO are probably going to stay there. I also think that smaller companies will be the first to see their security staffs go, but the biggest companies will always retain security staff -- if only to manage all of the outsourcing relationships.

Bejtlich Teaching at Black Hat USA Training 2008

Black Hat was kind enough to invite me back to teach TCP/IP Weapons School at Black Hat USA 2008 on 2-3 and 4-5 August 2008, at Caesars Palace, Las Vegas, NV. These are my last scheduled training classes in 2008.

I plan to rewrite and augment the class in my off time (late at night, basically!) for these two offerings. The cost for the two-day class is $2200 until 1 May, $2400 until 1 July, $2600 until 31 July, and $2900 starting 1 August. (I don't set the prices.) Register while seats are still available -- both of my sessions in Las Vegas last year sold out, and I sold out in DC last month too. Thank you.

Bejtlich Teaching at Techno Security 2008

I've previously spoken at the Techno Security 2005 and Techno Security 2006 conferences, and I taught Network Security Operations at Techno Security 2007. I'll be back at Techno Security 2008 teaching Network Security Operations (NSO) on Saturday 31 May 2008 at the Myrtle Beach Marriott Resort at Grande Dunes, a great family vacation spot.

This is the only planned offering of NSO in 2008. I'll attend the conference after the one-day class. I can accommodate 25 students and each seat costs $995 for the one-day class. The great news about registering for NSO is that if you sign up for the class, you get a free ticket to the entire Techno Security 2008 conference. Early registration for Techno costs $1195 and ends 31 March 2008, so signing up for my class is a great deal all around.

If you'd like to register for my NSO class, please check out the details here and return the registration form (.pdf) to me as quickly as you can. The deadline for registration is Friday 23 May 2008, and seats are first-come, first-served. Thank you.

Monday, March 10, 2008

Bejtlich in Access Control and Security Solutions Magazine

Sandra Kay Miller interviewed me for the July 2007 issue of Access Control and Security Solutions magazine, but I forgot about it until now. The interview describes my security experiences and my thoughts on working at GE.

Saturday, March 8, 2008

Review of Professional Xen Virtualization Posted

Amazon.com just posted my four star review of Professional Xen Virtualization by William von Hagen. From the review:

I really enjoyed reading Professional Xen Virtualization (PXV). The book answered exactly the right questions for me, a person who had no Xen experience but wanted to give the product a try. If you are looking for a book on Xen internals, you should read The Definitive Guide to the Xen Hypervisor by David Chisnall. If you are less concerned about source-code-level details but still want to learn a lot about Xen, you will definitely enjoy PXV.

Network Security Monitoring for Fraud, Waste, and Abuse

Recently a blog reader asked the following:

You frequently mention "fraud, waste, and abuse" in your writing (for example), most often to say that NSM is not intended to address FWA. One thing I've been wondering though--why is fraud in there? I can see waste (employee burning time/resources on ESPN.com or Google Video) or abuse (pornography, etc), but Fraud seems to be in a different class. If someone is using the network to commit a crime, why shouldn't that be in scope? Indeed, preventing loss (monetary, reputational, of intellectual property) is really the bottom line for a strong security program, correct?

My stance on this question dates back to my days in the AFCERT. Let me explain by starting with some definitions from AFI90-301 (.pdf):

Fraud: Any intentional deception designed to unlawfully deprive the Air Force of something of value or to secure from the Air Force for an individual a benefit, privilege, allowance, or consideration to which he or she is not entitled. Such practices include, but are not limited to:

  1. The offer, payment, acceptance of bribes or gratuities, or evading or corrupting inspectors or other officials.

  2. Making false statements, submitting false claims or using false weights or measures.

  3. Deceit, either by suppressing the truth or misrepresenting material facts, or to deprive the Air Force of something of value.

  4. Adulterating or substituting materials, falsifying records and books of accounts.

  5. Conspiring to carry out any of the above actions.

  6. The term also includes conflict of interest cases, criminal irregularities, and the unauthorized disclosure of official information relating to procurement and disposal matters.


For purposes of this instruction, the definition can include any theft or diversion of resources for personal or commercial gain.

Waste: The extravagant, careless, or needless expenditure of Air Force funds or the consumption of Air Force property that results from deficient practices, systems controls, or decisions. The term also includes improper practices not involving prosecutable fraud.

Abuse: Intentional wrongful or improper use of Air Force resources. Examples include misuse of rank, position, or authority that causes the loss or misuse of resources such as tools, vehicles, computers, or copy machines.


Given these definitions, the first reason I do not consider counter-FWA an appropriate NSM mission is the problem of identifying these actions. Security analysts perform NSM. Security analysts are not human resources, legal, privacy, financial audit, or police personnel. Trying to identify FWA (aside from the obvious, like wasting bandwidth or visiting pornography sites) is outside the scope of the security analyst's profession. If any of the aforementioned parties want to use some content inspection method to identify FWA, that's their job. Security analysts are generally tasked with identifying violations of confidentiality, integrity, and availability.

Second, in many organizations the inclusion of FWA would crowd out other security tasks. I have heard of some monitoring shops who do nothing but FWA because the volume of inappropriate activity seems to dwarf traditional security concerns. I think that is a poor allocation of resources.

Third, I think NSM for FWA is shaky on privacy grounds. Employees really have no expectation of privacy in the workplace, but the degree of monitoring required to identify non-obvious FWA is very invasive. Security analysts avoid reading email and reconstructing Web pages, but FWA investigations essentially rely on that very task. FWA is seldom easily detected using alert-based mechanisms, so identifying real FWA can turn into a fishing expedition where all content is analyzed in the "hope" of finding something bad. I think this is a waste of resources as well.

Having said that, in some cases NSM data can be used to support FWA tasks. However, I do not think FWA investigation should be a routine part of NSM operations.

What do you think?

Matt Jonkman and Endace on Accelerating Snort

If you missed it last month, you can watch Matt Jonkman's Faster Snorting Webinar at the Endace Web site. Matt posted answers to various questions posed by readers and you can download his slides or whitepapers if interested.

New Hakin9 Released

The latest issue of Hakin9 has been published. This is a subscription magazine published in Europe. Articles which caught my attention include Programming with Libpcap - Sniffing the network from our own application by Luis Martin Garcia, Reverse Engineering Binaries by Aditya K. Sood aka 0kn0ck, and Writing IPS Rules – Part 4 by Matthew Jonkman.

Friday, March 7, 2008

Common Interface to Packets

Recently a blog reader asked me an interesting question. He wanted to know if it would be possible to replace the variety of network traffic inspection and analysis products with a single box running multiple applications. He was interested in some sort of common interface to packets that could perform the collection function and make traffic available to other products.

There are several ways to look at this issue. First, one can do that already using a commodity hardware platform. It is possible to run multiple traffic inspection applications against a single interface now, but one has to be careful as the number of applications increases. We use this approach with Sguil, where Snort listens to generate alerts, SANCP listens to create session records, Daemonlogger listens to log full content data, PADS listens to generate host records, and so on.
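
As a minimal sketch of that first approach, assuming the scapy library is available: one process reads the interface and hands every packet to several analysis callbacks. This only illustrates the "one collector, many consumers" idea; the Sguil components named above each capture independently via libpcap.

  from scapy.all import sniff, IP, TCP

  sessions = set()

  def alerter(pkt):
      # Toy "detection" rule: flag traffic to an arbitrary suspicious port
      if pkt.haslayer(TCP) and pkt[TCP].dport == 31337:
          print("alert:", pkt.summary())

  def session_tracker(pkt):
      # Toy session record: unique source/destination address pairs
      if pkt.haslayer(IP):
          sessions.add((pkt[IP].src, pkt[IP].dst))

  def dispatch(pkt):
      for consumer in (alerter, session_tracker):
          consumer(pkt)

  # Sniffing requires privileges; the interface name is an example.
  sniff(iface="eth1", prn=dispatch, store=False, count=100)
  print(len(sessions), "unique src/dst pairs observed")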

Second, one could buy a fairly open packet capture box and create virtual interfaces which provide a traffic stream to applications. Options which come to mind include Solera Networks capture appliances and Endace Ninja platforms. These typically run Linux and act as a high-end option for packet capture.

Third, one could think of a network tap (like a Net Optics regeneration tap or a Gigamon GigaVUE) as that common interface to packet data. The tap collects traffic and then sends it to multiple products. This is a very common scenario for a simple reason: few vendors are willing to accept the decisions made by another vendor regarding packet capture. Everyone wants to collect data themselves, using their own NICs, or drivers, or libraries. That's perfectly understandable but it makes it tough for users who end up managing so many separate boxes.

What do you think?

Wednesday, March 5, 2008

Infrastructure Protection in the Ancient World

In preparation for my career as an Air Force intelligence officer, I earned a bachelor of science degree in history at the Air Force Academy. (Yes, not a bachelor of arts degree. Because of the number of core engineering, math and science classes -- 12 I think? -- the degree is "science". At a civilian school I would have qualified for a minor in engineering, so I was told.) I really enjoy history because anyone who takes a minute to look backwards realizes 1) nothing is new; 2) we are not smarter than our predecessors; and 3) we enjoy the same successes and suffer the same mistakes.

With this background you might expect me to like reading Michael Assante's paper Infrastructure Protection in the Ancient World. (The link points to a summary written for CSO magazine. You can learn a little more about Michael at INL employee to advise next U.S. president on cybersecurity.) I did indeed find the paper interesting because it compares the security of Roman aqueducts with the security of the modern electricity grid. I would have preferred a comparison of ancient water systems with modern water systems, but Michael is a former electric utility CSO.

This quote resonates with me:

By the time the Romans realized the real risks they faced it was far too late. Much like today, the consequences are not fathomable without a clearly demonstrated threat.

Those words remind me of my post Disaster Stories Help Envisage Risk.

I hope to read more of these sorts of comparative papers.

Monday, March 3, 2008

Must-Read Blog for Networkers

The reason so many security researchers can run their l33t 0-day attacks on Web appz is that they (usually) don't have to worry about the underlying network layers failing them. I've always been more interested in network plumbing, particularly at the WAN and backbone levels. If you sympathize, you must read the Renesys Blog. Posts like Pakistan Hijacks YouTube and Iran Is Not Disconnected are primers on how the Internet works. Those guys rock.

Best. Quote. Ever.

2003: "IDSs [intrusion detection systems] have failed to provide value relative to its costs and will be obsolete by 2005." (Gartner, "Gartner Information Security Hype Cycle Declares Intrusion Detection Systems a Market Failure")

2008: "Our adversaries are very adept at hiding attacks in normal traffic. The only true way to protect our networks is to have an intrusion detection system." (Robert Jamison, Under Secretary of the National Protection and Programs Directorate at DHS)

I will have more to say about this in a future Snort Report.

This Network Is Maintained as a Weapon System

I've been very busy the last two weeks, and this week is no different. I expect to resume my regular blogging schedule gradually this week and more next week.

I'm posting to ask if anyone in the Air Force could send me an image like that posted at left, except taken when trying to visit TaoSecurity Blog. I think it would make a great laptop background if sufficiently large and high-quality. Thank you!