Monday, 30 March 2009

Scalable Infrastructure vs Large Problems, or OpenDNS vs Conficker

After seeing Dan Kaminsky's talk at Black Hat DC last month, I blogged about the benefits of DNS' ability to scale to address big problems like asset management records. I've avoided talking about Conficker (except for yesterday) since it's all over the media.

Why mention DNS and Conficker in the same post? All of the commotion about Conficker involves one variant's activation of a new domain generation algorithm on 1 April. Until today no one had publicly announced the reverse engineering of the algorithm, but right now you can download a list of 50,014 domains that one Conficker variant will select from when trying to phone home starting 1 April. Some of the domains appear to be pre-empted:

$ whois aadqnggvc.com.ua
% This is the Ukrainian Whois query server #B.
% Rights restricted by copyright.
%

% % .UA whois
% Domain Record:
% =============
domain: aadqnggvc.com.ua
admin-c: CCTLD-UANIC
tech-c: CCTLD-UANIC
status: FROZEN-OK-UNTIL 20090701000000
dom-public: NO
mnt-by: UARR109-UANIC (ua.admin)
remark: blocked according to administrator decision
changed: CCTLD-UANIC 20090320144409
source: UANIC

Others appear ready for registration:

~$ whois aafkegx.co.uk

No match for "aafkegx.co.uk".

This domain name has not been registered.

WHOIS lookup made at 00:56:31 31-Mar-2009
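
If you want to survey the whole list yourself, a simple shell loop over the downloaded file works. A minimal sketch, assuming you saved the list as conficker_domains.txt (the filename is mine, not part of the published list):

# Print the first few whois lines per domain -- enough to distinguish
# FROZEN/blocked entries from unregistered ones. Sleep to be polite.
$ while read d; do echo "== $d"; whois "$d" | head -3; sleep 2; done < conficker_domains.txt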

Keep in mind that another 50,000 domains will be generated on 2 April, and so on. With such a big problem, what could we do to contain this malware?

OpenDNS is a possible answer:

OpenDNS has kept our users safe from Conficker for the past several months by blocking the domains it uses to phone home...

The latest variant of Conficker is now churning through 50,000 domains per day in an attempt to thwart blocking attempts. Consider this: at any given time we have filters that hold well over 1,000,000 domains (when you combine our phishing and domain tagging filters). 50,000 domains a day isn’t going to rock the boat.

So here’s our update: OpenDNS will continue to identify the domains, all 50,000, and block them from resolving for all OpenDNS users. This means even if the virus has penetrated machines on your network, it's rendered useless because it cannot connect back to the botnet.


That's one advantage of outsourcing your Internet DNS to a third party. They have the resources to integrate the latest threat intelligence and the position to do something to protect users.
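
If you run your own resolver you can apply the same idea on a far smaller scale. Here is a minimal sketch using dnsmasq, again assuming the published list lives in conficker_domains.txt; every query for a listed domain then resolves to localhost instead of the rendezvous point:

# Turn each domain into a dnsmasq override of the form address=/domain/127.0.0.1.
# The conf-dir path is Debian's default; append to /etc/dnsmasq.conf otherwise.
$ sed 's|.*|address=/&/127.0.0.1|' conficker_domains.txt > /etc/dnsmasq.d/conficker.conf
$ /etc/init.d/dnsmasq restart

Obviously this does not scale to 50,000 new domains per day without automation, which is exactly the point of this post.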

This is a great example of scalable infrastructure (DNS) vs large problems (Conficker).

Finally, you've probably heard about the Conficker Know Your Enemy paper and associated upgraded scanning tools, like Nmap 4.85BETA5 and the newest Nessus check. I can't wait to see the results of tools like this. It could mark one of the first times we could fairly easily generate a statistic for the percentage of total assets compromised, similar to steps 8 and 9 from my 2007 post Controls Are Not the Solution to Our Problem. In other words, you can scan for Conficker and determine one score of the game -- the percentage of hosts compromised by one or more Conficker variants. The question is, how long until those controlling Conficker update the code to resist these remote, unauthenticated scans?
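
For reference, the scanning invocation circulated with the 4.85BETA5 release looked like the following; verify the script name and arguments against your own version's documentation before relying on it:

# The safe=1 argument restricts smb-check-vulns to checks that should not
# crash the target. Substitute your own network for the example range.
$ nmap -PN -T4 -p139,445 -n -v --script=smb-check-vulns --script-args safe=1 192.168.1.0/24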


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.

Sunday, 29 March 2009

NSM vs The Cloud

A blog reader posted the following comment to my post Network Security Monitoring Lives:

How do you use NSM to monitor the growing population of remote, intermittently connected mobile computing devices? What happens when those same computers access corporate resources hosted by a 3rd party, such as corporate SaaS applications or storage in the cloud?

This is a great question. The good news is we are already facing this problem today. The answer to the question can be found in a few old principles I will describe below.

  • Something is better than nothing. I've written about this elsewhere: computer professionals tend to think in binary terms, i.e., all or nothing. A large number of people I encounter think "if I can't get it all, I don't want anything." That thinking flies in the face of reality. There are no absolutes in digital security, or analog security for that matter. I already own multiple assets that do not strictly reside on any single network that I control. In my office I see my laptop and Blackberry as two examples.

    Each could indeed have severe problems that started when they were connected to some foreign network, like a hotel or elsewhere. However, when they obtain Internet access in my office, I can watch them. Sure, a really clever intruder could program his malware to be dormant on my systems when I am connected to "home." How often will that be the case? It depends on my adversary, and his deployment model. (Consider malware that never executes on VMs. Hello, malware-proof hosts that only operate on VMs!)

    The point is that my devices spend enough time on a sufficiently monitored network for me to have some sense that I could observe indicators of problems. Of course I may not know what those indicators could be a priori; cue retrospective security analysis.

  • What is the purpose of monitoring? Don't just monitor for the sake of monitoring. What is the goal? If you are trying to identify suspicious or malicious activity to high priority servers, does it make sense to try to watch clients? Perhaps you would be better off monitoring closer to the servers? This is where adversary simulation plays a role. Devise scenarios that emulate activity you expect an opponent to perform. Execute the mission, then see if you caught the red team. If you did not, or if your coverage was less than what you think you need, devise a new resistance and detection strategy.

  • Build visibility in. When you are planning how to use cloud services, build visibility into the requirements. This will not make you popular with the server and network teams that want to migrate to VMs in the sky or MPLS circuits that evade your NSM platforms. However, if you have an enterprise visibility architect, you can build requirements for the sort of data you need from your third parties and cloud providers. This can be a real differentiator for those vendors. Visibility is really a prerequisite for "security," anyway. If you can't tell what's happening to your data in the cloud via visibility, how are you supposed to validate that it is "secure"?


I will say that I am worried about attack and command and control channels that might reside within encrypted, "expected" mechanisms, like updates from the Blackberry server and the like. I deal with that issue by not handling the most sensitive data on my Blackberry. There's nothing novel about that.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.

Response to 60 Minutes Story "The Internet Is Infected"

I just watched the 60 Minutes story The Internet Is Infected. I have mixed feelings about this story, but I think you can still encourage others to watch and/or read it. Overall I think the effect will be positive, because it often takes a story from a major and fairly respected news source to grab the attention of those who do not operationally defend networks.

I'd like to outline the negative and positive aspects of the story, in my humble point of view.

The negative aspects are as follows:

  1. I detest the term "infected." Computers in 2009 are not "infected." They are compromised by malware operated by a human with an objective. The malware is a tool; it is not the end goal. In the late 1990s I enjoyed defending networks because the activity I monitored was caused by a human, live on the Internet, whose very keystrokes I could watch. At the beginning of this decade I despaired as human action was drowned in a sea of malware that basically propagated but did little otherwise. Since the middle of the decade we have had the worst of both worlds; when I see malware I know there is a human acting through it for malicious purposes. I detest "infection" because the term implies we can apply some antiseptic to the wound to "clean it." In reality the malware's operator will fight back, resist "cleaning," and maintain persistence.

  2. Cue the "teenage hacker." I thought we were collectively making progress away from the pasty-faced teenager in the parental basement. It seems the popular consciousness has now moved to the pasty-faced teenager in Russia, courtesy of 14-year-old "Tempest" in the 60 Minutes video. Never mind the organized crime, foreign intelligence, and economic espionage angles. Two other groups are definitely going to be upset by this: Chinese hackers and insider threats. Actually, not hearing a word about the latter makes me feel happy inside.

  3. "I thought I had a good enough firewall." GROAN. Hearing people talk about their firewalls and anti-virus was disheartening. I almost thought Vint Cerf was going to spill the beans on the easiest way to avoid Conficker when he said the following:

    "I’ve been on the Net ever since the Net started, and I haven’t had any of the bad problems that you’ve described," Cerf replied...

    Because I don't use Windows! Say it, Vint! Oh well.


The positive aspects are as follows:

  1. Hello security awareness. Stories like this wake people up to the problems we face every day. Sure, Conficker is just the latest piece of malware, definitely not "one of the most dangerous threats ever," as said on TV. At the very least this story should enable a conversation between management and security operations.

  2. Client-side exploitation via socially-engineered and social network attacks was demonstrated. Good for Symantec to show that Morley Safer owns Lesley Stahl via Facebook. Better yet, 60 Minutes even used the term "owned"!

  3. Real consequences were demonstrated. I am very glad that Symantec showed just what an intruder can do to an owned computer. Keystroke logging, screen scraping, sensitive information retrieval, the works. They didn't even mention opening and closing the CD tray or activating the Webcam. That would have been cool, though.


Expect a few questions about this tomorrow at work!


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.

Saturday, 28 March 2009

Network Security Monitoring Lives

Every once in a while I will post examples of why Network Security Monitoring works in a world where Webbed, Virtual, Fluffy Clouds abound and people who pay attention to network traffic are considered stupid network security geeks.

One of the best posts I've seen on the worm-of-the-week, Conficker, is Risk, Group Think and the Conficker Worm by the Verizon Security Blog. The post says:

With the exception of new customers who have engaged our Incident Response team specifically in response to a Conficker infection, Verizon Business customers have reported only isolated or anecdotal Conficker infections with little or no broad impact on operations. A very large proportion of systems we have studied, which were infected with Conficker in enterprises, were “unknown or unmanaged” devices. Infected systems were not part of those enterprises’ configuration, maintenance, or patch processes.

In one study a large proportion of infected machines were simply discarded because a current user of the machines did not exist. This corroborates data from our DBIR which showed that a significant majority of large impact data breaches also involved “unknown, unknown” network, systems, or data.


This, my friends, is the reality for anyone who defends a live network, rather than those who break them, dream up new applications for them, or simply talk about them. If a "very large proportion of systems" that are compromised are beyond the reach of the IT team to even know about them, what can be done? The answer is fairly straightforward: watch the network for them. How can you do that? Use NSM.

Generate and collect alert, statistical, session, and full content data. I've also started using the term transaction data to mean data which is application-specific but captured from the network, like DNS requests and replies, HTTP requests and replies, and so on. These five forms of data can tell you what systems live on the network and what they are doing. It is low-cost compared to the variety of alternatives (manual, physical asset control; network access control; scanning; etc.). Once a sensor is deployed in the proper place you can perform self-reliant (i.e., without the interference of other groups) NSM, on a persistent and consistent basis.
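
As a minimal sketch, here is how you might start two of those collection channels with open source tools, assuming a sensor interface of eth1 and a /nsm partition for storage:

# Full content data: complete headers and payloads written to disk.
$ tcpdump -n -i eth1 -s 0 -w /nsm/full_content.pcap

# Session data: conversation summaries via Argus, read back with ra.
$ argus -i eth1 -w /nsm/argus.out
$ ra -n -r /nsm/argus.out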

Where should you monitor? Watch at your trust boundaries. The best place to start is where you connect to the Internet. Make sure you can see the true source IP (e.g., a desktop's real IP address) and the true destination IP (e.g., a botnet C&C server). If that requires tapping two locations, do it. If you can approximate one or the other location using logs (proxy, NAT, firewall, whatever), consider that, but don't rely only on logs.

NSM lives, and it is working right now.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.

Sunday, 22 March 2009

NSM on Cisco AXP?

Last year I wrote Run Apps on Cisco ISR Routers. That was two weeks after our April Fool's joke that the Sguil Project Was Acquired by Cisco.

I am wondering if any TaoSecurity Blog readers are using Cisco AXP in production. Looking at the data sheet for the modules, they appear too underpowered for NSM applications, especially at the price point Cisco is advertising.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. Early Las Vegas registration ends 1 May.

Saturday, 14 March 2009

Association of Former Information Warriors

In response to my TaoSecurity Blog post titled Buck Surdu and Greg Conti Ask "Is It Time for a Cyberwarfare Branch?", I decided to create the Association of Former Information Warriors. I set up a LinkedIn Group with the following description:

The Association of Former Information Warriors is a professional networking group for those who once served as military members in information operations (IO) or warfare (IW) units. The mission of the AOFIW is to propose, promote, and debate policies and strategies to preserve, protect, and defend digital national security interests. Candidate members must be referred by current members. Those no longer in military service are candidates for full membership; those currently serving in uniform are candidates for associate membership.

In other words, to join AOFIW you need to know an existing member. This weekend I am going to try kickstarting the membership process by inviting those I personally know and trust to meet these criteria. You must be a LinkedIn user to join the group, since that is the mechanism we will use to vet and accept members.

I'll be posting about AOFIW at the AOFIW Blog, which will offer thoughts from other AOFIW members as we grow the group.




Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Friday, 13 March 2009

More PowerPoint Woes

Last year I attended The Best Single Day Class Ever, taught by Prof. Tufte. He changed my outlook on PowerPoint forever. Today in FCW magazine I found a pointer to 8 PowerPoint Train Wrecks, like the slide Bill Gates is presenting at left. While following some of the linked presentations, I came across this line from the shmula blog:

While at Amazon, we were all told by Divine Fiat that ALL presentations — regardless of kind, cannot ever be on Powerpoint. Period. Bezos prefers prose and actual thoughts slapped in a report — an actual paper report with paragraphs, charts, sentences, an executive summary, introduction of problem, research approach and findings (body of paper), conclusions and recommendations — not choppy, half-thoughts on a gazillion slides.

Thank goodness. I am not crazy after all.

That same blog post makes other good points, and links to an imagined Barack Obama "Yes We Can" PowerPoint deck. Hilarious.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Thoughts on Latest Government Focus on Digital Security

Ties between the US government and digital security are all over the news right now. We have the Director of National Intelligence supporting greater NSA involvement in defending cyberspace, which prompts the (now former) Director of the National Cyber Security Center (NCSC) to resign in protest.

We have the chief security officer of Oracle calling for a Monroe Doctrine for cyberspace while the former director of the National Cyber Security Division says (paraphrasing his speech) security resources are often misaligned and misallocated because organizations are driven to present number-driven metrics based on some combination of threats, vulnerabilities and asset value to management — and that doesn't work.

There is talk of creating a Cyberspace Combatant Command, to stand alongside other Unified Combatant Commands. (Thanks to Greg Conti for the link.) I think a Cyber COCOM would be a great step forward, since Combatant Commands, not the individual services, are the entities which fight the nation's wars.

On a related note, I attended part of the latest Software Assurance Forum sponsored by DHS. Presentations by Mischel Kwon, director of US-CERT, and Tony Sager, chief of the Vulnerability Analysis and Operations (VAO) Group in NSA, were the most interesting to me. I'll reproduce a few noteworthy items.

Mischel Kwon said or mentioned:

  • "Legacy systems are not an excuse. They are a flaw." In other words, you can't make excuses for operating indefensible networks.

  • US-CERT is building its own incident management and ticketing system. This was interesting to me because incident management is a massive headache.

  • US-CERT is looking at using Security Content Automation Protocol as a detection tool, to identify when system configurations change. (SCAP is a protocol, not a tool; but the tools using SCAP can watch for changes.)


Tony Sager said or mentioned:

  • "We can't just fix software to 'solve' security problems because vulnerability is everywhere." Wow, amen. Someone else believes we live in a world of vulnerabilities. Tony may displace one of my Three Wise Men!

  • "No single group of security practitioners is big enough to develop and maintain its own security configuration guides." Therefore, the FDCC was developed. Seriously, if you have to run Windows, why not start with the FDCC as your core image and make changes to FDCC? Don't waste time trying to figure out what a security system looks like. Make use of the government's collective work, applied to millions of computers, and adjust to suit your needs.

  • "DoD cannot afford to maintain separate IT... DoD doesn't improve unless everyone else improves. Tony said that modern network security relies on everyone improving their status, even if that means knowledge to improve security is used by the adversary.

  • "VAO doesn't brief 90% of our constituents." In other words, VAO publishes Security Configuration Guides, which its world-wide constituency consumes. "VAO briefings" refer to NSA's red team presenting its findings to DoD customers following an adversary simulation activity. Red and blue teaming used to be the primary means that customers would learn how to improve their networks. Now, VAO's expertise is delivered much more often in the form of written reports. The written word scales.

  • "Even if a single tool could manage all DoD vulnerabilities, DoD wouldn't want to rely on only one tool." That places too much trust and power in the hands of a single vendor. Instead, DoD (and others) should rely on common protocols to describe vulnerabilities, like SCAP, and then ensure the wiude variety of tools DoD uses can speak that common language.

  • "Every human is a sensor." Advanced intruders are likely to evade technical detection. People are often the best, and only, way to identify advanced intrusions.


Finally, I'd like to briefly mention commentary by two other speakers. Curt Barker from NIST listed two "leap-ahead" initiatives at NIST, namely asymmetric algorithms for the quantum computing environment (in 20-25 years) and very large scale key management. I wonder how long those with quantum computers will be able to operate before new algorithms that resist quantum cryptanalysis are widely deployed.

Jason Providakes from MITRE described the potential for the government to build a core capability with known pedigree, augmented by open and commercial software. I found this interesting, because it's possibly 5 to 10 years out of date. In other words, the problems we often see these days involve applications, not the operating system (if that's the "core capability" mentioned).


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Monday, 09 March 2009

The Security World Is Not Just a Webbed, Virtual, Fluffy Cloud

If you've been watching the digital security scene for a while, you'll notice trends. Certain classes of attack rise and fall. Perceptions of risks from insiders vs outsiders change. I think it is important to realize, however, that globally, security vulnerabilities and exposures are persistent. By that I mean that if we forget or neglect problems from the past (or even present) and focus only on the future, we will lose.

For example, the three big themes you'll see in many IT and security discussions are the following.

  1. Web apps

  2. Virtualization

  3. Cloud


If you're not dealing with those three areas, you're a dinosaur, man! Forget all that other stuff you've learned!

The problem with that attitude is that it sees the world through a tunnel of shiny newness.

Consider the following list of recent security issues and see how many of them deal with those three hot topics.

I could continue. The point is there's a lot more to our security problems than Web, VM, and Cloud. It might be simpler to think of only those three problems, but there are at least a dozen more that require attention. This problem makes our security lives more difficult, but also more interesting.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Building Security In Maturity Model Partly Applies to Detection and Response

Gary McGraw was kind enough to share a draft of his new Building Security In Maturity Model. I'm not a "software security" guy but I found that the Governance and Intelligence components of the Software Security Framework apply almost exactly to anyone trying to build a detection and response, or "security operations", center. Consider:

I think the whole document is just what the software security world needs, but the two sections should apply equally well, and almost without any modification, to someone trying to build a detection and response operation or at least trying to assess the maturity of their operation.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Thoughts on Technology Careers for the Next Generation

I think the next generation of IT and digital security professionals will find limited opportunities in the "traditional" non-IT/security companies of today. I wrote about this last year in Reactions to Latest Schneier Thoughts on Security Industry when I said this, specifically about the security field:

What does this mean for security professionals? I think it means we will end up working for more service providers (like Bruce with Counterpane at BT) and fewer "normal" companies.

Bruce wrote "the security industry will disappear as a consumer category, and will instead market to the IT industry," which means we security people will tend to either work for those who provide IT goods and services or we will work for small specialized companies that cater to the IT goods and services providers...

[S]ecurity companies will end up part of Cisco, Microsoft, Google, IBM, or a telecom. I doubt we will have large "security vendors" in the future.


I'd like to extend this prediction (which is not unique to me, of course, but writing it here means I'm planning for the change) from security to IT in general. I re-examined my stance on this issue after reading GE CIO Gets His Head in the Cloud for New SaaS Supply Chain App. The fact that the article talks about GE isn't the specific point (disclaimer: my employer). It's another reminder that IT and security are not the end goal for most organizations: they are means to an end. The only exceptions are companies whose products and services are IT and/or security, e.g., Cisco, Microsoft, Google, IBM, telecoms, etc.

This doesn't mean that "IT [or security] doesn't matter." On the contrary, both are crucial, but history has shown a relentless drive to focus the business on core competencies and away from non-core functions. The definition of core competencies is what matters.

Businesses are spread across a large spectrum. One end might have a (largely theoretical) fully-closed organization that could generate its own electricity, mine its own raw materials, design its own products, staff every seat with employees, design/build/run/defend its own information assets, and run its own sales, distribution, and customer service functions. At the extreme opposite is a firm that does nothing but buy patented ideas and sell licenses, with minimum staff and every other function outsourced.

The history of capitalism has demonstrated the power of comparative advantage, specialization, and division of labor. Businesses continue to migrate away from the do-it-yourself model to the outsourced model, with labor, legal, and security concerns as a few sources of friction.

If you look around your own enterprise you'll see signs that this migration is happening. How many of you manage a 3G network? Chances are if you answer yes, you work for a telecoms provider. How many of you keep the operating system on your Blackberry or iPhone patched? If you answer yes you work for a telecoms provider or Apple.

It's entirely within the realm of possibility to imagine enterprise users operating personally-owned assets, with network connectivity supplied by a 3G network, accessing software-as-a-service Web apps hosted by a cloud provider. Oh wait, that is already happening. Anyone who wants to see what the "consumerization of IT" looks like should visit a university campus and see how students learn in the 21st century.

This doesn't mean that universities and other organizations who are embracing this model have zero IT and security staff. Rather, I think it is important to imagine where we (or our kids) could be working in 20 years, if we want to stay in the IT and/or security fields. Many more jobs, percentage-wise, are going to be with providers and vendors, not customers. Consider how many companies maintain their own electricians, phone technicians, and so on. There are plenty of those roles in the modern economy, but they tend not to work for non-electrical, non-phone companies.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Saturday, 07 March 2009

Requirements for Defensible Network Architecture: Monitored

Last year I posted Defensible Network Architecture 2.0, consisting of 8 (originally 7, plus 1 great idea from a comment) characteristics of an enterprise that give it the best chance to resist an intrusion.

In this post I'd like to define some specifics for the first of the 8 characteristics: monitored. At some point in the future it would probably make sense to think of these characteristics in terms of a capability maturity model. Right now I'd like to capture some thoughts for use in later work. I will approach the requirements from a moderate point of view, meaning I will try to stay between what I would expect from a low-capability operation and a high-capability operation.

Like my related posts, this is a work in progress and I appreciate feedback.

A Defensible Network Architecture is an information architecture that is:

  1. Monitored. Monitored can be described using the following categories, which collectively can be considered intrusion detection operations. (Add in Response or Resolution, depending on your IRT's mandate, and you have the CAER model for security operations.)


    • Collection. The following technical data is collected and available to the security operations team.


      • Network Security Monitoring (NSM) data from passive sensors; note the NSM data must depict true source IP and true destination IP (i.e., monitoring traffic between a NAT gateway and a proxy means seeing only the source IP of the NAT gateway and the destination IP of the proxy, radically decreasing the value of the observed traffic)


        • Alert data from devices making judgements while inspecting network traffic

        • Statistical data summarizing network traffic

        • Session data describing conversations in network traffic

        • Full content data providing traffic headers and payloads


      • Infrastructure Security Monitoring data from routers, firewalls, switches, so-called intrusion prevention systems, and other network infrastructure that actively manipulates network traffic or provides fundamental network services; by "fundamental services" I mean services without which nothing much else works, e.g., DHCP, DNS, BGP (see the sketch following this list)


        • Access Control logs that report on allowed and denied traffic

        • Infrastructure logs that report DHCP address assignments, DNS queries and responses, BGP routing tables, etc.


      • Platform Security Monitoring data from nodes (laptops, desktops, non-infrastructure servers, etc.)


        • Operating system security logs, like Windows Event Logs

        • Application logs, like Web server logs, Web application logs, etc.

        • Platform memory, preferably exposing memory segments as needed (think retrieving a live system registry) or the entire memory (think ManTech DD plus Volatility)



    • Analysis.


      • A dedicated team analyzes technical data collected in the previous stage.

      • The team has access to subject matter experts who can answer questions on the nature of threats, vulnerabilities, and assets in order to better understand the risk posed by monitored activity.

      • Analysis is understood and supported by management as a creative task that cannot be "automated away." If automation were possible for detecting intrusions, the same automation could be applied to preventing them. ("If you can detect it, why not prevent it?") Assuming everything detectable is preventable, by definition the analysis team is left to identify activity which is most likely not easily detectable, or at least not easily validated as being malicious.


    • Escalation.


      • The team has defined categories to identify the nature of intrusions and non-intrusions.

      • The team has defined severity levels describing the impact of various types of intrusions.

      • The team has an escalation matrix summarizing the steps to be taken given an intrusion of a specific category and severity.




You should monitor at trust boundaries, to the extent you perceive risk and have the technical and legal resources to do so. (For more on trust boundaries with respect to monitoring please see NSM vs Encrypted Traffic, Plus Virtualization and NSM vs Encrypted Traffic Revisited.)
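
As a small example of the infrastructure logging in the collection list above, BIND 9 lets you toggle query logging on a running name server. The log destination shown is an assumption; it depends on how named.conf routes the queries category:

# Toggle query logging at runtime, then watch the queries arrive via syslog.
$ rndc querylog
$ tail -f /var/log/messages | grep named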

I will stop here, but continue with Inventoried when I have time.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Using Forensic Tools Offensively

This should not be a surprise to people who use forensic tools on a daily basis, but it is a good reminder. I just noticed two great posts, Dumping Memory to extract Password Hashes Part 1 and Dumping Memory to extract Password Hashes Part 2, on the Attack Research blog. They show how to exploit a system with Metasploit, upload the Meterpreter, upload ManTech's MDD memory dumper, dump memory, download it to an attacker's system, and then follow instructions from Forensiczone to use Moyix's volreg extensions to the Volatility Framework to extract passwords.

I would be curious to see if intruders are really using methodologies like this. One way to identify such activity would be to watch for files being exfiltrated from the enterprise that match common memory sizes, such as 512 MB, 1 GB, 2 GB, 4 GB, and so on.
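
Here is a rough sketch of that hunt, assuming you can export session records as comma-separated source, destination, and bytes-out columns; the layout is hypothetical, so adapt it to whatever your session data tool emits:

# Flag outbound transfers within 5% of 512 MB, 1 GB, 2 GB, or 4 GB
# (2^29 through 2^32 bytes). Field 3 is the hypothetical byte count.
$ awk -F, '{ for (i = 29; i <= 32; i++) { sz = 2^i;
    if ($3 >= sz * 0.95 && $3 <= sz * 1.05) print $0 " (~" sz/2^20 " MB)" } }' sessions.csv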


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Friday, 06 March 2009

Recoverable Network Architecture

Last year I outlined my Defensible Network Architecture 2.0, consisting of 8 (originally 7, plus 1 great idea from a comment) characteristics of an enterprise that give it the best chance to resist an intrusion.

I'd like to step into the post-intrusion phase to discuss Recoverable Network Architecture (RNA, goes well with DNA, right?), a set of characteristics for an enterprise that give it the best chance to recover from an intrusion. This list is much rougher than the previous DNA list, and I appreciate feedback. The idea is that without these characteristics, you are not likely to be able to resume operations following an incident.

RNA does not mean your enterprise will be intruder-free, just as DNA didn't mean you would be intrusion free. Rather, if you do not operate a Recoverable Network Architecture you have very little chance of returning at least the system of interest to a trustworthy state. (Please remember the difference between trusted and trustworthy!)

  1. The recoverable network must be defensible. Being defensible not only helps with resisting intrusions; it helps recovery too. For example, the network must already be:


    • Monitored: Monitoring helps determine incident scope before recovery and remediation effectiveness after recovery.

    • Inventoried: Inventories help incident responders understand the range of potential victims in an incident before recovery and help ensure no unrecognized victims are left behind after recovery.

    • Controlled: Control helps implement short term incident containment, if appropriate, before recovery, and enforces better resistance after recovery.

    • Claimed: Because an asset is claimed, incident responders know which asset owners to contact.

    • Minimized: Assets that retain security exposures following recovery are subject to easy compromise again.

    • Assessed: Assessment validates that monitoring works (can we see the assessment?), that inventories are accurate (is the system where it should be?), that controls work (did we need an exception to scan the target, or could we sail through?), and that minimization/keeping current worked (are easy holes present?)

    • Current: Assets that retain security vulnerabilities following recovery are subject to easy compromise again.

    • Measured: Measurement helps justify various recovery actions, e.g. showing that so-called "cleaning" is less effective and costs more than complete system rebuilds.


  2. Assets in a recoverable network must be capable of being replaced -- fast. IT shops are slowly waking up to the fact that "cleaning" does not work and is too expensive, and that rapid rebuilding should be standard for any disaster recovery/business continuity activity anyway. Complete rebuilds are becoming the only semi-effective remedy. (I say semi-effective because even complete rebuilds can preserve BIOS-level and other persistent, extra-OS rootkits.)

  3. Incident responders in a recoverable network must be authorized and empowered to collect evidence, analyze leads, escalate findings, and guide remediation. An IRT that is asked to assist with an incident, but that is not allowed or able to collect information from a victim, is basically helpless. An IRT that must wait for other parties to provide information is ineffective, and likely to find the "data" provided by the other party to be of decreasing value as time passes and asset owners trounce host-based evidence.

  4. A recoverable network is supported by an organization that has planned for intrusions. The IR plan must engage a variety of parties, contain realistic scenarios, and actually be followed. IR plans help increase the likelihood of incident recovery because time is not wasted on phone calls asking "what do we do now?"

  5. A recoverable network is supported by an organization that has exercised the IR plan. Drills find weaknesses in plans that will hamper recovery.

  6. A recoverable network is supported by an IRT that is appropriately segmented. By that I mean that the IRT's infrastructure is not hosted or maintained by the same infrastructure the IRT is trying to recover. In other words, the IRT should not depend on equipment administered by the same people who suffered a loss of their credentials, or be part of the same Windows domain, and so on. If the IRT does share infrastructure with the victim, then the IRT can no longer trust its own systems and must first restore the trustworthiness of its own gear before turning to the organization.

  7. A recoverable network is supported by an IRT that is also connected. The team can communicate in degraded situations, with itself and with outside parties. The IRT will definitely have requirements that exceed the end user community, and almost certainly even the IT shop.


What do you think of these requirements? I may try expanding on each of the DNA items with examples at some point. If that works well I will apply the same to RNA.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Steve Liesman on Inputs vs Outputs

I've been blogging recently on Inputs vs Outputs, or Why Controls Are Not Sufficient. I've also been writing about Wall Street for the past year and a half. What we are seeing in the business realm is one of the biggest incident response engagements the world has ever seen.

This morning on CNBC's Squawk Box, reporter Steve Liesman summarized the market's reaction to the ongoing crisis. The latest jobs report had just been released, and panelists were debating the effectiveness of the administration's announcements of various plans. Steve said:

It's not what you're doing that matters; it's whether or not it works.

In other words, focusing on the inputs as a measure of success is a waste of time. You have to know the score of the game. In the business world, the score of the game is measured using employment numbers, stock market prices, the London Interbank Offered Rate (LIBOR), currency valuations, and so on. My post Controls Are Not the Solution to Our Problem has one set of ideas for measures in the digital security world.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Thursday, 05 March 2009

Cyber Stress Cases

Earlier this week I attended an IANS Mid-Atlantic Information Security Forum. During the conference Phil Gardner made a good point. He noted that the ongoing credit crisis has fundamentally altered the world's perception of business risk. He said the changes to financial operations are only the beginning. These changes will eventually sweep into information security as well.

This reminded me of the world's reaction to 9/11. The day the attacks happened, I was working at our MSSP. Some of my customers called to ask if we were seeing unusual digital attacks against their systems. That really surprised me, but it emphasized the fact that 9/11 introduced a new era of security-mindedness. I believe that era has largely passed, but for the better part of this decade 9/11 stimulated security thinking.

I watch as much CNBC as possible (during lunch and dinner) and I am hearing the term "stress cases" repeatedly. This is not the same as Treasury Secretary Geithner's "stress tests," but it is related. Businesses are essentially doing planning for various levels of financial stress. In other words, they analyze financial operations in the case that their assets are worth 50% of book value, or 40%, or 30%, and so on.

From a digital security standpoint, that sounds like incident response planning. You make plans for various contingencies and decide how to handle them. I think this will manifest itself when you hear your CxO ask "what will you do if X, Y, or Z happen?"

Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Tuesday, 03 March 2009

Bejtlich Teaching at Black Hat USA 2009

Black Hat was kind enough to invite me back to teach two sessions of my new 2-day course at Black Hat USA 2009 Training on 25-26 July and 27-28 July 2009 at Caesars Palace in Las Vegas, NV.

This class, completely new for 2009, is called TCP/IP Weapons School 2.0. These are my last scheduled classes in the United States in 2009.

Registration is now open. Black Hat set five price points and deadlines for registration.

  • Super Early ends 15 Mar

  • Early ends 1 May

  • Regular ends 1 Jul

  • Late ends 22 Jul

  • Onsite starts at the conference


As you can see in the Sample Lab I posted last week, this class is all about developing an investigative mindset by hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work -- an 84 page investigation guide, a 25 page student workbook, and a 120 page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

If you've attended any of my previous classes, you are most welcome in the new one. Unless you attended my Black Hat DC training last month, you will not see any repeat material whatsoever in TWS2. I look forward to seeing you, either in Las Vegas or Amsterdam. Thank you.


Richard Bejtlich is teaching new classes in Europe and Las Vegas in 2009. Online Europe registration ends by 1 Apr, and seats are filling. "Super Early" Las Vegas registration ends 15 Mar.

Bro SSL Certificate Details

I was asked today about using Bro to record details of SSL certificates. I wanted to show an excerpt from one of my class labs as an example.

In one of the labs I use Bro to generate logs for a network trace. The idea is that by looking at the server subject and server issuer fields, you might identify odd activity.

First I generate Bro logs.

analyst@twsu804:~/case03$ /usr/local/bro/bin/bro -r \
/home/analyst/pcap/tws2_15casepcap/case03.pcap weird notice alarm tcp udp conn http \
http-request http-reply http-header ssl dns

You can see Bro summarize the SSL connections it sees on port 443 TCP by default.

analyst@twsu804:~/case03$ grep https.start ssl.log
1230953783.860406 #1 192.168.230.4/1700 > 67.199.36.111/https start
1230953792.363305 #2 192.168.230.4/1702 > 67.199.36.111/https start
1230953999.730060 #3 192.168.230.4/1712 > 63.245.209.118/https start
1230954052.303861 #4 192.168.230.4/1735 > 194.109.206.212/https start
1230954060.752904 #5 192.168.230.4/1742 > 24.92.58.169/https start
1230954060.811960 #6 192.168.230.4/1743 > 88.84.144.63/https start
1230954060.843277 #7 192.168.230.4/1740 > 92.195.102.210/https start
1230954060.860087 #8 192.168.230.4/1744 > 85.125.106.58/https start
1230954060.879373 #9 192.168.230.4/1746 > 82.94.251.204/https start
1230954061.166306 #10 192.168.230.4/1747 > 124.16.143.97/https start
1230954061.167447 #11 192.168.230.4/1738 > 220.175.170.133/https start
1230954064.376426 #12 192.168.230.4/1748 > 82.29.1.204/https start
1230954064.408963 #13 192.168.230.4/1749 > 87.97.231.238/https start
1230954075.839499 #14 192.168.230.4/1754 > 91.143.87.107/https start
1230954136.655647 #15 192.168.230.4/1763 > 140.247.60.83/https start
1230954136.763340 #16 192.168.230.4/1764 > 62.141.58.13/https start

You can take a deeper look at these SSL connections using Bro. First I create a list of search terms for grep, and then I grep for those search terms in ssl.log.

analyst@twsu804:~/case03$ cat ssl_grep.txt
server subject
server issuer

Here is the grep.

analyst@twsu804:~/case03$ grep -f ssl_grep.txt ssl.log
1230953999.730060 #3 X.509 server issuer /C=US/O=Equifax/OU=Equifax Secure Certificate
Authority
1230953999.730060 #3 X.509 server subject /C=US/ST=California/L=Mountain
View/O=Mozilla Corporation/CN=*.addons.mozilla.org
1230954052.494060 #4 X.509 server issuer /CN=www.z72ey43i.net
1230954052.494060 #4 X.509 server subject /CN=www.defgig6t6azjbr2.net
1230954060.813874 #5 X.509 server issuer /CN=www.kmz5vo6e6.net
1230954060.813874 #5 X.509 server subject /CN=www.pkpwmlwen7vge.net
1230954060.932578 #6 X.509 server issuer /CN=www.ne2jqp556.net
1230954060.932578 #6 X.509 server subject /CN=www.dpcmd6qbqlpabomp5ki5.net
1230954061.007888 #8 X.509 server issuer /CN=www.rdsm2znz.net
1230954061.007888 #8 X.509 server subject /CN=www.dme2njaquxi.net
1230954061.022973 #9 X.509 server issuer /CN=www.hqnn5zhz.net
1230954061.022973 #9 X.509 server subject /CN=www.76grma4ml.net
1230954061.500215 #10 X.509 server issuer /CN=www.4h33vtek5c4p57wuae.net
1230954061.500215 #10 X.509 server subject /CN=www.tx7iuwu56.net
1230954061.510028 #11 X.509 server issuer /CN=www.npn3go6542.net
1230954061.510028 #11 X.509 server subject /CN=www.fqhbh226p.net
1230954063.926987 #7 X.509 server issuer /CN=www.ennvjjpqlvnehtbqae74.net
1230954063.926987 #7 X.509 server subject /CN=www.3lp45iastk.net
1230954064.513351 #12 X.509 server issuer /CN=www.3bxwanjs7lrqrduij.net
1230954064.513351 #12 X.509 server subject /CN=www.5cioy5x224bja6wnf.net
1230954064.575053 #13 X.509 server issuer /CN=www.i6rtf7w3bdbdh.net
1230954064.575053 #13 X.509 server subject /CN=www.r7thso6x.net
1230954076.059391 #14 X.509 server issuer /CN=www.uiwpjnmjsqgatlo2ppik.net
1230954076.059391 #14 X.509 server subject /CN=www.r4g5fuzu3rybrf.net
1230954136.715980 #15 X.509 server issuer /CN=www.dsl47i66rnpesdparhj.net
1230954136.715980 #15 X.509 server subject /CN=www.zgxc7xvt6aj2xqo7z.net
1230954136.904599 #16 X.509 server issuer /CN=www.u2vuanrtt6v3ckj77u.net
1230954136.904599 #16 X.509 server subject /CN=www.b6w4ffeimiezuhp7bilm.net

If you've ever looked at Tor SSL certificates you'll recognize the traffic here.
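
To eyeball just the common names, strip everything up to the CN= in each subject record; the algorithmically generated names cluster together immediately:

# Extract the CN from each server subject line and sort for easy review.
$ grep 'server subject' ssl.log | awk -F'CN=' '{print $2}' | sort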

In a later lab I show how to ask Bro to look at SSL to any port.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.