Friday, 30 October 2009

Bejtlich and Bradley on SANS Webcast Monday 2 Nov

Ken Bradley and I will conduct a Webcast for SANS on Monday 2 Nov at 1 pm EST. Check out the sign-up page. I've reproduced the introduction here.

Every day, intruders find ways to compromise enterprise assets around the world. To counter these attackers, professional incident detectors apply a variety of host, network, and other mechanisms to identify intrusions and respond as quickly and efficiently as possible.

In this Webcast, Richard Bejtlich, Director of Incident Response for General Electric, and Ken Bradley, Information Security Incident Handler for the General Electric Computer Incident Response Team, will discuss professional incident detection. Richard will interview Ken to explore his thoughts on topics like the following:

  1. How does one become a professional incident detector?

  2. What are the differences between working as a consultant and working as a member of a company CIRT?

  3. How have the incident detection and response processes changed over the last decade?

  4. What challenges make it difficult to identify intruders, and how can security staff overcome these obstacles?



I will lead this event and conduct it more like a podcast, so the audio will be the important part. This is a short-notice event, but it will be cool. Please join us. Thank you!

Wednesday, 28 October 2009

Partnerships and Procurement Are Not the Answer

The latest Federal Computer Week magazine features an article titled Cyber warfare: Sound the alarm or move ahead in stride? I'd like to highlight a few excerpts.

Military leaders and analysts say evolving cyber threats will require the Defense Department to work more closely with experts in industry...

Indeed, the Pentagon must ultimately change its culture, say independent analysts and military personnel alike. It must create a collaborative environment in which military, civilian government and, yes, even the commercial players can work together to determine and shape a battle plan against cyber threats...


Ok, that sounds nice. Everyone wants to foster collaboration and communication. Join hands and sing!

“Government may be a late adopter, but we should be exploiting its procurement power,” said Melissa Hathaway, former acting senior director for cyberspace for the Obama administration, at the ArcSight conference in Washington last month...

Hmm, "procurement power." This indicates to me that technology is the answer?

Although one analyst praised the efforts to make organizational changes at DOD, he also stressed the need to give industry more freedom. “The real issue is a lack of preparedness and defensive posture at DOD,” said Richard Stiennon, chief research analyst at independent research firm IT-Harvest and author of the forthcoming book "Surviving Cyber War."

“Private industry figured this all out 10 years ago,” he added. “We could have a rock-solid defense in place if we could quickly acquisition through industry. Industry doesn’t need government help — government should be partnering with industry.”


Hold on. "Private industry figured this all out?" Is this the same private industry in which my colleagues and I work? And there's that "acquisition" word again. Why do I get the feeling that technology is supposed to be the answer here?

Industry insiders say they are ready to meet the challenge and have the resources to attract the top-notch talent that agencies often cannot afford to hire.

That's probably true. Government civilian salaries cannot match the private sector, and military pay is even worse, sadly.

Industry vendors also have the advantage of not working under the political and legal constraints faced by military and civilian agencies. They can develop technology as needed rather than in response to congressional or regulatory requirements or limitations.

I don't understand the point of that statement. Where do military and civilian agencies go to get equipment to create networks? Private industry. Except for certain classified scenarios, the Feds and military run the same gear as everyone else.

“This is a complicated threat with a lot of money at stake,” said Steve Hawkins, vice president of information security solutions at Raytheon. “Policies always take longer than technology. We have these large volumes of data, and contractors and private industry can act within milliseconds.”

Ha ha. Sure, "contractors and private industry can act within milliseconds" to scoop up "a lot of money" if they can convince decision makers that procurement and acquisition of technology are the answer!

Let's get to the bottom line. Partnerships and procurement are not the answer to this problem. Risk assessments, return on security investment, and compliance are not the answer to this problem.

Leadership is the answer.

Somewhere, a CEO of a private company, or an agency chief, or a military commander has to stand up and say:

I am tired of the adversary having its way with my organization. What must we do to beat these guys?

This is not a foreign concept. I know organizations that have experienced this miracle. I have seen IT departments aligned under security because the threat to the organization was considered existential. Leaders, talk to your security departments directly. Listen to them. They likely already know what needs to be done, or they are desperate for resources to determine the scope of the problem and identify workable solutions.

Remember, leaders need to say "we're not going to take it anymore."

That's step one. Leaders who internalize this fight have a chance to win it. I was once told the most effective cyber defenders are those who take personal affront to having intruders inside their enterprise. If your leader doesn't agree, those defenders have a lonely battle ahead.

Step two is to determine what tough choices have to be made to alter business practices with security in mind. Step three is for private sector leaders to visit their Congressional representatives in person and say they are tired of paying corporate income tax while receiving zero protection from foreign cyber invaders.

When enough private sector leaders are complaining to Congress, the Feds and military are going to get the support they need to make a difference in this cyber conflict. Until then, don't believe that partnerships and procurement will make any difference.

Tuesday, 27 October 2009

Initial Thoughts on Cloud A6

I'm a little late to this issue, but let me start by saying I read Craig Balding's RSA Europe 2009 Presentation this evening. In it he mentioned something called the A6 Working Group. I learned this is related to several blog posts and a Twitter discussion. In brief:

  • In May, Chris Hoff posted Incomplete Thought: The Crushing Costs of Complying With Cloud Customer “Right To Audit” Clauses, where Chris wrote, "Cloud providers I have spoken to are being absolutely hammered by customers acting on their 'right to audit' clauses in contracts."

  • In June, Craig posted Stop the Madness! Cloud Onboarding Audits - An Open Question... where he wondered, "Is there an existing system/application/protocol whereby I can transmit my policy requirements to a provider, they can respond in real-time with compliance level and any additional costs, with less structured/known requirements responded to by a human (but transmitted the same way)?"

  • Later in June, Craig posted Vulnerability Scanning and Clouds: An Attempt to Move the Dialog On... where he spoke of the need for customers to conduct vulnerability assessments of cloud providers: "A 'ScanAuth' API call empowers the customer (or their nominated 3rd party) to scan their hosted Cloud infrastructure confident in the knowledge they won't fall foul of the provider's Terms of Service."

  • In July, Chris extended Craig's idea with Extending the Concept: A Security API for Cloud Stacks, building on the aforementioned Twitter discussions. Chris mentioned the Audit, Assertion, Assessment, and Assurance API (A6) (title credited to @CSOAndy): "Specifically, let's take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering (see the API blocks in the diagram below) to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc."


Still with me? In August Network World posted A6 promises a way to check up on public cloud security, which said:

What cloud services users need is a way to verify that the security they expect is being delivered, and there is an effort underway for an interface that would do just that.

Called A6 (Audit, Assertion, Assessment and Assurance API) the proposal is still in the works, driven by two people: Chris Hoff - who came up with the idea and works for Cisco - and the author of the Iron Fog blog who identifies himself as Ben, an information security consultant in Toronto.

The usefulness of the API would be that cloud providers could offer customers a look into certain aspects of the service without compromising the security of other customers’ assets or the security of the cloud provider’s network itself.

Work on a draft of A6 is posted here http://www.scribd.com/doc/18515297/A6-API-Documentation-Draft-011. It’s incomplete, but offers a sound framework for what is ultimately needed.


So let's see what that says:

The A6 API was designed with the following concepts in mind:

  1. "The security stack MUST provide external systems with the ability to query a utility computing provider for their security state." Ok, that's pretty generic. We don't know what is meant by "security state," but we're just starting.

  2. "The stack MUST provide sufficient information for an evaluation of security state asserted by the provider." Same issue as #1.

  3. "The information exposed via public interfaces MUST NOT provide specific information about vulnerabilities or result in detailed security configurations being exposed to third parties or trusted customers." Hmm, I'm lost. I'm supposed to determine "security state" but without "specific information about vulnerabilities"?

  4. "The information exposed via public interfaces SHOULD NOT provide third parties or trusted customers with sufficient data as to infer the security state of a specific element within the providers environment." Same issue as #3.

  5. "The stack SHOULD reuse existing standards, tools and technologies wherever possible." Neutral, throwaway concern.


That's about it, with the following appearing below:

In classic outsourcing deals these security policies and controls would be incorporated into the procurement contract; with cloud computing providers, the ability to enter in specific contractual obligations for security or allow for third party audits is either limited or non-existent. However, this limitation does not reduce the need for consuming organizations to protect their data.

The A6 API is intended to close this gap by providing consuming organizations with near real-time views into the security of their cloud computing provider. While this does not allow for consuming organizations to enforce their security policies and controls upon the provider, they will have information to allow them to assess their risk exposure.


Before I drop the question you're all waiting for, let me say that I think it is great that people are thinking about these problems. Much better to have a discussion than to assume cloud = secure.

However, my question is this: how does this provide "consuming organizations with near real-time views into the security of their cloud computing provider"?

Here is what I think is happening. Craig started this thread because he wanted a way to conduct audit and compliance (remember I highlighted those terms) activities against cloud providers without violating their terms of service. I am sure Craig would agree that compliance != security.

The danger is that someone will believe that compliance = security, thinking one could conceivably determine security state by "scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc."

This is like network access control all over again. A good "security state" means you're allowed on the network because your system is configured "properly," the system is "patched," and so on. Never mind that the system is 0wned. Never mind that there is no API for querying 0wnage.

Don't get me wrong, this is a really difficult problem. It is exceptionally difficult to assess true system state by asking the system, since you are at the mercy of the intruder. It could be even worse with cloud and virtual infrastructure, where the intruder may own both the system and the underlying virtual infrastructure. The customer queries the A6 API and the cloud returns a healthy response, despite the reality. Shoot, the cloud could say it IS healthy by the definition of patches or configuration and still be 0wned.
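
To illustrate the self-reporting problem, consider a minimal sketch. The A6 draft defines requirements rather than a wire format, so the assertion document below is entirely invented; the point is only that every value in it is produced by the provider's own stack:

    class Platform:
        """Stand-in for the provider's own instrumentation (hypothetical)."""
        def patches_current(self):
            return True   # self-reported by the platform
        def config_matches_policy(self):
            return True   # self-reported by the platform

    def build_assertion(platform):
        # Invented A6-style assertion document; every value is a self-report.
        return {
            "patch_compliance": platform.patches_current(),
            "config_compliance": platform.config_matches_policy(),
            "vulnerability_scan": "passed",
        }

    # An intruder who owns the platform (or the hypervisor beneath it) can
    # make each of these calls return the healthy answer: compliance gets
    # reported, 0wnage does not.
    print(build_assertion(Platform()))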

I think there's more thought required here, but that doesn't mean A6 is a waste of time -- as long as we are clear that it's more about compliance and really nothing about security, much less the trustworthiness of the assets.

Wednesday is Last Day for Discounted SANS Registration

In my off time I'm still busy organizing the SANS WhatWorks in Incident Detection Summit 2009, taking place in Washington, DC on 9-10 Dec 09. The agenda page should be updated soon to feature all of the speakers and panel participants. Wednesday is the last day to register at the discounted rate.

I wrote the following to provide more information on the Summit and explain its purpose.

All of us want to spend our limited information technology and security funds on the people, products, and processes that make a difference. Does it make sense to commit money to projects when we don’t know their impact? I’m not talking about fuzzy “return on investment” (ROI) calculations or fabricated “risk” ratings. Don’t we all want to know how to find intruders, right now, and then concentrate on improvements that will make it more difficult for bad guys to disclose, degrade, or deny our data?

To answer this question, I’ve teamed with SANS to organize a unique event -- the SANS WhatWorks in Incident Detection Summit 2009, on 9-10 December 2009 in Washington, DC. My goal for this two-day, vendor-neutral, practitioner-focused Summit is to provide security operators with real-life guidance on how to discover intruders in the enterprise. This isn’t a conference on a specific commercial tool, or a series of death-by-slide presentations, or lectures by people disconnected from reality. I’ve reached out to the people I know on the front lines, who find intruders on a regular, daily basis. If you don’t think good guys know how to find bad guys, spend two days with people who go toe-to-toe with the worst intruders on the planet.

We’ll discuss topics like the following:

  • How do Computer Incident Response Teams and Managed Security Service Providers detect intrusions?

  • What network-centric and host-centric indicators yield the best results, and how do you collect and analyze them?

  • What open source tools are the best-kept secrets in the security community, and how can you put them to work immediately in your organization?

  • What sources of security intelligence data produce actionable indicators?

  • How can emerging disciplines such as proactive live response and volatile analysis find advanced persistent threats?


Here is a sample of the dozens of subject matter experts who will pack the schedule:

  • Michael Cloppert, senior technical member of Lockheed Martin's enterprise Computer Incident Response Team and frequent SANS Forensics blogger.

  • Michael Rash, Senior Security Architect for G2, Inc., author of Linux Firewalls and the psad, fwsnort, and fwknop security projects.

  • Matt Richard, Malicious Code Operations Lead for the Raytheon corporate Computer Emergency Response Team (RayCERT) Special Technologies and Analysis Team (STAT) program.

  • Martin Roesch, founder of Sourcefire and developer of Snort.

  • Bamm Visscher, Lead Information Security Incident Handler for the General Electric CIRT, and author of the open source Sguil suite.


Ron Gula is scheduled to do one keynote and I'm working on the second. We'll have guest moderators for some panels too, such as Mike Cloppert and Rocky DeStefano.

I look forward to seeing you at the conference!

Review of Hacking Exposed: Web 2.0 Posted

Amazon.com just posted my three star review of Hacking Exposed: Web 2.0 by Rich Cannings, Himanshu Dwivedi, Zane Lackey, et al. From the review:

I have to agree with the other 3-star reviews of Hacking Exposed: Web 2.0 (HEW2). This book just does not stand up to the competition, such as The Web Application Hacker's Handbook (TWAHH) or Web Security Testing Cookbook (WSTC). I knew this book was in trouble when I found myself reading snippets about JavaScript arrays in the introduction. That set the tone for the book: compressed, probably rushed, mixing material of differing levels of difficulty. For example, p 8 mentions using prepared statements as a defense against SQL injection. However, only a paragraph on the topic appears, with no code samples (unlike TWAHH).
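
Since that defense is short enough to show in full, here is a minimal sketch of the prepared statement idea using Python's standard sqlite3 module; the table and values are invented for demonstration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    name = "alice' OR '1'='1"   # attacker-supplied input

    # Unsafe: input concatenated into the SQL text becomes SQL syntax.
    unsafe = "SELECT role FROM users WHERE name = '%s'" % name
    print(conn.execute(unsafe).fetchall())        # injection succeeds

    # Prepared statement: the placeholder keeps input as data, not syntax.
    safe = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(safe, (name,)).fetchall()) # returns [] as it should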

Note: McGraw-Hill Osborne provided me a free review copy.

Review of Web Security Testing Cookbook Posted

Amazon.com just posted my five star review of Web Security Testing Cookbook by Paco Hope and Ben Walther. From the review:

I just wrote five star reviews of The Web Application Hacker's Handbook (TWAHH) and SQL Injection Attacks and Defense (SIAAD). Is there really a need for another Web security book like Web Security Testing Cookbook (WSTC)? The answer is an emphatic yes. While TWAHH and SIAAD include offensive and defensive material helpful for developers, those books are more or less aimed at assessment professionals. WSTC, on the other hand, is directed squarely at Web developers. In fact, WSTC is specifically written for those who incorporate unit testing into their software development lifecycle. I believe anyone developing Web applications would benefit from reading WSTC.

Note: O'Reilly provided me a free review copy.

Review of SQL Injection Attacks and Defense Posted

Amazon.com just posted my five star review of SQL Injection Attacks and Defense by Justin Clarke, et al. From the review:

I just finished reviewing The Web Application Hacker's Handbook, calling it a "Serious candidate for Best Book Bejtlich Read 2009." SQL Injection Attacks and Defense (SIAAD) is another serious contender for BBBR09. In fact, I recommend reading TWAHH first because it is a more comprehensive overview of Web application security. Next, read SIAAD as the definitive treatise on SQL injection. Syngress does not have a good track record when it comes to books with multiple authors -- SIAAD has ten! -- but SIAAD is clearly a winner.


SIAAD is another serious contender for Best Book Bejtlich Read 2009.

Note: Syngress provided me a free review copy.

Review of The Web Application Hacker's Handbook Posted

Amazon.com just posted my five star review of The Web Application Hacker's Handbook by Dafydd Stuttard and Marcus Pinto. From the review:

The Web Application Hacker's Handbook (TWAHH) is an excellent book. I read several books on Web application security recently, and this is my favorite. The text is very well-written, clear, and thorough. While the book is not suitable for beginners, it is accessible and easy to read for those even without Web development or assessment experience.

TWAHH is a serious candidate for Best Book Bejtlich Read 2009.

Note: Wiley provided me a free review copy.

Thursday, 22 October 2009

"Protect the Data" from the Evil Maid


I recently posted "Protect the Data" from Whom?. I wrote:

[P]rivate citizens (and most organizations who are not nation-state actors) do not have a chance to win against a sufficiently motivated and resourced high-end threat.

Joanna Rutkowska provides a great example of the importance of knowing the adversary in her post Evil Maid goes after TrueCrypt!, a follow-up to her January post Why do I miss Microsoft BitLocker?

Her post describes how she and Alex Tereshkin implemented a physical attack against laptops with TrueCrypt full disk encryption. They implemented the attack (called "Evil Maid") as a bootable USB image that an intruder would use to boot a target laptop. Evil Maid hooks the TrueCrypt function that asks the user for a passphrase on boot, then stores the passphrase for later physical retrieval.

The scenario is this:

  1. User leaves laptop alone in hotel room.

  2. Attacker enters room, boots laptop with Evil Maid, and compromises TrueCrypt loader. Attacker leaves.

  3. User returns to hotel room, boots laptop, enters TrueCrypt passphrase. Game over.

  4. User leaves laptop alone in hotel room again.

  5. Attacker enters room again, boots laptop with Evil Maid again, and retrieves passphrase.


Joanna recommends implementing a product that supports Trusted Platform Module (TPM), like Microsoft BitLocker. A detection-oriented workaround is to calculate hashes of selected disk sectors and partitions and decide that mismatches indicate an intrusion has occurred. That approach still misses BIOS-based attacks but it's the best one can do without TPM support.
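
As a rough sketch of that workaround, assume a Unix-like laptop whose boot loader occupies the first sectors of /dev/sda; the device path and sector count here are assumptions, and the comparison only means something when run from trusted media such as a known-good USB stick:

    import hashlib

    DEVICE = "/dev/sda"      # assumed boot device
    SECTOR_SIZE = 512
    SECTORS = 2048           # first 1 MB: MBR plus boot loader area

    def boot_area_hash():
        """Hash the raw sectors holding the boot loader."""
        h = hashlib.sha256()
        with open(DEVICE, "rb") as disk:
            h.update(disk.read(SECTOR_SIZE * SECTORS))
        return h.hexdigest()

    current = boot_area_hash()
    try:
        baseline = open("boot.sha256").read().strip()
        print("OK" if current == baseline else "MISMATCH: loader changed")
    except FileNotFoundError:
        open("boot.sha256", "w").write(current)   # first run: save baseline
        print("Baseline recorded:", current)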

Report on Chinese Government Sponsored Cyber Activities

Today's Wall Street Journal features the following story:

China Expands Cyberspying in U.S., Report Says by Siobhan Gorman.

I've reprinted an excerpt below and highlighted interesting aspects. I can vouch for the quality of the Northrop Grumman team that wrote this report and for their experience in this arena.

Congressional Advisory Panel in Washington Cites Apparent Campaign by Beijing to Steal Information From American Firms

WASHINGTON -- The Chinese government is ratcheting up its cyberspying operations against the U.S., a congressional advisory panel found, citing an example of a carefully orchestrated campaign against one U.S. company that appears to have been sponsored by Beijing.

The unnamed company was just one of several successfully penetrated by a campaign of cyberespionage, according to the U.S.-China Economic and Security Review Commission report to be released Thursday. Chinese espionage operations are "straining the U.S. capacity to respond," the report concludes.

The bipartisan commission, formed by Congress in 2000 to investigate the security implications of growing trade with China, is made up largely of former U.S. government officials in the national security field.

The commission contracted analysts at defense giant Northrop Grumman Corp. to write the report. The analysts wouldn't name the company described in the case study, describing it only as "a firm involved in high-technology development."

The report didn't provide a damage assessment and didn't say specifically who was behind the attack against the U.S. company. But it said the company's internal analysis indicated the attack originated in or came through China.

The report concluded the attack was likely supported, if not orchestrated, by the Chinese government, because of the "professional quality" of the operation and the technical nature of the stolen information, which is not easily sold by rival companies or criminal groups. The operation also targeted specific data and processed "extremely large volumes" of stolen information, the report said.

"The case study is absolutely clearly controlled and directed with a specific purpose to get at defense technology in a related group of companies," said Larry Wortzel, vice chairman of the commission and a former U.S. Army attaché in China. "There's no doubt that that's state-controlled."

Attacks like that cited in the report hew closely to a blueprint frequently used by Chinese cyberspies, who in total steal $40 billion to $50 billion in intellectual property from U.S. organizations each year, according to U.S. intelligence agency estimates provided by a person familiar with them.

Wednesday, 21 October 2009

DojoCon to Stream Talks Live

As I mentioned last month I will be speaking at DojoCon, on Saturday 7 November at Capitol College in Laurel, MD. Organizer Marcus Carey asked me to share the following:

DojoCon will stream all of the talks live on the Internet for free as they happen. I believe this is the first time a group of speakers of this caliber will be available to the information security community for free.

We are also offering real-life attendees the full conference for $150 for both days and a one-day pass (either Friday or Saturday) for $85.

Bejtlich Teaching at Black Hat DC 2010

Black Hat was kind enough to invite me back to teach multiple sessions of my 2-day course this year.

First up is Black Hat DC 2010 Training on 31 January and 01 February 2010 at Grand Hyatt Crystal City in Arlington, VA.

I will be teaching TCP/IP Weapons School 2.0.

Registration is now open. Black Hat set five price points and deadlines for registration.

  • Super Early ends 15 Nov

  • Early ends 1 Dec

  • Regular ends 15 Jan

  • Late ends 30 Jan

  • Onsite starts at the conference


With an $800 difference between Super Early and Onsite, it pays to register early!

As the Sample Lab I posted earlier this year shows, this class is all about developing an investigative mindset through hands-on analysis, using tools you can take back to your work. You also keep the class materials: an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they, too, are tired of the PowerPoint slide parade.

Feedback from my 2009 sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

I will also be teaching in Barcelona and Las Vegas, but I will announce those dates later.

I look forward to seeing you. Thank you.

Tuesday, 13 October 2009

"Protect the Data" -- What Data?

This is another follow-on from my "Protect the Data" Idiot! post. If you think about the "protect the data" mindset, it's clearly a response to the sorts of data loss events that involve "records" -- credit card records, Personally Identifiable Information (PII), and the like. In fact, there's an entire "product line" built around this problem: data loss prevention. I wrote about DLP earlier this year in response to the rebranding effort undertaken by vendors to make whatever they sold part of the DLP "solution."

What's interesting to me about "protect the data" in this scenario is this: "what data?" Is your purpose in life simply to protect PII or other records in a database? That's clearly a big problem, but it doesn't encompass the whole security problem. What about the following?

  • Credentials used to access systems. For example, intruders often compromise service accounts that have wide-ranging access to enterprise systems. Those credentials can be retrieved from many locations. How do you protect those?

  • Systems that don't house PII or other records, but do serve critical functions. Your PBX, HVAC control system, routers, other network middleboxes, etc., are all important. Try accessing "data" without those devices working.

  • Data provided by others. The enterprise isn't just a data sink. Users make decisions and work by relying on data provided by others. Who or what protects that data?


Those are three examples. If you spend time thinking about the problem you can probably identify many other forms of data that are outside the "DLP" umbrella, and outside the "protect the data" umbrella.

Sunday, 11 October 2009

"Protect the Data" Where?


I forgot to mention another thought in my last post, "Protect the Data" from Whom? Intruders are not mindlessly attacking systems to access data. Intruders direct their efforts toward the sources that are easiest and cheapest to exploit. This produces an interesting corollary.

Once other options have been eliminated, the ultimate point at which data will be attacked will be the point at which it is useful to an authorized user.

For example, if a file is only readable once it has been decrypted in front of a user, that is where the intruder will attack once his other options have been exhausted. This means the only way to completely "protect data" is to make it unusable, and if data is not usable, it doesn't need to exist. It follows that intruders will always be able to access data if they are sufficiently resourced and motivated, as explained in my first post on this subject.

"Protect the Data" from Whom?

This is a follow-on from my "Protect the Data" Idiot! post. Another question to consider when someone says "protect the data" is this: "from whom?" The answer makes all the difference.

I remember a conversation I overheard or read involving Marcus Ranum and a private citizen discussing threats from nation-state actors.

Questioner: How do you protect yourself from nation-state actors?

MJR: You don't.

Q: What do you do then?

MJR: You lose.


In other words, private citizens (and most organizations who are not nation-state actors) do not have a chance to win against a sufficiently motivated and resourced high-end threat. The only actors who have a chance of defending themselves against high-end threats are other nation-state actors. Furthermore, the defenders don't necessarily have a defensive advantage over average joes because the nation-state possesses superior people, products, or processes. Many nation-state actors are deficient in all three. Rather, nation-state actors can draw on other instruments of power that are unavailable to average joes.

I outlined this approach in my posts The Best Cyber-Defense..., Digital Situational Awareness Methods, and Counterintelligence Options for Digital Security:

[T]he best way to protect a nation's intelligence from enemies is to attack the adversary's intelligence services. In other words, conduct aggressive counterintelligence to find out what the enemy knows about you.

In the "protect the data" scenario, this means knowing how the adversary can access the containers holding your data. Nation-states are generally the only organizations with the discipline, experience, and funding to conduct these sorts of CI actions. They are not outside the realm of organized crime or certain private groups with CI backgrounds.

To summarize, it makes no sense to ponder how to "protect the data" without determining which adversaries want it. If we unify against threats we can direct our resources against the adversaries we can possibly counter independently, and then petition others (like our governments and law enforcement) to collaborate against threats that outstrip our authority and defenses.

Saturday, 10 October 2009

"Protect the Data" Idiot!

The 28 September 2009 issue of InformationWeek cited a comment posted to one of their forums. I'd like to cite an excerpt from that comment.

[W]e tend to forget the data is the most critical asset. yet we spend inordinate time and resources trying to protect the infrastructure, the perimeter... the servers etc. I believe and [sic] information-centric security approach of protecting the data itself is the only logical approach to keep it secure at rest, in motion and in use. (emphasis added)

I hear this "protect the data" argument all the time. I think it is one of the most misinformed comments that one can make. I think of Chris Farley smacking his head saying "IDIOT!" when I hear "protect the data."

"Oh right, that's what we should have been doing for the last 10, 20, 30 years -- protect the data! I feel so stupid to have not done that! IDIOT!"

"Protect the data" represents a nearly fatal understanding of information security. I'm tired of hearing it, so I'm going to dismantle the idea in this post.

Now that I've surely offended someone, here are my thoughts.

Someone show me "data." What is "data" anyway? Let's assume it takes electronic form, which is the focus of digital security measures. This is the first critical point:

Digital data does not exist independently of a container.

Think of the many containers which hold data. Imagine looking at a simple text file retrieved from a network share via NFS and viewed with a text editor.

  1. Data exists as an image rendered on a screen attached to the NFS client.

  2. Data exists as a temporary file on the hard drive of the NFS client, and as a file on the hard drive of the NFS server.

  3. Data exists in memory on the NFS client, and in memory on the NFS server.

  4. The NFS client and server are computers sitting in facilities.

  5. Network infrastructure carries data between the NFS client and server.

  6. Data exists as network traffic exchanged between the NFS client and server.

  7. If the user prints the file, it is now contained on paper (in addition to involving a printer with its own memory, hard drive, etc.)

  8. The electromagnetic spectrum is a container for data as it is transmitted by the screen, carried by network cables and/or wireless media, and so on.


That's eight unique categories of data containers. Some smart blog reader can probably contribute two others to round out the list at ten!

So where exactly do we "protect the data"? "In motion/transit, and at rest" are the popular answers. Good luck with that. Seriously. This leads to my second critical point:

If an authorized user can access data, so can an unauthorized user.

Think about it. Any possible countermeasure you can imagine can be defeated by a sufficiently motivated and resourced adversary. One example. "Solution": encrypt everything! Attack: great, wait until an authorized user views a sensitive document, and then screen-scrape every page using the malware installed last week.

If you doubt me, consider the "final solution" that defeats any security mechanism:

Become an authorized user, e.g., plant a mole/spy/agent. If you think you can limit what he or she can remove from a "secure" site, plant an agent with a photographic memory. This is an extreme example but the point is that there is no "IDIOT" solution out there.

I can make rational arguments for a variety of security approaches, from defending the network, to defending the platform, to defending the operating system, to defending the application, and so on. At the end of the day, don't think that wrapping a document in some kind of rights management system or crypto is where "security" should be heading. I don't disagree that adding another level of protection can be helpful, but it's not like intruders are going to react by saying "Shucks, we're beat! Time to find another job."

Intruders who encounter so-called "protect the data" approaches are going to break them like every other countermeasure deployed so far. It's just a question of how expensive it is for the intruder to do so. Attackers balance effort against "return" like any other rational actor, and they will likely find cheap ways to evade "protect the data" approaches.

Only when relying on human agents is the cheapest way to steal data, or when it's cheaper to research and develop one's own data, will digital security be able to declare "victory." I don't see that happening soon; no one in history has ever found a way to defeat crime, espionage, or any of the true names for the so-called "information security" challenges we face.

Friday, 09 October 2009

NSM in Products

A blog reader recently asked:

I've been tasked with reevaluating our current NSM / SIEM implementation, and I see that you posted about a NetFlow book you are tech editing for Lucas.

My question is this, Outside of Sguil, what do you prefer/recommend in the way of NSM products/solutions?

Our current NSM uses a modified version of NetFlow, and our Networking team also uses Cisco NetFlow elsewhere...

While I find it useful to collect header data, the current implementation lacks payload information. So while we may be able to turn back the clock to look at flows for a given duration, it's not always possible to see valuable contents...

Another wall I have hit with NetFlow is that the communication of the protocol takes place in somewhat of a half-duplex manner (i.e., it is possible to receive the response flow before you receive the request flow), thus making it difficult to assure a particular direction without some processing...

I have yet to see a blog post covering any consolidated comparisons to solutions regarding NSM.

I do have your NSM book on order from Amazon today, in case it already has the answers I'm looking for...

As always, thank you for your time Richard, I appreciate it greatly.


Thank you for the question. I don't recommend specific products, but I do recommend NSM data types. That way, you can ask the vendor which NSM data types they support, and then decide if their answer is 1) correct and 2) sufficient. For reference, the six NSM data types are:

  1. Alert: judgment made by a product ("Port scan!" or "Buffer overflow!"); either detect or block

  2. Statistical: high-level description of activity (protocol percentages, trending, etc.)

  3. Session: conversations between hosts ("A talked to B on Friday for 61 seconds sending 1234 bytes")

  4. Full Content: all packets on the wire

  5. Extracted Content: rebuild elements of a session and extract metadata

  6. Transaction: generate logs based on request-reply traffic (DNS, HTTP, etc.)
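
To make the session data type concrete, and to address the questioner's complaint about unidirectional NetFlow records arriving in arbitrary order, here is a minimal sketch that pairs flow records into bidirectional sessions by normalizing the 5-tuple; the record format and values are invented for illustration:

    from collections import defaultdict

    # Toy unidirectional flow records:
    # (src_ip, src_port, dst_ip, dst_port, proto, bytes)
    flows = [
        ("10.1.1.5", 49152, "192.0.2.80", 80, "tcp", 1234),    # client to server
        ("192.0.2.80", 80, "10.1.1.5", 49152, "tcp", 56789),   # server to client
    ]

    def session_key(src, sport, dst, dport, proto):
        """Normalize the 5-tuple so both directions map to one session."""
        a, b = (src, sport), (dst, dport)
        return (proto,) + (a + b if a <= b else b + a)

    sessions = defaultdict(lambda: {"bytes": 0, "records": 0})
    for src, sport, dst, dport, proto, nbytes in flows:
        s = sessions[session_key(src, sport, dst, dport, proto)]
        s["bytes"] += nbytes
        s["records"] += 1

    # Two unidirectional records collapse into one session, regardless of
    # which direction arrived first.
    for key, s in sessions.items():
        print(key, s)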


Looking at these six types, I can make the following general assessments of products. This is my opinion based on products I have encountered. If you find a product that performs better than the general categories I describe, excellent!

If you want to learn more about this, I'll be discussing it during my solo presentation at the 2009 Information Security Summit, October 29-30, 2009 at Corporate College East in Warrensville Heights, Ohio.

Wednesday, 07 October 2009

Technical Visibility Levels

It's no secret that I think technical visibility is the key to trustworthy technology. Via Twitter I wrote, "The trustworthiness of a digital asset is limited by the owner's capability to detect incidents compromising the integrity of that asset." This topic has consumed me recently as relatively closed but IP-enabled systems proliferate. This ranges from handheld computers (iPhone, Blackberry, etc.) all the way to systems hosted in the cloud. How are we supposed to trust any of them?

One of the first problems we should address is how to describe the level of technical visibility afforded by these technologies. The following is very rough and subject to modification, but I'm thinking in these terms right now.

  • Level 0. System status available only by observing explicit failure.

  • Level 1. Anecdotal status reporting or limited status reporting.

  • Level 2. Basic status reporting via portal or other non-programmatic interface.

  • Level 3. Basic logging of system state, performance, and related metrics via defined programmatic interface.

  • Level 4. Debug-level logging (extremely granular, revealing inner workings) via defined programmatic interface.

  • Level 5. Direct inspection of system state and related information possible via one or more means.


Let me try to provide some examples.
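
Here is one: the gap between Level 2 and Level 3 is the difference between a status web page a human reads and a defined interface a script can poll. A minimal sketch of the Level 3 side, with an entirely hypothetical endpoint and field names:

    import json
    import urllib.request

    STATUS_URL = "https://provider.example.com/api/v1/status"  # hypothetical

    def poll_status(url=STATUS_URL):
        """Fetch machine-readable status from a defined programmatic interface."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = json.load(resp)
        # Assumed fields; a real Level 3 interface would define these in a spec.
        for field in ("system_state", "performance", "last_incident"):
            print(field, "=", status.get(field, "not reported"))

    poll_status()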

There must be dozens of other examples here. Keep in mind this is more of a half-thought than a finished thought, but I've been sitting on it for too long. Now that it's out in the open, hopefully someone will comment on it. Thank you.

Hakin9 5/2009 Issue

I just received a review copy of the 5/2009 issue of Hakin9 magazine. Several articles look interesting, such as Windows Timeline Analysis by Harlan Carvey, The Underworld of CVV Dumping by Julian Evans, and a few others on malware analysis and ASLR. Check it out!

Incident Handler, Incident Analyst, Threat Analyst, and Developer Positions in GE-CIRT

My team just opened five more positions. These candidates will report to me in GE-CIRT.

  • Information Security Incident Handler (1093498)

  • Information Security Incident Analyst (two openings, 1093494)

  • Cyber Threat Analyst (1093497)

  • Information Security Software Developer (1093499)


These candidates will sit in our new Advanced Manufacturing & Software Technology Center in Van Buren Township, Michigan. We don't have any flexibility regarding the location for these positions, and all five must be US citizens. No security clearance is required, however!

If interested, search for the indicated job numbers at ge.com/careers or go to the job site to get to the search function a little faster. We are being deluged by applicants for the SIEM role, so your best bet is to apply online and let me find you after reading your resume. Thank you.

Friday, 02 October 2009

Traffic Talk 7 Posted

I just noticed that my 7th edition of Traffic Talk, titled How to deploy NetFlow v5 and v9 probes and analyzers, was posted on 28 September. I submitted it back in mid-August but it's on the Web now.

On a related note, I am tech editing a forthcoming book on NetFlow by Michael Lucas titled Network Flow Analysis. Michael is probably my favorite technical author, so keep an eye open for his book in May 2010.