Wednesday, June 4, 2008

NSM vs Encrypted Traffic Revisited

My last post What Would Galileo Think was originally the first part of this post, but I decided to let it stand on its own. This post is now a follow-on to NSM vs Encrypted Traffic, Plus Virtualization and Snort Report 16 Posted. I received several questions which I thought deserved a new post. I'm going to answer the first with Galileo in mind.

LonerVamp asked:

So can I infer that you would prefer to MITM encrypted channels where you can, so to inspect that traffic on the wire? :)

On a related note, Ivan Ristic asked:

Richard, how come you are not mentioning passive SSL decryption as an option?

I thought I had answered those questions when I said:

If you loosen your trust boundary, maybe you monitor at the perimeter. If you permit encrypted traffic out of the perimeter, you need to man-in-the-middle the traffic with an SSL accelerator. If you trust the endpoints outside the perimeter, you don't need to.

Let's reconsider that statement with Galileo in mind. Originally I proposed that those who terminate their trust boundary at their perimeter must find a way to penetrate encrypted traffic traversing that boundary. Another way to approach the problem is to take measurements: determine what cost is incurred and what benefit is gained by terminating SSL at the perimeter, inspecting the clear text, and re-encrypting the traffic as it leaves the enterprise. Does that process actually result in identifying and/or limiting intrusions? If yes, use the results to justify the action. If not, abandon the plan, or conduct a second round of measurements if conditions change at a later date. Don't just argue "I need to see through SSL" as a philosophical standpoint.
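One way to ground that measurement is simply to quantify how much outbound traffic your sensor cannot read today. Here is a minimal sketch, assuming Scapy is installed and that outbound.pcap is a capture taken at the perimeter; the file name and the port list are illustrative, not recommendations:

```python
# Minimal sketch: tally how many payload bytes leave on ports we assume
# carry SSL/TLS (opaque to the sensor) versus everything else.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

ENCRYPTED_PORTS = {443, 465, 993, 995}   # assumed SSL/TLS ports

byte_counts = Counter()
for pkt in rdpcap("outbound.pcap"):       # assumed perimeter capture file
    if IP in pkt and TCP in pkt:
        bucket = "opaque" if pkt[TCP].dport in ENCRYPTED_PORTS else "inspectable"
        byte_counts[bucket] += len(pkt[TCP].payload)

total = sum(byte_counts.values()) or 1
for bucket, count in byte_counts.items():
    print(f"{bucket}: {count} bytes ({100 * count / total:.1f}%)")
```

If only a small fraction of the bytes are opaque, the cost of SSL termination may not be justified; if most of them are, at least you have a number to argue with.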

Marcin asked:

So what do you say and do when your NSM Sensor/SSL Load Balancer/SSL Proxy gets compromised, exposing your most sensitive data (by nature, because it is being encrypted)?

Am I supposed to rely on my IDS' and my own ability to detect 0day attacks against hardened hosts?


To answer the first question, I would say check out my TaoSecurity Enterprise Trust Pyramid. The same factors that make data from sensors more reliable also make those sensors more resilient. However, no sensor is immune from compromise, and I recommend taking steps to monitor and contain the sensor itself in a manner appropriate for the level of traffic it inspects. Keep in mind that a sensor is not an SSL proxy. The SSL proxy might only log URLs; it might not provide clear text to a separate sensor.
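As one small illustration of monitoring the sensor itself, you could periodically compare hashes of the sensor's binaries and configuration against a stored baseline. This is only a sketch with made-up paths; it complements, rather than replaces, proper host-based monitoring of the sensor:

```python
# Minimal sketch: baseline and re-check critical sensor files so changes
# to the monitoring platform itself stand out. Paths are illustrative.
import hashlib
import json
import sys
from pathlib import Path

WATCHED = ["/usr/local/bin/snort", "/etc/snort/snort.conf",
           "/usr/local/bin/daemonlogger"]          # assumed file list
BASELINE = Path("/var/db/sensor-baseline.json")    # assumed baseline location

def snapshot():
    """Hash every watched file that exists on this sensor."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in WATCHED if Path(p).exists()}

if __name__ == "__main__":
    if sys.argv[1:] == ["baseline"]:
        BASELINE.write_text(json.dumps(snapshot(), indent=2))
    else:
        baseline = json.loads(BASELINE.read_text())
        current = snapshot()
        for path in sorted(set(baseline) | set(current)):
            if baseline.get(path) != current.get(path):
                print(f"CHANGED or MISSING: {path}")
```

Run it once with the baseline argument, then periodically (for example from cron) to report differences.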

Answering the second question could take a whole book. Identifying "0day attacks," what I call "first order detection," is increasingly difficult. Performing second order detection, meaning identifying reinforcement, consolidation, and pillage is often more plausible, especially using extrusion detection methods. Performing third order detection, meaning discovering indications of your hosts in someone's botnet or similar unauthorized control, is another technique. Finally, fourth order detection, or seeing your intellectual property in places where it should not be, is a means to discover intrusions.
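For a flavor of fourth order detection, imagine scanning reassembled outbound session transcripts for "canary" markers that should never leave the enterprise. The directory and markers below are hypothetical placeholders; a real program would use document fingerprints or watermarks rather than literal strings:

```python
# Minimal sketch: flag outbound session transcripts containing markers
# that should never appear outside the enterprise.
from pathlib import Path

SESSION_DIR = Path("/nsm/sessions/outbound")                   # assumed location
CANARIES = [b"PROJECT-AURORA-DRAFT", b"ACCT-4417-INTERNAL"]    # assumed markers

for transcript in sorted(SESSION_DIR.glob("*.raw")):
    data = transcript.read_bytes()
    for marker in CANARIES:
        if marker in data:
            print(f"possible pillage: {marker.decode()} in {transcript.name}")
```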

Vivek Rajan asked:

Daemonlogger is cool, but what do you think about more sophisticated approaches like the Time Machine ? ( http://www.net.t-labs.tu-berlin.de/research/tm/ )

Is there some value in retaining full content of long running (possibly encrypted) sessions?


I don't consider Time Machine "more sophisticated." It's just a question of trade-offs. Where possible I prefer to log everything, because you can never really be sure before an incident just what might be important later. Regarding encryption, what if you disable collecting traffic on outbound port 443 TCP because it's supposed to be SSL, only to learn later that an intruder is using some weak obfuscation method, or no encryption at all?
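A crude way to catch that case is to check whether the first payload of each outbound port 443 session even looks like a TLS record. This sketch assumes Scapy, a full-content capture named port443.pcap, and in-order packets; it is a heuristic, not a detector:

```python
# Minimal sketch: flag port 443 flows whose first payload bytes do not
# start with a plausible TLS record header.
from scapy.all import rdpcap, IP, TCP, Raw

# Valid TLS record types: change_cipher_spec, alert, handshake, application data.
TLS_RECORD_TYPES = {0x14, 0x15, 0x16, 0x17}

first_payload = {}   # one entry per client->server flow
for pkt in rdpcap("port443.pcap"):        # assumed capture of outbound 443
    if IP in pkt and TCP in pkt and Raw in pkt and pkt[TCP].dport == 443:
        key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst)
        first_payload.setdefault(key, bytes(pkt[Raw].load))

for (src, sport, dst), payload in first_payload.items():
    if len(payload) < 2 or payload[0] not in TLS_RECORD_TYPES or payload[1] != 0x03:
        print(f"not obviously TLS: {src}:{sport} -> {dst}:443")
```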

To summarize, implement whatever system you select based on the demonstrable improvement it brings to your security posture, not simply because you believe it should help. I am particularly critical when it comes to defensive measures. For measures that improve visibility, my objective is to gather additional data whose benefit outweighs the cost of collection.
