Monday, 26 November 2007

Controls Are Not the Solution to Our Problem

If you recognize the inspiration for this post title and graphic, you'll understand my ultimate goal. If not, let me start by saying this post is an expansion of ideas presented in a previous post with the succinct and catchy title Control-Compliant vs Field-Assessed Security.

In brief, too many organizations, regulators, and government agencies waste precious time and resources devising and auditing "controls," regardless of the effect these controls have or do not have on security. They are far too input-centric; they should become more output-aware. They obsess over recording conditions they believe may be helpful while remaining ignorant of the "score of the game." They practice management by belief and disregard management by fact.

Let me provide a few examples from one of the canonical texts used by the control-compliant crowd: NIST Special Publication 800-53: Recommended Security Controls for Federal Information Systems (.pdf). The following is an example of a control, taken from page 140.

SI-3 MALICIOUS CODE PROTECTION


Control: The information system implements malicious code protection.

Supplemental Guidance: The organization employs malicious code protection mechanisms at critical information system entry and exit points (e.g., firewalls, electronic mail servers, web servers, proxy servers, remote-access servers) and at workstations, servers, or mobile computing devices on the network. The organization uses the malicious code protection mechanisms to detect and eradicate malicious code (e.g., viruses, worms, Trojan horses, spyware) transported: (i) by electronic mail, electronic mail attachments, Internet accesses, removable media (e.g., USB devices, diskettes or compact disks), or other common means; or (ii) by exploiting information system vulnerabilities. The organization updates malicious code protection mechanisms (including the latest virus definitions) whenever new releases are available in accordance with organizational configuration management policy and procedures. The organization considers using malicious code protection software products from multiple vendors (e.g., using one vendor for boundary devices and servers and another vendor for workstations). The organization also considers the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the information system. NIST Special Publication 800-83 provides guidance on implementing malicious code protection.

Control Enhancements:
(1) The organization centrally manages malicious code protection mechanisms.
(2) The information system automatically updates malicious code protection mechanisms.


At first read one might reasonably respond, "What's wrong with that? This control advocates implementing anti-virus and related anti-malware software." Think more carefully about the issue, however, and several problems appear.


  • Adding anti-virus products can introduce additional vulnerabilities that would not have been exposed had the systems not been running anti-virus. Consider my post Example of Security Product Introducing Vulnerabilities if you need examples. In short: add anti-virus, be compromised.

  • Achieving compliance may cost more than the potential damage. How many times have you heard a Unix administrator complain that he or she has to purchase an anti-virus product for a Unix server simply to comply with a control like this? The potential for a Unix server (not Mac OS X) to be damaged by a user opening an email in a client while logged on to the server (a very popular exploitation vector on a Windows XP box) is practically nil.

  • Does this actually work? This is the question no one asks. Does it really matter whether your system is running anti-virus software? Did you know that intruders (especially the high-end ones most likely to selectively, stealthily target the very .gov and .mil systems required to comply with this control) test their malware against a battery of anti-virus products to ensure their code wins? Are weekly updates superior to daily updates? Daily to hourly?


The purpose of this post is to tentatively propose an alternative approach. I call this "field-assessed," in contrast to "control-compliant." Some people prefer the term "results-based." Whatever you call it, the idea is to direct attention away from inputs and devote more energy to outputs. As for mandating inputs (like requiring every device to run anti-virus), I say that is a waste of time and resources.

I recommend taking measurements to determine your enterprise "score of the game," and using that information to decide what you need to do differently. I'm not suggesting abandoning efforts to prevent intrusions (i.e., "inputs"). Rather, don't think your security responsibilities end when the bottle is broken against the bow of the ship and it slides into the sea. You've got to keep watching to see if it sinks, if pirates attack, how the lifeboats handle rough seas, and so forth.

Here are a few ideas.

  1. Standard client build client-side survival test. Create multiple sacrificial systems with your standard build. Deploy a client-side testing solution on them, like a honeyclient. (See The Sting for a recent story.) Vary your defensive posture. Measure how long it takes for your standard build to be compromised by in-the-wild Web sites, spam, and other communications with the outside world. (A minimal measurement harness appears after this list.)

  2. Standard client build server-side survival test. Create multiple sacrificial systems with your standard build. Deploy them as a honeynet. Vary your defensive posture. Measure how long it takes for your standard build to be compromised by malicious external traffic from the outside world -- or better yet -- from your internal network.

  3. Standard client build client-side penetration test. Create multiple sacrificial systems with your standard build. Conduct my recommended penetration testing activities and time the results.

  4. Standard client build server-side penetration test. Repeat number 3 with a server-side flavor.

  5. Standard server build server-side penetration test. Repeat number 3 against your server build with a server-side flavor. I hope you don't have users operating servers as if they were clients (i.e., browsing the Web, reading email, and so forth). If you do, repeat this step and conduct a client-side pen test too.

  6. Deploy low-interaction honeynets and sinkhole routers in your internal network. These low-interaction systems provide a means to get some indications of what might be happening inside your network. (A bare-bones listener sketch appears after this list.) If you think deploying these on the external network might reveal indications of targeted attacks, try that. (I doubt it will be that useful due to the overall attack noise, but who knows?)

  7. Conduct automated, sampled client host integrity assessments. Select a statistically valid subset of your clients and check them using multiple automated tools (malware/rootkit/etc. checkers) for indications of compromise. (A sample-size sketch appears after this list.)

  8. Conduct automated, sampled server host integrity assessments. Self-explanatory.

  9. Conduct manual, sampled client host integrity assessments. These are deep dives of individual systems. You can think of them as incident response where you have no indication of an incident yet. Remote IR tools can be helpful here. If you are really hard-core and have the time, resources, and cooperation, do offline analysis of the hard drive.

  10. Conduct manual, sampled server host integrity assessments. Self-explanatory.

  11. Conduct automated, sampled network host activity assessments. I debated adding this step, since you should probably always be doing this. Sometimes it can be difficult to find the time to review the results, however automated the data collection. The idea is to let your NSM system see if any of the traffic it observes is out of the ordinary, based on algorithms you provide. (A deliberately simple example appears after this list.)

  12. Conduct manual, sampled network host activity assessments. This method is more likely to produce results. Here a skilled analyst performs deep individual analysis of traffic on a sample of machines (client and server, separately) to see if any indications of compromise appear.
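
To make item 1 (and its server-side sibling, item 2) concrete, here is a minimal sketch of the timing harness in Python. It is not a honeyclient; it assumes a crude stand-in integrity indicator (hashes of a few watched files), and the file list is purely illustrative. A real deployment would key off your honeyclient's or honeynet's own alerting.

```python
import hashlib
import time

# Purely illustrative: files an intruder might modify or replace.
WATCHED_FILES = ["/bin/ls", "/bin/ps"]

def snapshot(paths):
    """Hash the watched files to establish a known-good baseline."""
    return dict((p, hashlib.sha1(open(p, "rb").read()).hexdigest())
                for p in paths)

def survival_time(paths, poll_seconds=60):
    """Return seconds until any watched file diverges from its baseline."""
    baseline = snapshot(paths)
    start = time.time()
    while snapshot(paths) == baseline:
        time.sleep(poll_seconds)
    return time.time() - start

# Run one measurement per sacrificial system, record the result,
# and repeat after each change to your defensive posture.
```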
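
For item 6, even a bare-bones listener can serve as a low-interaction sensor. A sketch, assuming you can dedicate an otherwise-unused internal address and port; on an internal network, any connection attempt it logs deserves a look:

```python
import datetime
import socket

def sinkhole(port=2323):
    """Accept and log connection attempts on an otherwise-unused port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        print("%s connection attempt from %s:%d"
              % (datetime.datetime.now().isoformat(), addr[0], addr[1]))
        conn.close()

if __name__ == "__main__":
    sinkhole()
```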
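
For item 7, "statistically valid subset" can be as simple as the standard sample-size formula for estimating a proportion. A sketch, with a made-up inventory of 5,000 hosts:

```python
import math
import random

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Hosts to check to estimate a proportion (e.g., fraction
    compromised) within +/- margin at ~95% confidence, with a
    finite-population correction."""
    n0 = (z * z * p * (1 - p)) / (margin * margin)
    return int(math.ceil(n0 / (1 + (n0 - 1) / population)))

hosts = ["host%04d" % i for i in range(5000)]  # stand-in inventory
n = sample_size(len(hosts))                    # 357 of 5,000 hosts
to_check = random.sample(hosts, n)             # feed these to your checkers
```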
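
And for item 11, the "algorithms you provide" can start very simply, for example flagging hosts whose traffic volume sits several standard deviations above their peers. This sketch assumes you can already export per-host byte counts from your NSM data:

```python
def flag_outliers(byte_counts, threshold=3.0):
    """Flag hosts whose byte count exceeds the mean by more than
    `threshold` standard deviations. byte_counts: {host: bytes}."""
    values = list(byte_counts.values())
    mean = sum(values) / float(len(values))
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    if std == 0:
        return []
    return [host for host, v in byte_counts.items()
            if (v - mean) / std > threshold]

# Example: nineteen quiet hosts and one heavy mover.
counts = dict(("host%02d" % i, 1000) for i in range(19))
counts["host19"] = 50000
print(flag_outliers(counts))  # ['host19']
```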


In all of these cases, trend your measurements over time to see whether results improve when you alter an input. I know some of you might complain that you can't expect consistent output when the threat landscape is constantly changing. I really don't care, and neither does your CEO or manager!
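
As a trivial illustration of trending, with entirely made-up numbers: suppose you record client survival times (item 1) before and after changing one input, such as tightening egress filtering. Comparing the runs tells you whether the input earned its keep:

```python
def mean(xs):
    return sum(xs) / float(len(xs))

# Hypothetical survival times in hours for sacrificial clients,
# before and after altering one input; illustrative numbers only.
before = [2.5, 3.1, 1.8, 2.9, 2.2]
after  = [5.0, 4.2, 6.1, 3.8, 5.5]

print("mean survival before: %.1f h" % mean(before))  # 2.5 h
print("mean survival after:  %.1f h" % mean(after))   # 4.9 h
```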

I offer two recommendations:

  • Remember Andy Jaquith's criteria for good metrics, simplified here.


    1. Measure consistently.

    2. Make them cheap to measure. (Sorry Andy, my manual tests violate this!)

    3. Use compound metrics.

    4. Be actionable.


  • Don't slip back into thinking about inputs. Don't measure how many hosts are running anti-virus. We want to measure outputs. We are not proposing new controls.


Controls are not the solution to our problem. Controls are the problem. They divert too much time, resources, and attention from endeavors which do make a difference. If the indications I am receiving from readers and friends are true, the ideas in this post are gaining traction. Do you have other ideas?
