
Tuesday, July 17, 2007

NORAD-Inspired Security Metrics

When I was a second degree cadet at USAFA (so long ago that, of my entire class, only three friends and I had 486 PCs with Ethernet NICs) I visited NORAD. I remember thinking the War Games set was cooler, but I didn't give much thought to the security aspects of their mission.

Today I remembered NORAD and considered their mission with respect to my post last year titled Control-Compliant vs Field-Assessed Security. In case you can't tell from the pithy title, the central idea was that it's more effective to measure security by assessing outcomes instead of inputs. For example, who cares if 100% of your systems have Windows XP SP2 if they are all 0wned by a custom exploit written just for your company? Your security has failed. Inputs are important, but my experience with various organizations is that they tend to be the primary means of "measuring" security, regardless of how well they actually preserve the CIA triad.

Let's put this in terms of NORAD, whose front page states:

The North American Aerospace Defense Command (NORAD) is a bi-national United States and Canadian organization charged with the missions of aerospace warning and aerospace control for North America. Aerospace warning includes the monitoring of man-made objects in space, and the detection, validation, and warning of attack against North America whether by aircraft, missiles, or space vehicles, through mutual support arrangements with other commands. Aerospace control includes ensuring air sovereignty and air defense of the airspace of Canada and the United States...

To accomplish the aerospace warning mission, the commander of NORAD provides an integrated tactical warning and attack assessment to the governments of Canada and the United States. To accomplish the aerospace control mission, NORAD uses a network of satellites, ground-based radar, airborne radar and fighters to detect, intercept and, if necessary, engage any air-breathing threat to North America.


What are some control-compliant or input metrics for NORAD?

  • Number of planes at the ready for intercepting rogue aircraft

  • Average pilot rating (i.e., some sort of assessment of pilot skill)

  • Radar uptime

  • Radar coverage (e.g., percentage of North American territory monitored)


These are all interesting metrics, and you may notice parallels to metrics you track yourself, like the percentage of hosts running anti-virus.

Now consider: do any of those metrics tell you if NORAD is accomplishing its mission? In other words, what is the outcome of all those inputs? What is the score of this game?

Here are some field-assessed or outcome-based metrics.

  • Number of rogue aircraft penetrating North American territory (indicates a failure to deter activity)

  • Number of aircraft not detected by NORAD but discovered via other means to have penetrated North American territory (perhaps via intel sources; indicates a failure to detect activity)

  • Number of aircraft not repelled by interceptors (hopefully this would never happen!)

  • Time from first indication of rogue aircraft to launching interceptors (indicates effectiveness of the pilot-to-plane-to-air process)


These metrics address the critical concern: accomplishing the mission.
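To make the distinction concrete, here is a minimal sketch of how outcome-based metrics might be computed from incident records. The record fields and sample values are hypothetical, invented purely for illustration; the point is that each metric measures a result (detection failures, containment failures, speed of response) rather than an input.

```python
from dataclasses import dataclass

# Hypothetical incident records; field names and values are assumptions.
@dataclass
class Intrusion:
    detected_by_us: bool        # did our own monitoring find it?
    repelled: bool              # was the intruder contained and removed?
    minutes_to_response: float  # first indication to response launch

incidents = [
    Intrusion(detected_by_us=True,  repelled=True,  minutes_to_response=12.0),
    Intrusion(detected_by_us=False, repelled=True,  minutes_to_response=95.0),  # found via outside notification
    Intrusion(detected_by_us=True,  repelled=False, minutes_to_response=30.0),
]

# Outcome-based metrics, analogous to the NORAD list above.
total = len(incidents)
missed = sum(1 for i in incidents if not i.detected_by_us)  # failure to detect
not_repelled = sum(1 for i in incidents if not i.repelled)  # failure to repel
avg_response = sum(i.minutes_to_response for i in incidents) / total

print(f"detection failures: {missed}/{total}")
print(f"containment failures: {not_repelled}/{total}")
print(f"mean minutes to response: {avg_response:.1f}")
```

Notice that none of these numbers can be computed from inputs like patch levels or radar uptime; they only exist once you measure what actually happened.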

Keep these in mind when you are devising metrics for your digital security program.

Wednesday, April 26, 2006

Risk and Metrics

I ran across some thought-provoking articles in the April 2006 CIO Magazine. The editor's introduction summarizes a major problem with calculating IT spending:

As sophisticated as the technology and its countless uses have become, all too often the benchmark used to determine the proper level of an enterprise’s IT spending is alarmingly simplistic: the percentage of overall revenue for which IT accounts...

Benchmarking IT spending as a percentage of revenue is a truly useless metric. Unfortunately, according to Koch [mentioned next], it remains the most popular way to evaluate IT spending, and also unfortunately (as most of you already know), it doesn’t say anything about how effective or productive your spending is. Even more unfortunately, benchmarking by percentage of revenue casts IT in the role of a cost to be controlled, defining success simply as lowering the percentage over time.


This is a really amazing insight. How many of you see progress in security management through the eyes of reducing spending to zero? The "Koch" mention refers to the article The Metrics Trap...And How to Avoid It by Christopher Koch. As you might guess there is really no simplistic way to solve this problem. Koch's article includes gems like the following, though:

Joe Drouin... found that [his] company was spending less on IT as a percent of overall revenue than the industry average, which was about 1.5 to 2 percent.

Not one to look a gift horse in the mouth, Drouin played the metric for everything it was worth, highlighting it in every PowerPoint presentation he could during his first year as CIO...

At one point, the CEO, who believed that inexpensive IT was good IT, joked that he expected to see Drouin and his staff outfitted with T-shirts that had the percentage stamped across their chests in big, block numbers...

In this zero sum game, success is defined simply as lowering the percentage over time. "It's not clear how low it should go," says Drouin. "Joking with the CEO, I said, 'In your mind it should be zero.' We had a good laugh, but at what point do we decide it's at the right level and you don't drive it down further?"


That CEO's attitude disgusts me. Would you expect him to do the same for the human resources department? They don't bring in any customer revenue. How about finance and accounting? Now that creative bookkeeping can put the CEO in jail, that department doesn't bring in customer revenue either. Yet, neither "cost center" is expected to reduce its percentage of overall revenue to zero.

At least as far as security goes, the inability to see the value of security spending relates to management's inability to perceive the risk of being exposed and vulnerable. I came across this insight in a recent issue of the Economist, featuring the article The New Paternalism (subscription probably required):

This acute sensitivity to losses is not the only bias behaviouralists have discovered. People also have great difficulty understanding risks. The weight a person gives to a scenario—flood, fire, winning the lottery—should depend on its likelihood. In fact, it depends on how easily it can be envisaged. People will pay more for air-travel insurance against "terrorist acts" than against death from "all possible causes."

Canny governments can work with the grain of this psychology. The grisly campaigns against smoking aim to put the dangers firmly in people's minds; to turn a statistical risk into a visceral image. They have been effective, perhaps too effective. There is some evidence that people now overestimate the risks of smoking.
(emphasis added)

In other words, management cannot imagine the destruction caused by security incidents. It is impossible for them to envisage an incident that costs their company market share, intellectual property, or its ability to provide services. As a result, they base their decisions on laws, regulations, and what their peers are doing.

This explains the resources poured into worm defense a few years ago. When management's own computers are affected, when they see worm reporting on CNN, when a worm is the discussion over lunch -- they start to take the problem seriously. When a stealthy intruder has lodged himself inside a company, management has no clue how to handle the situation. In fact, most management has no clue how to handle existing rogue employees now. They turn to platitudes like "we trust our employees" because they can't fathom why someone would turn against their beloved company. After all, management has been treated really well!

I don't think spending-related metrics are of much use. Performance-related metrics are the only ones which I think have some value. Drilling network security operations teams (preventers, intrusion detectors, incident responders, etc.) to see if they stop, identify, and remove controlled threat simulators (vulnerability assessors, pen testers and red teams) is the best way to see if your money is being well spent.
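The drilling idea above can be reduced to a simple scorecard. The sketch below is a hypothetical example, not a standard methodology: each record represents one controlled exercise by a threat simulator (vulnerability assessor, pen tester, or red team), and the metrics are the rates at which the operations team stopped, identified, and removed the simulated threat.

```python
# Hypothetical red-team drill results; keys and values are assumptions.
# "stopped"    = prevention blocked the simulated attack outright
# "identified" = the ops team detected the activity on its own
# "removed"    = the ops team evicted the simulated intruder
drills = [
    {"stopped": False, "identified": True,  "removed": True},
    {"stopped": True,  "identified": True,  "removed": True},
    {"stopped": False, "identified": False, "removed": False},
]

n = len(drills)
stop_rate = sum(d["stopped"] for d in drills) / n
identify_rate = sum(d["identified"] for d in drills) / n
remove_rate = sum(d["removed"] for d in drills) / n

print(f"stopped:    {stop_rate:.0%}")
print(f"identified: {identify_rate:.0%}")
print(f"removed:    {remove_rate:.0%}")
```

Tracked over repeated exercises, these performance rates show whether security spending is actually buying the ability to stop, detect, and respond, which is something a spending-as-percentage-of-revenue figure can never reveal.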