You knew I had risk on my mind given my recent post
Economist on the Peril of Models. The fact is I just flew to Chicago to teach my last Network Security Operations class, so I took some time to read the
Risk Management Insight white paper
An Introduction to Factor Analysis of Information Risk (FAIR). I needed to respond to
Risk Assessment Is Not Guesswork, so I figured reading the whole FAIR document was a good start. I said in
Brothers in Risk that I liked RMI's attempts to bring standardized terms to the profession, so I hope they approach this post with an open mind.
I have some macro issues with FAIR as well as some micro issues. Let me start with the macro issue by asking you a question:
Does breaking down a large problem into small problems, the solutions to which rely upon making guesses, result in solving the large problem more accurately?
If you answer yes, you will like FAIR. If you answer no, you will not like FAIR.
FAIR defines risk as
Risk - the probable frequency and probable magnitude of future loss
That reminded me of
Annual Loss Expectancy (ALE) = Annualized Rate of Occurrence (ARO) X Single Loss Expectancy (SLE)
If you don't agree, remove the "annual" terms from the second definition or add them to the FAIR definition.
I have always preferred this equation
Risk = Vulnerability X Threat X Impact (or Cost)
because it is useful for showing the effects on risk if you change one of the factors,
ceteris paribus. (Ok, I threw the Latin in there as homage to one of my economics instructors.)
If you consider frequency when estimating threat activity and include countermeasures as a component of vulnerability, you'll notice that Threat X Vulnerability starts looking like ARO. Impact (or Cost) is practically the same as SLE, so the two equations are similar.
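To make that comparison concrete, here is a toy sketch with made-up numbers; none of these values come from FAIR or from any real assessment. It only shows that if you treat Threat as an annualized rate of attempts and Vulnerability as the probability that an attempt succeeds, the two equations are the same arithmetic wearing different labels.

```python
# Toy illustration (made-up numbers) of why the two equations line up.
# Threat is an annualized attempt rate, Vulnerability is the probability
# an attempt succeeds, and Impact is the cost of a single loss event.

threat = 12          # hypothetical: ~12 relevant attack attempts per year
vulnerability = 0.25 # hypothetical: 1 in 4 attempts succeeds
impact = 50_000      # hypothetical: $50,000 per successful event

# Risk = Vulnerability X Threat X Impact
risk = vulnerability * threat * impact

# ALE = ARO X SLE, where ARO ~ Threat X Vulnerability and SLE ~ Impact
aro = threat * vulnerability
sle = impact
ale = aro * sle

print(risk, ale)  # both print 150000.0: same arithmetic, different labels
```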
FAIR turns its definition into the following.
If you care to click on that diagram, you'll see many small elements that need to be estimated. Specifically, you can follow the
Basic Risk Assessment Guide to see that these are the steps:
- Stage 1. Identify scenario components
- 1. Identify the asset at risk
- 2. Identify the threat community under consideration
- Stage 2. Evaluate Loss Event Frequency (LEF)
- 3. Estimate the probable Threat Event Frequency (TEF)
- 4. Estimate the Threat Capability (TCap)
- 5. Estimate Control strength (CS)
- 6. Derive Vulnerability (Vuln)
- 7. Derive Loss Event Frequency (LEF)
- Stage 3. Evaluate Probable Loss Magnitude (PLM)
- 8. Estimate worst-case loss
- 9. Estimate probable loss
- Stage 4. Derive and articulate Risk
- 10. Derive and articulate Risk
The problem with FAIR is that in every place you see the word "Estimate" you can substitute "Make a guess that's not backed by any objective measurement and which could be challenged by anyone with a different agenda." Because all the derived values are based on those estimates, your assessment of FAIR depends on the answer to the question I asked at the start of this post.
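To illustrate the mechanics, here is a minimal sketch of steps 3 through 7 in code. The lookup matrices below are placeholders I invented for illustration, not FAIR's published tables; the point is only that every "derived" value is a pure function of the initial estimates.

```python
# A rough sketch of how the derived values depend entirely on the estimates.
# The lookup matrices below are illustrative placeholders, NOT FAIR's
# published tables; they only show the mechanics of steps 3 through 7.

# Step 6: Vulnerability is read off a chart of TCap vs. Control Strength.
VULN = {
    ("Medium", "Very Low"): "Very High",   # capable actor, no controls
    ("Medium", "Medium"):   "Medium",
    ("Low",    "Very Low"): "High",
}

# Step 7: Loss Event Frequency is read off a chart of TEF vs. Vulnerability.
LEF = {
    ("Low", "Very High"): "Low",
    ("Low", "Medium"):    "Very Low",
}

def derive_lef(tef, tcap, cs):
    """Every derived value is a function of the three initial guesses."""
    vuln = VULN[(tcap, cs)]   # step 6
    return LEF[(tef, vuln)]   # step 7

# The sticky-note labels from the paper: change any one estimate and the
# derived values move with it.
print(derive_lef(tef="Low", tcap="Medium", cs="Very Low"))  # -> Low
print(derive_lef(tef="Low", tcap="Medium", cs="Medium"))    # -> Very Low
```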
Let's see how this process stands up to some simple scrutiny by reviewing FAIR's
Analyzing a Simple Scenario.
A Human Resources (HR) executive within a large bank has his username and password written on a sticky-note stuck to his computer monitor. These authentication credentials allow him to log onto the network and access the HR applications he’s entitled to use...
1. Identify the Asset at Risk: In this case, however, we’ll focus on the credentials, recognizing that their value is inherited from the assets they’re intended to protect.
We start with a physical security risk case. This simplifies the process considerably and actually gives FAIR the best chance it has to reflect reality. Why is that? The answer is that the physical world changes more slowly than the digital world. We don't have to worry about solid walls being penetrated by a mutant from the X-Men movies or about the state of the credentials suddenly being altered by a patch or configuration change.
2. Identify the Threat Community: If we examine the nature of the organization (e.g., the industry it’s in, etc.), and the conditions surrounding the asset (e.g., an HR executive’s office), we can begin to parse the overall threat population into communities that might reasonably apply... For this example, let’s focus on the cleaning crew.
That's convenient. The document lists six potential threat communities but decides to analyze only one. Simplification sure makes it easier to proceed with this analysis. It also means the result is so narrowly targeted as to be almost worthless, unless we decide to repeat this process for the rest of the threat communities. And this is still only looking at a sticky note.
3. Estimate the probable Threat Event Frequency (TEF): Many people demand reams of hard data before they’re comfortable estimating attack frequency. Unfortunately, because we don’t have much (if any) really useful or credible data for many scenarios, TEF is often ignored altogether. So, in the absence of hard data, what’s left? One answer is to use a qualitative scale, such as Low, Medium, or High.
And, while there’s nothing inherently wrong with a qualitative approach in many circumstances, a quantitative approach provides better clarity and is more useful to most decision-makers – even if it’s imprecise.
For example, I may not have years of empirical data documenting how frequently cleaning crew employees abuse usernames and passwords on sticky-notes, but I can make a reasonable estimate within a set of ranges.
Recognizing that cleaning crews are generally comprised of honest people, that an HR executive’s credentials typically would not be viewed or recognized as especially valuable to them, and that the perceived risk associated with illicit use might be high, then it seems reasonable to estimate a Low TEF using the table below...
Is it possible for a cleaning crew to have an employee with motive, sufficient computing experience to recognize the potential value of these credentials, and with a high enough risk tolerance to try their hand at illicit use? Absolutely! Does it happen? Undoubtedly. Might such a person be on the crew that cleans this office? Sure – it’s possible. Nonetheless, the probable frequency is relatively low. (emphasis added)
Says who? Has the person making this assessment done any research to determine if infiltrating cleaning crews is a technique used by economic adversaries? If yes, how often does that happen? What is the nature of the crew cleaning this office? Do they perform background checks? Have they been infiltrated before? Are they owned by a competitor? Figuring all of that out is too hard. Let's just supply guess #1: "low."
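To see why the choice of band matters so much, consider a toy calculation with entirely made-up figures (and, for simplicity, treating every threat event as a loss event). Shifting guess #1 by a single band moves the implied annual loss by an order of magnitude.

```python
# Toy illustration: the band boundaries and dollar figure below are my own
# placeholders, not values from the FAIR tables. For simplicity this treats
# every threat event as a loss event.
TEF_BANDS = {          # label -> assumed midpoint, events per year
    "Very Low": 0.05,  # less than once every ten years
    "Low":      0.5,   # roughly once every couple of years
    "Moderate": 5,     # several times a year
}

single_loss = 25_000   # hypothetical cost per event, purely for illustration

for label, rate in TEF_BANDS.items():
    print(f"{label:>8}: ~${rate * single_loss:,.0f} implied loss per year")
```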
4. Estimate the Threat Capability (Tcap): Tcap refers to the threat agent’s skill (knowledge & experience) and resources (time & materials) that can be brought to bear against the asset... In this case, all we’re talking about is estimating the skill (in this case, reading ability) and resources (time) the average member of this threat community can use against a password written on a sticky note. It’s reasonable to rate the cleaning crew Tcap as Medium, as compared to the overall threat population.
Why is that? Why not "low" again? These are janitors we're discussing. Guess #2.
5. Estimate the Control Strength (CS): Control strength has to do with an asset’s ability to resist compromise. In our scenario, because the credentials are in plain sight and in plain text, the CS is Very Low. If they were written down, but encrypted, the CS would be different – probably much higher.
It is easy to accept guess #3 because we are dealing with a physical security scenario. It's simple for any person to understand that a sticky note in plain sight has zero controls applied against it, so the (nonexistent) "controls" are worthless. But what about that new Web application firewall? Or your anti-virus software? Or any other technical control? Good luck assessing their effectiveness in the face of attacks that evolve on a weekly basis.
6. Derive Vulnerability (Vuln): This value is derived using a chart that balances Tcap vs Control Strength. Since it is based on two guesses, one could debate whether it is more or less accurate than estimating the vulnerability directly.
7. Derive Loss Event Frequency (LEF): This value is derived using a chart that balances TEF vs Vulnerability. We derived vulnerability in the previous step and estimated TEF in step 3.
8. Estimate worst-case loss: Within this scenario, three potential threat actions stand out as having significant loss potential – misuse, disclosure, and destruction... For this exercise, we’ll select disclosure as our worst-case threat action.
This step considers Productivity, Response, Replacement, Fines/Judgments, Competitive Advantage, and Reputation, with Threat Actions including Access, Modification, Disclosure, and Denial of Access. Enter guess #4.
9. Estimate probable loss magnitude (PLM): The first step in estimating PLM is to determine which threat action is most likely. Remember; actions are driven by motive, and the most common motive for illicit action is financial gain. Given this threat community, the type of asset (personal information), and the available threat actions, it’s reasonable to select Misuse as the most likely action – e.g., for identity theft. Our next step is to estimate the most likely loss magnitude resulting from Misuse for each loss form.
Again, says who? Was identity theft chosen because it's popular in the news? My choice for guess #5 could be something completely different.
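For completeness, here is what the loss-magnitude bookkeeping amounts to, with made-up dollar figures. Summing the loss forms is my own simplification (the paper maps each form to a magnitude range), but either way each cell is another estimate.

```python
# Made-up per-loss-form figures for the Misuse action; nothing here is
# measured. Summing them is my simplification (the paper maps each form
# to a magnitude range), but each cell is another estimate either way.
probable_loss = {
    "Productivity":              1_000,
    "Response":                  5_000,
    "Replacement":                   0,
    "Fines/Judgments":          20_000,
    "Competitive Advantage":         0,
    "Reputation":               10_000,
}

plm_total = sum(probable_loss.values())
print(f"Probable loss (toy roll-up): ${plm_total:,}")   # $36,000
```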
10. Derive and Articulate Risk: [R]isk is simply derived from LEF and PLM. The question is whether to articulate risk qualitatively using a matrix like the one below, or articulate risk as LEF, PLM, and worst-case.
The final risk rating is another derived value, based on previous estimates.
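The final step looks like this in miniature. The matrix cells here are illustrative, not FAIR's published matrix; the rating simply inherits whatever subjectivity went into LEF and PLM.

```python
# The final matrix, in miniature. These cells are illustrative, not FAIR's
# published matrix; the output is only as objective as the inputs.
RISK_MATRIX = {
    ("Low", "Severe"):           "High",
    ("Low", "Significant"):      "Medium",
    ("Very Low", "Severe"):      "Medium",
    ("Very Low", "Significant"): "Low",
}

def derive_risk(lef, plm):
    # Both inputs trace back to the five estimates made earlier.
    return RISK_MATRIX[(lef, plm)]

print(derive_risk("Low", "Severe"))  # -> High
```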
The FAIR author tries to head off critiques like this one with the following section:
It’s natural, though, for people to accept change at different speeds. Some of us hold our beliefs very firmly, and it can be difficult and uncomfortable to adopt a new approach. Ultimately, not everyone is going to agree with the principles or methods that underlie FAIR. A few have called it nonsense. Others appear to feel threatened by it.
Apparently I'm resistant to "change" and "threatened" because I firmly hold on to "beliefs." I'm afraid that is what I will have to do when frameworks like this are founded upon someone's opinion at each stage of the decision-making process.
The FAIR document continues:
Their concerns tend to revolve around one or more of the following issues:
The absence of hard data. There’s no question that an abundance of good data would be useful. Unfortunately, that’s not our current reality. Consequently, we need to find another way to approach the problem, and FAIR is one solution.
I think I just read that the author admits FAIR is not based on "good data," and since we don't have data, we should just "find another way," like FAIR.
The lack of precision. Here again, precision is nice when it’s achievable, but it’s not realistic within this problem space. Reality is just too complex... FAIR represents an attempt to gain far better accuracy, while recognizing that the fundamental nature of the problem doesn’t allow for a high degree of precision.
The author admits that FAIR is not precise. How can it even be accurate when the derived values are all based on subjective estimates anyway?
Some people just don’t like change – particularly change as profound as this represents.
I fail to see why FAIR is considered profound. Is it because the process has been broken into five estimates, from which several other values are derived? Why is this any better than articles like
How to Conduct a Risk Analysis or
Risk Analysis Tools: A Primer or
Risk Assessment and Threat Identification?
I'm sure this isn't the last word on this issue, but I need to rest before teaching tomorrow. Thank you for staying with me if you read the whole post. Obviously if I'm not a fan of FAIR I should propose an alternative. In
Risk-Based Security is the Emperor's New Clothes I cited Donn Parker, who is probably the devil to FAIR advocates. If the question is how to make security decisions by assessing digital risk, I will put together thoughts on that for a post (hopefully this week).
Incidentally, the fact that I am not a fan of FAIR doesn't mean I think the authors have wasted their time. I appreciate their attempt to bring rigor to this process. I also think the questions they ask and the elements they consider are important. However, I think the ability to insert whatever value one likes into the five estimates fatally wounds the process.
This is the bottom line for me: FAIR advocates claim their output is superior due to their framework. How can a framework that relies on arbitrary inputs produce non-arbitrary output? And what makes FAIR so valuable anyway -- has the result been tested against any other methods?