
Sunday, 04 March 2012

Keep CIRT and Internal Investigations Separate

A recent issue of the Economist featured an article titled Corporate fraud: Mind your language -- How linguistic software helps companies catch crooks. It offered the following excerpts:

To spot staff with the incentive to steal (over and above the obvious fact that money is quite useful), anti-fraud software scans e-mails for evidence of money troubles...

Ernst & Young (E&Y), a consultancy, offers software that purports to show an employee’s emotional state over time: spikes in trend-lines reading “confused”, “secretive” or “angry” help investigators know whose e-mail to check, and when. Other software can help firms find potential malefactors moronic enough to gripe online, says Jean-François Legault of Deloitte, another consultancy...

Dick Oehrle, the chief linguist on the project, explains how it works. First, the algorithm digests a big bundle of e-mails to get used to employees’ language. Then human lawyers code the same e-mails, sorting things as irrelevant, relevant or serious. The human feedback and the computers’ results are then reconciled, so the system gets smarter. Mr Oehrle says the lawyers also learn from the computers (presumably such things as empathy and the difference between right and wrong).

To find employees with the opportunity to steal, the software looks for what snoops call “out of band” events: messages such as “call my mobile” or “come by my office” suggest a desire to talk without being overheard. E-mails between an employee and an outsider that contain the words “beer”, “Facebook” or “evening” can suggest a personal relationship...

Employers without such technology are “operating blind”, says Alton Sizemore, a former fraud detective at America’s FBI... [N]early all giant financial firms now run anti-fraud linguistic software, but fewer than half of medium-sized or small financial firms do...

Prospective users typically pay for a single “snapshot” search of 12 months of company records, according to APEX Analytix, a developer of the software in Greensboro, North Carolina. For a company with 10,000 employees, this costs about $45,000. Unless a company is very small, evidence of fraud almost always surfaces, convincing clients to sign up for a yearly package that costs three or four times as much as a spot-check, says John Brocar of APEX Analytix.

Why spend the money... If a company shows it has systems in place to detect this kind of thing, and starts investigating before outsiders do, it may have an easier time in court.
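
Mechanically, what the vendors describe amounts to a phrase lexicon plus a human-in-the-loop labeling cycle. Here is a minimal sketch of that loop in Python; the phrase list, labels, and weighting scheme are my own invented illustration, not APEX's or E&Y's actual method.

    # Hypothetical sketch of the workflow the article describes: scan e-mail
    # for "out of band" phrases, then adjust scoring from human (lawyer) labels.
    # The phrase list and weights are invented for illustration only.
    from collections import Counter

    OUT_OF_BAND = ["call my mobile", "come by my office", "beer", "facebook", "evening"]

    def phrase_hits(message):
        """Return the suspicious phrases found in one message."""
        text = message.lower()
        return [p for p in OUT_OF_BAND if p in text]

    def score(message, weights):
        """Weight each hit by how often lawyers marked that phrase relevant."""
        return sum(weights[p] for p in phrase_hits(message))

    def reconcile(weights, message, lawyer_label):
        """Feedback step: labeled mail adjusts phrase weights, so the system 'gets smarter'."""
        delta = {"serious": 2, "relevant": 1, "irrelevant": -1}[lawyer_label]
        for p in phrase_hits(message):
            weights[p] += delta

    weights = Counter({p: 1 for p in OUT_OF_BAND})
    mail = "Don't put this in writing -- call my mobile tonight."
    reconcile(weights, mail, "serious")  # a lawyer codes this message as serious
    print(score(mail, weights))          # later mail with that phrase now scores 3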

When I read this story it reminded me of my advice to keep CIRT and Internal Investigations separate. Notice the repeated mention of "lawyers" in the Economist story. There is no reason for this sort of technology or responsibility to reside in the Computer Incident Response Team. CIRTs should focus on external threats. Internal Investigations should focus on internal threats, e.g. employees, contractors, and other authorized parties who may perform unauthorized activities. II should collaborate closely with legal and human resources and should not use CIRT tools or techniques. This separation of duties was invaluable when I ran GE-CIRT because we could reassure constituents that our analysts focused on bad guys outside the company, not our own users.

Friday, 18 December 2009

Notes from Tony Sager Keynote at SANS

I took a few notes at the SANS Incident Detection Summit keynote by Tony Sager last week. I thought you might like to see what I recorded.

All of the speakers made many interesting comments, but it was really only at the start of the second day, when Tony spoke, that I had time to write down some insights.

If you're not familiar with Tony, he is chief of the Vulnerability Analysis and Operations (VAO) Group in NSA.

  • These days, the US goes to war with its friends (i.e., allies fight with the US against a common adversary). However, the US doesn't know its friends until the day before the war, and not all of the US' friends like each other. These realities complicate information assurance.

  • Commanders have been trained to accept a certain level of error in physical space. They do not expect to know the exact number of bullets on hand before a battle, for example. However, they often expect to know exactly how many computers they have at hand, as well as their state. Commanders will need to develop a level of comfort with uncertainty.

  • Far too much information assurance is at the front line, where the burden rests with the least trained, least experienced, yet well-meaning people. Think of the soldier fresh from tech school responsible for "making it work" in the field. Hence Tony's emphasis on shifting the burden to vendors where possible.

  • "When nations compete, everybody cheats." [Note: this is another way to remember that with information assurance, the difference is the intelligent adversary.]

  • The bad guy's business model is more efficient than the good guy's business model. They are global, competitive, distributed, efficient, and agile. [My take on that is that financially-motivated computer criminals actually earn ROI from their activities because they are making money. Defenders are simply avoiding losses.]

  • The best way to defeat the adversary is to increase his cost, level of uncertainty, and exposure. Introducing these, especially uncertainty, causes the adversary to stop, wait, and rethink his activity.

  • Defenders can't afford perfection, and the definition changes by the minute anyway. [This is another form of the Defender's Dilemma -- what should we try to save, and what should we sacrifice? On the other hand we have the Intruder's Dilemma, which Aaron Walters calls the Persistence Paradox -- how to accomplish a mission that changes a system while remaining undetected.]

  • Our problems are currently characterized more by coordination and knowledge management issues, and less by technical ones.

  • Human-to-human contact doesn't scale. Neither does narrative text. Hence Tony's promotion of standards-based communication.
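
To illustrate that last point: a narrative sentence like "we saw odd traffic from some host yesterday" cannot be parsed, exchanged, or aggregated by machines, but a structured record can. A minimal sketch follows; the schema is invented for illustration, not a real standard like IODEF.

    # Structured indicator exchange instead of narrative text. The field
    # names below are invented; the point is that machines can parse,
    # route, and count these records without a human reading prose.
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class Indicator:
        observed: str   # ISO 8601 timestamp
        src_ip: str
        dst_port: int
        protocol: str
        note: str       # free text survives, but as one field among many

    report = Indicator(
        observed="2009-12-14T03:21:00Z",
        src_ip="192.0.2.10",
        dst_port=443,
        protocol="tcp",
        note="repeated beaconing at 60-second intervals",
    )
    print(json.dumps(asdict(report)))  # ready for any team or tool to consume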


Thanks again to Tony and our day one keynote Ron Gula!

Saturday, 26 January 2008

Corporate Digital Responsibility

I've started listening to the Economist Audio Edition on my iPod while running. Last week I listened to a special report on Corporate Social Responsibility. I was struck by the language used and issues discussed in the report. Here are a few excerpts.

First, from Just good business:

Why the boom [in CSR initiatives]? For a number of reasons, companies are having to work harder to protect their reputation — and, by extension, the environment in which they do business...

CSR is now made up of three broad layers, one on top of the other. The most basic is traditional corporate philanthropy... [T]he second layer of CSR... is a branch of risk management... So, often belatedly, companies respond by trying to manage the risks. They talk to NGOs and to governments, create codes of conduct and commit themselves to more transparency in their operations. Increasingly, too, they get together with their competitors in the same industry in an effort to set common rules, spread the risk and shape opinion.

All this is largely defensive, but companies like to stress that there are also opportunities to be had for those that get ahead of the game. The emphasis on opportunity is the third and trendiest layer of CSR: the idea that it can help to create value...

That is just the sort of thing chief executives like to hear... Businesses have eagerly adopted the jargon of “embedding” CSR in the core of their operations, making it “part of the corporate DNA” so that it influences decisions across the company.

With a few interesting exceptions, the rhetoric falls well short of the reality.


Next, from The next question: Does CSR work?:

Three years ago a special report in The Economist acknowledged, with regret, that the CSR movement had won the battle of ideas. In the survey by the Economist Intelligence Unit for this report, only 4% of respondents thought that CSR was “a waste of time and money”. Clearly CSR has arrived...

[In one sense], the best form of corporate responsibility boils down to enlightened self-interest. And the more that firms embracing it are seen to be successful — through astutely managing risks and recognising opportunities — the more enlightened their leaders will be perceived to be. But do such policies really help to bring success? If not, the whole CSR industry has a problem. If people are no longer asking “whether” but “how”, in future they will increasingly want to know “how well”. Is CSR adding value to the business?

At present few companies would be able to tell. CSR decisions rely more on instinct than on evidence. But a measurement industry of sorts is springing up. Many big firms now publish their own sustainability reports, full of targets and commitments. The Global Reporting Initiative, based in Amsterdam, aspires to provide an international standard, with 79 indicators that it encourages companies to use. This may be a useful starting point, but critics say it often amounts to little more than box-ticking; worse, it can provide a cover for poor performers...


From A stitch in time: How companies manage risks to their reputation:

Business leaders embrace corporate responsibility for a number of reasons... For some, though, it is public embarrassment and lawsuits that concentrate the mind... Trouble seems to come in waves, pounding industry after industry, each time for a different reason... Most of the rhetoric on CSR may be about doing the right thing and trumping competitors, but much of the reality is plain risk management. It involves limiting the damage to the brand and the bottom line that can be inflicted by a bad press and consumer boycotts, as well as dealing with the threat of legal action...

Time and again companies fail to see the problems coming. Only once they have had to deal with, say, a lawsuit or strong public pressure do they start to change their thinking...

For the moment, though, the biggest problem many companies have to deal with is something that has sprung from rapid globalisation. It is the risks associated with managing supply chains that spread around the world, stretching deep into China, India and elsewhere...

Firms can set standards of behaviour for suppliers, but they do not find it easy to enforce them... So inspection regimes are set to intensify, at a time when audit fatigue has already become a problem for suppliers...

Each industry has its own specific issues, but there are some common themes in how firms are approaching the risk-management side of CSR. One is to put in place proper systems for monitoring risk across the supply chain, including listing who the suppliers are, having well-established channels of communicating with them and auditing their compliance with ethics codes. Basic as it sounds, even many big companies fail to do this...

Beyond the basics, prudent companies include a CSR perspective when considering new projects...

Novo Nordisk, a Danish company that supplies a big share of the world's insulin, has written the “triple bottom line” — that is, striving to act in a financially, environmentally and socially responsible way — into its articles of association...


Finally, from Do it right:

One way of looking at CSR is that it is part of what businesses need to do to keep up with (or, if possible, stay slightly ahead of) society's fast-changing expectations. It is an aspect of taking care of a company's reputation, managing its risks and gaining a competitive edge. This is what good managers ought to do anyway. Doing it well may simply involve a clearer focus and greater effort than in the past, because information now spreads much more quickly and companies feel the heat...

If it is nothing more than good business practice, is there any point in singling out corporate social responsibility as something distinctive? Strangely, perhaps there is, at least for now. If it helps businesses look outwards more than they otherwise would and to think imaginatively about the risks and opportunities they face, it is probably worth doing. This is why some financial analysts think that looking at the quality of a company's CSR policy may be a useful pointer to the quality of its management more generally...

[I]n a growing number of companies CSR goes deeper than that and comes closer to being “embedded” in the business, influencing decisions on everything from sourcing to strategy. These may also be the places where talented people will most want to work.

The more this happens, ironically, the more the days of CSR may start to seem numbered. In time it will simply be the way business is done in the 21st century. “My job is to design myself out of a job,” says one company's head of corporate responsibility...


Is it obvious by now that you could replace CSR in all of these cases with "digital security"? Is it now time for a "quadruple bottom line" -- "striving to act in a financially, environmentally, socially, and digitally responsible way"?

We in the digital security field need to talk to these CSR people and figure out how they are making progress. We share almost exactly the same goals but they are winning the battle of ideas. In digital security, too many companies "fail to see the problems coming. Only once they have had to deal with, say, a lawsuit or strong public pressure do they start to change their thinking."

Note: Prior to this blog post, the only mention of "corporate digital responsibility" I could find via Google is an SEC filing for Bank Bradesco.

Thursday, 10 January 2008

Defensible Network Architecture 2.0

Four years ago when I wrote The Tao of Network Security Monitoring I introduced the term defensible network architecture. I expanded on the concept in my second book, Extrusion Detection. When I first presented the idea, I said that a defensible network is an information architecture that is monitored, controlled, minimized, and current. In my opinion, a defensible network architecture gives you the best chance to resist intrusion, since perfect intrusion prevention is impossible.

I'd like to expand on that idea with Defensible Network Architecture 2.0. I believe these themes would be suitable for a strategic, multi-year program at any organization that commits itself to better security. You may notice the contrast with the Self-Defeating Network and the similarities to my Security Operations Fundamentals. I roughly order the elements from least to most likely to encounter resistance from stakeholders.

A Defensible Network Architecture is an information architecture that is:

  1. Monitored. The easiest and cheapest way to begin developing DNA on an existing enterprise is to deploy Network Security Monitoring sensors capturing session data (at an absolute minimum), full content data (if you can get it), and statistical data. If you can access other data sources, like firewall/router/IPS/DNS/proxy/whatever logs, begin working that angle too. Save the tougher data types (those that require reconfiguring assets and buying mammoth databases) until much later. This needs to be a quick win with the data in the hands of a small, centralized group. You should always start by monitoring first, as Bruce Schneier proclaimed so well in 2001. (A bare-bones sketch of session-data collection appears after this list.)

  2. Inventoried. This means knowing what you host on your network. If you've started monitoring you can acquire a lot of this information passively. This is new to DNA 2.0 because I previously assumed it would already have been done. Fat chance!

  3. Controlled. Now that you know how your network is operating and what is on it, you can start implementing network-based controls. Take this any way you wish -- ingress filtering, egress filtering, network admission control, network access control, proxy connections, and so on. The idea is that you transition from an "anything goes" network to one where the activity is authorized in advance, if possible. This step marks the first time stakeholders might start complaining.

  4. Claimed. Now you are really going to reach out and touch a stakeholder. Claimed means identifying asset owners and developing policies, procedures, and plans for the operation of that asset. Feel free to swap this item with the previous. In my experience it is usually easier to start introducing control before making people take ownership of systems. This step is a prerequisite for performing incident response. We can detect intrusions in the first step. We can only work with an asset owner to respond when we know who owns the asset and how we can contain and recover it.

  5. Minimized. This step is the first to directly impact the configuration and posture of assets. Here we work with stakeholders to reduce the attack surface of their network devices. You can apply this idea to clients, servers, applications, network links, and so on. By reducing attack surface area you improve your ability to perform all of the other steps, but you can't really implement minimization until you know who owns what.

  6. Assessed. This is a vulnerability assessment process to identify weaknesses in assets. You could easily place this step before minimization. Some might argue that it pays to begin with an assessment, but the first question is going to be: "What do we assess?" I think it might be easier to start disabling unnecessary services first, but you may not know what's running on the machines without assessing them. Also consider performing an adversary simulation to test your overall security operations. Assessment is the step where you decide if what you've done so far is making any difference.

  7. Current. Current means keeping your assets configured and patched such that they can resist known attacks by addressing known vulnerabilities. It's easy to disable functionality no one needs. However, upgrades can sometimes break applications. That's why this step is last. It's the final piece in DNA 2.0.
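
As promised under the first step, here is a bare-bones sketch of session-data collection. It assumes the scapy library and capture privileges; a real NSM sensor does far more, so treat this as an illustration of the data type, not a deployable tool.

    # Summarize live traffic into session records: 5-tuple keys with packet
    # and byte counters. Assumes scapy is installed and the script runs
    # with capture privileges. Illustration only.
    from collections import defaultdict
    from scapy.all import sniff, IP, TCP, UDP

    sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def record(pkt):
        if not pkt.haslayer(IP):
            return
        l4 = TCP if pkt.haslayer(TCP) else UDP if pkt.haslayer(UDP) else None
        sport, dport = (pkt[l4].sport, pkt[l4].dport) if l4 else (0, 0)
        key = (pkt[IP].src, sport, pkt[IP].dst, dport, pkt[IP].proto)
        sessions[key]["packets"] += 1
        sessions[key]["bytes"] += len(pkt)

    sniff(prn=record, count=500)  # capture 500 packets, then report

    for key, stats in sorted(sessions.items(), key=lambda kv: -kv[1]["bytes"]):
        print(key, stats)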


So, there's DNA 2.0 -- MICCMAC (pronounced "mick-mack"). You may notice the Federal government is adopting parts of this approach, as mentioned in my post Feds Plan to Reduce, then Monitor. I prefer to at least get some monitoring going first, since even incomplete instrumentation tells you what is happening. Minimization based on opinion instead of fact is likely to be ugly.

Did I miss anything?

Tuesday, 18 December 2007

Does Failure Sell?

I often find myself in situations trying to explain the value of Network Security Monitoring (NSM). This very short fictional conversation explains what I mean. This exchange did not happen but I like to contemplate these sorts of dialogues.

NSM Advocate: I recommend deploying network-based sensors to collect data using NSM principles. I will work with our internal business units to select network gateways most likely to yield significant traffic. I will build the sensors using open source software on commodity hardware, recycled from other projects if need be.

Manager: Why do we need this?

NSM Advocate: Do you believe all of your defensive measures are 100% effective?

Manager: No. (This indicates a smart manager. Answering Yes would result in a line of reasoning on why Prevention Eventually Fails.)

NSM Advocate: Do you want to know when your defensive measures fail?

Manager: Yes. (This also indicates a smart manager. Answering No would result in a line of reasoning on why ignorance is not bliss.)

NSM Advocate: NSM will tell us when we fail. NSM sensors are the highest impact, least cost way to obtain network situational awareness. NSM methodologies can guide and validate preventative measures, transform detection into an actionable process, and enable rapid, low-cost response.

Manager: Why can't I buy this?

NSM Advocate: Some mainstream vendors are realizing a market exists for this sort of data, and they are making some impact with new products. If we had the budget I might propose acquiring a commercial solution. For the moment I recommend pursuing the do-it-yourself approach, with transition to a commercial solution if funding and product capabilities materialize.

Manager: Go forth and let your sensors multiply.


Now you know that it's fiction.

Notice the crux of the argument is here: Do you believe all of your defensive measures are 100% effective? As a statement, one would say Because prevention eventually fails, you should have a means to identify intrusions and expedite remediation. A manager hearing that statement is likely to respond like this.

Manager: Do you mean to tell me that all of the money I've spent on firewalls, intrusion prevention systems, anti-virus, network access control, etc., is wasted?

NSM Advocate: That money is not wasted. It's narrowed the problem space, but it hasn't eliminated the problem.

This is a tough argument to accept. When I worked at Foundstone the company sold a vulnerability management product. Foundstone would say "buy our product and you will be secure!" I worked for the incident response team. We would say "...and when you still get owned, call us." Which aspect of the business do you think made more money, got more attention, and received more company support? That's an easy question. How is a salesperson supposed to look a prospect in the eye and say "You're going to lose. What are you going to do about it?"

Many businesses are waking up to the fact that they've spent millions of dollars on preventative measures and they still lose. No one likes to be a loser. The fact of the matter is that winning cannot be defined as zero intrusions. Risk mitigation does not mean risk elimination. Winning has to be defined using the words I used to explain risk in my first book:

Security is the process of maintaining an acceptable level of perceived risk.

This definition does not eliminate intrusions from the enterprise. It does leave an uncomfortable amount of interpretation for the "acceptable level" aspect. You may have noticed that most of the managers one might consider successful are usually self-described or outwardly praised as being risk-takers. On the other side of the equation we have security professionals, most of whom I would label as risk-avoiders.

The source escapes me now, but a recent security magazine article observed that those closest to the hands-on aspects of security rated their companies as being the least secure. Assessments of company security improved the farther one was removed from day-to-day operations, such that the CIO and those above were much more positive about the company's security outlook. The major factor in this equation is probably the separation between the corner office and the cubicle, but another could be the acceptable level of risk for the parties involved. When a CIO or CEO is juggling market risk, credit risk, geo-political risk, legal risk, and other worries, digital risk is just another item in the portfolio.

The difference between digital risk and many of the other risk types is that the consequences can be tough to identify. In fact, the more serious the impact, the less likely you are to discover the intrusion.

How is that possible? What causes more damage: a DDoS attack that everyone notices because "the network is slow," or a stealthy economic competitor whose entire reason in life is to avoid detection while stealing data?

Without evidence to answer the question "are you secure?", managers practice management and defense by belief instead of management and defense by fact.

Wednesday, 12 December 2007

Incident Severity Ratings

Much of digital security focuses on pre-compromise activities. Not as much attention is paid to what happens once your defenses fail. My friend Bamm brought this to my attention when he discussed the problem of rating the severity of an incident. He was having trouble explaining to his management the impact of an intrusion, so he asked if I had given any thought to the issue.

What follows is my attempt to apply a framework to the problem. If anyone wants to point me to existing work, please feel free. This is not an attempt to put a flag in the ground. We're trying to figure out how to talk about post-compromise activities in a world where scoring vulnerabilities receives far more attention.

This is a list of factors which influence the severity of an incident. It is written mainly from the intrusion standpoint. In other words, an unauthorized party is somehow interacting with your asset. I have ordered the options under each category such that the top item in each sub-list is considered worst, and the bottom best. Since this is a work in progress I put question marks in many of the sub-lists.

  1. Level of Control


    • Domain or network-wide SYSTEM/Administrator/root

    • Local SYSTEM/Administrator/root

    • Privileged user (but not SYSTEM/Administrator/root)

    • User

    • None?


  2. Level of Interaction


    • Shell

    • API

    • Application commands

    • None?


  3. Nature of Contact


    • Persistent and continuous

    • On-demand

    • Re-exploitation required

    • Misconfiguration required

    • None?


  4. Reach of Victim


    • Entire enterprise

    • Specific zones

    • Local segment only

    • Host only


  5. Nature of Victim Data


    • Exceptionally grave damage if destroyed/altered/disclosed

    • Grave damage if destroyed/altered/disclosed

    • Some damage if destroyed/altered/disclosed

    • No damage if destroyed/altered/disclosed


  6. Degree of Friendly External Control of Victim


    • None; host has free Internet access inbound and outbound

    • Some external control of access

    • Comprehensive external control of access


  7. Host Vulnerability (for purposes of future re-exploitation)


    • Numerous severe vulnerabilities

    • Moderate vulnerability

    • Little to no vulnerability


  8. Friendly Visibility of Victim


    • No monitoring of network traffic or host logs

    • Only network or host logging (not both)

    • Comprehensive network and host visibility


  9. Threat Assessment


    • Highly skilled and motivated, or structured threat

    • Moderately skilled and motivated, or semi-structured threat

    • Low skilled and motivated, or unstructured threat


  10. Business Impact (from continuity of operations plan)


    • High

    • Medium

    • Low


  11. Onsite Support


    • None

    • First level technical support present

    • Skilled operator onsite



Based on this framework, I would be most worried about the following -- stated very bluntly so you see all eleven categories: I worry about an incident where the intruder has SYSTEM control, with a shell, that is persistent, on a host that can reach the entire enterprise, on a host with very valuable data, with unfettered Internet access, on a host with lots of serious holes, and I can't see the host's logs or traffic, and the intruder is a foreign intel service, and the host is a high biz impact system, and no one is on site to help me.
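
To show how the framework might be operationalized, here is a sketch that encodes each category as an ordered list (worst option first) and averages normalized picks into a single score. The equal weighting and the 0-to-1 scale are my invented simplifications, not a vetted model.

    # Hypothetical severity scoring over the framework above. Each category
    # lists its options worst-first; an incident picks one option per
    # category. Equal weighting is an invented simplification.
    CATEGORIES = {
        "level_of_control": [
            "domain-wide SYSTEM/Administrator/root",
            "local SYSTEM/Administrator/root",
            "privileged user",
            "user",
            "none",
        ],
        "level_of_interaction": ["shell", "API", "application commands", "none"],
        "nature_of_contact": [
            "persistent and continuous",
            "on-demand",
            "re-exploitation required",
            "misconfiguration required",
            "none",
        ],
        "reach_of_victim": ["entire enterprise", "specific zones",
                            "local segment only", "host only"],
        # ...the remaining seven categories follow the same worst-first shape.
    }

    def severity(picks):
        """Average normalized badness (1.0 = worst) across supplied categories."""
        total = 0.0
        for category, choice in picks.items():
            options = CATEGORIES[category]
            rank = options.index(choice)            # 0 = worst option
            total += 1 - rank / (len(options) - 1)  # 1.0 worst, 0.0 best
        return total / len(picks)

    print(severity({
        "level_of_control": "domain-wide SYSTEM/Administrator/root",
        "level_of_interaction": "shell",
        "nature_of_contact": "persistent and continuous",
        "reach_of_victim": "entire enterprise",
    }))  # -> 1.0, the worst case described above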

What do you think?

Monday, 26 November 2007

Controls Are Not the Solution to Our Problem

If you recognize the inspiration for this post title and graphic, you'll understand my ultimate goal. If not, let me start by saying this post is an expansion of ideas presented in a previous post with the succinct and catchy title Control-Compliant vs Field-Assessed Security.

In brief, too many organizations, regulators, and government agencies waste precious time and resources devising and auditing "controls," regardless of the effect these controls have or do not have on security. They are far too input-centric; they should become more output-aware. They obsess over recording conditions they believe may be helpful while remaining ignorant of the "score of the game." They practice management by belief and disregard management by fact.

Let me provide a few examples from one of the canonical texts used by the control-compliant crowd: NIST Special Publication 800-53: Recommended Security Controls for Federal Information Systems (.pdf). The following is an example of a control, taken from page 140.

SI-3 MALICIOUS CODE PROTECTION


Control: The information system implements malicious code protection.

Supplemental Guidance: The organization employs malicious code protection mechanisms at critical information system entry and exit points (e.g., firewalls, electronic mail servers, web servers, proxy servers, remote-access servers) and at workstations, servers, or mobile computing devices on the network. The organization uses the malicious code protection mechanisms to detect and eradicate malicious code (e.g., viruses, worms, Trojan horses, spyware) transported: (i) by electronic mail, electronic mail attachments, Internet accesses, removable media (e.g., USB devices, diskettes or compact disks), or other common means; or (ii) by exploiting information system vulnerabilities. The organization updates malicious code protection mechanisms (including the latest virus definitions) whenever new releases are available in accordance with organizational configuration management policy and procedures. The organization considers using malicious code protection software products from multiple vendors (e.g., using one vendor for boundary devices and servers and another vendor for workstations). The organization also considers the receipt of false positives during malicious code detection and eradication and the resulting potential impact on the availability of the information system. NIST Special Publication 800-83 provides guidance on implementing malicious code protection.

Control Enhancements:
(1) The organization centrally manages malicious code protection mechanisms.
(2) The information system automatically updates malicious code protection mechanisms.


At first read one might reasonably respond by saying "What's wrong with that? This control advocates implementing anti-virus and related anti-malware software." Think more clearly about this issue and several problems appear.

  • Adding anti-virus products can introduce additional vulnerabilities that would not have been exposed had the systems not run anti-virus. Consider my post Example of Security Product Introducing Vulnerabilities if you need examples. In short: add anti-virus, be compromised.

  • Achieving compliance may cost more than potential damage. How many times have you heard a Unix administrator complain that he/she has to purchase an anti-virus product for his/her Unix server simply to be compliant with a control like this? The potential for a Unix server (not Mac OS X) to be damaged by a user opening an email through a client while logged on to the server (a very popular exploitation vector on a Windows XP box) is practically nil.

  • Does this actually work? This is the question that no one asks. Does it really matter if your system is running anti-virus software? Did you know that intruders (especially high-end ones most likely to selectively, stealthily target the very .gov and .mil systems required to be compliant with this control) test their malware against a battery of anti-virus products to ensure their code wins? Are weekly updates superior to daily updates? Daily to hourly?


The purpose of this post is to tentatively propose an alternative approach. I called this "field-assessed" in contrast to "control-compliant." Some people prefer the term "results-based." Whatever you call it, the idea is to direct attention away from inputs and devote more energy to outputs. As far as mandating inputs (like every device must run anti-virus), I say that is a waste of time and resources.

I recommend taking measurements to determine your enterprise "score of the game," and use that information to decide what you need to do differently. I'm not suggesting abandoning efforts to prevent intrusions (i.e., "inputs.") Rather, don't think your security responsibilities end when the bottle is broken against the bow of the ship and it slides into the sea. You've got to keep watching to see if it sinks, if pirates attack, how the lifeboats handle rough seas, and so forth.

These are a few ideas.

  1. Standard client build client-side survival test. Create multiple sacrificial systems with your standard build. Deploy a client-side testing solution on them, like a honeyclient. (See The Sting for a recent story.) Vary your defensive posture. Measure how long it takes for your standard build to be compromised by in-the-wild Web sites, spam, and other communications with the outside world.

  2. Standard client build server-side survival test. Create multiple sacrificial systems with your standard build. Deploy them as a honeynet. Vary your defensive posture. Measure how long it takes for your standard build to be compromised by malicious external traffic from the outside world -- or better yet -- from your internal network.

  3. Standard client build client-side penetration test. Create multiple sacrificial systems with your standard build. Conduct my recommended penetration testing activities and time the results.

  4. Standard client build server-side penetration test. Repeat number 3 with a server-side flavor.

  5. Standard server build server-side penetration test. Repeat number 3 against your server build with a server-side flavor. I hope you don't have users operating servers as if they were clients (i.e., browsing the Web, reading email, and so forth.) If you do, repeat this step and do a client-side pen test too.

  6. Deploy low-interactive honeynets and sinkhole routers in your internal network. These low-interaction systems provide a means to get some indications of what might be happening inside your network. If you think deploying these on the external network might reveal indications of targeted attacks, try that. (I doubt it will be that useful due to the overall attack noise, but who knows?)

  7. Conduct automated, sampled client host integrity assessments. Select a statistically valid subset of your clients and check them using multiple automated tools (malware/rootkit/etc. checkers) for indications of compromise.

  8. Conduct automated, sampled server host integrity assessments. Self-explanatory.

  9. Conduct manual, sampled client host integrity assessments. These are deep-dives of individual systems. You can think of it as an incident response where you have not had indication of an incident yet. Remote IR tools can be helpful here. If you are really hard-core and you have the time, resources, and cooperation, do offline analysis of the hard drive.

  10. Conduct manual, sampled server host integrity assessments. Self-explanatory.

  11. Conduct automated, sampled network host activity assessments. I questioned adding this step here, since you should probably always be doing this. Sometimes it can be difficult to find the time to review the results, however automated the data collection. The idea is to let your NSM system flag any traffic that is out of the ordinary, based on algorithms you provide.

  12. Conduct manual, sampled network host activity assessments. This method is more likely to produce results. Here a skilled analyst performs deep individual analysis of traffic on a sample of machines (client and server, separately) to see if any indications of compromise appear.


In all of these cases, trend your measurements over time to determine whether you see improvements when you alter an input. I know some of you might complain that you can't expect to have consistent output when the threat landscape is constantly changing. I really don't care, and neither does your CEO or manager!
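
As a toy example of trending an output, the sketch below compares mean time-to-compromise for sacrificial hosts before and after changing one input; every number is fabricated for illustration.

    # Toy trend comparison: hours until a sacrificial standard-build host
    # is compromised, sampled before and after altering one input (here, a
    # hypothetical egress filtering change). All numbers are invented.
    from statistics import mean, stdev

    survival_hours = {
        "baseline posture":      [6.0, 9.5, 4.0, 12.0, 7.5],
        "with egress filtering": [14.0, 22.5, 9.0, 18.0, 16.5],
    }

    for posture, hours in survival_hours.items():
        print(f"{posture}: mean {mean(hours):.1f}h, "
              f"stdev {stdev(hours):.1f}h, n={len(hours)}")
    # If repeated rounds show no movement in the output metric, the input
    # change was not worth its cost.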

I offer two recommendations:

  • Remember Andy Jaquith's criteria for good metrics, simplified here.


    1. Measure consistently.

    2. Make them cheap to measure. (Sorry Andy, my manual tests violate this!)

    3. Use compound metrics.

    4. Be actionable.


  • Don't slip into thinking of inputs. Don't measure how many hosts are running anti-virus. We want to measure outputs. We are not proposing new controls.


Controls are not the solution to our problem. Controls are the problem. They divert too much time, resources, and attention from endeavors which do make a difference. If the indications I am receiving from readers and friends are true, the ideas in this post are gaining traction. Do you have other ideas?

Monday, 05 November 2007

Deflect Silver Bullets

That's quite an image, isn't it? It's ISS CEO Tom Noonan holding a silver bullet, announcing the Proventia IPS product in the October 2003 issue of ISS' Connect magazine. Raise your hand if you think IPS or anything else ISS has produced is a silver bullet. No takers?

I don't mention this to criticize ISS, specifically. Rather, I'd like to emphasize the importance of proper frames of reference when considering security.

Maybe this story will help explain my point. In the early 1990s, as a cadet at USAFA, I took at least 14 technical classes, including math, science, and engineering subjects. These core classes are the reason every cadet graduates with a BS and not a BA, regardless of the field of study. Remember, I was a history and political science double major, preparing for a career in Air Force intelligence. One of my fellow history majors asked our astronautical engineering professor why we had to sit through his class. I still remember his answer:

One day you'll meet with a defense contractor trying to sell you a new satellite system. He'll promise the world, saying things like "We can park that satellite right over Moscow in geosynchronous orbit to provide you imagery."

When you hear that I want you to ask "How is that possible? What is going to keep the satellite there?"

I want you to know how to think properly about that problem, even though you may have forgotten all the details by then.


(For those of you who forget your astronautical engineering, it's not possible to park a satellite in geosynchronous orbit anywhere except the equator, unless you're taking extreme measures to actively keep the device in place beyond what's required for normal station-keeping.)

I find that many of those performing digital security work, most generic IT managers, and nearly all nontechnical managers do not know how to think about security properly. They think it's possible to park a satellite over Moscow, Russia as easily as Quito, Ecuador. They have no conceptual framework for digital security. They are looking for digital security silver bullets even though no analog silver bullet has ever killed the pirates, petty bandits, organized criminals, foreign intelligence services, or any of the other threats who have plagued humanity for hundreds of years.

Sloppy thinking is our greatest vulnerability. Forget about user education; I recommend management education. Deflect silver bullets.

Wednesday, 31 October 2007

A Plea to the Worthies

You may have seen stories like Cybersecurity Experts Collaborate with subtitles like A think tank has tapped several heavyweight security experts to staff a commission that will advise the president. That story continues:

The Center for Strategic and International Studies (CSIS) wants the commission to come up with a list of recommendations that the new president who takes office in January 2009 "can pick up and run with right away," said James Lewis, director of the CSIS Technology and Public Policy Program. The commission, made up of 32 cybersecurity experts, plans to finish its work by the end of 2008.

I am fairly confident that nothing of value will come from this group, but there is one task which could completely reverse my opinion. Rather than wasting time on recommendations that will probably be ignored, how about taking a step in a direction that will have real impact: security metrics. That's right. Spend the first day (or two, if you are a slow reader or can't sit still for long periods) reading Andy Jaquith's book. Next, and this is the crucial part:

Figure out how to play and score the game before you pretend you can improve the score.

What does this mean? Just a few ideas include:

  • Propose definitions for security, risk, threat, vulnerability, inside threat, external threat, and all the other words we use yet upon which we never agree. Hold hearings and invite real security people (not just digital security people) to express their views.

  • Propose some metrics and see how other operations define success. Hold hearings on the results of that process.

  • Apply metrics to some real organizations and gain a baseline set of numbers. Repeat the process at determined time intervals. Try to identify correlations and if possible causations. Be anonymous if necessary, but use a real methodology and not the self-selection applied by CSI/FBI and others.


Do you see where I am going here? At the end of the process we could have a framework for seeing just what is happening. I defy anyone to tell me just how bad or good our digital security situation is right now. Some say the sky is falling, others say we're happy! happy!, others say we're just as secure as we need to be to continue limping along. It is a proper role for a panel of worthies to help figure out how the game is played and then what the score is. It is a waste of time to make recommendations before those basic steps have been taken.

Monday, 29 October 2007

Wake Up Corporate America

I am constantly hammered for downplaying the "inside threat" and focusing on external attackers. Several months ago I noted the Month of Owned Corporations as an example of enterprises demonstrating security failures exploited by outsiders. Thanks to Bots Rise in the Enterprise, it appears the external threat is finally getting more attention:

Who says bots are just for home PCs? Turns out bot infections in the enterprise may be more widespread than originally thought.

Botnet operators traditionally have recruited "soft" targets -- home users with little or no security -- and the assumption was that the more heavily fortressed enterprise was mostly immune. But incident response teams and security researchers on the front lines say they are witnessing significant bot activity in enterprises as well...

Rick Wesson, CEO of Support Intelligence, says the rate of botnet infection in the enterprise isn't necessarily increasing -- it just hasn't been explored in detail until recently. "What's changing is the perception. It's been underestimated, underreported, and underanalyzed," Wesson says. "Corporate America is in as bad shape as a user at home."

Wesson says his firm, which does security monitoring, instantly finds dozens of bot-infected client machines in an enterprise customer's network when it starts studying its traffic. "We find dozens of bot-compromised systems off the bat. The longer we stay in [there], the more we find."
(emphasis added)

Wake up, corporate America (and the world). When you open your eyes you're not going to like what you see, but dealing with the truth is better than pretending everything's ok.

Wednesday, 24 October 2007

Are You Secure? Prove It.

Are you secure? Prove it. These five words form the core of my recent thinking on the digital security scene. Let me expand "secure" to mean the definition I provided in my first book: Security is the process of maintaining an acceptable level of perceived risk. I defined risk as the probability of suffering harm or loss. You could expand my five word question into are you operating a process that maintains an acceptable level of perceived risk?

Let's review some of the answers you might hear to this question. I'll give an opinion regarding the utility of the answer as well.

For the purpose of this exercise let's assume it is possible to answer "yes" to this question. In other words, we just don't answer "no." We could all make arguments as to why it's impossible to be secure, but does that really mean there is no acceptable level of perceived risk in which you could operate? I doubt it.

So, are you secure? Prove it.

  1. Yes. Then, crickets (i.e., silence for you non-imaginative folks.) This is completely unacceptable. The failure to provide any kind of proof is security by belief. We want security by fact.

  2. Yes, we have product X, Y, Z, etc. deployed. This is better, but it's another expression of belief and not fact. The only fact here is that technologies can be abused, subverted, and broken. Technologies can be simultaneously effective against one attack model and completely worthless against another.

  3. Yes, we are compliant with regulation X. Regulatory compliance is usually a check-box paperwork exercise whose controls lag the attack models of the day by one to five years, if not more. Calling a compliant enterprise secure is like feeling an ocean liner is secure because it left dry dock with life boats and jackets. If regulatory compliance is more than a paperwork self-survey, we approach the realm of real evidence. However, I have not seen any compliance assessments which measure anything of operational relevance.

  4. Yes, we have logs indicating we prevented attacks X, Y, and Z. This is getting close to the right answer, but it's still inadequate. For the first time we have some real evidence (logs) but these will probably not provide the whole picture. Sure, logs indicate what was stopped, but what about activities that were allowed? Were they all normal, or were some malicious but unrecognized by the preventative mechanism?

  5. Yes, we do not have any indications that our systems are acting outside their expected usage patterns. Some would call this rationale the definition of security. Whether or not this answer is acceptable depends on the nature of the indications. If you have no indications because you are not monitoring anything, then this excuse is hollow. If you have no indications and you comprehensively track the state of an asset, then we are making real progress. That leads to the penultimate answer, which is very close to ideal.

  6. Yes, we do not have any indications that our systems are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations. This is really close to the correct answer. The absence of indications of intrusion is only significant if you have some assurance that you've properly instrumented and understood the asset. You must have trustworthy monitoring systems in order to trust that an asset is "secure." If this is really close, why isn't it correct?

  7. Yes, we do not have any indications that our systems are acting outside their expected usage patterns, and we thoroughly collect, analyze, and escalate a variety of network-, host-, and memory-based evidence for signs of violations. We regularly test our detection and response people, processes, and tools against external adversary simulations that match or exceed the capabilities and intentions of the parties attacking our enterprise (i.e., the threat). Here you see the reason why number 6 was insufficient. If you assumed that number 6 was ok, you forgot to ensure that your operations were up to the task of detecting and responding to intrusions. Periodically you must benchmark your perceived effectiveness against a neutral third party in an operational exercise (a "red team" event). A final assumption inherent in all seven answers is that you know the assets you are trying to secure, which is no mean feat.


Incidentally, this post explains why deploying a so-called IPS does nothing for ensuring "security." Of course, you can demonstrate that it blocked attacks X, Y, and Z. But, how can you be sure it didn't miss something?

If you want to spend the least amount of money to take the biggest step towards Magnificent Number 7, you should implement Network Security Monitoring.

Wednesday, 10 October 2007

Alternatives to "Expert Opinions"

If you read The Doomsday Clock you probably recognize I have a dim opinion of "expert opinion," especially by committee. At the risk of making a political statement, I rank expert opinion alongside central planning as some of the worst ways to make decisions -- at least where a large amount of complexity must be accommodated.

What is my alternative? I believe free markets are the best way to synthesize competing data points to produce an assessment. Does this sound familiar? If yes, you may be thinking of this 2003 story: The Case for Terrorism Futures:

Critics blasted policy-makers Tuesday for dropping a controversial plan to create a futures market to help predict terrorist strikes...

[S]upporters of the project point out that gathering intelligence is often a messy business, with payoffs to unsavory characters and the elimination of potential adversaries. The futures market, ugly as it may sound, doesn't involve any of those moral compromises, said Robin Hanson, one of the earlier promoters of the concept of trading floors for ideas and a PAM [Policy Analysis Market] project contributor. It's just a way of capturing people's collective wisdom...

Projects similar to PAM, like the Iowa Electronic Markets, which speculate on election results, have been surprisingly reliable indicators of what's going to happen next...

The price of orange juice futures has even been shown to accurately predict the weather...

Traders on the Hollywood Stock Exchange last year correctly picked 35 of the 40 Oscar nominees in the eight biggest categories, according to The New Yorker magazine...

"Market mechanisms are more accurate than asking people their opinions because they're putting their money or reputation on the line," said Ken Killitz of the Foresight Exchange, which speculates on everything from the future of human cloning to the possibility that Roman Catholic priests will be allowed to marry. "It gives people an incentive to reveal what they know..."

[E]xchanges "tend to predict events really well when no one person knows the answer -- when information is distributed among many people with different knowledge bases," said Joyce Berg, a University of Iowa professor who helped organize the political trading floors...

Markets also bring together people with information about a particular subject in a way blue-ribbon panels of experts can't, added Hanson.

"You get people that know things about a subject, but don't have the credentials to say so," he said. "You get people who live in these areas (of the Middle East)."

There's also "less of an ability to spin" in markets than in policy debates, Hanson noted. "So you get what people actually think, not what they say."


I love this idea. The fact that intellectual pygmies in the Senate defeated it is a real shame.

I found many interesting articles on this subject by Robin D. Hanson from George Mason University and Oxford's Future of Humanity Institute; the latter offers a Global Catastrophic Risks program that is probably more interesting (but less marketing-savvy) than the Doomsday Clock.

If you're sufficiently motivated to start arguing against this idea, I will probably just point back into the literature (especially Hanson's) countering these complaints.

If you're wondering why I mention this at all, it ties into my mention of security breach derivatives in my post Excerpts from Ross Anderson / Tyler Moore Paper.

The Doomsday Clock

Tonight I finished watching a show called The Doomsday Clock, on the best TV channel (the History Channel, of course). I was vaguely aware of the clock, maintained by the Bulletin of the Atomic Scientists, but I didn't know the history of the project. According to Minutes to Midnight:
The Bulletin of the Atomic Scientists’ Doomsday Clock conveys how close humanity is to catastrophic destruction--the figurative midnight--and monitors the means humankind could use to obliterate itself. First and foremost, these include nuclear weapons, but they also encompass climate-changing technologies and new developments in the life sciences and nanotechnology that could inflict irrevocable harm.

Interesting -- you know what this is? It's a risk assessment. In my first book I defined risk as the probability of suffering harm or loss. The Doomsday Clock supposedly displays how close we are to world-ending catastrophe.

I find two aspects of the clock appealing.



First, as depicted by Information Aesthetics, the clock rapidly and clearly communicates its message. If you see fewer and fewer minutes until midnight, you sense something bad is about to happen. It's language-neutral and concise.



Second, the act of moving the hands and then tracking hand position over time provides a sense of risk trending. As depicted by Wikipedia above, you can get a historical reading of risk by watching the number of minutes to midnight rise and fall. The interval between the hand position changes is also significant.

The problem with the Doomsday Clock is the same problem found in many, if not most, risk assessments. It is more or less arbitrary. The creation of the clock and the initial position of its hands were completely arbitrary, in fact! The clock's designer, artist Martyl Langsdorf, created it for the June 1947 issue of the Bulletin. She positioned the hands to be aesthetically pleasing, not to show how close we were to destruction. Considering she had twelve hours to work with, limiting herself to a fifteen-minute window set a precedent for the next sixty years. While the clock has moved outside this fifteen-minute window (for example, in 1991), the precedent was set too narrowly. What will the Bulletin do when even greater threats exist -- move to second and then nanosecond increments?

In response to the Soviets' 1949 detonation of their first atomic weapon, Bulletin founder and editor Eugene Rabinowitch told Langsdorf to move the hands from 7 minutes to midnight to 3 minutes to midnight. Again, this choice was made basically to convey urgency. Only when the hands were moved on the magazine cover did readers start to appreciate the information conveyed by the clock.

From this point forward, the hands have moved back and forth as the Bulletin members and, more recently, outside parties have haggled about the position of the hands. I have a feeling these meetings would drive me crazy. It's a collection of people with opinions arguing about the location of hands on a clock created originally for artistic value. Still, as noted in my two "appealing" points, I think we can learn some lessons from the Doomsday Clock regarding the ability to quickly and powerfully communicate risk to others.

While researching this post I discovered that the ACLU jumped on the "clock bandwagon" with its Surveillance Society Clock. According to the ACLU, "It's six minutes before midnight as a surveillance society draws near within the United States." This is dumb for multiple reasons.

First, the ACLU chose a digital clock. I don't know about you, but for me a digital clock doesn't convey an amount of time as visually as an analog clock. It's like a speedometer; seeing it pegged to the right is more powerful than reading "101 MPH" or similar. Second, as Wired magazine astutely asked, how do we know when we're there? It's tough to ignore Armageddon; it's easy to ignore a "surveillance state." Third, the ACLU painted itself into the same corner as the Bulletin did when it chose to set its initial time so close to midnight. What's the ACLU going to do with the clock when remote mind-reading is in use?

Be the Caveman Lawyer

A few weeks ago I recommended that security people at least Be the Caveman and perform basic adversary simulation / red teaming. Now I read Australia's top enterprises hit by laymen hackers in less than 24 hours:

A penetration test of 200 of Australia's largest enterprises has found severe network security flaws in 79 percent of those surveyed.

The tests, undertaken by University of Technology Sydney (UTS), saw 25 non-IT students breach security infrastructure and gain root or administration level access within the networks of Australia's largest companies, using hacking tools freely available on the Internet.

The students - predominately law practitioners - were given 24 hours to breach security infrastructure on each site and were able to access customer financial details, including confidential insurance information, on multiple occasions.

High-level business executives from the companies surveyed, rather than IT staff, were informed of the tests so the "day-to-day network security" of businesses could be tested.
(emphasis added)

Again, my advice is simple, but now it is modified. Be the Caveman Lawyer.

One other point from the article:

Most of the 21 percent of companies who passed the penetration tests owed their success to freeware Intrusion Detection Systems (IDSs), according to Ghosh.

Snort was mentioned earlier in the article. That means you can be a Cheap Caveman Lawyer and prepare for common threats.

Monday, 01 October 2007

Someone Please Explain Threats to Microsoft

It's 2007 and some people still do not know the difference between a threat and a vulnerability. I know these are just the sorts of posts that make me all sorts of new friends, but nothing I say will change their minds anyway. To wit, Threat Modeling Again, Threat Modeling Rules of Thumb:

As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations. While it can be easy to find threats, it is important to realize that all threats have real-world consequences for the development team.

At the end of the day, this process is about ensuring that our customer’s machines aren’t compromised. When we’re deciding which threats need mitigation, we concentrate our efforts on those where the attacker can cause real damage.

When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if you think they’re trivial). At a minimum, the threats we list that we chose to ignore will remain in the document to provide guidance for the future.


Replace every single instance of "threat" in that section with "vulnerability" and the wording will make sense.

Not using the term "threat" properly is a hallmark of Microsoft publications, as mentioned in Preview: The Security Development Lifecycle. I said this in my review of Writing Secure Code, 2nd Ed:

The major problem with WSC2E, often shared by Microsoft titles, is the misuse of terms like "threat" and "risk." Unfortunately, the implied meanings of these terms vary depending on Microsoft's context, which is evidence the authors are using the words improperly. It also makes it difficult for me to provide simple substitution rules. Sometimes Microsoft uses "threat" when they really mean "vulnerability." For example, p 94 says "I always assume that a threat will be taken advantage of." Attackers don't take advantage of threats; they ARE threats. Attackers take advantage of vulnerabilities.

Sometimes Microsoft uses terms properly, like the discussion of denial of service as an "attack" in ch 17. Unfortunately, Microsoft's mislabeled STRIDE model supposedly outlines "threats" like "Denial of service." Argh -- STRIDE is just an inverted CIA AAA model, where STRIDE elements are attacks, not "threats." Microsoft also sometimes says "threat" when they mean "risk." The two are not synonyms. Consider this from p 87: "the only viable software solution is to reduce the overall threat probability or risk to an acceptable level, and that is the ultimate goal of 'threat analysis.'" Here we see Microsoft confusing threat with risk, and calling what is really risk analysis a "threat analysis." Finally, whenever you read "threat trees," think "attack trees" -- and remember Bruce Schneier worked hard on these but is apparently ignored by Microsoft.


These sentiments reappeared in my review of Security Development Lifecycle: Microsoft continues its pattern of misusing terms like "threat" that started with "Threat Modeling" and WSC2E. SDL demonstrates some movement on the part of the book's authors towards more acceptable usage, however. Material previously discussed in a "Threat Modeling" chapter in WSC2E now appears in a chapter called "Risk Analysis" (ch 9) -- but within the chapter, the terms are mostly still corrupted. Many times Microsoft misuses the term risk too. For example, p 94 says "The Security Risk Assessment is used to determine the system's level of vulnerability to attack." If you're making that decision, it's a vulnerability assessment; when you incorporate threat and asset value calculations with vulnerabilities, that's true risk assessment.

The authors try to deflect what I expect was criticism of their term misuse in previous books. On p 102 they say "The meaning of the word threat is much debated. In this book, a threat is defined as an attacker's objective." The trouble with this definition is that it exposes the flaws in their terminology. The authors make me cringe when I read phrases like "threats to the system ranked by risk" (p 103) or "spoofing threats risk ranking." On p 104, they are really talking about vulnerabilities when they write "All threats are uncovered through the analysis process." The one time they do use threat properly, it shows their definition is nonsensical: "consider the insider-threat scenario -- should your product protect against attackers who work for your company?" If you recognize that a threat is a party with the capabilities and intentions to exploit a vulnerability in an asset, then Microsoft is describing insiders appropriately -- but not as "an attacker's objective."

Don't get me wrong -- there's a lot to like about SDL. I gave the book four stars, and I think it is worth reading. I fear, though, that this is another book, distributed to Microsoft developers and managers, that is riddled with confusing or outright wrong ways to think about security. This produces lasting problems that degrade the community's ability to discuss and solve software security problems.


No one is going to take us seriously until we use the right terms. Argh.

Friday, 28 September 2007

Be the Caveman

I just read a great story by InformationWeek's Sharon Gaudin titled Interview With A Convicted Hacker: Robert Moore Tells How He Broke Into Routers And Stole VoIP Services:

Convicted hacker Robert Moore, who is set to go to federal prison this week, says breaking into 15 telecommunications companies and hundreds of businesses worldwide was incredibly easy because simple IT mistakes left gaping technical holes.

Moore, 23, of Spokane, Wash., pleaded guilty to conspiracy to commit computer fraud and is slated to begin his two-year sentence on Thursday for his part in a scheme to steal voice over IP services and sell them through a separate company. While prosecutors call co-conspirator Edwin Pena the mastermind of the operation, Moore acted as the hacker, admittedly scanning and breaking into telecom companies and other corporations around the world.

"It's so easy. It's so easy a caveman can do it," Moore told InformationWeek, laughing. "When you've got that many computers at your fingertips, you'd be surprised how many are insecure."
(emphasis added)

So easy a caveman can do it? Just what happened here?

The government identified more than 15 VoIP service providers that were hacked into, adding that Moore scanned more than 6 million computers just between June and October of 2005. AT&T reported to the court that Moore ran 6 million scans on its network alone...

Moore said what made the hacking job so easy was that 70% of all the companies he scanned were insecure, and 45% to 50% of VoIP providers were insecure. The biggest insecurity? Default passwords.

"I'd say 85% of them were misconfigured routers. They had the default passwords on them," said Moore. "You would not believe the number of routers that had 'admin' or 'Cisco0' as passwords on them. We could get full access to a Cisco box with enabled access so you can do whatever you want to the box...

He explained that he would first scan the network looking mainly for the Cisco and Quintum boxes. If he found them, he would then scan to see what models they were and then he would scan again, this time for vulnerabilities, like default passwords or unpatched bugs in old Cisco IOS boxes. If he didn't find default passwords or easily exploitable bugs, he'd run brute-force or dictionary attacks to try to break the passwords.


So, we have massively widespread scanning, discovery of routers, and attempted logins. No kidding this is caveman-fu.
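
If you want to gauge your own exposure to that kind of attempted login, the sketch below is one way to start. It is a minimal example, assuming the paramiko SSH library; the target address and credential list are hypothetical placeholders, and you should run something like this only against devices you are authorized to test.

  import paramiko

  # Hypothetical target and default credentials -- placeholders only.
  # Run this only against devices you are authorized to test.
  TARGET = "192.0.2.1"
  DEFAULTS = [("admin", "admin"), ("cisco", "cisco"), ("admin", "password")]

  for username, password in DEFAULTS:
      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      try:
          client.connect(TARGET, port=22, username=username,
                         password=password, timeout=5)
          print(f"Default credentials work: {username}/{password} on {TARGET}")
          client.close()
          break
      except (paramiko.AuthenticationException, paramiko.SSHException, OSError):
          continue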

And Moore didn't just focus on telecoms. He said he scanned "anybody" -- businesses, agencies and individual users. "I know I scanned a lot of people," he said. "Schools. People. Companies. Anybody. I probably hit millions of normal [users], too."

Moore said it would have been easy for IT and security managers to detect him in their companies' systems ... if they'd been looking. The problem was that, generally, no one was paying attention.

"If they were just monitoring their boxes and keeping logs, they could easily have seen us logged in there," he said, adding that IT could have run its own scans, checking to see logged-in users. "If they had an intrusion detection system set up, they could have easily seen that these weren't their calls."
(emphasis added)

Didn't someone tell Robert Moore that "IDS is dead"? Apparently all of these victim companies heard it and turned off their visibility mechanisms.
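
Moore's point about logs is easy to act on. Here is a minimal sketch of the check he describes, assuming a Linux host that writes SSH logins to /var/log/auth.log (the path and message format vary by system) and a hypothetical allowlist of admin source networks:

  import re
  from pathlib import Path

  # Hypothetical allowlist: prefixes your admins actually log in from.
  KNOWN_ADMIN_PREFIXES = ("10.1.", "192.168.5.")
  # Matches OpenSSH entries like "Accepted password for root from 203.0.113.9"
  LOGIN_RE = re.compile(r"Accepted \S+ for (\S+) from (\S+)")

  for line in Path("/var/log/auth.log").read_text().splitlines():
      match = LOGIN_RE.search(line)
      if match:
          user, source = match.groups()
          if not source.startswith(KNOWN_ADMIN_PREFIXES):
              print(f"Unexpected login: {user} from {source}")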

My advice? Be the caveman. Perform adversary simulation. This is the simplest possible way to pretend you are a bad guy and get realistic, actionable results.

  1. Identify all of your external IP addresses.

  2. Scan them.

  3. Try to log into remote administration services you find in Step 2.

  4. Report your findings to device owners when you gain access.


How difficult is that? This methodology is nowhere near effective against targeted threats who want to compromise you specifically, but it would work against opportunistic threats like this one. A minimal sketch of steps 2 and 3 appears below.
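
The sketch uses nothing but the Python standard library. The addresses are placeholders for your own external ranges, and the port list is a small assumed set of common remote-administration services; a real assessment would use a proper scanner, but this shows how low the bar is.

  import socket

  # Step 1 output goes here: your external addresses (placeholders).
  EXTERNAL_IPS = ["192.0.2.10", "192.0.2.11"]
  # Common remote-administration services to probe in step 2.
  ADMIN_PORTS = {22: "ssh", 23: "telnet", 443: "https admin", 3389: "rdp"}

  def port_open(ip, port, timeout=2.0):
      """Return True if a TCP connection to ip:port succeeds."""
      try:
          with socket.create_connection((ip, port), timeout=timeout):
              return True
      except OSError:
          return False

  for ip in EXTERNAL_IPS:
      for port, service in ADMIN_PORTS.items():
          if port_open(ip, port):
              print(f"{ip}:{port} ({service}) open -- candidate for step 3")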

PS: If I hear one more time that "scanning is too dangerous for our network" I will officially Lose It. Scanning of external systems happens 24x7. If you really don't want an authorized party to scan your external network, try setting up a passive detection system like PADS and wait for a bad guy to ignore the fragility of your systems and scan them for you. Gather his results passively and then act on them.
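
For the passive approach, here is one minimal sketch using the scapy library rather than PADS itself: it counts how many distinct ports each source probes and flags likely scanners. The 20-port threshold is an arbitrary assumption, and PADS would give you richer asset data.

  from collections import defaultdict
  from scapy.all import IP, TCP, sniff

  ports_probed = defaultdict(set)  # source IP -> set of destination ports

  def track_syns(pkt):
      """Flag sources that send SYNs to many distinct ports (likely scanners)."""
      if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
          src = pkt[IP].src
          ports_probed[src].add(pkt[TCP].dport)
          if len(ports_probed[src]) == 20:  # arbitrary threshold
              print(f"Probable scanner: {src} has probed 20+ distinct ports")

  sniff(filter="tcp", prn=track_syns, store=False)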

Friday, 21 September 2007

Pescatore on Security Trends

The article Spend less on IT security, says Gartner caught my attention. Comments are inline, and my apologies if Mr. Pescatore was misquoted.

Organisations should aim to spend less of their IT budgets on security, Gartner vice-president John Pescatore told the analyst firm’s London IT Security Summit on 17 September.

In a keynote speech, he said that retailers typically spend 1.5% of revenue trying to prevent crime, then still lose a further 1.5% through shoplifting and staff theft, costing 3% in total.


Digital security is not comparable to shoplifting. It is not feasible for shoplifters to steal every asset from a company in a matter of seconds, or to subtly alter all of its assets so as to render them untrustworthy or even dangerous. I would also hardly consider shoplifters an "intelligent adversary."

But Gartner’s research suggests that the average organisation spends 5% of its IT budget on security, even with disaster recovery and business continuity work excluded, and IT managers are tired of requests for more. Security has dropped from first (in 2005) to sixth (in 2007) in the firm’s annual survey of chief information officers’ technical concerns.

I concur with this, especially with regard to IPS and SIM/SEM/SIEM. Managers spent a lot of money several years ago on this technology and they are "still getting hacked."

Pescatore said that managers are not impressed by the claim that “security is a journey” without a destination. “Can you imagine, ‘profit is a journey’?” he asked, pointing out that other areas of IT are often able to offer their organisations more functionality for less money, or some other kind of business benefit.

This could be the single greatest problem I see in this whole article. Please tell me how profit is not a journey, unless the goal of your company is to 1) enjoy a really awesome quarter (or year, etc.) and then disappear; or 2) dash for the acquisition line and then cash out. The operative word in business is not profit but profitability. A stock price reflects future value. Turning strictly to the security aspect, I'd like to hear Mr. Pescatore or his upset managers describe when security can end. This statement is clearly troubling.

Growing efficiencies could be possible for IT security too: “I really don’t think most of us need more money and people,” he said, if organisations moved to a model he called ‘Security 3.0’. In this, IT security would anticipate threats, rather than fight them after they hit.

This is another poor statement. As I wrote in Attacker 3.0, security is at 1.0 (and that's being generous) while we approach Web 2.0 and fight Attacker 3.0. No one is ahead of the threat and no one could ever be. Advanced attackers are digital innovators. By definition they cannot be anticipated.

Pescatore said ways to prevent problems rather than fight them include buying and building secure systems, which means considering security during procurement and development, and rejecting products which are not adequately protected. This might mean spending more initially, but prevention is cheaper than cure.

This is all true and sounds nice, but it has never worked and will never work. Everyone is so excited to see the government finally working with Microsoft to secure the operating system, but at this point who really cares? It's all about applications now.

In response to a question, Pescatore dismissed the idea that insider threats are growing: he believes that attacks generated by malicious insiders are stable at 20-25%. Half come from mistakes made by insiders, while around 30% of attacks are made solely by outsiders, the majority of whom are cybercriminals.

I love to see the insider threat fans squashed.

Let's hear another view on this speech from Security to drop out of CIO spending top ten:

Security pros need to get more proactive about dealing with threats and adopt strategies to persuade their colleagues to take on security spending as part of their projects, according to analysts Gartner.

The changes in roles for security specialists come as the internet security market enters what Gartner described as the third major stage of its development.

Always a sector of the industry that relishes one-upmanship, the Web 2.0 phenomenon is accompanied by Security 3.0. The first stage of security, according to Gartner, belongs to the time of centralised planning and the mainframe. The widespread use of personal computers ushered in reactive security to deal with threats such as malicious computer hackers and worms (security 2.0). Security 3.0 is characterised by an era of more proactive security, according to John Pescatore, a VP and distinguished analyst at Gartner.

Security 3.0 involves an approach to risk management that applies security resources appropriately to meet business objectives. Instead of bolting security on as an afterthought, Security 3.0 integrates compliance, risk assessment and business continuity into every process and application.

For security managers the process involves persuading their counterparts in, for example, application development to include security functions in their projects. In this way security expenditure in real terms can go up even as security budgets (as such) stay flat or modestly increase. Security budgets freed from firefighting problems can then be invested with a view to managing future risks.

"Even a reduced security budget does not necessarily mean reducing security-related spending," Pescatore said. "Security professionals need to think in terms of changing who pays for security controls," so they can "move upstream" and spend their time and resources on more demanding projects, he added.


Now this makes sense to me. I do not understand why security as it relates to applications should be treated separately from those applications. Security should be another consideration that is built into the application, along with performance and other features. Security as an operational discipline doesn't need to be integrated into other businesses, but including security natively in projects is the right way forward.

Gartner predicts that security spending will rise 9.3 per cent in 2007, but will drop out of the first ten spending priorities for CIOs for the first time since the prolific internet worms of 2003. Malware threats these days have evolved into targeted attacks featuring malware payloads designed not to draw attention to themselves.

This "run silent, run deep" malware means that security is a less high-profile function than before, as improving business processes and reducing costs become the pre-eminent priorities for IT directors.


This is true and it is killing us. Security got plenty of attention when managers could see the sky was falling. In other words, when their email and their boss' email was inaccessible or filled with spam and malware, or they couldn't surf the Web because their pipe was filled by DoS traffic, security failures couldn't be ignored. Now enterprises are silently and completely owned, and no one cares.

Finally, a few more thoughts from Managing IT risk in unchartered waters of "Security 3.0":

Gartner research suggests that throwing money at security is not working. At the summit, the firm said that there is no correlation between security spending and the security level of a system. The firm added that progress in security should see a reduction in security spending, not an increase.

I agree with this. The reasons are complex, but a major problem is that managers have no idea if the money they apply makes any difference in their security posture. To the degree they measure at all, they measure inputs of questionable value and ignore the output. However, I don't see how Gartner can say that success in security means spending falls. This is not the so-called "war on drugs" where a rise in the price of a drug means interdiction could be restricting supply. Security spending is determined by management; it is not an output of the security process.

Overall, it must have been an interesting speech! I fear the main take-away for managers will be the "spend less on security" and "employ fewer people" headlines. That may be appropriate if you know how spending and manpower affect security outputs, but that is not the case. I believe management is spending plenty of money on the wrong tools, and potentially on the wrong people, and that directing resources to other functions would be more effective.

Thursday, 20 September 2007

Radiation Detection Mirrors Intrusion Detection

Yesterday I heard part of the NPR story Auditors, DHS Disagree on Radiation Detectors. I found two Internet sources, namely DHS fudged test results, watchdog agency says and DHS 'Dry Run' Support Cited, and I looked at Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment (.pdf), a GAO report.

The report begins by explaining why it was written:

The Department of Homeland Security’s (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in our national defenses against such threats. DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors.

In March 2006, GAO recommended that DNDO conduct a cost-benefit analysis to determine whether the new portal monitors were worth the additional cost. In June 2006, DNDO issued its analysis. In October 2006, GAO concluded that DNDO did not provide a sound analytical basis for its decision to purchase and deploy ASP technology and recommended further testing of ASPs. DNDO conducted this ASP testing at the Nevada Test Site (NTS) between February and March 2007.

GAO's statement addresses the test methods DNDO used to demonstrate the performance capabilities of the ASPs and whether the NTS test results should be relied upon to make a full-scale production decision.

GAO recommends that, among other things, the Secretary of Homeland Security delay a full-scale production decision of ASPs until all relevant studies and tests have been completed, and determine in cooperation with U.S. Customs and Border Protection (CBP), the Department of Energy (DOE), and independent reviewers, whether additional testing is needed.
(emphasis added)

Notice that a risk analysis was not done. Rather, a cost-benefit analysis was done. This is consistent with the approach I liked in the book Managing Cybersecurity Resources, although in that book the practicalities of assigning certain values made the exercise fruitless. Here the cost-benefit approach has a better chance of working.

Next the report summarizes the findings:

Based on our analysis of DNDO’s test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO’s tests were not an objective and rigorous assessment of the ASPs’ capabilities. Our concerns with the DNDO’s test methods include the following:

  • DNDO used biased test methods that enhanced the performance of the ASPs. Specifically, DNDO conducted numerous preliminary runs of almost all of the materials, and combinations of materials, that were used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials.

    It is highly unlikely that such favorable circumstances would present themselves under real world conditions.

  • DNDO’s NTS tests were not designed to test the limitations of the ASPs’ detection capabilities -- a critical oversight in DNDO’s original test plan. DNDO did not use a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry.

    DOE and national laboratory officials raised these concerns to DNDO in November 2006. However, DNDO officials rejected their suggestion of including additional and more challenging masking materials because, according to DNDO, there would not be sufficient time to obtain them based on the deadline imposed by obtaining Secretarial Certification by June 26, 2007.

    By not collaborating with DOE until late in the test planning process, DNDO missed an important opportunity to procure a broader, more representative set of well-vetted and characterized masking materials.

  • DNDO did not objectively test the performance of handheld detectors because they did not use a critical CBP standard operating procedure that is fundamental to this equipment’s performance in the field.

(emphasis added)
Let's summarize.

  • DNDO helped the vendor tune the detector.

  • DNDO did not test how the detectors could fail.

  • DNDO did not test the detectors' resistance to evasion.

  • DNDO failed to follow an important standard operating procedure.


I found all of this interesting and relevant to discussions of detecting security events.
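
The parallel for network defenders: if a vendor tunes its detector on the very samples used in the formal test, the results will flatter the product. Here is a toy sketch of the fix -- hold out evaluation data the vendor never sees. All numbers are invented for illustration.

  import random

  random.seed(7)

  # Toy corpus: (signal, is_threat) pairs standing in for test materials.
  corpus = [(random.gauss(5.0 if threat else 3.0, 1.5), threat)
            for threat in random.choices([True, False], k=200)]
  random.shuffle(corpus)
  tuning, held_out = corpus[:100], corpus[100:]

  def accuracy(data, threshold):
      """Fraction of samples classified correctly by a simple threshold rule."""
      return sum((signal >= threshold) == is_threat
                 for signal, is_threat in data) / len(data)

  # The "vendor" picks whatever threshold looks best on the tuning set...
  best = max((signal for signal, _ in tuning),
             key=lambda th: accuracy(tuning, th))

  # ...but the production decision should rest on data the vendor never saw.
  print(f"tuning-set accuracy: {accuracy(tuning, best):.2f}")
  print(f"held-out accuracy:   {accuracy(held_out, best):.2f}")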