Saturday, 30 August 2008

General Chilton on the Cyber Fight

A friend of mine defending .mil pointed me towards this article by Wyatt Kash: Cyber chief argues for new approaches. The "cyber chief" in question is Air Force General Kevin Chilton, a 1976 USAFA graduate and the first astronaut to achieve four stars. I'd like to share several excerpts:

The military’s commander of U.S. Strategic Command in charge of cyberspace, Air Force Gen. Kevin Chilton, warned that the underlying challenges and costs of operating in cyberspace often go unrecognized. And he proposed several measures to improve the security of the military’s non-classified networks.

“The hardest thing we’re challenged to do in cyberspace,” said Chilton, isn’t defending against cyberattacks. It is “operating the net under attack...”

“People talk about defending or exploiting cyberspace, but we don’t talk much about operating it if it’s under attack,” Chilton said. “It’s not easy work. And it’s not work to be taken on by amateurs.”

Chilton argued that many of the incidents that are billed as cyberattacks are more accurately just old-fashioned espionage — people looking for information who don’t necessarily represent military threats.

At the same time, the “exfiltration of data is huge” and is cause for concern, he said...

“Every time we have a problem or a virus is loaded, or someone comes in and takes over systems administration of a computer or a server, we have to take that system offline, scrub it, and sometimes throw it away. Guess what: That ain’t free,” said Chilton.

“We’re trying to get our arms around how much this is costing us every time someone breaks into our NIPRNet [unclassified but sensitive Internet protocol router network]. Some estimates are around $100 million a year; some people think that figure is low,” Chilton said...

Another step, which Chilton detailed with reporters after his presentation, would involve investing more heavily on sensor technology to filter and monitor data traffic. That would not only improve awareness and response times but ease the mounting burden of forensic work, Chilton said.

Chilton also proposed making “the operation and the defense of our network, the commander’s business,” arguing for commanders to hold people more accountable when network incidents occur.

Looking ahead, Chilton stressed the importance of increasing the number of people who are trained and equipped in the workings of cyberspace to be ready for attacks during a time of war.

“Like in any other domain, we need to train like we’re going to fight, and we’re in the fight every day already,” he said.


I am very interested in the cost question. I devised a "debt" scenario at work to describe this problem. It's common to try to justify a security program (product, process, and/or person) using loss avoidance terminology. However, not all of the losses one seeks to avoid will in fact be avoided. Speaking strictly from a system integrity point of view (to simplify this discussion), each system that is compromised incurs a future cost. Again, radically simplified, the most basic cost involves rebuilding the system from scratch (if one acts conservatively).

General Chilton undoubtedly wants to know how much that one process costs. Imagine that you defer that cost by not detecting and responding to the intrusion. Perhaps the intruder is stealthy. Perhaps you detect the attack but cannot respond for a variety of reasons (see Getting the Job Done). The longer the intrusion remains active, I would argue, the more debt one builds.

This should be easy to justify in a theoretical sense. For example, the longer an intrusion persists:

  1. The greater the likelihood the intruder finds and steals, alters or destroys something of value on the system

  2. The greater the likelihood the intruder will identify a way to compromise other systems

  3. The greater the likelihood that relevant log files from the beginning of the intrusion will expire

  4. The more difficult it could become for the IR team to determine the scope of the intrusion

  5. The more entrenched the intruder could become as he learns the inner workings of the victim's security and administration processes


I'm sure you could imagine other problems with having persistent intruders. For budget justification purposes, it would be helpful to quantify this financially. Perhaps it would be possible for teams who have spent money on outside IR consulting to reason backwards from the final bill to create a rough estimate of these costs?
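
To make the debt idea concrete, here is a purely illustrative back-of-the-envelope calculation. Every number below is invented for the example; none of them come from General Chilton, the article, or any real engagement:

  $150,000 IR consulting bill / 15 compromised hosts = roughly $10,000 per host
  10,000 compromised hosts per year x $10,000 per host = roughly $100 million per year

Stretch the average dwell time and the per-host number climbs, because each of the five factors above adds investigation hours, rebuild work, and lost data. That is the debt compounding.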

I bet the $100 million figure is for clean-up costs alone. It doesn't factor in the cost of the damage caused by an adversary power knowing how to detect American submarines in the Taiwan Strait, or knowing how to fool missiles fired by American jets, or any other costs in lives and hardware associated with a future battle with an enemy well-versed in American military technology.

Overall, it's great to see this much attention at the four-star level.

Friday, 29 August 2008

Splunk on Ubuntu 8.04

I've been using Splunk at work, so I decided to try installing the free version on a personal laptop. Splunk is a log archiving and search product which I recommend security professionals try. Once you've used it you will probably think of other ways to leverage its power. Anyone can use a free version that indexes up to 500 MB per day, so it's perfect for a personal laptop's logs. This machine runs Ubuntu 8.04.

By default Splunk installs into /opt. Unfortunately, when I built this system I didn't create an /opt partition, and / is too small. So, I decided to replace /opt with a symlink to /var/opt and accept the rest of the defaults when installing Splunk.
 
root@neely:/usr/local/src# ls -d /opt
/opt
root@neely:/usr/local/src# rmdir /opt
root@neely:/usr/local/src# ln -s /var/opt /opt
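
As an aside, a bind mount would accomplish the same thing if you would rather not replace /opt with a symlink. I did not test that route here, and it needs an /etc/fstab entry to survive a reboot:

# untested alternative to the symlink; add to /etc/fstab to make it permanent
mkdir /opt
mount --bind /var/opt /opt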

Next I installed the .deb that Splunk provides. I've also used the .rpm on Red Hat Enterprise Linux.

root@neely:/usr/local/src# dpkg -i splunk-3.3.1-39933-linux-2.6-intel.deb
Selecting previously deselected package splunk.
(Reading database ... 142815 files and directories currently installed.)
Unpacking splunk (from splunk-3.3.1-39933-linux-2.6-intel.deb) ...
Setting up splunk (3.3.1-39933) ...
----------------------------------------------------------------------
Splunk has been installed in:
/opt/splunk

To start Splunk, run the command:
/opt/splunk/bin/splunk start

To use the Splunk Web interface, point your browser at:
http://neely:8000

Complete documentation is at http://www.splunk.com/r/docs
----------------------------------------------------------------------

That was easy. Next I start Splunk.

root@neely:/usr/local/src# /opt/splunk/bin/splunk start

Splunk Free Software License Agreement
THIS SPLUNK SOFTWARE LICENSE AGREEMENT (THE "AGREEMENT") GOVERNS ALL SOFTWARE PR
...edited...
ditions of this Agreement will remain in full force and effect.
Do you agree with this license? [y/n]: y
Copying '/var/opt/splunk/etc/myinstall/splunkd.xml.default'
to '/var/opt/splunk/etc/myinstall/splunkd.xml'.
Copying '/var/opt/splunk/etc/modules/distributedSearch/config.xml.default'
to '/var/opt/splunk/etc/modules/distributedSearch/config.xml'.
/var/opt/splunk/etc/auth/audit/private.pem
/var/opt/splunk/etc/auth/audit/public.pem
/var/opt/splunk/etc/auth/audit/private.pem generated.
/var/opt/splunk/etc/auth/audit/public.pem generated.

/var/opt/splunk/etc/auth/audit/private.pem
/var/opt/splunk/etc/auth/audit/public.pem
/var/opt/splunk/etc/auth/audit/private.pem generated.
/var/opt/splunk/etc/auth/audit/public.pem generated.


This appears to be your first time running this version of Splunk.
Validating databases...
Creating /var/opt/splunk/var/lib/splunk/audit/thaweddb
Creating /var/opt/splunk/var/lib/splunk/blockSignature/thaweddb
Creating /var/opt/splunk/var/lib/splunk/_internaldb/thaweddb
Creating /var/opt/splunk/var/lib/splunk/fishbucket/thaweddb
Creating /var/opt/splunk/var/lib/splunk/historydb/thaweddb
Creating /var/opt/splunk/var/lib/splunk/defaultdb/thaweddb
Creating /var/opt/splunk/var/lib/splunk/sampledata/thaweddb
Creating /var/opt/splunk/var/lib/splunk/splunkloggerdb/thaweddb
Creating /var/opt/splunk/var/lib/splunk/summarydb/thaweddb
Validated databases: _audit, _blocksignature, _internal, _thefishbucket, history, main,
sampledata, splunklogger, summary

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Verifying configuration. This may take a while...
Finished verifying configuration.
Checking index directory...
Verifying databases...
Verified databases: _audit, _blocksignature, _internal, _thefishbucket, history, main,
sampledata, splunklogger, summary

Checking index files
All index checks passed.
All preliminary checks passed.
Starting splunkd...
Starting splunkweb.../var/opt/splunk/share/splunk/certs does not exist. Will create
Generating certs for splunkweb server
Generating a 1024 bit RSA private key
.......++++++
...............................++++++
writing new private key to 'privkeySecure.pem'
-----
Signature ok
subject=/CN=neely/O=SplunkUser
Getting CA Private Key
writing RSA key

Splunk Server started.

The Splunk web interface is at http://neely:8000
If you get stuck, we're here to help. Feel free to email us at 'support@splunk.com'.

Now I point Firefox to port 8000 on the local machine.



Cool. I need to give Splunk something to index, so I select Index Files and point it to /var/log.
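
If you prefer the command line, Splunk's CLI can add the same input. I am going from the documentation for later releases here, so treat the exact syntax as an assumption and run /opt/splunk/bin/splunk help if version 3.3.1 complains:

/opt/splunk/bin/splunk add monitor /var/log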



Returning to the main screen, within seconds Splunk has indexed the measly 8 MB or so of logs I have in /var/log.



Now I'm ready to start searching. For fun I start typing 'samba' in the search box, and decide to look at 'sambashare' as Splunk shows me what's been indexed.



That's it. The big caveat here is that you need to protect the Web and administration ports (8000 and 8089 TCP) yourself -- the free Splunk doesn't even have authentication. There are several tutorials on the Web about that, mainly about firewalling those ports and then using a Web proxy or similar to access the ports locally.
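
For what it's worth, here is a minimal iptables sketch along those lines. It assumes you only need to reach Splunk from the laptop itself and that your INPUT chain currently accepts everything (the Ubuntu default); the rules also will not survive a reboot unless you save them with your firewall tool of choice:

# permit loopback traffic so a local browser can still reach http://localhost:8000
iptables -A INPUT -i lo -j ACCEPT
# drop the Splunk Web and management ports for everyone else
iptables -A INPUT -p tcp -m multiport --dports 8000,8089 -j DROP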

Sunday, 17 August 2008

Better Risk Management for Banking Industry

With the recent identity theft cases occurring across the banking industry, a new regulation is being implemented to counter identity theft. Effective November 1, 2008, all federally regulated banks, credit card companies and other financial institutions will be required to be in full compliance with the Identity Theft Red Flags Rule, which is designed to help financial services firms protect consumers' identities. The goal of the rules is to "flag" attempted and actual identity theft early, thereby reducing the consequences associated with identity theft.

Each institution's program must include policies and procedures for detecting, preventing and mitigating identity theft. Further, the program must set forth a list of red flag activities that signal possible identity theft and a response plan for when a flag is raised. In addition, each financial institution must update its program periodically to reflect changes in risks from identity theft and implement a risk management program as part of the ID Theft Red Flags regulation.

8 Tips for Better Risk Management:

1. Assess in detail the institution's different products and service offerings, and determine which red flags and what level of risk apply to each particular product or service. For example, credit cards pose high risk and need a high level of monitoring, because fraudulent activity is most likely there.

2. Streamline automated and manual checks for red flag items where necessary.

3. Focus on the different channels through which these products and services are provided to end users. For example, online access over the Internet is riskier than physically going to the bank.

4. Devote attention to each product and service offering according to its risk. High risk demands more attention.

5. Study the institution's historical data to identify fraudulent activities, patterns and trends.

6. Integrate risk management into current security and privacy programs by adopting a consistent approach to risk assessments across departments within the enterprise and sharing data among those individual assessments. This makes it clear which regulation directly addresses a given risk or red flag action item, avoids duplicated effort, and focuses checks on the items that are relevant.

7. Do not depend entirely on vendors or service bureaus to implement checks and conduct their own risk assessments. Instead, have the financial institution initiate and run a thorough risk assessment program covering its service bureaus, to ensure foolproof checks and updates.

8. Appoint a key person to take charge and ownership of the risk management process. This person will initiate annual reviews of the program's effectiveness, adopt a revision process, continually monitor and analyze the industry situation and the institution's risk profile, appoint a committee to ensure an appropriate program is deployed, and propose and implement changes.
Upasana
The Hacka Man

SecureWorks on Building and Sustaining a Security Operations Center

I received an email notifying me of a Webcast by SecureWorks titled Building and Sustaining a Security Operations Center. I'd like to highlight a few aspects of the Webcast that caught my attention.

First, the slide below shows the functions that SecureWorks considers to be in scope for a SOC. I noticed it includes device management. I think that function is mostly integrated with regular "IT" these days, so your SOC might not have to worry about keeping security devices running. Configuration is probably best handled by the SOC, however.



Second, I liked seeing a slide with numbers of events being distilled into incidents.



Third, I thought this slide made a good point. You want to automate the early stages of security operations as much as possible (90% tech), but the response processes tend to be very skill-intensive (which translates into higher overall salary costs, i.e., you may have fewer IR handlers, but they could cost more than the event analysts). The "Adapt" section on the right seems to depict that mature operations end up spending about half of their budget on tech and half on people. Mature operations realize that their people must keep up-to-date with attacks and vulnerabilities, or they fail to "adapt" and become dated and ineffective.



Finally, SecureWorks spent a lot of time talking about "co-sourcing," or having an in-house team cover its core security competencies while an outside group (like SecureWorks, hey!) fills in the gaps. This is the MSSP industry's response to the recent trend for companies to move their security function back in-house. I think it makes sense, however. Using outside vendors for security intelligence and high-end attack and artifact analysis is a smart way to spend your money.

How to hack a Bank part 1?

This is going to be a very sensitive topic for the banking industry, but I am not going to post any exploits or vulnerabilities showing how to hack a bank; instead, this is a high-level overview of how to extract money from a bank. I am not going to write a long article on this, as the story could go on and on.

Several months back, I was performing a penetration test for a large bank here. Although it was only a web penetration test, I started observing the banking environment: the technology used, the physical environment, the bank's partners, the ATMs and so on, to see whether loopholes could be discovered. Every day at the bank I made new friends and talked with them to learn more about the banking environment and the nature of their jobs. At the end of the penetration test I considered publishing an article on how to hack a bank, but I was either too lazy to do so or couldn't be bothered. Today I just feel like writing about it, a sudden urge to do so.

In the early days banking was a simple, closed environment, and the only way to hack the bank was to rob it. There were no ATMs, no Internet banking, no huge and complicated networks. To withdraw money, the only way was to go to a branch, fill out a withdrawal form and present your passbook for updating, and the money was handed to you. A mainframe was the backend system that processed all the transactions, and I think that prevails to this very day. Today we are more advanced. We have Internet banking without any passbooks, ATMs, credit and debit cards, complex networks interconnecting multiple systems, cash deposit machines, and a huge variety of databases and partners that might house the bank's data. So where there used to be only one or two doors open, today many more possibilities exist because multiple doors are open. We still have not factored in the physical site and environment. You might be surprised that this is one of the easiest ways to enter a bank.

A lot of people might think that hacking a bank is a tough job due to its tight security and controls, but you might be surprised that sometimes the weakest link is actually the easiest link. Stay tuned for part 2.

Disclaimer: The materials and information here are for educational purposes only. Do not attempt to hack a bank with the knowledge acquired here. Do not try this at any bank.

The Hacka Man

Renesys on Threats to Internet Routing and Global Connectivity

When I attended the FIRST 2008 conference in Vancouver, BC in June, one of my favorite talks was Threats to Internet Routing and Global Connectivity by Earl Zmijewski from Renesys. I've always liked learning about the Big Internet, where 250,000+ routes are exchanged over BGP and 45,000 updates per minute is considered a "quiet" load! This was the first time I heard of Pretty Good BGP, summarized by the subtitle of the linked .pdf paper: Improving BGP by Cautiously Adopting Routes.

Thoughts on OMFW and DFRWS 2008

Last week I was very happy to attend the 2008 Open Memory Forensics Workshop (OMFW) and the Digital Forensic Research Workshop (DFRWS). Aaron Walters of Volatile Systems organized the OMFW, which consisted of about 40 attendees and a mix of panels and talks in 10 quick afternoon sessions. My first impression of the event was that the underground could have set digital forensics back 3-5 years if they had attacked our small conference room. Where else do you have Eoghan Casey, Brian Carrier, Harlan Carvey, Michael Cohen, Brendan Dolan-Gavitt, George Garner Jr., Jesse Kornblum, Andreas Schuster, Aaron Walters, et al, in the same room? I thought Brian Dykstra framed the situation properly when asking the following: "I know this is an easy question for all you 'beautiful minds,' but..."

Following the OMFW, I attended the first two days of DFRWS. I thought Secret Service Special Agent Ryan Moore started the conference well by describing his investigations of point-of-sale compromises (announced by the FBI as a retail hacking ring) (.pdf). This was probably the best .gov presentation I've seen in a while. I was impressed by the degree to which SA Moore used open source, because he wanted to show retailers they could vastly improve their security using low-cost methods.

The remainder of the DFRWS presentations were a mix of academic-style presentations, tool development updates, and reports on practical issues faced in the field. The academic presentations made an impression on me; I noticed that those sorts of talks are some of the closest we have to "computer science." For example, you develop a new algorithm or technique (perhaps to carve memory), and then test the method against a range of samples.

In other cases, researchers "simply" seek to understand how a system works. I say "simply," because you might think "Hey, it's a computer. It must be easy to figure out." Instead, researchers find that systems (OS, applications, whatever) don't do what their developers claim they do, or the internals are more complicated than at first glance, or any other number of permutations make the reality diverge sharply from the theory.

In terms of technical notes, OMFW and DFRWS contained many little tidbits. For example, I was reminded that one can alter the run-time configuration of any Windows system by writing directly to the registry in memory. Normally such changes are synced to disk every 5 seconds, but that sync can be avoided, because direct memory access bypasses the Windows APIs that would otherwise cause the changes to be saved. It was cool to hear about matching packets in memory with similar packets captured outside the system (via network sniffer) in order to improve attribution. (Those packets in memory can be associated with a user logged in at the time the memory was captured.)

Integrating evidence via framework tools is another theme. PyFlag looks great for this. I must congratulate the Volatility/PyFlag team for winning the 2008 DFRWS Challenge, a sort of CTF for defenders. (Notice this gets zero press. Defense is not glamorous.) Reading their team submission is a learning experience. Indeed, I was very impressed by the level of expertise applied to each challenge. I'd like to review the archives to see how previous events have been investigated.

Getting the Job Done

As an Air Force Academy cadet I was taught a training philosophy for developing subordinates. It used a framework of Expectations - Skills - Feedback - Consequences - Growth. This model appears in documents like the AFOATS Training Guide. In that material, and in my training, I was taught that any problem a team member might encounter could be summarized as a skill problem or a will problem. In the years since I learned those terms, and especially while working in the corporate sector, I've learned those two categories are definitely not enough to describe the challenges to getting the job done. I'd like to flesh out the model here.

The four challenges to getting the job done can be summarized thus:

  1. Will problem. The party doesn't want to accomplish the task. This is a motivation problem.

  2. Skill problem. The party doesn't know how to accomplish the task. This is a methods problem.

  3. Bill problem. The party doesn't have the resources to accomplish the task. This is a money problem.

  4. Nil problem. The party doesn't have the authority to accomplish the task. This is a mojo problem.


I have encountered plenty of roles where I am motivated and technically equipped, but without resources and power. I think that is the standard situation for incident responders, i.e., you don't have the evidence needed to determine scope and impact, and you don't have the authority to change the situation in your favor. What do you think?

Friday, 15 August 2008

Microsecurity vs Macrosecurity

I found the following insight by Ravila Helen White in Information Security and Business Integration to be fascinating:

Economists figured out long ago that in order to understand the economy, they would have to employ a double-pronged approach. The first approach would look at the economy by gathering data from individuals and firms on a small scale. The second approach would tackle analysis of the economy as a whole. Thus was born micro and macro economics.

We can make information security more consumable by taking a page from economics. If we divide information security in the same manner as economics (its analytical form), we get micro information security and macro information security.

Micro information security is the nuts and bolts that support an organization's information security practice. It's the technology, controls, countermeasures and tactical solutions that are employed day-to-day to defend against cyber threats. It's a step-by-step examination of information security for educational purposes and to facilitate discussion with our peers.

Macro information security is the big picture and can be utilized to keep management in the loop. It's the blueprint, framework, strategic plan, road map, governance and policies designed to influence and protect the enterprise. It's the bottom line.

Macro information security also extends externally to support partners and customers as well as ensure compliance with regulations. Internal organization extension includes support of convergence programs and includes alignment to business goals and objectives.

Macro information security enables security leaders to align themselves and the program(s) they oversee with the business. It bridges information security vernacular with traditional business acumen. When used correctly, macro information security can be the tool that equals success. And, success is being invited back to the table again and again.


I like this separation, although I am not as comfortable with the exact definitions. If you're fuzzy about the difference between microeconomics and macroeconomics, Wikipedia is helpful:

Microeconomics is a branch of economics that studies how individuals, households and firms make decisions to allocate limited resources, typically in markets where goods or services are being bought and sold.

Microeconomics examines how these decisions and behaviours affect the supply and demand for goods and services, which determines prices; and how prices, in turn, determine the supply and demand of goods and services.


Macroeconomics is a branch of economics that deals with the performance, structure, and behavior of a national or regional economy as a whole... Macroeconomists study aggregated indicators such as GDP, unemployment rates, and price indices to understand how the whole economy functions. Macroeconomists develop models that explain the relationship between such factors as national income, output, consumption, unemployment, inflation, savings, investment, international trade and international finance.

The differences are striking and the distinction helpful. I don't think anyone thinks of a microeconomist in a negative light because he or she doesn't dwell on the "big picture" macroeconomic view. It's simply two different ways to contemplate and explain economic activity.

We have a separation of sorts in the security world. Macrosecurity types like to think about aggregate risk, capturing metrics, and enterprise-wide security postures. Microsecurity types prefer to focus on individual networks, hosts, applications, operating systems, and hardware, along with specific attack and defense options.

I think I prefer microsecurity issues but spend time on the macro side when I have to justify my work to management.

The Limits of Running IT Like a Business

I liked this CIO Magazine article by Chris Potts: The Limits of Running IT Like a Business:

A rallying call of corporate strategies for IT in recent years has been to run the IT department "like a business." When the technology-centric first generation of IT strategies reached a point of diminishing returns, this next stage was both inevitable and beneficial....

But with these benefits come pitfalls, especially if you take the IT-is-like-a-business approach to extremes. If you've tried managing an internal IT department as a bona fide business you already know that you can't take that very far, for the obvious reason that your IT department isn't a business. It is, after all, a part of a business: a significant contributor to a value chain, not a self-contained value chain of its own. And the harder you try to create a separate value chain for IT, the harder it becomes for the IT department to become integrated with the business of which it is truly part.

A strategy founded on running the IT department like a business will reach a natural point of diminishing returns, if it hasn't already. Innovative companies have moved to the next-generation strategy, in which the CIO's purpose is not necessarily to run a traditional IT department at all. Her primary role is to provide corporate leadership to business functions which are investing in and exploiting IT in the context of their business strategies and operating plans...

There's a world of difference between running the IT department "like" a business, and trying to run it "as" one. It's amazing how one word can fundamentally alter strategy. Running IT like a business means adopting a businesslike mindset, processes and financial disciplines. Running it as a business means competing for revenue and investment in an open market, and going bankrupt if you run out of cash to cover your liabilities.

What happens if a CIO attempts to run her department as a business? Colleagues in other departments will perceive that the IT department wants to be treated like a supplier. If the CIO's chosen business is primarily to be a provider of operational IT services, then that is what her "customers" expect her to concentrate on...

The IT department might find another pitfall if it tries too hard to run itself as a business. The company's business units will be reluctant to fund any material investment by IT in anything that looks like branding, marketing, selling or upgrading the management systems that support the IT department's own productivity. Why should they? One of the primary cost advantages of an internal department is that it doesn't require all the capabilities a real supplier needs to compete in the open market. So the CIO is caught. She has placed herself in competition with bona fide external suppliers but without access to the investment that they have in order to compete as an equal...


I liked this article because I see this "internal business" model everywhere, particularly when security projects must justify their "ROI". Ugh.

Is This You Too?

Is this you too?

To understand what it's like to be a federal chief information security officer, consider Larry Ruffin. As CISO at the Interior Department, his job could be described as having little to do with being a chief and not much more about security.

Although he regards Interior's current information security as "far from inadequate," Ruffin and Chief Information Officer Michael Howell don't have a way to check that the department's network security is configured correctly or to monitor suspicious activity on a daily basis. Ruffin also has no authority and few resources to check on the security of employees' equipment, such as laptops, workstations and servers, or to monitor specific applications. He has to rely on verbal and written promises from Interior's bureau managers that they are complying with security policies. To a limited extent, Ruffin says, he conducts on-site checks of systems, which in the end offer little insight into the state of IT security departmentwide.

"How do you take control, when you don't [have authority over] the funds or maintain clear authority to make decisions? That stymies processes," Ruffin says. "We don't get clear approvals and don't feel empowered to make decisions that might have budgetary impacts. Those decisions can get made, but rarely."

Ruffin isn't alone. His experience is common to CISOs across government. Security budgets are paper thin, and CISOs rarely have the authority to enforce security policies down deep into individual department offices. Their job is one of frustration; they're aware of what's required to protect agency networks, but unable to get the job done. It's no wonder that more security analysts are warning of serious security breaches, if they have not occurred already...

The CISO job today is more of a policy- and compliance-reporting position than one that tests and monitors networks. And the job has limited power to oversee a department's systems. As a result, says Mike Jacobs, former information assurance director at the National Security Agency and now an independent consultant, the federal government is at its "weakest state ever" in terms of homeland security. "I'm struck with how little power and capability to influence the CISOs have," he says. "Most are left to cajole those who own the IT funds to do what needs to be done from a security standpoint. Few, if any, have direct responsibility."


This excerpt is from Top IT cops say lack of authority, resources undermine security by Jill R. Aitoro of GovExec.com.

Is This You?

Security person, is this you?

The pressure on the risk department to keep up and approve transactions was immense... In their [traders and bankers] eyes, we were not earning money for the bank. Worse, we had the power to say no and therefore prevent business from being done. Traders saw us as obstructive and a hindrance to their ability to earn higher bonuses. They did not take kindly to this. Sometimes the relationship between the risk department and the business lines ended in arguments. I often had calls from my own risk managers forewarning me that a senior trader was about to call me to complain about a declined transaction. Most of the time the business line would simply not take no for an answer, especially if the profits were big enough. We, of course, were suspicious, because bigger margins usually meant higher risk.

Criticisms that we were being “non-commercial”, “unconstructive” and “obstinate” were not uncommon. It has to be said that the risk department did not always help its cause. Our risk managers, although they had strong analytical skills, were not necessarily good communicators and salesmen. Tactfully explaining why we said no was not our forte. Traders were often exasperated as much by how they were told as by what they were told.

At the root of it all, however, was — and still is — a deeply ingrained flaw in the decision-making process. In contrast to the law, where two sides make an equal-and-opposite argument that is fairly judged, in banks there is always a bias towards one side of the argument. The business line was more focused on getting a transaction approved than on identifying the risks in what it was proposing. The risk factors were a small part of the presentation and always “mitigated”. This made it hard to discourage transactions. If a risk manager said no, he was immediately on a collision course with the business line. The risk thinking therefore leaned towards giving the benefit of the doubt to the risk-takers.

Collective common sense suffered as a result. Often in meetings, our gut reactions as risk managers were negative. But it was difficult to come up with hard-and-fast arguments for why you should decline a transaction, especially when you were sitting opposite a team that had worked for weeks on a proposal, which you had received an hour before the meeting started. In the end, with pressure for earnings and a calm market environment, we reluctantly agreed to marginal transactions.


This excerpt is from the 9 Aug 08 Economist story A personal view of the crisis: Confessions of a risk manager.

Thursday, 14 August 2008

Reaction to Air Force Cyber Command Announcement

I've been writing about the proposed Air Force Cyber Command since the Spring of 2007. Since Bob Brewin broke the story that "the Air Force on Monday suspended all efforts related to development of a program to become the dominant service in cyberspace," I've been getting emails and phone calls asking if I had seen the story and what my reaction was. I provided a quote for Noah Shachtman's story Air Force Suspends Controversial Cyber Command.

A story published today in the Air Force Times said:

[New Air Force Chief of Staff Gen. Norton] Schwartz appeared to backtrack on the Air Force’s plan to stand up its new Cyber Command by Oct. 1. He said the mission will go forward, but that the organizational structure of the mission and how it will integrate with the Defense Department and U.S. Strategic Command are still being considered.

I would not be surprised if Gen Schwartz was told to play nicely with the other services. I don't expect to see any more commercials promoting Air Force cyber defense!

More Threat Reduction, Not Just Vulnerability Reduction

Recently I attended a briefing where a computer crimes agent from the FBI made the following point:

Your job is vulnerability reduction. Our job is threat reduction.

In other words, it is beyond the legal or practical capability of most computer crime victims to investigate, prosecute, and incarcerate threats. Therefore, we cannot independently influence the threat portion of the risk equation. We can play with the asset and vulnerability aspects, but that leaves the adversary free to continue attacking until they succeed.
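
For reference, the formulation of the risk equation I have in mind is the common one:

  Risk = Threat x Vulnerability x Asset Value

Law enforcement can suppress the threat factor; the rest of us are mostly limited to working on the other two.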

Given that, it is disappointing to read State AGs Fail to Adequately Protect Online Consumers. I recommend reading that press release from the Center for American Progress and Center for Democracy and Technology for details.

I found this recommendation on p 25 interesting:

Consumers are paying a steep price for online fraud and abuse. They need aggressive law enforcement to punish perpetrators and deter others from committing Internet crime. A number of leading attorneys general have shown they can make a powerful difference. But others must step up as well. To protect consumers and secure the future of the Internet, we recommend that state attorneys general take the following steps...

Develop computer forensic capabilities. Purveyors of online fraud and abuse — and the methods they use — are often extremely difficult to detect. Computer forensics are thus needed to trace and catch Internet fraudsters. Attorneys general in Washington and New York invested in computer forensics and, as a result, were able to prosecute successful cases against spyware. Most states, however, have little in the way of computer forensic capability.

Developing this capability may not require substantial new funds. Rather, most important are human and intellectual resources. Even New York’s more intensive adware investigations, for instance, were done with free or low-cost software, which, among other things, captured screenshots, wiped hard drives, and tracked IP addresses and installation information through “packet sniffing” tools. Attorneys general must make investments in human capital so that such software can be harnessed and put to use.


When I teach, there are a lot of military people in my classes. The rest come from private companies. I do not see many law enforcement or other legal types. I'm guessing they do not have the funds or the interest?

Snort Report 18 Posted

My 18th Snort Report titled The Power of Snort 3.0 has been posted. From the article:

Service provider takeaway: Service providers will learn about Snort 3.0's new architecture and how it can be used as a platform for generic network traffic inspection tools.

Recently, I attended a seminar offered by Sourcefire, the company that supports Snort. Marty Roesch, Snort's inventor and primary developer, discussed Snort 3.0. In this edition of the Snort Report, I summarize Marty's plans and offer a few thoughts on the direction of Snort development.


Right now I am working on the next Snort Report, where I discuss how to get the latest Snort 3.0 beta running on Debian.

Thursday, 07 August 2008

Black Hat USA 2008 Wrap-Up: Day 2

Please see Black Hat USA 2008 Wrap-Up: Day 1 for the first part of this two-part post.

Day two of the Black Hat USA 2008 Briefings began much better than day one.

  • Rod Beckström, Director of the National Cyber Security Center in DHS, delivered today's keynote. I had read articles like WhiteHouse Taps Tech Entrepreneur For Cyber Defense Post so I wasn't sure what to think of Mr. Beckström. It turns out his talk was excellent. If Mr. Beckström had used a few fewer PowerPoint slides, I would have classified him as an Edward Tufte-caliber speaker. I especially liked his examination of history for lessons applicable to our current cyber woes. He spoke to the audience in our own words, calling the US an "open source community," the Declaration of Independence and Constitution our "code," the Civil War a "fork," and so on. Very smart.

    For example, Mr. Beckström provided context for the photo at left of Union Intelligence Service chief Allan Pinkerton, President Lincoln, and Major General John A. McClernand during Antietam (late 1862). Using the photo Mr. Beckström explained the relationship between the intelligence community, the government, and the military. After I answered his question "what made the Civil War unique?" (answer: the telegraph), Mr. Beckström described how Lincoln was the first "wired" President and how electronic warfare against cables first began.

    In addition to talking about the French and Indian War and our own Revolution (e.g., Washington learned guerilla tactics, Benedict Arnold as insider threat), Mr. Beckström spoke about how to characterize our current problem. He said "offense is a lot easier than defense." Unlike Moore's Law, we don't have laws for the physics of networking, or the economics of networks or security, or how to do risk management. Mr. Beckström noted security is a cost (so much for "enablement") and that minimization of total cost C (where C equals cost of security S plus expected cost of a loss L) is the main goal. (I wonder if he's read Managing Cyber Security Resources?) Mr. Beckström said the CISO budget should be based on reducing estimated loss, but it's usually based on a percentage of the CTO or CIO's budget that's unrelated to any problem faced by the security team.

    Most interesting to me, Mr. Beckström explained how he believes investment in protocols (like securing DNS, BGP, SMS/IP, even POTS) could be cheap while yielding large benefits. I will have to watch for developments there.

  • Next I saw Felix Lindner present Developments in Cisco IOS Forensics. It seemed like a lot of the ideas were present in his great talk from last year, but I liked this year's presentation anyway. FX is absolutely the authority on breaking Cisco IOS, he's an excellent speaker, and I learned a lot. FX discussed how attacks on IOS take the form of protocol, functionality/configuration, or binary exploitation attacks. Binary exploitation is of most interest to FX, and takes the form of binary modification of the runtime image, data structure patching, runtime configuration changes, and loading TCL backdoors. (The last is "widely used by people fired from ISPs"!)

    In order to gain some degree of visibility into binary exploitation attacks against IOS, FX recommends enabling core dumps. This does not affect performance (except slowing reboot time). Core dumps can be written to an FTP server, and will result from unsuccessful binary exploitation attempts or any time a router administrator invokes the "write core" command. Because there are over 100,000 IOS images in use today ("only" 15,000 or so are supported by Cisco), there is a high likelihood that a remote intruder will crash the router when trying a binary exploitation attack. Furthermore, it's possible to find the packets which caused the attack in the router's memory dump, since they will be in the queue of the attacked thread. I look forward to trying this and submitting a dump to Recurity Labs CIR.

  • Greg Conti and Erik Dean presented Visual Forensic Analysis and Reverse Engineering of Binary Data. I thought one of their slides (presented at left) was, unintentionally, a powerful partial summary of the skill sets needed for certain levels of analysis of binary data in our field. For example, I am very comfortable in the lowest portion where binary data represents network packets. I am trying to learn more about binary data as memory. I have worked with binary data as files, but there's a lot going on there (as I learned from a talk on Office forensics, noted below).

    Greg and Erik demonstrated two new tools they wrote for visual analysis of binary data. Most surprisingly, they showed actual images rendered from the memory of a Firefox crash dump file. An example appears at right. They displayed the image by plotting every three bytes of memory as an RGB entry. They also noted that one day we could expect to see security analysts sitting with recognition posters of common patterns (e.g., diffuse means encryption or compression). That reminded me of the surface-to-air missile (SAM) emplacement images I studied in intel school.

  • I joined Detecting & Preventing the Xen Hypervisor Subversions by Joanna Rutkowska and Rafal Wojtczuk. Joanna had to remove slides pending publication of a patch from Intel. (See my last post for notes on why the chipset is the new battleground.) She hinted that Intel is considering working with anti-virus vendors to run scans inside the chipset, which would be a bad idea. Joanna also talked about her company's product, HyperGuard, which can sit inside the Phoenix BIOS to perform integrity checking.

    Joanna's research started with Blue Pill as a means to put an OS inside a thin hypervisor (the Blue Pill), but her newest work involves attacking an existing hypervisor (like Xen). This is why her post 0wning Xen in Vegas! says Rafal will discuss how to modify the Xen’s hypervisor memory and consequently how to use this ability to plant hypervisor rootkits inside Xen (everything on the fly, without rebooting Xen). Hypervisor rootkits are very different creatures from virtualization based rootkits (e.g. Bluepill). This will be the first public demonstration of practical VMM 0wning.

    This presentation reminded me that I should have a permanent Xen instance running in my lab to improve familiarity with the technology.

  • Following Joanna's talk I enjoyed Get Rich or Die Trying - Making Money on the Web, the Black Hat Way by Jeremiah Grossman and Arian Evans. They showed many real and amusing cases of monetizing attacks.

  • Bruce Dang discussed Methods for Understanding Targeted Attacks with Office Documents. Everything for Bruce is "pretty easy," like writing custom Office document parsers to examine Office-based malware. Bruce ended his talk early so I moved next door to hear the Sensepost guys. I should have attended earlier -- it sounded like they created some extreme tunnels for getting data in and out of enterprise networks. They concluded by discussing how to load and execute binaries into the address space of SQL Server 2005 via SQL injection, because SQL Server 2005 has an embedded .NET CLR. Wow.


Overall, I think my conclusions from my last Black Hat Briefings still stand. However, I was surprised to see so much more action on the chipset level. I did not hear anything about the other extreme of the digital spectrum, the cloud. Perhaps that will be a topic next year, if the lawyers can be avoided?

Black Hat USA 2008 Wrap-Up: Day 1

Black Hat USA 2008 is over. I started the 6-day event by training almost 140 students during two 2-day editions of TCP/IP Weapons School. Both sessions went well. I'd like to thank Joe Klein and Paul Davis for helping students navigate the class entrance and exit processes, and for keeping the labs running smoothly.

In the year since I posted Black Hat Final Thoughts for last year's event, a lot has happened. (I also reported on Black Hat Federal 2006 here, here, and here, and Black Hat USA 2003. I attended Black Hat USA 2002 but wasn't blogging then.) In this post I will offer thoughts on the presentations I attended.

  • I started Wednesday by attending the keynote by Ian Angell, Professor of Information Systems at the London School of Economics. I want that hour of my life back. Quoting philosophers, looking only at failures and never successes, and pretending your cat can talk doesn't amount to a good speech. This was a low point of the Briefings, although there was enough humor to keep my attention.

  • I saw two talks by Sherri Sparks and Shawn Embleton from Clear Hat Consulting. The first was Deeper Door: Exploiting the NIC Chipset, and I describe the second below. They were probably my favorite talks of the entire conference, because they were clear, concise, and informative. And -- shocker -- I skipped the DNS madness in favor of the second of these talks, because the Internet still appears to be working.

    Deeper Door is a play on DeepDoor, the rootkit Joanna Rutkowska presented at Black Hat Federal 2006. DeepDoor hooks the Windows Network Driver Interface Specification (NDIS), whereas Deeper Door interacts directly with the LAN controller to read and write packets. Deeper Door works with the Intel 8255x 10/100 Mbps Ethernet Controller Family, and was tested with Intel PRO100B and S NICs. Deeper Door loads as a Windows driver and performs memory-mapped writes to the LAN controller to bypass monitoring and enforcement systems (i.e., host-based firewalls and the like) which assume that processes or applications must be responsible for transmitting packets.

    Transmitting traffic is fairly easy, and the demo showed sending hand-crafted UDP traffic past Windows and Zone Alarm firewalls without a problem. Receiving traffic is a little trickier, because receipt of a packet triggers a frame reception (FR) interrupt that will result in a check of the Interrupt Descriptor Table (IDT). One can use traditional techniques like hooking the IDT to get the packet to Deeper Door, or something like an SMM rootkit (described next) or a Blue Pill-like rootkit to avoid hooking the IDT. When Deeper Door receives a packet, it can alter the contents to make the packet appear benign (i.e., not a command-and-control packet from the mother ship), or it can completely erase the packet so that the operating system doesn't see it.

    The speakers noted that disabling the NIC (via Windows interaction) doesn't stop Deeper Door; it can silently re-enable it. Even uninstalling the NIC (again via Windows) leaves the NIC in a state where it can send, but not receive, traffic.

    This presentation reinforced the lesson that relying on an endpoint to defend itself is a bad idea. I've talked about trying to collect traffic on endpoints for incident response purposes, but a rootkit using Deeper Door technology could completely hide suspicious traffic from any host-based sniffer! In 2005 I wrote Rootkits Make NSM More Relevant Than Ever, and Deeper Door proves it.

  • I stayed for their next talk, A New Breed of Rootkit: The System Management Mode (SMM) Rootkit. SMM was publicly brought to the attention of security researchers in 2006 by Loïc Duflot, followed by Phrack's System Management Mode Hacks by BSDaemon, coideloko, and D0nand0n. Sparks and Embleton wrote a chipset-level keylogger and data exfiltrator that resides in SMM and sends 16 bytes at a time. Combined with their NIC-centric rootkit, it's impressive work. The SMM rootkit demonstrates that chipset-level data structures, like the I/O Redirection Table, are the newest targets of subversion.

    Sparks and Embleton claimed their SMM rootkit wouldn't be effective on newer systems (say 2006 and on) because a bit in the SMRAM control register called D_LCK is set, but Joanna Rutkowska said an Intel bug (to be fixed soon) makes clearing D_LCK on newer systems possible without a system reset.

  • Staying with the chipset-as-battleground theme, I next attended Insane Detection of Insane Rootkits, or a Chipset Based Approach to Detect Virtualization Malware, a.k.a. DeepWatch, by Yuriy Bulygin. Bulygin recommended using firmware on an Intel microcontroller to detect and remove hypervisor rootkits. Specifically, he puts his code in the microcontroller used for Intel Active Management Technology. As noted on that page, Intel® Active Management Technology requires the platform to have an Intel® AMT-enabled chipset [like vPro], network hardware [like the Intel® 82573E Gigabit Ethernet Controller] and software [on a server]. The platform must also be connected to a power source and an active LAN port. If you know nothing about AMT, I suggest checking it out; the security implications are staggering. Bulygin claimed to be able to detect SMM rootkits, which was an excellent defensive follow-on to the talks I had just seen.

    These three talks really emphasized a trend: the chipset is a new battleground. I would like to see a lot more information posted at the Intel Product Security Center. An indicator for me of their willingness to step up to the plate in this new era would be to see an advisory for the bug Joanna mentioned posted at their security site. Somehow I think more than 5 vulnerabilities have been fixed in their code since January 2007. Furthermore, you can include the BIOS as a related battleground. I am worried when I see vendors continue to add functionality into these low-level components. I plan to keep an eye on the Intel Software Network Blog for Manageability for further news.

  • Next I attended Xploiting Google Gadgets: Gmalware and Beyond by Tom Stracener and Robert (Rsnake) Hansen. They showed that Google continues to be evil because it thinks being able to track user actions via redirection is more important than fixing security vulnerabilities. I sat with Mike Rash and Keith Jones. Speaking after the talk, I learned Keith was as confused as I was. We concluded that if you're not pen testing Web apps for a living, you probably weren't able to follow all of the vulnerabilities in the presentation, since they moved too quickly from issue to POC-as-movie to the next issue.

  • I finished the day with MetaPost-Exploitation by Val Smith and Colin Ames. This briefing was disappointing. I think the material was better suited to a training session where the students could have tried the techniques, rather than just watching them.


That's day 1. Please see my next post for day 2.

Monday, 04 August 2008

Traffic Talk 1 Posted

I've started writing a new series for TechTarget SearchNetworkingChannel.com called Traffic Talk. The first edition is called DNS troubleshooting and analysis. I wrote it in early June, way before Dan Kaminsky's DNS revelations, so it has nothing to do with that affair. From the start of the article:

Welcome to the first edition of Traffic Talk, a regular SearchNetworkingChannel.com series for junior to intermediate networkers who troubleshoot business networks. In these articles we examine a variety of open source tools that expose and analyze different types of network traffic. In this edition we explore the Domain Name System (DNS), the mechanism that translates IP addresses to hostnames and back, plus a slew of other functions.
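
For a taste of the kind of open source tooling the series relies on, here are a few dig queries of the sort that DNS troubleshooting involves. The hostname and address below are placeholders (the reserved example.com domain and a documentation IP), not examples taken from the article:

# forward lookup: ask for the A record of a hostname
dig www.example.com A
# reverse lookup: map an IP address back to a hostname
dig -x 192.0.2.1
# follow the delegation chain from the root servers down
dig +trace www.example.com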