Friday, July 27, 2007

Goodbye AIA

A friend from my AFCERT days left a comment indicating that the 33 IOS split into two different squadrons, the 33 NWS (the old AFCERT) and the 91 NWS. This prompted me to look at the organizational structure of my old Air Force units.

I realized that, as of last month, what used to be the Air Intelligence Agency is now the Air Force Intelligence, Surveillance and Reconnaissance Agency, according to this story. AFISR now works as a field operating agency for AF/A2, the Deputy Chief of Staff for Intelligence, Surveillance and Reconnaissance, Lt. Gen. David A. Deptula. AIA had been part of 8th Air Force, but that experiment has been reversed.

It looks like AFISR has lost information operations duties since it's now an "ISR" agency. According to Air Force ISR Agency, the new AFISR Agency commander says:

"Air Intelligence Agency was traditionally focused on a particular intelligence discipline, signals intelligence," said General Koziol. "Now we are expanding our capabilities into geo-spatial-intelligence, imagery, human intelligence, and measurement and signature intelligence disciplines. As an integral member of our nation's combat forces, we are focused on integrating the information derived by those capabilities and delivering critical information to combatant commanders and national level decision makers."

That's news to me. I think AIA was doing those missions previously, but Deptula wants a single agency responsible for all of it. He gets that with AFISR. Information operations are now part of Air Force Cyber Command, which apparently will become active this fall.

I have mixed feelings about AIA's fate. Lt. Gen. Deptula is a three-star, which outranks previous top intel generals (who were two-stars). Putting a three-star with ISR responsibilities at HQ AF will probably give ISR greater attention. However, the word "intel" appears three times in Deptula's bio -- all in relation to his existing job. He's a career F-15 driver, so once again we have a pilot as the Air Force's "top intel guy." This is sad. Are there no good Air Force intel generals available? Hopefully the new AFISR commander, Maj. Gen. John C. Koziol, will be able to step up when Deptula moves on.

The only saving grace in this situation is that the king of all intel officers continues to be Gen. Michael Hayden, Director of the Central Intelligence Agency.

Basic Cisco Switches Auditing Guidelines

1. Use VLANs to segment the network into separate broadcast domains and limit broadcast traffic. Remember that VLAN 1 is the default VLAN used for administrative purposes; avoid assigning ports to VLAN 1 so that an attacker who plugs into an unused port cannot communicate with the rest of the network.

2. Avoid autotrunking mode. Dynamic Trunking Protocol negotiation enables VLAN-hopping attacks, in which an attacker can communicate across VLANs. Assign trunk interfaces a native VLAN other than VLAN 1.

3. Make sure Spanning Tree Protocol is protected against attack. Enable portfast, BPDU filter, BPDU guard, and root guard on the appropriate switch ports.

4. Disable all unused ports on the switch to prevent hackers from plugging into unused ports to communicate with the rest of the network.

5. Turn off VLAN Trunking Protocol if it is not in use. If VTP is required, use it with passwords enabled.

6. Review the network or switch configuration to confirm thresholds limiting multicast and broadcast traffic on switch ports. A sample configuration illustrating several of these guidelines appears below.
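
To make this concrete, here is a minimal Catalyst IOS sketch covering several of the guidelines above. The interface numbers, VLAN ID, and VTP password are illustrative only, so adapt them to your environment.

! Access port: no trunk negotiation, STP edge protections, broadcast limits
interface FastEthernet0/1
 switchport mode access
 switchport nonegotiate
 spanning-tree portfast
 spanning-tree bpduguard enable
 storm-control broadcast level 5.00
!
! Trunk port: native VLAN other than VLAN 1, root guard toward other switches
interface FastEthernet0/24
 switchport mode trunk
 switchport trunk native vlan 999
 spanning-tree guard root
!
! Unused port: shut it down
interface FastEthernet0/10
 shutdown
!
! Turn VTP off, or protect it with a password if it must stay on
vtp mode transparent
! vtp password ExampleVtpPw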

The Hacka Man

Thursday, July 26, 2007

Bejtlich Interviewed by TSSCI Blog

Marcin Wielgoszewski interviewed me for his TSSCI Blog. He asked me about my start in security, how to be a good analyst, and concerns for the future. Thanks to Marcin for asking solid questions.

Tuesday, July 24, 2007

Remote Command Exec (FireFox 2.0.0.5)

These days I am reading about web application hacking and trying out several different things. I happened to stumble across xs-sniper's page and read his post on owning most major browsers. It appears there is a problem with Cross Application Browser Scripting, where a flaw in URI handling behavior allows remote command execution. Be sure to check out his post below:

http://xs-sniper.com/blog/remote-command-exec-firefox-2005/

The Hacka Man

Enterprise Visibility Architect

Last month in Security Application Instrumentation I wrote:

Right now you're [developers] being taught (hopefully) "secure coding." I would like to see the next innovation be security application instrumentation, where you devise your application to report not only performance and fault logging, but also security and compliance logging.

This is a forward-looking plea. In the meantime, we are stuck with numerous platforms, operating systems, applications, and data (POAD) for which we have zero visibility.

I suggest that enterprises consider hiring or assigning a new role -- Enterprise Visibility Architect. The role of the EVA is to identify visibility deficiencies in existing and future POAD and design solutions to instrument these resources.

What does this mean in real life? Here is an example. Let's say a company operates an FTP proxy for business use. Consider various stakeholders involved with that server and the sorts of visibility they might want:

  • Data center managers want physical accountability for the server. They also want to know how much power it consumes and how much heat it produces.

  • Network administrators want to know how much bandwidth the server uses for OS and application updates. They also want to know how much bandwidth is used by data transfers and backup processes.

  • System administrators want to know if the asset is performing properly.

  • Users and asset owners want to know how much data is transferred (in the event they are billed for this service).

  • Human resources administrators and legal want to know what files are transferred, potentially to identify fraud, waste, and abuse.

  • Auditors want to validate and assess secure configurations.

  • Security analysts want to resist, detect, and respond to incidents.

  • Forensic investigators want to know the state of the asset and the files transferred to investigate incidents.


These are all requirements that should be included in the design of the server before it is deployed. However, no one does all of this, and only a few organizations accomplish a few of these items. The role of the EVA is to ensure all of these requirements are built into the design.

When these requirements are not built into the design (as is the case with 95+% of all infrastructure, I would wager) it's the job of the EVA to work with concerned parties to introduce visibility through instrumentation. For example, how could the investigators' concern be met?

  • If the proxy supports logging, enable it. (This is usually not done because of "performance concerns." If the resource were appropriately sized prior to deployment to account for visibility, this would not be a problem.)

  • Add a passive device to monitor traffic to and from the proxy server. Application-aware monitoring tools like Bro's FTP Analyzer can record FTP control channel activities. (The resistance to this technique involves not wanting to configure a SPAN port, or lack of a SPAN port. I prefer taps, but inserting the tap requires scheduling downtime -- another sticking point.)

  • If the investigator only wants IP addresses for the endpoints, then NetFlow could be enabled on a router through which traffic to the FTP server passes. Note that NetFlow cannot be configured to only provide flow data for a specific port (like 21 TCP), so filtering would have to happen on the NetFlow collector. Using NetFlow effectively requires building a NetFlow collector. (Other concerns include loading the router, which could have been accounted for when the design for this business system was created.) A configuration sketch of this option and the next appears after this list.

  • If the investigator only wants IP addresses for the endpoints, then a logging ACL could be enabled on a router through which traffic to the FTP server passes. Hits on this ACL could be exported via Syslog. Using Syslog requires building a Syslog server. (This option will also add load to the router.)

  • Depending on the architecture, intervening firewalls could also be configured to log connection details in the same manner that NetFlow or router ACLs do.
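
As a sketch of the NetFlow and logging ACL options above, consider a Cisco router in the path to the FTP proxy. The interface names and IP addresses are illustrative only.

! Option 1: classic NetFlow on the interface facing the FTP proxy,
! exporting version 5 records to a collector at 192.0.2.10
interface FastEthernet0/0
 ip route-cache flow
ip flow-export version 5
ip flow-export destination 192.0.2.10 2055
!
! Option 2: a logging ACL recording FTP control channel connections,
! with hits exported via Syslog to 192.0.2.50
access-list 101 permit tcp any host 192.0.2.21 eq ftp log
access-list 101 permit ip any any
interface FastEthernet0/0
 ip access-group 101 in
logging host 192.0.2.50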


I believe that logging integrated into the application (i.e., the FTP process) is the best option when one is designing a new resource. When visibility is introduced after the asset is deployed, instrumenting it becomes more difficult.

If you hadn't guessed, I am becoming the de facto EVA in my job as director of incident response because I need data to detect and respond to incidents. However, all of the stakeholders are natural allies because they want to know more about various assets.

Thanks to the I want to believe generator for the image above.

Recent CVS Changes

This is a note for myself, so if you're looking for uber-security insights today, please skip this post. If you do stick with me and you can suggest ways to do this better, please share your comments.

Earlier this year I posted TaoSecurity CVS at Sourceforge and Committing Changes to CVS. Since posting my Sguil on FreeBSD scripts at TaoSecurity Sourceforge I needed to make a few changes. The system hosting my original files suffered a lightning strike, so I decided to retrieve the files from CVS and make changes.

Checking out the scripts can be done anonymously, without a password. (Note: long commands below are wrapped with trailing backslashes.)

$ cvs -d:pserver:anonymous@taosecurity.cvs.sourceforge.net:/cvsroot/taosecurity \
  login
Logging in to :pserver:anonymous@taosecurity.cvs.sourceforge.net:2401/cvsroot/taosecurity
CVS password:
$ cvs -d:pserver:anonymous@taosecurity.cvs.sourceforge.net:/cvsroot/taosecurity \
  co -P taosecurity_sguil_scripts
cvs checkout: Updating taosecurity_sguil_scripts
U taosecurity_sguil_scripts/README
...truncated...

When I checked out these files they had headers like this:

# $Id: README,v 1.2 2007/03/22 18:40:25 taosecurity Exp $ #

These headers are added by lines like this from the original files:

# $Id$ #

To turn these newly checked-out files into files with the proper headers, I replaced those expanded lines in each file with the bare tag # $Id$ #.

I added several files to the scripts, but for purposes of documentation I'll show how I added one -- sguild_start.sh. I had to connect via SSH to do this.

$ export CVS_RSH=ssh
$ cvs -d:ext:user@taosecurity.cvs.sf.net:/cvsroot/taosecurity \
  add sguild_start.sh
user@taosecurity.cvs.sf.net's password:
cvs add: scheduling file `sguild_start.sh' for addition
cvs add: use 'cvs commit' to add this file permanently

$ cvs -d:ext:user@taosecurity.cvs.sf.net:/cvsroot/taosecurity \
  commit sguild_start.sh
user@taosecurity.cvs.sf.net's password:
RCS file: /cvsroot/taosecurity/taosecurity_sguil_scripts/sguild_start.sh,v
done
Checking in sguild_start.sh;
/cvsroot/taosecurity/taosecurity_sguil_scripts/sguild_start.sh,v <-- sguild_start.sh
initial revision: 1.1
done

I think I could have set a CVSROOT variable instead of specifying everything on the command line, perhaps like:

$ export CVSROOT=:ext:user@taosecurity.cvs.sf.net:/cvsroot/taosecurity

With that set, I could omit the -d switch entirely.

When I add or commit files I could also pass a -m "Comment" option to describe the change.
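
Putting those two ideas together, a future session might look like this (hypothetical; the commit message is only an example):

$ export CVSROOT=:ext:user@taosecurity.cvs.sf.net:/cvsroot/taosecurity
$ export CVS_RSH=ssh
$ cvs add sguild_start.sh
$ cvs commit -m "Add sguild startup script" sguild_start.sh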

Currently my scripts assume installation using FreeBSD 6.2, using the packages in the packages-6.2-release directory. The only exception is the package for tcltls because it was not shipped with 6.2.

Friday, July 20, 2007

Review of XSS Attacks Posted

Very shortly Amazon.com should post my four star review of Cross Site Scripting Attacks: XSS Exploits and Defense. Observe that no one (Amazon.com, Syngress) displays the actual cover for this book on their Web sites. From the review:

XSS Attacks earns 4 stars for being the first book devoted to Cross Site Scripting and for rounding up multiple experts on the topic. The authors are synonymous with attacking Web applications and regularly share their vast expertise via their blogs and tools. However, XSS Attacks suffers the same problems found whenever Syngress rushes a book to print -- nonexistent editing and uneven content. I found XSS Attacks to be highly enlightening, but I expect a few other books on the topic arriving later this year could be better.

Thanks to Syngress I have review copies of Snort Intrusion Detection and Prevention Toolkit and Stealing the Network: How to Own a Shadow, which I plan to read soon. More late nights in my future...

Glutton for ROI Punishment

My previous posts No ROI? No Problem and Security ROI Revisited have been smash hits. The emphasis here is on "smash." At the risk of being branded a glutton for ROI punishment, I present one final scenario to convey my thoughts on this topic. I believe there may be some room for common ground. I am only concerned with the Truth as well as we humans can perceive it. With that, once more unto the breach.

It's 1992. Happy Corp. is a collaborative advertisement writing company. A team of writers develops advertisement scripts for TV. Writers exchange ideas and such via hard copy before finalizing their product. Using these methods the company creates an average of 100 advertisement scripts per month, selling them for $1,000 each, or a total of $100,000 per month.

Happy's IT group proposes Project A. Project A will cost $10,000 to deploy and $1,000 per month to sustain. Project A will provide Happy with email accounts for all writers. As a result of implementing Project A, Happy now creates an average of 120 scripts per month. The extra income from these scripts results in recouping the deployment cost of Project A rapidly, and the additional 20 scripts per month is almost all profit (minus the new $1,000 per month charge for email).
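
To make the arithmetic explicit: 20 extra scripts * $1,000 per script = $20,000 per month in new revenue, against $1,000 per month in sustainment, so the $10,000 deployment cost is recovered within the first month.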

Now it's 1993, and Happy Corp. faces a menace -- spam. Reviewing and deleting spam emails lowers Happy's productivity by wasting writer time. Instead of creating 120 scripts per month, Happy's writers can only produce 110 scripts per month.

Happy's security group proposes Project B. Project B will cost $10,000 to deploy and $1,000 per month to sustain. (Project B does not replace Project A.) Project B will filter Happy's email to eliminate spam. As a result of implementing Project B, Happy returns to creating an average of 120 scripts per month. Profits have increased but they do not return to the level enjoyed by the pre-spam days, due to the sustainment cost of Project B.

I would say Project A provides a true return on investment. I would say Project B avoids loss, specifically the productivity lost by wasting time deleting spam.

I could see how others could make an argument that Project B is a productivity booster, since it does return productivity to the levels seen in the pre-spam days. That is the common ground I hope to achieve with this explanation. I do not consider that a true productivity gain because the productivity is created by the email system deployed in Project A, but I can accept that others see this differently.

I think this example addresses the single biggest problem I have seen in so-called "security ROI" proposals: the failure to tie the proposed security project to a revenue-generating business venture. In short, security for "security's sake" cannot be justified.

In my scenario I am specifically stating that the company is losing revenue of 10 scripts per month because of security concerns, i.e., spam. By spending money on spam filtering, that loss can be avoided. Assuming the overall cost of Project B is less than or equivalent to the revenue of those lost 10 scripts per month, implementing Project B makes financial sense.

What do you think?

Thursday, July 19, 2007

Managing and Monetizing Victims

I'd like to briefly point you to two must-read articles, if you haven't seen them already. First, the Honeynet Project published Fast-Flux Service Networks. Basically, intruders have introduced availability and load balancing features into their bot networks by quickly changing the IP addresses of redirectors pointing to back end servers (a technique called "single flux"). They may also rapidly change the IP addresses of the authoritative domain name servers (called "double flux") to further complicate identifying and shutting down bot nets. I'd like to hear how many of you predicted this would happen before the technique was reported by the Honeynet Project this month. Of those that say "I knew," did you know about it a year ago, when it was first detected by the Honeynet Project? And if you have known about it or predicted it, what did you or your security team do to detect and/or mitigate the attack?
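
As a hypothetical illustration of single flux (the domain and addresses below are invented), repeated lookups of a fast-flux hostname return different A records with short TTLs as the redirectors rotate:

$ dig +noall +answer www.flux-victim.example a
www.flux-victim.example. 180 IN A 198.51.100.7
$ dig +noall +answer www.flux-victim.example a
www.flux-victim.example. 180 IN A 203.0.113.42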

My point is the vast majority of enterprises have not known about this, and they have no way to know if they've been affected. However, if you've been implementing Network Security Monitoring for any decent period of time, you have a rich data source to mine for indications of this activity. Now that you know what to look for, you can see if you're affected. The power of NSM is keeping track of what's happening on your network so that you can perform investigations once you know where to look.

A news story on fast flux is Attackers Hide in Fast Flux.

Second, Prevx posted a blog entry titled Ransomware... Holding Corporate America Ransom! that outlines another extortion scheme, whereby an intruder encrypts a victim's data and demands $300 to release it. The fact that money is explicitly involved means law enforcement should be able to "follow the money" to find the attacker, but still consider this: what would your organization do if executives and/or users received such notifications? Worse, what if your data was simply deleted, encrypted, or subtly altered, never mind outright stolen? In other words, you aren't extorted -- you're simply assaulted.

While ransomware is not a new phenomenon, many people do not stop to think of the damage that can be done by not maintaining control of one's assets. Some of you will say "oh, we'll restore from backups." What do you do if you have dozens, hundreds, thousands of users affected? My point is we have to treat compromise of the endpoint as a serious matter, not something that has little or no consequence.

A news story on ransomware is Your Money or Your Documents.

On a related note, check out New Proxy Bot Method and Sigs. Basically the Bleeding Threats team has detected malware that uses compromised hosts as a proxy back into the corporate network. David Bianco reminded me that the Metasploit Meterpreter's portfw function provides the same capability. In other words, once a host is compromised via a client-side attack and it reports back to its command server, the command server can use the new victim as a stepping stone to attack any other reachable part of the enterprise.

Knowing how all three of these attacks operate allows us to build attack profiles so we can better resist, detect, and respond to them when they occur.

Update: Check out Passive Monitoring of DNS Anomalies at CAIDA.

Thanks Chr1stian, Google Store flaw?

The other night I was talking to Chr1stian about XSS and Google. We were chatting and the topic got more and more interesting. Chr1stian is really a kind soul and a nice person to talk with. He taught me a lot of things I didn't understand and guided me slowly through each step. Thank you Chr1stian for your patience; I can say that now I understand at least 90% of what you taught me. We also talked about everything from how security doesn't make money, to flaws in Google, to the fact that Google did not correct most of the holes he reported.

I am sure that if I got a chance to test the Google application I would find more flaws, but because of my work schedule I don't really have the time to play around. Anyway, I still want to say thanks to Chr1stian -- don't forget our deal. :)

The Hacka Man

Wednesday, July 18, 2007

NoVA Sec and NoVA BUG

This is a quick note for those of you in the northern Virginia area. I am working on meetings for NoVA Sec and NoVA BUG (BSD Users Group). Please check out the most recent posts at each site for details and consider joining one or both groups. I'd like to grow our informal memberships so we have more potential speakers, especially on the BSD side. I keep posts about Sec and BUG to a minimum here because it's a geographically-based topic. Thank you.

Review Posted Plus NAC

July's been a great month for controversy on this blog, so I thought I would continue that theme by posting word of my Amazon.com review of Endpoint Security. Yes, I've been reading a lot, and it's been keeping me up past midnight for a few weeks. I've been intensely interested in these recent books, so staying up late has been worthwhile.

Unfortunately, as you'll read in my three star review, you can skip Endpoint Security:

I really looked forward to reading Endpoint Security. I am involved in a NAC deployment, and I hoped this book could help. While the text does contain several statements that make sense (despite being blunt and confrontational), the underlying premise will not work. Furthermore, simply identifying and understanding the book's central argument is an exercise in frustration. Although Endpoint Security tends not to suffer any technical flaws, from conceptual and implementation points of view this book is disappointing.

I just finished this review, and it took a long time to write. Please read the review before commenting. I would like to mention one element of my review here, which contains a quote from Microsoft's article Planning for Network Access Quarantine Control:

On p 172 Microsoft says "Network Access Quarantine Control is not a security solution. It is designed to help prevent computers with unsafe configurations from connecting to a private network, not to protect a private network from malicious users who have obtained a valid set of credentials."

I've written about NAC several times for this blog, such as NAC Is Fighting the Last War. However, I find this Microsoft comment to be fairly realistic. Prior to that statement, quoted in Endpoint Security, Microsoft says this:

Because typical remote access connections only validate the credentials of a remote access user, a remote access client that connects to a private network can access network resources even if the configuration of the remote access client does not comply with corporate network policies. You can implement Network Access Quarantine Control to delay normal remote access to a private network until the configuration of the remote access client has been examined and validated by a client-side script.

The emphasis here is on configuration, i.e., compliance with policy, and not integrity of the endpoint. Compliance with policy != integrity or security (i.e., the state of not being compromised). NAC is incapable of validating the integrity of an endpoint. (There's the controversy for this post. Cue responses from NAC vendors.)

My point is that's ok, as long as your expectations are aligned with Microsoft's description of their capabilities. Now, it is possible for a compromised system to report that its configuration meets corporate policies while being under the control of a Romanian. I don't see the logic in trusting the endpoint to report its health, especially via a "client-side script." However, if your primary reason for deploying NAC is to get a better grip on configuration enforcement for endpoints, I think it has some merit -- it depends on the cost of the implementation and your goals for the process.

By the way, Wikipedia's entries for PID controller will help make sense of Endpoint Security's discussion of CLCP.

No Undetectable Breaches

PaulM left an interesting comment on my post NORAD-Inspired Security Metrics:

...what if the enemy has a stealth plane that we cannot detect via radar, satellite, wind-speed variance, or any other deployed means? And what if your intel doesn't tell us that such a vehicle exists? Then we have potentially millions of airspace breaches every year and our outcome metrics are not helping.

I'm not disagreeing with you that outcome metrics are ideally better data than compliance metrics. However, outcome metrics are difficult to identify and collect data on, and it can be difficult to discern how accurate your metrics actually are.

At least with compliance metrics, we can determine how good we are at doing what it is we say that we do. It has little relevance to operational security, but it's easy and the auditors seem to like it.


For the case of a single breach, or even several breaches, it may be possible for them to happen and be completely undetectable. However, I categorically reject the notion that it is possible to suffer sustained, completely undetectable breaches and remain unaware of the damage. If you are not suffering any damage due to these breaches, then why are you even trying to deter, detect, and respond to them in the first place?

Let me put this in perspective by considering labels attached to classified information as designated by Executive Order 12356:

(a) National security information (hereinafter "classified information") shall be classified at one of the following three levels:

  1. "Top Secret" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause exceptionally grave damage to the national security.

  2. "Secret" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause serious damage to the national security.

  3. "Confidential" shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause damage to the national security.


We want to protect the confidentiality of classified information to avoid the losses described above. What happens if we suffered sustained breaches (thefts) of Top Secret data? Are we not going to detect that our national security concerns are being hammered, since we are suffering "exceptionally grave damage"?

This is one way spies are unearthed. If your missions are constantly failing because the enemy seems to know your plans, then you're suffering a breach you haven't detected.

Finally, if you are suffering breaches and your input-based metrics aren't detecting them either, what good are they? Talk about a real waste of money. "It's easy and auditors seem to like it?" Good grief.

Tuesday, July 17, 2007

NORAD-Inspired Security Metrics

When I was a second degree cadet at USAFA (so long ago that, of my entire class, only myself and three friends had 486 PCs with Ethernet NICs) I visited NORAD. I remember thinking the War Games set was cooler, but I didn't give much thought to the security aspects of their mission.

Today I remembered NORAD and considered their mission with respect to my post last year titled Control-Compliant vs Field-Assessed Security. In case you can't tell from the pithy title, the central idea was that it's more effective to measure security by assessing outcomes instead of inputs. For example, who cares if 100% of your systems have Windows XP SP2 if they are all 0wned by a custom exploit written just for your company? Your security has failed. Inputs are important, but my experience with various organizations is that they tend to be the primary means of "measuring" security, regardless of how well they actually preserve the CIA triad.

Let's put this in terms of NORAD, whose front page states:

The North American Aerospace Defense Command (NORAD) is a bi-national United States and Canadian organization charged with the missions of aerospace warning and aerospace control for North America. Aerospace warning includes the monitoring of man-made objects in space, and the detection, validation, and warning of attack against North America whether by aircraft, missiles, or space vehicles, through mutual support arrangements with other commands. Aerospace control includes ensuring air sovereignty and air defense of the airspace of Canada and the United States...

To accomplish the aerospace warning mission, the commander of NORAD provides an integrated tactical warning and attack assessment to the governments of Canada and the United States. To accomplish the aerospace control mission, NORAD uses a network of satellites, ground-based radar, airborne radar and fighters to detect, intercept and, if necessary, engage any air-breathing threat to North America.


What are some control-compliant or input metrics for NORAD?

  • Number of planes at the ready for intercepting rogue aircraft

  • Average pilot rating (i.e., some sort of assessment of pilot skill)

  • Radar uptime

  • Radar coverage (e.g., percentage of North American territory monitored)


These are all interesting metrics. You might see some parallels to metrics you track, like the percentage of hosts with anti-virus.

Now consider: do any of those metrics tell you if NORAD is accomplishing its mission? In other words, what is the outcome of all those inputs? What is the score of this game?

Here are some field-assessed or outcome-based metrics.

  • Number of rogue aircraft penetrating North American territory (indicates a failure to deter activity)

  • Number of aircraft not detected by NORAD but discovered via other means to have penetrated North American territory (perhaps via intel sources; indicates a failure to detect activity)

  • Number of aircraft not repelled by interceptors (hopefully this would never happen!)

  • Time from first indication of rogue aircraft to launching interceptors (indicates effectiveness of the pilot-to-plane-to-air process)


These metrics address the critical concern: accomplishing the mission.

Keep these in mind when you are devising metrics for your digital security program.

Monday, July 16, 2007

Another Review, Another Pre-Review

Amazon.com just posted my five star review of Network Warrior:

Network Warrior is the best network administration book I've ever read.

I spend most of my reading time on security books, but because I lean towards network security I like reading complementary sources on protocols and infrastructure.

Gary Donahue has written a wonderful book that I highly recommend for anyone who administers, supports, or interacts with networks. Network Warrior may be the best book I will read in 2007.

Yeah, I liked it that much. I devoured this book, staying up until 1 am or more several nights in a row.

I'm looking forward to reading Mark Kadrich's Endpoint Security. I think this book will directly affect how I approach some projects at work. I really hope it can help me better understand how to deal with endpoint security in 2007. It's taken me a while to get this book. For some reason it was published in "March 2007" but only available recently.

I'd like to briefly mention a new book that's great, but which I won't read and review: Exploiting Online Games: Cheating Massively Distributed Systems by Greg Hoglund and Gary McGraw. I reviewed drafts of this book and I think the underlying message behind the code is extremely important. To understand why, please read this post by Brian Chess. He makes a much better case than I could. Because I am so time-crunched, and I really do not care about the details of exploiting WoW, I am not going to review Exploiting Online Games. I will have a couple copies to share at Black Hat for students or teaching assistants who make my life easier in class!

The Web Application Hackers Handbook: Discovering and Exploiting Security Flaws

Sorry for the lack of updates. Recently I have been reading a lot of books about web hacking and RFID and neglected blogging. Due to the nature of my work, I have to report on what I do every day. However, just yesterday I had a small chat with the author of the famous Burp proxy and realised that he published a book called "The Web Application Hackers Handbook: Discovering and Exploiting Security Flaws". This is what he said about it: "Our book aims to be the most comprehensive and deep guide to hacking web applications available. It covers numerous advanced topics like blind SQL/other injection, obscure logic flaws, attacking multi-stage authentication, new attacks against web users, ViewState tampering, decompilation of thick client components, source code review, use of bespoke automation, and many more." As usual, I buy books to read, and this one is not to be missed. Given his experience developing tools and speaking at Black Hat, I am willing to spend that kind of money on his book. Let me know what you guys think.



The Hacka Man

Sunday, July 15, 2007

Security ROI Revisited

One of you responded to my No ROI? No Problem post with this question:

Just read your ROI blog, which I found very interesting. ROI is something I've always tried to put my finger on, and you present an interesting approach. Question: Is it not possible to 'make' money with security, or does it still come down to savings? Example:

- A hospital implements a security system that allows doctors to access patient data from anywhere. Now, instead of doing 10 patients a day they can do (and charge) 13 patients a day.

I'm not trying to sharp shoot you in anyway, I'm just trying to better understand the economics.


This is an excellent question. This is exactly the same concept as I stated in my August 2006 post Real Technology ROI. In this case, doctors are more productive at accessing patient data by virtue of a remote access technology. This is like installing radios for faster dispatch in taxis. In both cases security is not causing a productivity gain but security can be reasonably expected as a property of a properly designed technology. In other words, it's the remote access technology that provides a productivity gain, and doctors should expect that remote access to be "secure." In a taxi, the radio technology provides a productivity gain, and drivers should expect that system to be "secure."

I'm sure that's not enough to convince some of you out there. My point is you must identify the activity that increases productivity -- and security will not be it. Don't believe me? Imagine the remote access technology is a marvel of security. It has strong encryption, authorization, authentication, accountability, endpoint control, whatever you could possibly imagine to preserve the CIA triad. Now consider what happens if, for some reason, doctors are less productive using this system. How could that happen? The system is secure! Maybe the doctors all decide to spend tons more time looking at patient records so their "throughput" declines. Who knows -- the point is that security had nothing to do with this result; it's the business activity that increases (or in this example, decreases) that determines ROI.

What does this mean for security projects? They still don't have ROI. However, and this is a source of trouble and opportunities, security projects can be components of productivity enhancing projects that do increase ROI. This is why the Chief Technology Officer (CTO) can actually devise ROI for his/her projects. As a security person, you would probably have more success in budget meetings if you can tie your initiatives to ROI-producing CTO projects.

Wait a minute, some of you are saying. How about this example: if a consumer can choose between two products (one that is "secure" and one that is not), won't choosing the "secure" model mean that security has a ROI, because the company selling the secure version might beat the competition? In this case, remember that the consumer is not buying security; the consumer is buying a product that performs some desired function, and security is an "enabler" (to use a popular term). If the two products are functionally equivalent and the same price, buying the "secure" version is a no-brainer because, even if the risk is exceptionally small, "protecting" against that risk is cost free. If the "secure" version is more expensive, now the consumer has to remember his/her CISSP stuff, like Annualized Rate of Occurrence (ARO) and Single Loss Expectancy (SLE) to devise an Annual Loss Expectancy (ALE), where

ARO * SLE = ALE

You then compare your ALE to the cost differential and decide if it's worth paying the extra amount for the "secure" product.
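
As a hypothetical illustration with invented numbers: if the cheaper, less "secure" product exposes you to a $10,000 loss (SLE) expected once every ten years (ARO = 0.1), then

0.1 * $10,000 = $1,000 ALE

so paying up to $1,000 per year extra for the "secure" version is defensible; paying more than that is not.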

For those of you who still resist me, it's as simple as this: security is almost always concerned with stopping bad events. When you stop a bad event, you avoid a loss. Loss avoidance means savings, but no business can stay in business purely by saving money. If you don't understand that you will never be able to understand anything else about this subject. You should also not run a business.

The reason why you should pursue projects that save money is that those projects free resources to be diverted to projects with real ROI. Those of you who have studied some economics may see I am getting close to Frédéric Bastiat's Broken Window fallacy, briefly described by Russell Roberts thus:

Bastiat used the example of a broken window. Repairing the window stimulates the glazier’s pocketbook. But unseen is the loss of whatever would have been done with the money instead of replacing the window. Perhaps the one who lost the window would have bought a pair of shoes. Or invested it in a new business. Or merely enjoyed the peace of mind that comes from having cash on hand.

Spending money on security breaches is repairing a broken window. Spending money to prevent security breaches is like hiring a guard to try to prevent a broken window. In either case, it would have been more productive to be able to invest either amount of money, and a wise investment would have had a positive ROI. This is why we do not spend time breaking and repairing windows for a living in rich economies.

However, like all my posts on this subject, I am not trying to argue against security. I am a security person, obviously. Rather, I am arguing against those who warp security to fit their own agenda or the distorted worldview of their management.

For an alternative way to talk to management about security, I recommend returning to my post Risk-Based Security is the Emperor's New Clothes where I cite Donn Parker.

Saturday, July 14, 2007

No ROI? No Problem

I continue to be surprised by the confusion surrounding the term Return on Investment (ROI). The Wikipedia entry for Rate of Return treats ROI as a synonym, so it's a good place to go if you want to understand ROI as anyone who's taken introductory corporate finance understands it.

In its simplest form, ROI is a mechanism used to choose projects. For example, assume you have $1000 in assets to allocate to one of three projects, all of which have the same time period and risk.

  1. Invest $1000. Project yields $900 (-10% ROI)

  2. Invest $1000. Project yields $1000 (0% ROI)

  3. Invest $1000. Project yields $1100 (10% ROI)


Clearly, the business should pursue project 3.
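
The arithmetic behind those percentages is just the basic rate of return calculation:

ROI = (yield - investment) / investment

For project 3, ($1,100 - $1,000) / $1,000 = 0.10, or 10%.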

Businesspeople make decisions using this sort of mindset. I am no stranger to this world. Consider this example from my consulting past, where I have to choose which engagement to accept for the next week.

  1. Spend $1000 on travel, meals, and other expenses. Project pays $900 (-10% ROI)

  2. Spend $1000 on travel, meals, and other expenses. Project pays $1000 (0% ROI)

  3. Spend $1000 on travel, meals, and other expenses. Project pays $1100 (10% ROI)


Obviously this is the same example as before, but using a real-world scenario.

The problem the "return on security investment" (ROSI) crowd has is they equate savings with return. The key principle to understand is that wealth preservation (saving) is not the same as wealth creation (return).

Assume I am required to obtain a license to perform consulting. If I buy the license before 1 January it costs $500. If I don't meet that deadline the license costs $1000. Therefore, if I buy the license before 1 January, I have avoided a $500 loss. I have not earned $500 as a result of this "project." I am not $500 richer. I essentially bought the license "on sale" compared to the post-1 January price.

Does this mean buying the license before 1 January is a dumb idea because I am not any richer? Of course not! It's a smart idea to avoid losses when the cost of avoiding that loss is equal to or less than the value of the asset being protected.

For example, what if I had to pay $600 to get a plane ticket from a far-away location to appear in person in my county to buy the license before 1 January? In that case, I should just pay the $1000 license fee later. For a $500 plane ticket, the outcome doesn't matter either way. For a $400 plane ticket, I should fly and appear in person. Again, in none of these situations am I actually richer. No wealth is being created, only preserved. There is no ROI, only potential savings.
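
Spelling out that arithmetic: flying costs the ticket plus the $500 early license, while staying home costs the $1,000 late license.

$600 + $500 = $1,100 > $1,000 -- stay home and pay late
$500 + $500 = $1,000 = $1,000 -- indifferent
$400 + $500 = $900 < $1,000 -- fly and buy early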

What if I chose to avoid paying for a license altogether, hoping no one catches me? I've saved even more money -- $500 compared to the pre-1 January price, and $1000 compared to the post-1 January price. This is where the situation becomes more interesting, and this is where subjectivity usually enters the picture concerning expected outcomes.

Let's get back to ROI. The major problem the ROSI crowd has is they are trying to speak the language of their managers who select projects based on ROI. There is no problem with selecting projects based on ROI, if the project is a wealth creation project and not a wealth preservation project.

Security managers should be unafraid to avoid using the term ROI, and instead say "My project will cost $1,000 but save the company $10,000." Saving money / wealth preservation / loss avoidance is good.

Another problem most security managers will encounter is their inability to definitively say that their project will indeed save a certain amount of money. This is not the case for licensing deals, e.g., "Switching from Vendor X's SSL VPN to Vendor Y's SSL VPN will save $10,000," because the outcome is certain, breach of contract notwithstanding. Certainty or even approximate probability is a huge hurdle for many security projects because of several factors:

  1. Asset value is often undetermined; in some cases, assets themselves are not even inventoried

  2. Vulnerabilities in assets are unknown, because new flaws are discovered every day

  3. The threat cannot be properly assessed, because threats are unpredictable and creative


As a result, risk assessment is largely guesswork. Guesswork means the savings can be just about anything the security manager chooses to report.

If you look at my older posts on return on security investment you'll see some more advice on how to make your case for security spending without using the term "ROI".

It should be clear by now that ROSI or security ROI is nothing more than warping a defined business term to get attention during budget meetings. I saw the exact same problem in the Air Force. At one point those who flew combat missions were called "operators." Once Information Operations came into vogue, that community wanted to be called "operators" too. At one point a directive came down that intel folks like me were now "operators," just like combat pilots. That lasted about 10 minutes, because suddenly the combat pilots started using the term "trigger-pullers." "Fine," they thought. "Call yourselves operators. We pull triggers." Back to square one.

The bottom line is that security saves money; it does not create money.

Friday, July 13, 2007

Bank Robber Demonstrates Threat Models

This evening I watched part of a show called American Greed that discussed the Wheaton Bandit, an armed bank robber who last struck in December 2006 and was never apprehended.

Several aspects of the story struck me. First, this criminal struck 16 times in less than five years, only once being repelled, when he was detected en route to a bank and locked out by vigilant tellers. Does a criminal who continues to strike without being identified and apprehended bear resemblance to cyber criminals? Second, the banks did not respond by posting guards on site. Guards tend to aggravate the problem, and people get hurt, according to the experts cited on the show. Instead, the banks posted greeters right at the front door to say hello to everyone entering the bank. I've noticed this at my own local branch within the last year, but thought it was an attempt to duplicate Wal-Mart; apparently not. Because the robber also disguised himself with a balaclava (pictured at right), the banks banned customers from wearing hoods, sunglasses, and other clothing that obscures the face inside the bank.

Third, improved monitoring is helping police profile the criminal. Old bank cameras used tape that was continuously overwritten, resulting in very grainy imagery. Newer monitoring systems are digital and pick up many details of the crime. For example, looking at recent footage the cops noticed the robber "indexing" the gun by keeping his index finger away from the trigger, like we learned in the military or in law enforcement. They also perceived indications he wears light body armor while robbing banks. Finally, one of the more interesting aspects of the show was the reference to a DoJ Bank Robbery (.pdf) document. It contains a chart titled Distinguishing Professional and Amateur Bank Robbers, reproduced as a linked thumbnail at left.

I understand the purpose of the document; it's a way to determine if the robber is an amateur or a professional. This made me consider some recent posts like Threat Model vs Attack Model. A threat model describes the capabilities and intentions of either a professional bank robber or an amateur bank robber. An attack model describes how a robber specifically steals money from a particular bank. Threat models are more generic than attack models, because attack models depend on the nature of the victim.

Watching this show reminded me that security is not a new problem. Who has been doing security the longest? The answer is: physical security operators. If we digital security newbies don't want to keep reinventing the wheel, it might make sense to learn more from the physical side of the house. I think convergence of some kind is coming, at least at some level of the management hierarchy.

If you argue that the two disciplines are too different to be jointly managed, consider the US military. The key warfighting elements are the Unified Combatant Commands, which can be headed by just about any service member. Some commands were usually led by a general from a certain service, like the Air Force for TRANSCOM, but those arrangements are being unravelled. Despite the huge Army occupation in the Middle East, for example, the next CENTCOM leader is a Naval officer, and so is the next Chairman of the Joint Chiefs. Even the new head of SOCOM is Navy. This amazes me. When I first learned about Joint warfare, the joke was "How do you spell Joint? A-R-M-Y." Now it's N-A-V-Y.

For more on this phenomenon, please read Army Brass Losing Influence, which I just found after writing this post.

Perhaps we should look to a joint security structure to combine the physical and digital worlds? That would require joint conferences and similar training opportunities. Some history books with lessons for each side would be helpful too.

Thanks for the Memories Sys Admin Magazine

David Bianco clued me in to the fact that, after 15 years, Sys Admin magazine is shutting down. (I was on the road this week and found the issue in my mail when I returned.) The August 2007 issue, pictured at left, is the last. Appropriately for the digital security community, the issue topic is Information Security. I bought my first issue of Sys Admin in the fall of 1999, at the point where I was finally coming to grips with my work at the AFCERT. I had spent the previous year-plus climbing the steep learning curve associated with becoming a network security analyst and I was ready to learn more about system administration. Looking at the copy in my hands, I see where I underlined (using a straight edge, a practice I continue to this day) content I believed was useful. That issue featured articles like:

  • Maintaining Patch Levels with Open Source BSDs by Michael Lucas

  • Landmining the Cracker's Playing Field by Amy Rich

  • Hardening a Host by Dave D. Zwieback

  • Intrusion Detection Strategies and Design Considerations by Ronald McCarty

  • Practical Packet Sniffing by John Mechalas


No wonder I bought that issue! Michael Lucas, if you're reading this -- I marked the heck out of your article. It's one of the first artifacts I have of my involvement with FreeBSD.

After subscribing to the magazine for several years, I managed to get my first article into the April 2004 issue -- Integrating the Network Security Monitoring Model. This introduced NSM to a wide audience prior to the publication of my first book. That was followed in February 2005 with More Tools for Network Security Monitoring. I covered Dhcpdump, PADS, and SANCP. Funny, I'd forgotten all about Dhcpdump, but I might be able to use it for a certain problem. This demonstrates one of the main reasons I write -- I can't remember everything that might be helpful! In February 2006 I was confident enough to try writing about FreeBSD, so I contributed Keeping FreeBSD Up-to-Date. This detailed a variety of means to keep the FreeBSD OS up-to-date, including all the old methods plus new ones some people haven't heard about or are unwilling to try. It's nice to see many of these new methods integrated into the base OS in later versions of FreeBSD 6.x. My last article appeared in the August 2006 issue, called Tuning Snort. I talked about the essential tasks one should perform for any Snort installation.

I hope Sys Admin publishes a final CD with all of the magazine's issues. Sys Admin, thanks for the memories, for the learning, and for the opportunity to contribute.

Ivan Voras FreeBSD 7 Live CD

Ivan Voras posted word on his FreeBSD development blog that he built a FreeBSD 7 LiveCD. This is part of his 2007 Google Summer of Code project, finstall, a graphical FreeBSD installer that's also a live CD.

I think this is great. Booting the installer as a live CD lets a user see if FreeBSD recognizes hardware before committing to an installation. The user also gets to play with FreeBSD without making any changes to the production system. I downloaded the .iso and booted it in VMware to take the screen capture at left. Right now the system doesn't do much, and the keyboard mapping isn't English. (For example, obtaining the - key required me to hit the / key.) I am excited to see this and it would be great to have it ready for FreeBSD 7. I do not think it will happen, but we'll see.

Incidentally, this other SoC project looks neat: Super Tunnel Daemon:

IP can easily be tunneled over a plethora of network protocols at various layers, such as IP, ICMP, UDP, TCP, DNS, HTTP, SSH to name a few. While a direct connection may not always be possible due to a firewall, the IP packets could be encapsulated as payload in other protocols, which would get through. However, each such encapsulation requires the setup of a different program and the user has to manually probe different encapsulations to find out which of them works in a given environment.

The aim of this project is to implement the Super Tunnel Daemon, a tunneling daemon using plugins for different encapsulations and automagically selecting the best encapsulation in each environment.


That sounds like a nice capability for malicious users.

Disk Usage Pages Added to NSM Wiki

I just made several additions to David Bianco's excellent Network Security Monitoring Wiki. You'll see a new Disk Usage category on the lower right side under the Collecting Data header. I added this category because I'd like to see people contribute metrics on the amount of disk space used by various tools in production environments.

I created three more pages:

On each page I provided a sample methodology to collect disk usage information for each data type, and provided two examples of production sensors on small links collecting 14 days of data.
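
As a minimal sketch of that sort of methodology, assuming a Sguil-style sensor storing its data under /nsm (the paths are illustrative, not the wiki's exact commands):

$ du -sh /nsm/dailylogs/*   # full content data, one directory per day
$ du -sh /nsm/sancp/*       # SANCP session records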

Please consider following the examples by adding your own numbers to each page. This will help guide partitioning and storage requirements for those trying to build and maintain NSM sensors. Thank you.

Wednesday, July 11, 2007

Snort Report 7 Posted

My seventh Snort Report on Working with Unified Output has been posted. From the article:

In the last Snort Report we looked at output methods for Snort. These included several ways to write data directly to disk, along with techniques for sending alerts via Syslog and even performing direct database inserts. I recommended not configuring Snort to log directly to a database because Snort is prone to drop packets while performing database inserts. In this edition of the Snort Report I demonstrate how to use unified output, the preferred method for high performance Snort operation.
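
For reference, unified output in that era's snort.conf amounted to a pair of output directives like the following (file names and size limit are illustrative), with a separate tool such as Barnyard reading the binary files and handling slow tasks like database inserts:

# snort.conf -- write alerts and logs in binary unified format
output alert_unified: filename snort.alert, limit 128
output log_unified: filename snort.log, limit 128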

In the next edition I plan to discuss testing Snort.

Are the Questions Sound?


Dan Geer, second of the three wise men, was kind enough to share slides from his Measuring Security USENIX class. If I were not teaching at USENIX I would be in Dan's class.

One of the slides bothered me -- not for what Dan said, but for what was said to him. The slide is reproduced above, and the notes below:

These are precisely the questions that any CFO would want to know and we are not in a good position to answer. The present author was confronted with this list, exactly as it is, by the CISO of a major Wall Street bank with the preface “Are you security people so stupid that you cannot tell me....”

This particular CISO came from management audit and therefore was also saying that were he in any other part of the bank, bond portfolios, derivative pricing, equity trading strategies, etc., he would be able to answer such questions to five digit accuracy. The questions are sound.


I think Dan is giving the CISO too much credit. I think the questions are "semi-sound," and I think the CISO is the stupid one for using such a negative word to describe one of my Three Wise Men.

I'd like to mention several factors which make comparing the world of finance different from the world of digital security. I am recording these because they are more likely the kernel for future developed ideas, but I think they are legitimate points.

  • Business: Digital security is not a line of business. No one practices security to make money. Security is not a productive endeavor; security risk is essentially a tax instantiated by the evil capabilities and intentions of threats. Because security is not a line of business, the performance incentives are not the same as a line of business. Security has no ROI; proper business initiatives do. Only security vendors make money from security.

  • Accumulation: Digital security, as defined by preserving the confidentiality, integrity, and availability of information, cannot be accumulated. One cannot tap a reserve of security and later replenish it. Data that is exposed to the public Internet can seldom be quashed; data that has been corrupted at time of critical use cannot be changed later, thereby changing the past; and data that was not available at a critical time cannot be made available later, thereby changing the past.

    This is not the same with capital (i.e., money). Financial institutions are regulated and operated according to capitalization standards that dictate certain amounts of money to cover potential adverse events. Therefore, money can be stored as a counter to riskier behavior or decreased when pursuing less risky activities. Money at a single point in time is also homogeneous; the first dollar of $100 is as valuable as the hundredth dollar of $100. Information resources are not homogeneous.

  • Assumptions: Assumptions make financial "five digit accuracy" possible. Consider the assumptions made by the Black-Scholes model, courtesy of Wikipedia, used to price options:



    dS_t = μ S_t dt + σ S_t dW_t   (the geometric Brownian motion assumed for the stock price S_t, with drift μ and volatility σ)



    • It is possible to short sell the underlying stock.

    • There are no arbitrage opportunities.

    • Trading in the stock is continuous.


    • There are no transaction costs or taxes.

    • All securities are perfectly divisible (e.g. it is possible to buy 1/100th of a share).

    • It is possible to borrow and lend cash at a constant risk-free interest rate.

    • The stock does not pay a dividend (see below for extensions to handle dividend payments).


    The specifics of this equation are not important for this discussion, although those of you who also studied some economics may find plenty of ways to criticize it. (Remember the authors won the Nobel Prize for this equation and paper!) Consider what you could define if digital security practitioners were able to make such assumptions.

  • Accuracy: I just said "assumptions make five digit accuracy possible." This isn't really true. If financial five digit accuracy were possible, no markets could be sustained. Simply put, markets exist because two sides agree to a trade. One side sees the world one way, and the other sees it differently. (This is why market-makers exist on trading floors. When too many traders see the world the same way, market-makers provide liquidity to permit trading.) If trading houses all figured out how to make money with five digit accuracy, their advantage would not be sustained because no one would want to trade with anyone else -- they'd all want to take the same positions.
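
To make the "Assumptions" point concrete, here is a minimal Python sketch of the model's closed-form price for a European call. It is my illustration, not Dan's, and every input is hypothetical. Under the model's assumptions, quoting a price to five digits is purely mechanical; whether the market agrees is the "Accuracy" point above.

from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    # Closed-form Black-Scholes price for a European call.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: $100 stock, $105 strike, 5% rate, 20% vol, 1 year.
print("%.5f" % black_scholes_call(100.0, 105.0, 0.05, 0.20, 1.0))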


These are a few thoughts. It would be nice to hear commentary from people with both digital security and financial trading experience. Thank you.

Selasa, 10 Juli 2007

Network Security Monitoring Case Study

I received the following email from a friend. He agreed to share his story in exchange for commentary from me and fellow blog readers. I've added comments inline.

I'm now responsible for cleaning up a mid-sized company's perimeter defences... To be honest, at first glance the task is a daunting one: thousands of users, dozens of disparate systems, and gigabits of network traffic, plus, as part of the enterprise support team, I have other projects and duties to deliver on.

I managed to get time with the systems architect and ran through a number of questions stolen from your books and other smart folks on the history and state of affairs of my new domain. My questions were answered or put into context, except for one.

"Why don't you have any monitoring tools in place on the edge and perimeter systems?"

The answer I received wasn't what I expected. He simply stated that no-one had the time or energy to take on this massive task. He was very much in favour of monitoring, but bluntly realistic about the amount of time monitoring and response take up. The business has other priorities for the IT teams.

It's not that the company isn't security aware: they have good policies, skilled staff, hardened systems, and layered defences -- but no monitoring.

I was stuck with the thought "If I don't understand what's happening on the network, how can I know what's right or wrong, what's normal or abnormal?"


This is exactly right. My friend's co-workers are probably practicing management by belief instead of management by fact. If you do not know for sure what is happening inside your company, how can you make informed decisions? Failure will occur when your luck runs out because you are not spending the time and resources necessary to acquire and maintain digital situational awareness.

I gathered all the documentation, policies and firewall rules together and tried to make sense of them. I re-read Extrusion Detection on my commute to help me frame a plan of attack. I got all the basic system data for every system, checked that time and date stamps were all in sync, and cleaned up the network maps.

Then, as silly as this seems, the obvious answer came to me on where to start: at the internal interface of the internal firewall.

I started to review the drop logs of the firewall for that interface, reasoning that if there is a problem on the network, it should be more apparent in those logs than anywhere else.


This is also exactly right. There is no need to start buying or building new tools if you are not leveraging your existing data sources or enabling data sources not yet activated.
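
As an aside on how little tooling this first step requires, below is a minimal Python sketch that tallies drop log entries by source, destination, and port. The log format is entirely hypothetical ("DROP src-ip dst-ip dst-port" per line); any real firewall needs its own parsing.

from collections import Counter
import sys

def summarize(path, top=10):
    # Count drops per (source, destination, port) tuple.
    drops = Counter()
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 4 and fields[0] == "DROP":
                drops[(fields[1], fields[2], fields[3])] += 1
    # The noisiest talkers are often misconfigured systems, not attacks.
    for (src, dst, port), count in drops.most_common(top):
        print("%8d  %s -> %s:%s" % (count, src, dst, port))

if __name__ == "__main__":
    summarize(sys.argv[1])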

Those reviews provided some startling results. Fortunately for us, the problems were only misconfigured systems attempting outbound access. Mind you, they were generating considerable amounts of traffic, some of it over already congested WAN links.

Then I laid out the ingress and egress rule sets to find the motives behind and owners of those rules in the firewalls, and started to peel back the unused or unneeded rule sets while monitoring the drop logs. All with management approval and sign-off, of course!


This is another great move -- a stride towards simplification.
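
If you want to back that kind of cleanup with data, here is one possible sketch. It assumes a hypothetical two-column export of rule names and hit counts (most firewalls can report per-rule counters in some form) and flags never-matched rules as candidates for investigation, not automatic removal.

import sys

def unused_rules(path):
    # Input format assumed: "rule-name hit-count" per line.
    candidates = []
    with open(path) as report:
        for line in report:
            fields = line.split()
            if len(fields) == 2 and fields[1].isdigit() and int(fields[1]) == 0:
                candidates.append(fields[0])
    return candidates

if __name__ == "__main__":
    # Each candidate still needs an owner and management sign-off.
    for rule in unused_rules(sys.argv[1]):
        print(rule)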

I'm now moving on to the next stage: working with other teams to clean up their problem systems. Some are going to be harder than others, as I've found VAX, AIX, Sun, and other systems bouncing off the rules. I haven't seen these types of systems in a long time, and as a simple Windows administrator I have to find the time and the right people to plug these holes.

I have a very long way to go. I've only focused on a very small section of the entire network, and I have no certain path forward on how to apply a reasonable monitoring system - yet :-)

It's a challenge I'm looking forward to and one I'm sure I'll learn from.

The only comment haunting me is not having any real time allotted to doing monitoring work. I'm going to talk with management and a surprisingly security-literate CIO about whether this can be addressed.

I know that the conversations have to be based on preventing and mitigating business risk, but I believe it's going to be a hard sell given the very long list of other projects being pushed on the team as priorities.

If you have any words of advice or useful pointers on convincing them the time is well spent, I'd be grateful.


So this is the major question. How do you convince management or other functional areas that monitoring is important? It sounds to me like my friend has already scored some wins by freeing bandwidth used by misconfigured systems, simplifying firewall rules, and examining individual problematic hosts.

It's important to remember that there is no return on security investment. Security is a cost center that exists to prevent or reduce loss. It is not financially correct to believe you are "earning" a "return" by spending time and money to avoid a loss.

If I need to spend $1,000 to hire a guard to protect my $10,000 taxi, I am not earning a return on my investment -- I am preventing the theft of my taxi. If I instead invest that $1,000 in a ticketing and GPS system that makes me more productive ferrying passengers (perhaps increasing my dollars per hour worked), then I have enjoyed an ROI once my $1,000 expense is covered.

This is not to say that money should not be spent on security. Rather, time and money should be balanced against the perceived risk. The same taxi driver will spend money on insurance, and may indeed need to spend $1000 on a guard if the protection delivered is seen to be necessary. However, the guard does not make the taxi driver more productive. Security is a "business enabler" here but it does not deliver "ROSI."
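
One way to put numbers on that balance is the classic annualized loss expectancy (ALE) comparison: pay for a control only if it costs less than the loss it is expected to avert. A toy Python sketch, with every figure hypothetical, sticking with the taxi:

def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    # ALE = single loss expectancy (value x exposure) x annual frequency.
    return asset_value * exposure_factor * annual_rate

# The $10,000 taxi: total loss if stolen (exposure 1.0), with a guess
# of one theft every five years (rate 0.2) if left unguarded.
ale_without_guard = annualized_loss_expectancy(10000, 1.0, 0.2)  # $2,000
guard_cost = 1000

# Spending $1,000 to avert an expected $2,000 annual loss makes sense,
# but it is loss avoidance, not a "return" on an "investment."
print(ale_without_guard - guard_cost)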

I'm not a big fan of FUD but something may be happening to executive perception of digital security. I am starting to hear questions like "How do I know that asset X is safe?" or "How do I know that asset Y is not being attacked?" (Answers: "safe" is relative, and Y is being attacked if it's accessible to anyone.)

I don't have any quick answers for my friend, which is why I'm posting this story. What are you doing to convince management that monitoring is necessary? I personally plan to do exactly what my friend did, namely starting with existing assets and showing quick wins to build momentum for more extensive visibility initiatives. You?

Senin, 09 Juli 2007

More Engineering Disasters

I've written several times about engineering disasters here and elsewhere.

Watching more man-made failures on The History Channel's "Engineering Disasters," I realized lessons learned the hard way by safety, mechanical, and structural engineers and operators can be applied to those practicing digital security.

In 1983, en route from Virginia to Massachusetts, the World War II-era bulk carrier SS Marine Electric sank in high seas. The almost forty-year-old ship was ferrying 24,000 tons of coal and a crew of 34 Merchant Mariners, none of whom had survival suits to resist the February chill of the Atlantic. All but three died.

The owner of the ship, Marine Transport Lines (MTL), blamed the crew and one of the survivors, Chief Mate Bob Cusick, for the disaster. Investigations of the wreck and a trial revealed that the Marine Electric's coal hatch covers were in disrepair, as Cusick had reported prior to the disaster. Apparently the American Bureau of Shipping (ABS), an inspection organization upon which the Coast Guard relied but which was funded by ship operators like MTL, had faked reports on the Marine Electric's condition. With gaping holes in the hatch covers, the ship's coal holds filled with water in high seas and doomed the crew.

In the wake of the disaster, the Coast Guard recognized that ABS could not be an impartial investigator because ship owners could essentially pay to have their vessels judged seaworthy. Widespread analysis of ship inspections revealed that many similar ships, and others besides, were unsound, and they were removed from service. Unreliable Coast Guard inspectors were removed. Finally, the Coast Guard created its rescue swimmer team (dramatized by the recent movie "The Guardian") to act as a rapid response unit.

The lessons from the Marine Electric disaster are numerous.

  1. First, be prepared for incidents and have an incident response team equipped and trained for rapid and effective "rescue."

  2. Second, be suspicious of reports done by parties with conflicts of interest. Stories abound of vulnerability assessment companies who find all of their clients "above average." To rate them otherwise would be to potentially lose future business.

  3. Third, understand how to perform forensics to discover root causes of security incidents, and be willing to act decisively if those findings demonstrate problems applicable to other business assets.

In 1931, a Fokker F-10 Trimotor carrying eight passengers and crew crashed near Kansas City, Kansas. All aboard died, including Notre Dame football coach Knute Rockne. At the time of the disaster, plane crashes were fairly common. Because commercial passenger service had only become popular in the late 1920s, the public did not have much experience with flying. The death of Knute Rockne caused shock and outrage.

Despite the crude state of crash forensics in 1931, the Civil Aeronautics Authority (CAA) determined the plane crashed because its wooden wing separated from its steel body during bad weather. TWA, operator of the doomed flight, removed all F-10s from service and burned them. Public pressure forced the CAA, forerunner of today's Federal Aviation Administration, to remove the veil of secrecy applied to its investigation and reporting processes. TWA turned to Donald Douglas for a replacement aircraft, and the very successful DC-3 was born.

The crash of TWA flight 599 provides several sad lessons for digital security.

  1. First, few seem to care about disasters involving new technologies until a celebrity dies. While no one would like to see such an event occur, it's possible real change of opinion and technology will not happen until a modern Knute Rockne suffers at the hands of a security incident.

  2. Second, authorities often do not have a real incentive to fix processes and methods until a tragedy like this occurs. Out of this incident came pressure to deploy flight data recorders and more robust aviation organizations.

  3. Third, real inspection regulations and technological innovation followed the crash, so such momentum may appear after digital wrecks.

The final engineering disaster involves the Walt Disney Concert Hall in Los Angeles. This amazing, innovative structure, with a polished stainless steel skin, was completed in October 2003. Once it was finished, visitors immediately noticed a problem with its construction. The sweeping curves of its roof acted like a parabolic mirror, focusing the sun's rays like a laser on nearby buildings, intersections, and sections of the sidewalk. Temperatures exceeded 140 degrees Fahrenheit in some places, while drivers and passersby were temporarily blinded by the glare.

Investigators decided to model the entire facility in a computer simulation, then monitor for the highest levels of sunlight over the course of a year. Using this data, they found that 2% of the building's skin was causing the reflection problems. The remediation plan, implemented in March 2005, involved sanding the problematic panels to remove their sheen. The six-week, $60,000 effort fixed the glare.

The lessons from the concert hall involve complexity and unexpected consequences. Architect Frank Gehry wanted to push the envelope of architecture with his design. His innovation produced a building that no one, prior to its construction, really understood. Had the system been modeled before being built, it's possible the problems could have been avoided. This situation is similar to those involving enterprise network and software architects who design systems that no single person truly understands. Worse, the system may expose services or functionality never expected by its creators. Explicitly taking steps to simulate and test a new design prior to deployment is critical.

Digital security engineers should not ignore the lessons their analog counterparts have to offer. A commitment to learn from the past is the best way to avoid disasters in the future.

Sabtu, 07 Juli 2007

Yet Another Review and Pre-Review

Yes, I am on a roll. I admit to not reading every page of the book I just reviewed, however. I am not going to spend time learning about bare-metal HP-UX or AIX recoveries when I have no expertise in either subject (to check for mistakes) and no desire to learn (because I do not admin either OS). Shortly Amazon.com will publish my four star review of Backup and Recovery by W. Curtis Preston. From the review:

W. Curtis Preston is the king of backups, and his book Backup and Recovery (BAR) is easily the best book available on the subject. Preston makes many good decisions in this book, covering open source projects and considerations for commercial solutions. Tool discussions are accompanied by sound advice and plenty of short war stories. If the author addresses the few concerns I have in his next edition, that should be a five star book.

I also received another book in the mail today, Secure Programming with Static Analysis by Brian Chess and Jacob West. I reviewed drafts of this book and was confident enough of the content to acknowledge involvement. This is part of Gary McGraw's Software Security Series at Addison-Wesley. I liked the last book in that line, Software Security.

Jumat, 06 Juli 2007

ARP Spoofing in Real Life

I teach various layer 2 attacks in my TCP/IP Weapons School class. Sometimes I wonder if students are thinking "That is so old! Who does that anymore?" In response I mention last year's Freenode incident where Ettercap was used in an ARP spoofing attack.

Thanks to Robert Hensing's pointer to Neil Carpenter's post, I have another documented ARP spoofing attack. Here a malicious IFRAME is injected into traffic by ARP spoofing a gateway. We cover that in my Black Hat class, both sessions of which are now officially full.
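
Detecting this class of attack takes surprisingly little code. Below is a minimal sketch using Scapy (my choice of library, not something from the class; it needs root privileges to sniff) that warns when an IP address abruptly claims a new MAC address, the telltale of a gateway being ARP spoofed. It assumes a network where legitimate IP-to-MAC changes are rare.

from scapy.all import ARP, sniff

seen = {}  # IP address -> MAC address observed so far

def check(pkt):
    # Op code 2 is an ARP reply ("is-at").
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in seen and seen[ip] != mac:
            print("ALERT: %s moved from %s to %s" % (ip, seen[ip], mac))
        seen[ip] = mac

sniff(filter="arp", prn=check, store=0)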

Please remember that TCP/IP Weapons School is a traffic analysis class. I believe I cover the most complicated network traces presented in any similar forum. All you need to get the most out of the class is a laptop running a recent version of Wireshark. The class is not about demonstrating tools or having students run tools. Other classes do a better job with that sort of requirement. The purpose of this class is to become a better network security analyst by deeply understanding how certain network-based attacks work. I provide all of the information needed to replicate the attack if so desired, but that is not my goal.

Kamis, 05 Juli 2007

Another Review, Another Pre-Review

Amazon.com just published my five star review of Windows Forensic Analysis by Harlan Carvey. From the review:

I loved Windows Forensic Analysis (WFA). It's the first five star book from Syngress I've read since early 2006. WFA delivered just what I hoped to read in a book of its size and intended audience, and my expectations were high. If your job requires investigating compromised Windows hosts, you must read WFA.

In the mail today I received a copy of Fuzzing by ninjas Michael Sutton, Adam Greene, and Pedram Amini. H.D. Moore even wrote the foreword, for Pete's sake. However, I have some concerns about this book. I performed a technical review, mainly from the perspective of someone who wants to know more about how to do fuzzing. The drafts I read seemed to be more about how to build a fuzzer. Those of you who are jumping to hit the comment button -- I don't want to hear about "you learn how to fuzz by building a tool." Give me a chance to learn how to walk before I try to invent a new method of transportation! We'll see how the book reads in printed form when I review it.

Selasa, 03 Juli 2007

IPSec VPN in PIX/ASA

For those of you who want to set up an IPSec VPN connection on a PIX/ASA firewall, below is a snapshot of the commands showing how to do it.

! Phase 2: IPSec transform set (AES-256 encryption with SHA-1 HMAC)
crypto ipsec transform-set hacker esp-aes-256 esp-sha-hmac
! Dynamic crypto map for remote-access clients at unknown addresses
crypto dynamic-map dynmap 20 set transform-set hacker
! Crypto map entry 10: static tunnel to a fixed peer (the transform
! set name must match the set defined above)
crypto map hacker 10 ipsec-isakmp
crypto map hacker 10 match address IPSEC_hackers
crypto map hacker 10 set peer 111.111.111.111
crypto map hacker 10 set transform-set hacker
! Crypto map entry 20: fold in the dynamic map, then apply to the interface
crypto map hacker 20 ipsec-isakmp dynamic dynmap
crypto map hacker client authentication LOCAL
crypto map hacker interface outside
! Phase 1: ISAKMP/IKE settings and policies
isakmp enable outside
isakmp key ******** address 111.111.111.111 netmask 255.255.255.255 no-xauth no-config-mode
isakmp identity address
isakmp nat-traversal 20
isakmp policy 10 authentication pre-share
isakmp policy 10 encryption aes-256
isakmp policy 10 hash sha
isakmp policy 10 group 1
isakmp policy 10 lifetime 86400
isakmp policy 20 authentication pre-share
isakmp policy 20 encryption 3des
isakmp policy 20 hash md5
isakmp policy 20 group 2
isakmp policy 20 lifetime 86400
! Remote-access groups: address pools, timeouts, and group passwords
vpngroup crm525gp address-pool vpnpool
vpngroup crm525gp idle-time 1800
vpngroup crm525gp max-time 86400
vpngroup crm525gp password ********
vpngroup helpgrp address-pool vpnpool2
vpngroup helpgrp idle-time 1800
vpngroup helpgrp max-time 86400
vpngroup helpgrp password ********

The Hacka Man

One Review, One Pre-Review

Amazon.com just published my four-star review of Exploiting Software. From the review:

I read Exploiting Software (ES) last year but realized I hadn't reviewed it yet. Having read other books by these authors, like McGraw's Software Security and Hoglund's Rootkits, I realized ES was not as good as those newer books. At the time ES was published (2004) it continued to define the software exploitation genre begun in Building Secure Software. However, I don't think it's necessary to pay close attention to ES when newer books by McGraw and Hoglund are now available.

I'm looking forward to reading Network Warrior by Gary A. Donahue. This book has the second-best subtitle of all of the technical books on my shelves:

Everything you need to know that wasn't on the CCNA exam

I quickly skimmed this book at USENIX and I think it will be valuable. I like books that take a nontraditional look at networking issues.

If you're wondering what my favorite subtitle is, it appears in the nearly ten-year-old book The Next World War by James Adams, original founder of iDefense. The book makes silly mistakes (discussing the "Iraqi printer virus") but it was cool to see it talk about the AFCERT and name one of our lieutenants (who was there before I arrived). It was published in 1998 (not 2001 as indicated at Amazon.com) with the subtitle:

Computers are the Weapons and the Front Line Is Everywhere

That is still true today.

OpenPacket.org Developments

I am happy to report that work on OpenPacket.org is back on track, thanks to a new volunteer Web application developer.

Please read the rest of the story at the Openpacket.org Blog.

DNS Pinning Exposed

Christ1an wrote a very detailed article on anti-anti-anti DNS pinning, or you can simply call it DNS pinning. For those who are still confused or find it complicated to understand, this article explains the issue with a step-by-step approach and pictures attached. In it he covers the whole DNS pinning problem and how it actually works to attack a web browser. Check it out here: http://christ1an.blogspot.com/2007/07/dns-pinning-explained.html

The Hacka Man

Senin, 02 Juli 2007

Asset-Centric vs Threat-Centric Digital Situational Awareness

As an Air Force officer I was taught the importance of situational awareness (SA). The surprisingly good (at least for now) Wikipedia entry describes SA as "knowing what is going on so you can figure out what to do" (Adam, 1993) and knowing "what you need to know not to be surprised" (Jeannot et al., 2003). Wikipedia also mentions fighter pilots who leveraged SA to win dogfights. When applied to information security, I like to use the term digital situational awareness (DSA).

In 2005 I invented the term pervasive network awareness (PNA) for my book Extrusion Detection to describe one way to achieve a certain degree of SA:

Pervasive network awareness is the ability to collect the network-based information -- from the viewpoint of any node on the network -- required to make decisions.

PNA is inherently an asset-centric means to improve SA. PNA involves watching assets for indications of violations of confidentiality, integrity, and/or availability (the CIA triad). An asset-centric approach is not the only means to detect incidents, however.

During the past few years several firms have offered services that report indications of security incidents using threat-centric means. These services are not traditional managed security service providers (MSSPs) because they are not watching assets, per se, under the control or operation of a client. In other words, these firms are not placing sensors on company networks and watching for breaches involving monitored systems.

Rather, these next-generation firms seek and investigate infrastructure used by threats to perpetrate their crimes. For example, a threat-centric security firm will identify and analyze the command-and-control mechanisms used by malware or crimeware. The reporting mechanism will be mined for indications of hosts currently under unauthorized control. An example of this is the ongoing Mpack activity I mentioned in Web-Centric Short-Term Incident Containment.

These services improve digital situational awareness by taking a threat-centric approach. The ultimate threat-centric approach would be to monitor the activities of the threats themselves, by instrumenting and observing their workplaces, communications lines, and/or equipment. Since that is out of the reach of everyone except law enforcement (and usually beyond even their reach unless they are extraordinarily lucky and persistent), watching command-and-control channels is the next best bet.
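
In practice, consuming threat-centric reporting can be as simple as matching outbound connection records against the command-and-control hosts a service identifies. A minimal Python sketch follows; both file formats are hypothetical (the feed is one IP per line, the log is "src-ip dst-ip" per line).

import sys

def flag_compromised(feed_path, log_path):
    # Load the hypothetical C2 feed: one IP address per line.
    with open(feed_path) as feed:
        c2_hosts = set(line.strip() for line in feed if line.strip())
    victims = set()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 2 and fields[1] in c2_hosts:
                victims.add((fields[0], fields[1]))
    return victims

if __name__ == "__main__":
    # A hit means incident response, not just patching: the host is
    # not merely vulnerable but under unauthorized control.
    for src, dst in sorted(flag_compromised(sys.argv[1], sys.argv[2])):
        print("%s contacted known C2 host %s" % (src, dst))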

Asset-centric and threat-centric DSA are not mutually exclusive. In fact, threat-centric DSA is a powerful complement to asset-centric DSA. If a company subscribes to a threat-centric DSA service, the service may report that a company system has been compromised and is leaking sensitive data. If confirmed to be true, and if not detected by asset-centric means, the event shows the following:

  • Preventative measures failed (since the asset was compromised).

  • Asset-centric monitoring failed (since it was not detected).

  • Incident response must be initiated (since the compromised asset is not just vulnerable, but actually under the control of an unauthorized party).


With this new understanding, prevention and detection measures can hopefully be improved to reduce the chances of future incidents.

Please do not ask me for recommendations on any of these services; I am not trying to promote anyone. However, I have mentioned two such services before, namely Support Intelligence in Month of Owned Corporations and Secure Science in my review of Phishing Exposed.