Friday, 30 September 2005

Last Days to Register for ShmooCon 2006 for $75

Today and tomorrow (1 October) are the last days to register for ShmooCon 2006 for $75. The conference will be held in Washington, DC on 13-15 January 2006. Starting 2 October the price doubles to $150. This is a very innovative conference that you simply cannot beat for the price. I will attend.

Excellent Article on FreeBSD ACLs

Dru Lavigne wrote an excellent article called Using FreeBSD's ACLs. She describes how to use File System Access Control Lists in a reader-friendly manner, complete with screen shots of the Eiciel GUI tool (in the ports tree). Great work Dru!
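For a quick taste of the command-line side of what Dru covers, here is a minimal sketch (the filename and username are made up, and the filesystem must be mounted with the acls option enabled):

$ setfacl -m u:dru:rwx report.txt
$ getfacl report.txt

The Eiciel tool she demonstrates performs the same operations graphically.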

Thursday, 29 September 2005

Open Source Security in the Enterprise

This morning I briefed a client on the results of a Network Security Monitoring Assessment I performed for them. I model my NSM Assessment on the NSA-IAM, which uses interviews, observation, and documentation review to assess security postures. My NSM Assessment uses the same techniques to identify problems and provide recommendations for improving intrusion detection and NSM operations.

During one of the briefings the top manager asked for my opinion on using open source security tools. He wanted to know the guidelines I use to determine if an open source tool is appropriate for use in the enterprise. I told him I am more likely to trust open source products that are developed by companies with whom I have a relationship of some sort (like Snort and Sourcefire, Nessus and Tenable, or Argus and Qosient).

I am curious what guidelines you might suggest for evaluating open source security tools. The intent of the manager's question was to learn how I come to "trust" an open source tool. I believe commercial tools should not be trusted simply because they are commercial, in an age when programming can be outsourced to parties ultimately unknown. What are your thoughts?

Wednesday, 28 September 2005

Rootkits Make NSM More Relevant Than Ever



Federico Biancuzzi conducted an excellent interview with Greg Hoglund and Jamie Butler, authors of Rootkits: Subverting the Windows Kernel. I reviewed this book during publication for Addison-Wesley, but I don't plan to read it for personal education until I get deep into the programming part of my reading list. This is the sort of book that looks K-RAD on your bookshelf, telling those passing your cube that you've got m@d 31337 sk1llz. Doing something useful with the contents takes some real mastery of Windows programming, especially device driver development and thorough knowledge of material in Microsoft® Windows® Internals, Fourth Edition.

The interview reminded me that network security monitoring is needed now more than ever. It is easy for host-centric security types to concentrate on defending the desktop. In reality the battle for the desktop PC has been lost. When intruders can completely control all aspects of a running system, there is almost nowhere else for defenders to go. The only places left are found in CPU microcode or outside the CPU itself, monitoring it via a hardware JTAG port as described in a recent Dr. Dobb's Journal article.

If the desktop cannot be trusted, then detection and prevention must be performed elsewhere, on a trusted platform beyond the reach of the intruder and, more importantly, the user. This can only be done in the network infrastructure. While the network will not yield as rich a collection of evidence about host exploitation, the data collected via network platforms bears a higher degree of trust.

I foresee a few roads ahead for corporate PC users, some of which may be taken simultaneously. We may see this at .mil or .gov earlier. One day arbitrary Web browsing and email communication with non-business-related parties will be forbidden. Alternatively (or simultaneously) PCs will be replaced by true non-Windows thin clients like Sun Ray 170s. Organizations adopting these practices will realize that they must do something to reduce the overall threat level (first option) and/or vulnerability level (second option).

Thoughts on EAL7 Rating

I read in the story Network appliance to get highest-ever security rating by Michael Arnone about the Evaluation Assurance Level 7 (EAL7) rating achieved by the Tenix Datagate. An EAL7 system bears these qualities:

"Formally Verified Design and Tested. The formal model is supplemented by a formal presentation of the functional specification and high level design showing correspondence. Evidence of developer "white box" testing and complete independent confirmation of developer test results are required. Complexity of the design must be minimised."

My last post mentioned an introductory article on the Common Criteria, and I found an exceptional quote in that piece about EALs. Writer Alex Ragen says:

"EAL is the level of confidence achieved by the TOE [Target of Evaluation, a product], and is a function of the SARs [Security Assurance Requirements] with which the TOE complies...

EALs refer to the level of confidence in the conclusions of the evaluation, and not to the level of security the product provides. In other words, you can have more confidence that an EAL4 product performs as advertised than an EAL2 product... But an EAL4 product will not necessarily provide more security."

This is an incredible insight. I guarantee I will encounter government managers who hunt for high EAL products because they think they provide "more security."

This is what the Tenix product does:

"Placed at each connection between unclassified and classified servers, Data Diode permits only one-way transmission of data from unclassified to classified networks."

According to Michael Arnone's article: "A senior technical consultant at Tenix said 'it’s physically impossible for data to go back the other way,' which ensures unparalleled security."

Oh boy, that sounds like a challenge! The main barrier to breaking that claim is getting equipment into the right hands.

I found the Tenix product listed on the NIAP in evaluation page and on the validated product page. The lab which tested the product is COACT. Here is the Tenix press release.

Monday, 26 September 2005

Common Criteria

I received the September issue of the ISSA Journal. It contains several useful articles, with the most helpful to me being a human-readable summary of the Common Criteria by Alex Ragen. I don't think Mr. Ragen clearly states who needs to purchase Common Criteria-validated products, however.

His article's first sentence states:

"On July 1, 2002, the US Department of Defense began to enforce National Security Telecommunications and Information Systems Security Policy (NSTISSP) #11 (issued in January 2000), which mandates that US government agencies purchase only those IT security products which have been validated in accordance with Common Criteria and/or FIPS 140-1 or FIPS 140-2 as appropriate."

He also says:

"As mentioned earlier, US government agencies now require Common Criteria certification."

This is not true. According to the Committee on National Security Systems FAQ:

"The policy mandates, effective 1 July 2002, that departments and agencies within the Executive Branch shall acquire, for use on national security systems, only those COTS products or cryptographic modules that have been validated with the International Common Criteria for Information Technology Security Evaluation, the National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme (CCEVS), or by the National Institute of Standards and Technology (NIST) Federal Information Processing Standards (FIPS) Cryptographic Module Validation Program.

Additionally, subject to policy and guidance for non-national security systems, NSTISSP # 11 notes that departments and agencies may wish to consider the acquisition of validated COTS products for use in information systems that may be associated with the operation of critical infrastructures as defined in the Presidential Decision Directive on Critical Infrastructure Protection (PDD-63)." [emphasis added]

Those bold sections make all the difference. This means that systems operated by the Department of Commerce, for example, that are not "national security systems," do not have to be validated by the Common Criteria. While some people discuss the possibility that Common Criteria would be extended beyond NSS, there is definitely no mandate to do so.

Webroot State of Spyware Report

On a flight from San Francisco to Washington Dulles I managed to read the latest State of Spyware report from Webroot Software. I'm not sure how I got the heavy printed version. Maybe it was sent courtesy of Richard Stiennon, who is Vice President of Threat Research. (That's an interesting title.)

I thought the report was useful. It provides a broad look at spyware, and specifics on several examples. It contains an excellent section on spyware-related legislation. The report provides plenty of background for management who need justification to spend money on spyware defenses. I even bought into the idea that automated spyware defenses are required.

On a related note, the Symantec Internet Security Threat Report Volume VIII is available for download. I have not read this one yet. It is a huge .pdf though. I believe a report like that complements material from organizations like Webroot. Symantec takes a broader look at Internet threats. It also examines vulnerabilities (which we know are not threats).

Sunday, 25 September 2005

Common Malware Enumeration

This article describes the Common Malware Enumeration project. CME is a sister project to Mitre's Common Vulnerabilities and Exposures (CVE) initiative. CME will "assign unique identifiers to high priority malware events." This is a great idea, because anti-virus vendors, security researchers, and OS/application vendors will be able to refer to a common name rather than their internal representations for malware. DHS is funding the CME project.

Thursday, 22 September 2005

Measuring Bandwidth Utilization on Cisco Switch Ports

Yesterday I spoke at the third Net Optics Think Tank in Santa Clara, CA. During the event one of the Net Optics product managers asked me about measuring bandwidth utilization on switch ports. I did not have an answer for him... until I took a look at the latest Packet magazine. The Q305 (.pdf) edition features a tip from Aurelio DeSimone on p. 13 mentioning the show controllers utilization command.

If anyone knows of a similar set of information via SNMP, please let me know via a comment here.

Here is sample output:

Switch> show controllers utilization
Port Receive Utilization Transmit Utilization
Fa0/1 0 0
Fa0/2 0 0
...truncated...
Total Ports : 12
Switch Receive Bandwidth Percentage Utilization : 0
Switch Transmit Bandwidth Percentage Utilization : 0
Switch Fabric Percentage Utilization : 0

This is just the sort of data I would like to see for SPAN ports. You can specify the SPAN port in your syntax (e.g., show controllers fastethernet0/1 utilization) to see how much traffic it is carrying to your sensor.
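In the meantime, one partial workaround on the SNMP question is to poll the standard IF-MIB octet counters and derive utilization yourself from two samples taken a known interval apart. A rough sketch using the net-snmp tools (the hostname, community string, and ifIndex are made up):

$ snmpget -v2c -c public switch1 IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2 IF-MIB::ifSpeed.2

Poll again N seconds later; utilization for each direction is then (delta octets x 8 x 100) / (N x ifSpeed) percent.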

The current Packet issue also features excellent articles on new modularity features of Cisco IOS and an overview of 10 Gigabit Ethernet and its seven (yes, seven) variants. (There appear to be more, actually.) That sort of information reminds me of my "second law of information technology," which is "complexity increases." The second law of IT is constantly fighting the second law of thermodynamics, which is "entropy increases."

John Ward Compiles Snort on Windows

Newsflash: compiling Snort on Windows is not the chore some people believe it to be. After reading my flailing attempt to use a beta Visual Studio to compile Snort 2.4.1 from source on my Windows 2000 laptop, John Ward stepped in and got the job done. John's a professional programmer, but anyone who uses his approach will have the same results. Thanks for stepping up to the plate!

Tuesday, 20 September 2005

Citadel Offers Product Security Warranty

Thanks to this SC Magazine story, I learned that Citadel Security Software is offering a performance warranty on their Hercules vulnerability management product. They say:

"The Hercules SecurePlus warranty guarantees the product’s performance against Citadel’s published service level objectives to deliver timely, accurate and effective vulnerability remedies for known exploits. Citadel’s service level objectives are the expected delivery times for the vulnerability remedies and associated security content produced by Citadel’s internal security team, the Remediation Security Group...

In the event of an information asset loss due to a successful compromise of a computer system where a remedy is available for the known exploit, you can receive reimbursement up to the amount of Hercules contract.

Citadel offers Hercules SecurePlus in collaboration with AIG, a pioneering leader in the cyber security insurance market. This ground-breaking warranty is available at no cost to Citadel customers and is valid for one year from the date of the Hercules license agreement."

There are probably enough loopholes through which one could drive a truck, but I do not recall any sort of warranty like this elsewhere. Citadel may have just pushed the bar a little higher for those who do not offer similar assurances.

FreeBSD 6.0-BETA5 Available

FreeBSD 6.0-BETA5 is available in the pub/FreeBSD/ISO-IMAGES-i386/6.0/ directory of some FreeBSD mirror FTP sites. I found it at the master site, but I expect to see it replicated elsewhere soon. I believe this will be the last BETA before RCs (perhaps RC1, RC2, and RC3) are produced. The release engineering team is putting a lot of work into this release. I can't wait to deploy it in production. I see 6.0 as more of a continuation of 5.x, and not a brand-new OS as happened with 4.x to 5.x.

Brian Krebs Discusses Sean Gorman

Yesterday's Security Fix post mentions work by Sean Gorman to map American critical infrastructure. Sean wrote a book titled Networks, Security And Complexity: The Role of Public Policy in Critical Infrastructure Protection based on his studies. I don't plan to buy this book since I cannot justify spending $75 on an academic text, but it does look interesting!

Monday, 19 September 2005

Compiling Snort on Windows

Many of you have undoubtedly read the snort-users thread where some people complain about not having Snort in compiled form as soon as Sourcefire releases Snort in source code form. Sourcefire released Snort 2.4.1, which fixes a vulnerability, on Friday. They only released an updated snort-2.4.1.tar.gz archive. There were no Linux RPMs or Win32 installation packages.

I decided to learn what was involved with compiling Snort on Windows. Right now I will say I did not finish the job. I am not a Windows programmer. I do not use Windows as a software development platform. Today was the first day I used the tools I describe below. The purpose of this post is to demonstrate that compiling Snort on Windows is not rocket science.

First, notice the snort-2.4.1.tar.gz archive has a src\win32 directory with these contents:

Makefile.in
WIN32-Code
WIN32-Includes
WIN32-Prj
WIN32-Libraries
Makefile.am

This looks promising. Let's see the contents of the WIN32-Prj directory.

snort_installer.nsi
build_releases.bat
snort_installer_options.ini
snort.dsw
snort.dsp
pcre.dll
LibnetNT.dll
snort.mak
snort.dep

snort.dsp is a Visual C++ project file. I don't have Visual C++ on my Windows 2000 laptop. A visit to MSDN shows Visual C++ Express Edition Beta 2 is free for download. I retrieve and install the program. After agreeing to convert Sourcefire's Visual C++ 6 files into a newer format, I am ready to try to "Build" Snort.

Along the way I read an error about a missing executable called mc. David Bianco in #snort-gui hypothesizes that mc means message compiler, a program available in the Windows® Server 2003 SP1 Platform SDK. Since the SDK works fine on Windows 2000, I install it. I also edit my system's environment variables so Windows knows where to find mc.exe in the future.
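In practice that amounts to appending the SDK's Bin directory to the PATH, either through the System control panel or at a command prompt. Something like the following, assuming the SDK's default install location (yours may differ):

C:\>set PATH=%PATH%;C:\Program Files\Microsoft Platform SDK\Bin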

Once Visual Studio knows how to find mc.exe, it begins complaining that it cannot find header files like winsock2.h, which live in the C:\Program Files\Microsoft Platform SDK\Include directory. Remember, I have never used Visual Studio before, and I have read no documentation. I figure the easiest way forward is to just copy the contents of the C:\Program Files\Microsoft Platform SDK\Include directory into the src\win32\WIN32-Includes directory. That problem is solved.
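The copy itself is a one-liner; the prompt below is just an example location, so adjust the paths to your own layout:

C:\snort-2.4.1\src>xcopy /E /I /Y "C:\Program Files\Microsoft Platform SDK\Include\*.*" win32\WIN32-Includes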

My next hurdle involves providing Snort with the WinPcap headers it needs. I retrieve WinPcap 3.0 in source code format since my test system uses WinPcap 3.0. If I do get Snort to compile, I figure it should match the version of WinPcap already installed on the laptop. I use the same *.h file copy trick to copy the contents of \winpcap\wpcap\libpcap\Win32\Include to src\win32\WIN32-Includes. I do the same for the \winpcap\wpcap\libpcap\ .h files.

At this point I run into a problem caused by the Visual Studio project's insistence on building a version of Snort with database support. I figure the easiest one to build is a "release" version for MySQL (as opposed to a "debug" version).

During the build I see an error about mysql_time.h not being found. I download the Windows source for MySQL 4.0.26 only to find mysql_time.h is not in the mysql-4.0.26\include directory. I then download 5.0.12-BETA and see mysql-5.0.12-beta\include has mysql_time.h, just as I needed.

After taking care of the related library file locations, I had everything I needed to progress to the linking stage. Unfortunately, this was where my build process ended with the following errors:

Linking...
util.obj : error LNK2019: unresolved external symbol __imp__DeregisterEventSource@4 referenced in function _CreateApplicationEventLogEntry
syslog.obj : error LNK2001: unresolved external symbol __imp__DeregisterEventSource@4
util.obj : error LNK2019: unresolved external symbol __imp__ReportEventA@36 referenced in function _CreateApplicationEventLogEntry
syslog.obj : error LNK2001: unresolved external symbol __imp__ReportEventA@36
util.obj : error LNK2019: unresolved external symbol __imp__RegisterEventSourceA@8 referenced in function _CreateApplicationEventLogEntry
syslog.obj : error LNK2001: unresolved external symbol __imp__RegisterEventSourceA@8
misc.obj : error LNK2019: unresolved external symbol __imp__IsTextUnicode@12 referenced in function _print_interface
syslog.obj : error LNK2019: unresolved external symbol __imp__RegCloseKey@4 referenced in function _AddEventSource
win32_service.obj : error LNK2001: unresolved external symbol __imp__RegCloseKey@4
mysqlclient.lib(my_init.obj) : error LNK2001: unresolved external symbol __imp__RegCloseKey@4
syslog.obj : error LNK2019: unresolved external symbol __imp__RegSetValueExA@24 referenced in function _AddEventSource
win32_service.obj : error LNK2001: unresolved external symbol __imp__RegSetValueExA@24
syslog.obj : error LNK2019: unresolved external symbol __imp__RegCreateKeyA@12 referenced in function _AddEventSource
win32_service.obj : error LNK2019: unresolved external symbol __imp__RegQueryValueExA@24 referenced in function _ReadServiceCommandLineParams
win32_service.obj : error LNK2019: unresolved external symbol __imp__RegOpenKeyExA@20 referenced in function _ReadServiceCommandLineParams
mysqlclient.lib(my_init.obj) : error LNK2001: unresolved external symbol __imp__RegOpenKeyExA@20
win32_service.obj : error LNK2019: unresolved external symbol __imp__SetServiceStatus@8 referenced in function _SnortServiceCtrlHandler@4
win32_service.obj : error LNK2019: unresolved external symbol __imp__CloseServiceHandle@4 referenced in function _InstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__CreateServiceA@52 referenced in function _InstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__OpenSCManagerA@12 referenced in function _InstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__RegCreateKeyExA@36 referenced in function _InstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__DeleteService@4 referenced in function _UninstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__OpenServiceA@12 referenced in function _UninstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__RegDeleteKeyA@8 referenced in function _UninstallSnortService
win32_service.obj : error LNK2019: unresolved external symbol __imp__RegisterServiceCtrlHandlerA@8 referenced in function _SnortServiceStart@8
win32_service.obj : error LNK2019: unresolved external symbol __imp__StartServiceCtrlDispatcherA@4 referenced in function _SnortServiceMain
mysqlclient.lib(my_init.obj) : error LNK2019: unresolved external symbol __imp__RegEnumValueA@32 referenced in function _my_win_init
.\snort___Win32_MySQL_Release/snort.exe : fatal error LNK1120: 20 unresolved externals

I do not know how to fix these unresolved external symbols. Does anyone have any ideas?
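One lead I have not yet tried: all of the symbols above (the Reg* registry calls, the event log functions, the service control functions, and IsTextUnicode) appear to be exported by advapi32.dll, so adding advapi32.lib to the linker inputs might clear them, either through the project's linker settings or with a pragma in one of the source files. Something like this (untested on my setup):

/* speculative fix: pull in advapi32.lib, which provides the Reg*,
   *EventSource*, and service control APIs the linker cannot find */
#pragma comment(lib, "advapi32.lib")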

At this point, I do not think I've done too badly for someone with zero Windows development experience!

Sunday, 18 September 2005

SecurityFocus SNMP Article

Thanks to Simon Howard for pointing me toward a new article by Mati Aharoni and William M. Hidalgo titled Cisco SNMP configuration attack with a GRE tunnel. The article shows the dangers of not denying packets that arrive from the Internet bearing spoofed internal source addresses. The article builds on Mark Wolfgang's Exploiting Cisco Routers: Part 1, where an intruder uses an SNMP SET command to retrieve a router configuration file via TFTP. As Simon wrote in his email to me: "Applying an inbound ACL on the Ethernet0/0 interface denying any traffic from the 192.168.1.0 network would resolve this issue [in the article]."
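For reference, a minimal sketch of Simon's suggestion in IOS configuration syntax (the interface name and addressing follow the article's example; everything else is illustrative):

! drop packets arriving on this interface that claim an internal source
access-list 101 deny ip 192.168.1.0 0.0.0.255 any
access-list 101 permit ip any any
!
interface Ethernet0/0
 ip access-group 101 in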

On a related note, I am looking forward to the second edition of Essential SNMP, pictured at left.

Saturday, 17 September 2005

Engineering Disaster Lessons for Digital Security

I watched an episode of Modern Marvels on the History Channel this afternoon. It was Engineering Disasters 11, one in a series of videos on engineering failures. A few thoughts came to mind while watching the show. I will provide commentary on each topic addressed by the episode.

  • First discussed was the 1944 Cleveland liquefied natural gas (LNG) fire. Engineers built a new LNG tank out of a material that failed when exposed to cold; the escaping gas ignited, torching nearby homes and businesses and killing 128 people. Engineers were not aware of the metal's failure properties, and absolutely no defensive measures were in place around the tank to protect civilian infrastructure.

    This disaster revealed the need to (1) implement plans and defenses to contain catastrophe, (2) monitor to detect problems and warn potential victims, and (3) thoroughly test designs against possible environmental conditions prior to implementation. These days LNG tanks are surrounded by berms capable of containing a complete spill, and they are closely monitored for problems. Homes and businesses are also located far away from the tanks.

  • Next came the 1981 Kansas City Hyatt walkway collapse that killed 114 people. A construction change resulted in an incredibly weak implementation that failed under load. Cost was not to blame; a part that might have prevented failure cost less than $1. Instead, lack of oversight, poor accountability, broken processes, a rushed build, and compromise of the original design resulted in disaster. This case introduced me to the term "structural engineer of record," a person who assigns a seal to the plans used to construct a building. The two engineers of record for the Hyatt plans lost their licenses.

    I wonder what would happen if network architectures were stamped by "security engineers of record?" If they were not willing to affix their stamp, that would indicate problems they could not tolerate. If they were willing to stamp a plan, and massive failure from poor design occurred, the engineer should be fired.

  • The third event was a massive sink hole in 1993 in an Atlanta Marriott hotel parking lot. A sewer drain originally built above ground decades earlier was buried 40 feet under the parking lot. A so-called "safety net" built under the parking lot was supposed to provide additional security by giving hotel owners time to evacuate the premises if a sink hole began to develop.

    Instead, the safety net masked the presence of the sink hole and let it enlarge until it was over 100 feet wide and beyond the net's capacity. Two people standing in the parking lot died when the sewer, sink hole, and net collapsed. This disaster demonstrated the importance of not operating a system (the sewer) outside of its operating design (above ground). The event also showed how products (the net) may introduce a false sense of security and/or unintended consequences.

  • Next came the 1931 Yangzi River floods that killed 145,000 people. The floods were the result of extended rain that overcame levees built decades earlier by amateur builders, usually farmers protecting their lands. The Chinese government's relief efforts were hampered by the Japanese invasion and subsequent civil war. This disaster showed the weaknesses of defenses built by amateurs, for which no one is responsible. It also showed how other security incidents can degrade recovery operations.

    Does your organization operate critical infrastructure that someone else built before you arrived? Perhaps it's the DNS server that no one knows how to administer. Maybe it's the time service installed on the Windows server that no one touches. What amateur levee is waiting to break in your organization?

  • The final disaster revolved around the deadly substance asbestos. The story began by extolling the virtues of asbestos, such as its resistance to heat. This extremely user-friendly feature resulted in asbestos deployments in countless products and locations. In 1924 a 33-year-old, 20-year textile veteran died, and her autopsy provided the first concrete evidence of asbestos' toxicity. A 1930 British study of textile workers revealed abnormally high numbers of asbestos-related deaths. As early as 1918 insurance companies were reluctant to cover textile workers due to their susceptibility to early death. As early as the 1930s the asbestos industry suppressed conclusions in research they sponsored when it revealed asbestos' harmful effects.

    By 1972, the US Occupational Safety and Health Administration arrived on the scene and chose asbestos as the first substance it would regulate. Still, today there are hundreds of thousands of pending legal cases, but asbestos is not banned in the US. This case demonstrated the importance of properly weighing risks against benefits. The need to independently measure and monitor risks outside of a vendor's promises was also shown.


I believe all of these cases can teach us something useful about digital security engineering. The main difference between the first four cases and the digital security world is that failures in the analog world are blatantly obvious. Digital failures can be far more subtle; it may take weeks or months (or years) for security failures to be detected, unlike sink holes in parking lots. The fifth case, describing asbestos, is similar to digital security because harmful effects were not immediately apparent.

Friday, 16 September 2005

When a Wireless Adapter Is Not a Wireless Bridge

Several weeks ago I was looking for a way to provide my desk laptop with 802.11g connectivity. Sometimes I operate two or three systems on my desk. I thought it might be helpful to purchase an 802.11g wireless bridge. Using the bridge, I could connect those multiple systems via Ethernet to the bridge, and have the bridge speak 802.11g to my Linksys wireless access point.

I had not had good experiences with 802.11b Linksys WET11 bridges, so I turned to NetGear. I noticed they sold the WGE111 54 Mbps Wireless Game Adapter pictured upper left. I thought, "I can buy that, connect it to a hub, and then connect wired systems to the hub." With a price around $50 after rebate this seemed like a great deal, especially compared to the NetGear WGE101, for $80 or more, pictured upper right. A competing product from Linksys, the WET54G, costs about $120. (I do like the WET54GS5 that has a five port switch built into it, but that costs about $150.)

It turns out that the WGE111 will not support my requirements unless I trick it. The WGE111 appears to keep track of the MAC address of the wired side device and will not let more than one system connect to the wireless network at a time. The way to fool the WGE111 to support more than one wired client is to put the wired systems behind a small NAT gateway router. I guess I got what I paid for!

Incidentally, I perused the reference manual for the WET54GS5 and learned it supports one-to-one port mirroring. In other words, you can copy the traffic on one port to one other monitoring port. That is a nice way to gain access to traffic in a switched environment. I would like to see similar features in other low-end switches.

For the moment I don't plan to buy any wireless bridges. It would be nice if I could use the WRT54G, the cheap ($60 or less) wireless workhorse, in bridging mode. I found a how-to that relies on third-party firmware; more details here. I might try this.

IPv6 as a Technology Refresh

I've written about government and IPv6 before. The article OMB: No new money for IPv6 by David Perera includes the following:

"Federal agencies have all the money they need to make a mandatory transition to the next generation of IP, a top Office of Management and Budget official said today.

'The good news, you have all the money you need. [IP Version 6] is a technology refresh' said Glenn Schlarman, information policy branch chief in OMB's Office of Information and Regulatory Affairs. Schlarman spoke at a Potomac Forum event on IPv6. 'You have to adapt, reallocate,' he added."

Moving from IPv4 to IPv6 is like transitioning from horse-drawn buggies to internal combustion engine-driven automobiles. Both carry passengers but the complexities, opportunities, and risks associated with cars make the upgrade far more than a "technology refresh."

The biggest single problem with IPv6 is that network administrators are not familiar with it. Twenty-four years after IP was presented in RFC 791, there are still people who do not understand the networks for which they are responsible. IPv6 is going to complicate this situation by an order of magnitude. Training is the only way to have a chance to successfully implement IPv6. Unfortunately, OMB is mandating from on high but not providing resources to get administrators trained to handle these new protocols.

I expect a wave of new intrusions during and after the transition to IPv6. Not only will IPv6 network stacks be directly exploited, but common misconfigurations will plague enterprises for years.

Thoughts on Software Assurance

Last night I attended a talk at my local ISSA chapter. The speaker was Joe Jarzombek, Director for Software Assurance for the National Cyber Security Division of the Department of Homeland Security. Mr Jarzombek began his talk by pointing out the proposed DHS reorganization creates an Assistant Secretary for Cyber Security and Telecommunications working for the Under Secretary for Preparedness.

This is supposed to be an improvement over the previous job held by Amit Yoran, where he led the National Cyber Security Division, under the Information Analysis and Infrastructure Protection Directorate. According to this story, "Yoran had reported to Robert P. Liscouski, assistant secretary for infrastructure protection, and was not responsible for telecommunication networks, which are the backbone of the Internet." Mr Jarzombek said that people who are not Assistant Secretaries are "not invited to the table" on serious matters.

Turning to the main points of his presentation, Mr Jarzombek said the government worries about "subversions of the software supply chain" by developers who are not "exercising a minimum level of responsible practice." He claimed that "business intelligence is being acquired because companies are not aware of their supply chains."

The government wants to "strengthen operational resiliency" by "building trust into the software acquired and used by the government and critical infrastructure." To that end, software assurance is supposed to incorporate "trustworthiness, predictable execution, and conformance." Mr Jarzombek wants developers to "stop making avoidable mistakes." He also wants those operating critical infrastructure to realize that "if software is not secure, it's not safe. If software can be changed remotely by an intruder, it's not reliable." Apparently infrastructure providers think in terms of safety and reliability, but believe security is "someone else's problem."

I applaud Mr Jarzombek's work in this area, but I think the problem set is too difficult. For example, the government appears to worry about two separate problems. First, they are concerned that poor programming practices will introduce vulnerabilities. To address this issue Mr Jarzombek and associates promote a huge variety of standards that are supposed to "raise the bar" for software development. To me this sounds like the argument for certification and accreditation (C&A). Millions of dollars and thousands of hours are spent on C&A, and C&A levels are used to assess security. In reality C&A is a 20-year-old paperwork exercise that does not yield improved security. The only real way to measure security is to track the number and types of compromises over time, and try to see those numbers decrease.

Second, the government is worried about rogue developers (often overseas and outsourced) introducing back doors into critical code. No amount of paperwork is going to stop this group. Whatever DHS and friends produce will be widely distributed in the hope of encouraging its adoption. This means rogue developers can code around the checks performed by DHS diagnostic software. Even if reviewers are given the entire source code to a project, skilled rogue developers can obfuscate their additions.

In my opinion the government spends way too much time on the vulnerability aspect of the risk equation. Remember risk = threat X vulnerability X asset/impact/cost/etc. Instead of devoting so much effort to vulnerabilities, I think the government should divert resources to deterring and prosecuting threats.

Consider the "defense" of a city from thieves. Do city officials devote huge amounts of resources to shoring up doors, windows, locks, and so forth on citizen homes? That problem is too large, and thieves would find other ways to steal anyway. Instead, police deter crime when possible and catch thieves who do manage to steal property. Of course "proactive" measures to prevent crime are preferred, so the police work with property owners to make improvements to homes and businesses where possible.

I asked Mr Jarzombek a question along these lines. He essentially said the threat problem is too difficult to address, so the government concentrates on vulnerabilities. That's not much of an answer, since his approach has to defend all of the nation's targets. My threat-based approach focuses on deterring and capturing the much smaller groups of real threats.

Mr Jarzombek then said that the government does pursue threats, but he "can't talk about that." Why not? I understand he and others can't reveal operational details, but why not say "Federal, state and local law enforcement are watching carefully and we will have zero tolerance for these kinds of crimes." Someone actually said those words, but not about attacking infrastructure. These words were spoken by Alberto Gonzales, US Attorney General, with respect to Katrina phishers.

This approach would have more effect against domestic intruders, since foreign governments would not be scared by threat of prosecution. However, if foreign groups knew we would pursue them with means other than law enforcement, we might be able to deter some of their activities. At the very least we could devote more resources to intelligence and infiltration, thereby learning about groups attacking infrastructure and preventing damaging attacks.

Since I'm discussing software assurance, I found a few interesting sites hosted by Fortify Software. The Taxonomy of Coding Errors that Affect Security looks very cool. The Fortify Extra is a newsletter, which among other features includes a "Who's Winning in the Press?" count of "good guy" and "bad guy" citations. This site is not yet live, but in October DHS will launch buildsecurityin.us-cert.gov. The Center for National Software Studies was mentioned last night.

Also, the 2nd Annual US OWASP Conference will be held in Gaithersburg, MD 11-12 October 2005.

Thursday, 15 September 2005

BSD Certification Group Publishes Usage Survey

The BSD Certification Group is looking for people to complete a BSD Usage Survey. The survey consists of 19 questions. It took me less than five minutes to complete it. You can read more about the survey in this press release and the news section. Please complete this survey if you use any of the BSDs. It will help us better design a BSD Certification for you. Thank you!

Also, the August newsletter has been published, and you can track BSD certification progress at our BSD Certification Group Blog.

Notes on Network Security Monitoring

I've been performing a network security monitoring assessment for a client this week. I use interviews, observations, and documentation review to provide findings, discussion, and recommendations for improving the client's incident detection and response operations.

During this process I was asked if I knew ways to measure packet loss on open source sensors. (This client uses FreeBSD, which is helpful!) Today I remembered work by Christian SJ Peron on bpfstat, available only on FreeBSD 6.0. bpfstat provides statistics like the following. Here I am running Tcpdump and Trafshow, and bpfstat is reporting packet collection information on interface sf0 every 1 second.

bpfstat -i 1 -I sf0
pid netif flags recv drop match sblen hblen command
1682 sf0 p--s- 6337 0 6337 3844 0 trafshow
780 sf0 p--s- 38405 0 38405 11380 0 tcpdump
1682 sf0 p--s- 7142 0 7142 22046 0 trafshow
780 sf0 p--s- 39210 0 39210 14588 0 tcpdump
1682 sf0 p--s- 7997 0 7997 12686 0 trafshow
780 sf0 p--s- 40065 0 40065 24316 0 tcpdump
1682 sf0 p--s- 8851 0 8851 3222 0 trafshow
780 sf0 p--s- 40919 0 40919 6412 0 tcpdump
1682 sf0 p--s- 9705 0 9705 26516 0 trafshow
780 sf0 p--s- 41773 0 41773 21108 0 tcpdump
1682 sf0 p--s- 10467 0 10467 7484 0 trafshow
780 sf0 p--s- 42535 0 42535 6516 0 tcpdump

recv, drop, and match are self-explanatory. Christian provided this explanation of flags:


p - promiscuous mode
i - BIOCIMMEDIATE has been set
f - BIOCSHDRCMPLT has NOT been set
s - BIOCSSEESENT has been set
a - packet reception generates a signal
l - descriptor has been locked (-CURRENT ONLY) BIOCLOCK

hblen refers to the BPF hold buffer and sblen refers to the store buffer. The first entry for Tcpdump, for example, indicates that the current size of Tcpdump's store buffer is 11380 bytes and the size of the hold buffer is zero.

I asked Christian: "Do you know of any other ways to measure packet loss outside of the programs themselves? (In other words, upon exiting, Snort reports packet loss.) This is an extremely interesting subject for people doing network security monitoring."

He replied:

"No, none that I know of. Which was the primary motivator behind what I did. The reason I added this functionality into FreeBSD was because we run a lot of various IDS and network monitoring processes. Some of them don't offer any stats at all while others require termination or signal delivery.

Rather than interrupting the processes, I figured since the kernel keeps real time counters in memory for these types of statistics, I would export them to userspace. The one exception was the "matched" counter, which I introduced.

You can expect to have packet loss, and now we have a non-intrusive way to monitor this packet loss.

It might also be worth noting that Rui Paulo from NetBSD thought this was a great idea and added support for this into NetBSD and netstat. I am going to release bpfstat 2, which will compile on both NetBSD and FreeBSD.

CVS commit: src/sys/net
CVS commit: src/usr.bin/netstat

They also provided some good feedback for the netstat and PID processing."


My NSM Assessment client uses Bro to provide indicators for intrusion detection. I noticed the change log says Bro 0.9a10 was released on 6 Sep. The previous version, 0.9a9, arrived 19 May. At some point, I intend to re-engage on Bro to learn what else it can provide in an operational environment. Unfortunately the security/bro port does not have a maintainer. That might be a good learning project.

Tuesday, 13 September 2005

Vulnerability in Snort 2.4.0 and Older

I read this news about a vulnerability in Snort 2.4.0 and older versions. You're affected if you process a malicious packet while in verbose mode. This means running Snort using the -v switch. Typically this is only used to visually inspect traffic and not for intrusion detection purposes.
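For reference, verbose mode means an invocation along these lines (the interface name is just an example):

$ snort -v -i fxp0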

Through the FrSIRT advisory I learned about the discovery of this vulnerability by A. Alejandro Hernández Hernández. An exploit is available to crash Snort. There is no indication at this time that program flow can be hijacked to control the system. The researcher used Fuzzball2 to send weird packets with Selective ACKnowledgement (SACK) options through Snort to find the exploit condition.

I am impressed by Sourcefire's response to this issue, as shown by the disclosure timeline:

  • Flaw Discovered: 20/08/2005.

  • Vendor Notification: 22/08/2005.

  • Vendor Response: 23/08/2005.

  • Date Published: 11/09/2005.


Sourcefire should have credited the researcher in their vulnerability announcement, however.

You can either upgrade via CVS, wait for Snort 2.4.1, or not run Snort in verbose mode.

Monday, 12 September 2005

Sguil at RAID 2005

Thanks to Russ McRee, Sguil made an appearance in a poster session at the 2005 Eighth International Symposium on Recent Advances in Intrusion Detection (RAID). I attended RAID 2003. I've posted Russ' slides (.pdf, 5.8 MB) on the Sguil home page to conserve Russ' bandwidth. Russ advocates using Sguil and Aanval in tandem. I have never used Aanval, and it does not appear in the FreeBSD ports tree. I may still give it a try when I find time.

Register for 15 September ISSA-NoVA Meeting by Noon Tuesday

To my DC metro area readers: if you'd like to attend the local ISSA-NoVA chapter meeting on Thursday night, please RSVP by noon Tuesday. I plan to be there to hear Joe Jarzombek, Director for Software Assurance for the National Cyber Security Division of the Department of Homeland Security. The topic will be Software Assurance: A Strategic initiative of the US Department of Homeland Security to promote Integrity, Security, and Reliability in Software - Considerations for Advancing a National Strategy to Secure Cyberspace.

Wordy, but hopefully interesting. I will be the guy wearing a black or white polo shirt with the TaoSecurity logo. Socializing starts at 1730 at the Nortel PEC building in Fairfax, VA.

VMWare 5.5 Beta Available

I received an email today stating that VMWare Workstation 5.5 Beta is available. I am using Workstation 5 on Windows Server 2003 x86 Edition to support my Network Security Operations class. When students use SSH to connect to the class server, they are logging in to a FreeBSD server running in VMWare. (I also dual-boot the server with FreeBSD 6.0-BETAx using the amd64 port.)

The key advances appear to be the following:

  • Support for 64-bit guest operating systems

  • Experimental support for 2-way Virtual SMP

  • New support for select host and guest operating systems and hardware - 32 and 64 bit


I am excited to see support for SMP (even if only for 2 processors) appear in a Workstation product. We are going to see more multi-core systems appearing in everyday desktops (even though most "normal" users should be using thin clients). :) 64-bit support is also welcome as that architecture moves out of the server world and onto developer desktops.

Friday, 09 September 2005

Two Good SecurityFocus Articles

I just read two good columns at SecurityFocus. The first, A Changing Landscape, is by Red Cliff consultant, fellow former Foundstone consultant, and Extrusion Detection contributing author Rohyt Belani. He theorizes about the rise of client-side attacks and their effect on statistics reported by CERT/CC.

The second article is an interview with FX of Phenoelit. He discusses exploiting Cisco IOS, which is fascinating.

Final Call for NYCBSDCON Preregistration

Brad Schonhorst reminds us that if you're near New York City, you might want to check out NYCBSDCon on 17 September 2005. Tomorrow (Saturday) is the last day to preregister for this event. I won't be able to attend due to work constraints, but I think this will be a great con!

Network Security Operations Class Description

Several people have asked for additional detail on the sorts of topics covered in my Network Security Operations class. Having spent several minutes composing this response, I figured others might want to see what I teach.

Day one is all network security monitoring. This day is mainly based on material in The Tao of Network Security Monitoring. We start with a case study and then a theory section to provide background. I follow by discussing techniques to access wired and wireless traffic. That's about half of day one. The second half introduces four sections on tools to collect and analyze statistical, session, hybrid, and full content data. All of these sections conclude with hands-on labs using equipment I provide. By the end of day one students should know what network data to collect, how to access it, and what tools to capture and analyze it.

Day two is all network incident response. This day is based on material I wrote for Extrusion Detection. I start with a case study and then background theory. I combine the techniques and tools during this day, since the tools for network IR aren't as discrete as those for generic monitoring. I provide sections on incident detection, containment, and resolution. We discuss ways to limit an intruder's freedom of maneuver, how to perform first, live, and general response, and then how to reconfigure the network to reject the intruder. Again, each section is backed up by labs. By the end of day two students should know how to identify intrusions, what steps to take immediately thereafter, and how to win against a determined intruder.

Day three is all network forensics. This day is based on material I wrote for Real Digital Forensics. Network forensics is an expansion of the tools introduced in day one as applied during the steps in day two. I teach students how to collect, preserve, analyze, and present network traffic to support "patch and proceed" or "pursue and prosecute" actions. This day seriously focuses on network analysis, but I ensure students know how to take the proper steps to turn collected packets into real network-based evidence. By the end of day three students should know how to use network-based evidence to complement host-based evidence during incident response.

Day four is all labs -- live fire exercises, you might say. Students use new traffic not contained in days one, two, or three, and they work intrusions from detection through remediation and beyond. The labs in days one, two, and three are designed to introduce students to key techniques and tools. The labs in day four are designed to build confidence and familiarity so the lessons learned are immediately applicable outside the class. I want students to leave day four believing they can use this knowledge to prevent, detect, and investigate real intrusions.

If you have any questions, please contact me via richard at taosecurity dot com. Remember I am offering my only scheduled public class the last week in September, starting Tuesday 27 September. ISSA-NoVA members who sign up no later than Friday 16 September (next week) pay only $1995. See me at the next ISSA-NoVA meeting on Thursday 15 September for details if you like.

Some of you have asked me to describe the differences between this public class and my upcoming full-day tutorials at USENIX LISA 2005 in San Diego, CA, from 6-8 December 2005: network security monitoring, incident response, and forensics. At USENIX, I have to scale back the hands-on material because I can't provide laptops, and there are many more students at USENIX. My public and private classes max out at 15. Also, there is no all-lab day at USENIX.

IATF Discusses Availability and Awareness

Yesterday I attended a meeting of the Information Assurance Technical Framework (IATF) Forum. I last attended an IATF meeting two years ago. According to this introduction (.pdf) document, the IATF Forum "is a National Security Agency (NSA) sponsored outreach activity created to foster dialog amongst U.S. Government agencies, U.S. Industry, and U.S. Academia seeking to provide their customers solutions for information assurance problems." Half of the attendees were government contractors, and a quarter were government civilians.

This meeting focused on two of the Information Assurance (IA) "cornerstones" of DoD's Global Information Grid (GIG): the "Highly Available Enterprise" (HAE) and "Cyber Situational Awareness and Network Defense" (CSA/ND). The Government Accountability Office report The Global Information Grid and Challenges Facing Its Implementation (.pdf) provides a GIG overview. The NSA describes IA with respect to the GIG, and Craig Harber's December 2004 presentation The Information Assurance Component of the Global Information Grid (GIG) Integrated Architecture (.pdf) provides background on the IA cornerstones of the GIG. One slide from his presentation explains the four cornerstones:



The GIG is a long-term project, with deployment envisioned for 2020. This interview with NSA IA director Daniel G. Wolf explains NSA's role in the project and provides some information on GIG security initiatives.

One of the speakers yesterday works in the Joint Task Force Global Network Operations (JTF-GNO), which began life as the Joint Task Force Computer Network Defense (JTF-CND) when I was still in the Air Force. The JTF-GNO has a new security operations center, pictured at left and described in the article At the Heart of the Network. JTF-GNO also has a new commander; in July Air Force Lt Gen Charles E. Croom became director of the Defense Information Systems Agency (DISA) and commander of the JTF-GNO. I think it is a good idea that the person who owns DoD networks (DISA) is also in charge of defending them (JTF-GNO).

Military Information Technology magazine has a few other helpful articles on GIG and IA topics, like Global Network Guardians.

After deciphering all of the acronyms flying through the air, I found aspects of the meeting fascinating. For example, none of the CSA/ND speakers addressed network "intrusion prevention systems." Upon asking a panel about this issue, I learned there is interest in host-based "IPS", and research into determining if network IPS can be helpful.

One speaker from NSA, when describing his budget, said "IA is a hard sell... People do not fully appreciate the risk they are assuming." If even the .mil community doesn't appreciate network risks, what does that say about people with less sensitive information to protect?

To deal with limited resources, NSA is developing intellectual property that it will transfer to commercial vendors. The vendors will then sell finished products back to NSA. This is a cost-saving alternative to the traditional procurement strategy, where NSA designs, builds, and fields equipment completely in-house. To implement this plan NSA is trying to incorporate Internet standards into its own designs (like subsets of IPSec) while participating in the development of new standards.

One of the CSA/ND speakers noted that a study done 18 months ago found 20% of DoD bandwidth was used by unauthorized peer-to-peer file sharing applications. DoD has since taken steps to reduce and eliminate this traffic, since DoD sees it as illegal and a means to introduce malicious code into the enterprise. No one asked about legitimate p2p applications like retrieving OS .iso's via BitTorrent.

Another speaker noted the importance of monitoring. He said DoD conducts "policy monitoring to verify its guards operate as expected." This is exactly the right attitude. Prevention is well and good, but monitoring must always be performed to determine if prevention methods are operating properly. Failure detection cannot be performed by the system deployed to prevent intrusions. I am glad the DoD "gets" this.

Finally, one speaker said "DoD can no longer just download Snort." That really shocked me, although I should not have been surprised. Another speaker said his biggest obstacle was getting 57 separate DoD services, agencies, and organizations to implement security properly and report their status. 57! No one thinks of DoD much beyond the four services, but there are many DoD components that act independently.

The next conference is scheduled for 12 October 2005 and will discuss "Managing and Controlling Enterprise Security." I may be teaching in California that week, so I don't plan to attend.

Tuesday, 06 September 2005

VMWare Team LAN Appears Shared

Previously I wrote about my plans to incorporate VMWare into my classes. Originally I intended to use GSX Server. I thought I would give each student his or her own independent image. I assumed people would want to build their own sensors (from the ground up), and that required providing complete virtual machines.

Based on feedback here and in classes since that post, I've learned most people don't care about building sensors. They are more interested in analysis. Therefore, I decided students didn't need dedicated VMs. Instead, I could run a few VMs with dedicated functions and let students share systems as normal users. For example, in my last class a dozen students all logged in to a single FreeBSD image to perform analysis.

In the future, I plan to have multiple images running. For example, I plan to offer several complete Sguil installations. Students in groups of two or four might share one Sguil server. My current test environment uses VMWare Workstation 5 running 6 FreeBSD 5.4 REL images simultaneously.

Since VMWare 3.x I've wondered about the product's networking support. For example, if I provided a set of VMs with internal NICs, could they see each other's traffic? I decided to answer this question by putting my 6 FreeBSD VMs into a single VM "team", as shown.
One interface (lnc0 on each) is bridged so I can access the systems remotely. The second interface (lnc1 on each) is limited to the team and is addressed with an internal scheme. Here is the question: if freebsd54-rel_01 pings freebsd54-rel_02, will freebsd54-rel_03 see it? Here is the ping:

$ hostname
fbsd5401.taosecurity.com
$ ping -c 1 10.1.1.202
PING 10.1.1.202 (10.1.1.202): 56 data bytes
64 bytes from 10.1.1.202: icmp_seq=0 ttl=64 time=2.943 ms

--- 10.1.1.202 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 2.943/2.943/2.943/0.000 ms

Here is what another system on the team sees:

fbsd5403# tcpdump -n -i lnc1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lnc1, link-type EN10MB (Ethernet), capture size 96 bytes
08:19:58.946640 IP 10.1.1.201 > 10.1.1.202: icmp 64: echo request seq 0
08:19:58.946695 IP 10.1.1.202 > 10.1.1.201: icmp 64: echo reply seq 0

Yes. That is great. Life is much simpler now, since any machine can see any other machine on the same team. That facilitates setting up networks that can be monitored.

Sunday, 04 September 2005

Speaking at DoD Cybercrime in January

I learned I will be delivering two presentations at the DoD Cybercrime 2006 Conference in Palm Harbor, FL on 11 January 2006. I will present shortened versions of my network incident response and forensics classes. Last year I spoke about network security monitoring with Sguil and other open source tools. In 2000 I spoke at the first DoD Cybercrime conference in Colorado, delivering the AFCERT mission briefing.

Saturday, 03 September 2005

Speaking at USENIX LISA in December

I just checked the training schedule for the next USENIX LISA (Large Installation System Administration) conference. I will teach network security monitoring, incident response, and forensics. These are each full-day tutorials, which begin on Tuesday 6 December and end Thursday 8 December. Early bird registration ends 18 November 2005. It looks like you can attend all three days for $1775. I am looking forward to teaching these classes because the USENIX crowd is always top-notch.

Don't forget my only scheduled public Network Security Operations class, which will be held in Fairfax, VA 27-30 September (that's three weeks from Tuesday)! If you're an ISSA-NoVA member and you register no later than Friday 16 September, you will save $1000 on the class price.

To register or for more information, email me: richard at taosecurity dot com. Thank you.

Friday, 02 September 2005

Request for Comments on CERT and SEI Training

I have been taking a closer look at training offered by the CERT® Coordination Center and the Software Engineering Institute. Six years ago, as an Air Force captain from the AFCERT, I enjoyed the Advanced Incident Handling for Technical Staff course. Now I may have a chance to teach or develop course materials for some of these courses. I am also considering the value of the CERT®-Certified Computer Security Incident Handler program.

Has anyone attended any of these courses recently? If yes, what do you think of them? If no, why not? What alternatives have you considered or attended?

Thursday, 01 September 2005

Thoughts on Cisco Packet Magazine

I like to read Cisco's quarterly Packet magazine. It's free, and it provides insight into developments by the world's networking (and one day, security) juggernaut. While waiting for car maintenance this morning, I managed to read much of the Quarter 2 2005 issue, devoted to Self-Defending Networks.

According to Cisco, they have been releasing Self-Defending Network components every few years. In 2000 they offered integrated security, followed by collaborative security in 2003. Now, in 2005, we have adaptive threat defense. The first term means security is part of Cisco products, such as routers and switches. The second term means these products should work together. Let's look closer at the third term, which Cisco claims will "protect every packet and every packet flow on a network."

I was skeptical when I saw the cover text. The phrase "eliminating the source of attacks" and the sentence "network security grows adaptive, reaching inside Web applications and excising attacks at their source" also worried me. When I read words like that, I imagine Cisco police forces banging down doors of attacker apartments in Bucharest or Beijing. That's how one really "excises" a threat!

As I read more about Cisco's plans, however, I realized they refer to containing malicious systems that connect to protected networks. For example, one article described how the CS-MARS [Security Monitoring, Analysis and Response System], developed by the former Protego Networks, "will send the network administrator the appropriate command to execute an action to excise the problem from the network at its source." In other words, Cisco gear will help identify and disable misbehaving network assets (or rogue visitors).
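
In practice, "excising the problem at its source" often boils down to something as mundane as disabling the offending access port. The exact commands MARS recommends will vary, but a hedged sketch of what an administrator would push to a Cisco switch looks like this (the interface name is illustrative):

switch# configure terminal
switch(config)# interface FastEthernet0/5
switch(config-if)# shutdown

That isolates the misbehaving host at layer two while analysts figure out what happened.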

I found this article on wireless defenses interesting. It describes Cisco's approach to handling rogue wireless access points:

"With rogue access point suppression, the sensors detect wireless-device information, aggregate it, and pass it up to elements in the network that can correlate it and act upon it. When a wireless access point is detected on the network, the WLAN intrusion prevention system sends RF management frames that disassociate any clients that connect to it and attempt to trace and shut down the switch port to which the rogue is connected."

That seems cool. Only a few years ago I remember Mike Schiffman demonstrating libradiate at Black Hat by disassociating wireless clients. Now he works at Cisco!

I also learned a little about virtual routing and forwarding, which acts like "multiple routers within a single chassis."
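
To make that concrete, here is a hedged sketch of VRF-lite on a single router. The names, route distinguishers, and addresses are made up, but the point is that the two interfaces below can carry overlapping address space because each VRF keeps its own routing table:

ip vrf CUST-A
 rd 65000:1
!
ip vrf CUST-B
 rd 65000:2
!
interface FastEthernet0/0
 ip vrf forwarding CUST-A
 ip address 192.168.1.1 255.255.255.0
!
interface FastEthernet0/1
 ip vrf forwarding CUST-B
 ip address 192.168.1.1 255.255.255.0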

Overall, I found Cisco's magazine very useful. I also subscribe to the free IP Journal, which I recommend. Whenever I read articles by Cisco, it reminds me I am not currently a networking engineer. That would require far more protocol and algorithm knowledge than I currently possess!

Speaking of Cisco, I will be speaking on 10 October 2005 to the Cisco Fall 2005 System Engineering Security Virtual Team Meeting in San Jose, CA. I will probably discuss network security monitoring and give a Sguil demo.

Feds Hurry, Slow Down

In my post Opportunity Costs of Security Clearances I ranted about needing security clearances for assessment work. Now I read Security clearance delays still a problem by Florence Olsen:

"Security clearance delays are the same, if not worse, than a year ago, before lawmakers made changes designed to help clear the backlog...

[N]ewly enacted reciprocity rules have made no dent in a problem that is creating mounting costs for high-tech companies. Those rules permit agencies to accept clearances initiated by other agencies."

Wonderful. Not only do agencies not trust employees, they don't trust other government agencies. That is understandable, but pathetic; jobs are left vacant because .gov entities want to play petty games. It gets worse:

"ITAA officials said 27 member companies that responded to a survey are coping with the backlog by hiring cleared employees from one another, sometimes paying premiums of up to 25 percent."

Great. This means the same cadre of cleared people is being shuffled among agencies. New blood with potentially more enthusiasm, skill, and (gasp) lower salaries (which would appeal to any commercial endeavor) is left to watch this dance from the outside.

So how bad are the delays?

"21 companies, said they had encountered delays of 270 or more days in getting top-secret clearances for employees. Last year, when ITAA conducted a similar survey, 70 percent reported equally lengthy delays.

The longest waits occurred in seeking clearances for employees to work at the CIA and the Defense Department."

So, in the places where skilled security practitioners are most needed, they are not available.

Those that are already in place must cram before 2005 FISMA scores arrive. In my NSA-IAM class last week, a State Dept. worker told us he had to work a disaster recovery scenario on Sunday 28 August in order to meet his 31 August deadline.

Companies that cannot properly manage their workforces go out of business. Governments just lumber along. There is no mechanism to correct deficiencies, aside from massive intelligence or defense failures like 9/11, the Iraq war, and so on. Sorry for the depressing post, but I don't see a light at the end of this tunnel!

Pool IDS

By now you've probably heard the story about the 10-year-old girl in Wales who was saved by the Poseidon computer-aided drowning detection system. According to the vendor:

"[Poseidon] uses advanced computer vision technology to analyze activity in the pool, captured by a network of cameras mounted both above and below the surface of the pool. Poseidon helps lifeguards monitor swimmers' trajectories, and can alert them in seconds to a swimmer in trouble."

While reading comments at Slashdot, I was reminded of the value of digital intrusion detection systems. This comment by a Poseidon user is very helpful if you want to know more about how Poseidon works.

For example, some critics complain about "false positives," meaning Poseidon sounds the alarm although no one is drowning. Poseidon alarms when a swimmer stops moving below the water for more than a few seconds. If the Poseidon programmers tell the device to alert when people appear to be drowning (i.e., motionless below water for a while), then it is not the device's fault when it alerts lifeguards of this fact.

It should not be Poseidon's fault if someone decides to "play dead" at the bottom of the pool!

If Poseidon alarms when everyone is moving, then that is an example of a real false positive. A false negative means no alarm when someone is drowning and motionless below the water.

Beyond the false positive debate, someone proposed a "drowning prevention system" based on the Poseidon alert. The idea was to raise a portion of the pool (!) under the motionless person, thereby elevating them above the water (!) to safety. This is an example of "prevention" being difficult or too costly. Wherever prevention is impossible, detection should be applied.

Finally, the Poseidon system demonstrates another feature of digital detection: human involvement. Poseidon sounds an alarm, to which human "analysts" (aka lifeguards) must respond. Time is of the essence. Here, "real time" does matter. However, a person could thrash underwater while drowning, and only become motionless after their lungs have filled with water. Still, an alert a few seconds later is better than no alert at all.

On a related note, consider the T.J. Hooper v. Northern Barge Corp. effect. This was the case where Judge Learned Hand (I am not making that up) essentially found tugboat owners negligent for not installing a newfangled "radio" technology (in 1932) that could have warned the boats of an impending storm. Radios were not mandatory at that time on boats, but Judge Hand "legislated from the bench" and essentially made them mandatory because they were so helpful. The previous link uses the same argument to advocate installing DDoS defenses, but one could extend the argument to hold pool owners negligent if they do not deploy Poseidon-like systems.