Thursday, 24 April 2008

Tactical Forensics Platform

Earlier I wrote about my proposed Tactical Network Security Monitoring Platform. Today I finally sat down and installed the operating systems I need on this system to create a portable tactical forensics and investigation platform. I did not want to use my main work laptop for this sort of work because I do not administer it. I needed my forensics platform to be separate from the corporate domain and totally under my control. I only feel comfortable attesting to the configuration of a system doing forensics if I built it from the ground up and I am the sole administrator.

For operating systems, I had three needs. I wanted Windows XP because the majority of commercial forensics software runs on Windows. I wanted Ubuntu Hardy Heron so I could have access to Linux forensics software and VMware Server. (Windows is also a possible VMware Server candidate, but I might install a copy of VMware Workstation on the Windows side.) I wanted FreeBSD 7.0 in case I needed to do packet capture and related network security monitoring tasks.

I decided to triple-boot these three operating systems. The box presents three logical hard drives: two physical drives (147 GB each) and a RAID 0 array that appears as a single 447 GB drive.

I had to experiment with various setups before I got everything working. The following is what I settled upon. I'm posting this information for future reference and for those who might want to try the same setup.

First I installed Windows XP on the only HDD it could see, one of the 147 GB HDDs. I thought this a little odd, but it suited my purposes. I rebooted and Windows started without incident.

Next I changed the default boot drive in the BIOS from the Windows HDD to the next HDD. I installed Ubuntu Hardy Heron Desktop on that second 147 GB HDD. I selected the "Advanced" option and told Ubuntu to install its bootloader on the drive I was using for Linux (/dev/sdc at install time, which turned out to be a problem).

When I tried rebooting, GRUB had created entries for Linux and Windows, but neither worked. The drive ordering seen by the Ubuntu live CD/installer wasn't the same ordering seen by GRUB (or by Linux, once booted). Once I figured that out, I manually changed the GRUB command line to boot properly into Linux, and I needed a similar fix for Windows. I'll show the result shortly. I made the changes to GRUB permanent before going to the next step.
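
If you hit the same drive-ordering mismatch, one way to confirm the correct (hdX,Y) values is from the GRUB legacy shell itself, before committing anything to menu.lst. A rough sketch; the values below simply mirror the working menu.lst shown later, so treat them as illustrative rather than the exact commands I typed:

grub> find /boot/grub/stage1
grub> root (hd0,0)
grub> kernel /boot/vmlinuz-2.6.24-16-generic root=UUID=a3bc8e2b-0678-440d-877f-cecedce8fa9b ro
grub> initrd /boot/initrd.img-2.6.24-16-generic
grub> boot

The find command lists every (hdX,Y) containing that file, which tells you how GRUB numbers the drives once it is in control, regardless of how the installer saw them.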

Finally I installed FreeBSD 7.0, which saw the remaining 447 GB HDD as /dev/da0 and the other HDDs as /dev/ad4 and /dev/ad6. I didn't touch /dev/ad4 or /dev/ad6 but installed the FreeBSD bootloader into /dev/da0.

After a reboot I had to try various combinations to get GRUB to properly boot FreeBSD 7.0, but eventually I got that working too.
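
Testing candidate combinations from the GRUB shell is faster than editing menu.lst after every guess. The combination that ultimately worked for me matches the FreeBSD entry in the menu shown below:

grub> root (hd1,a)
grub> chainloader +1
grub> boot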

Here is how Linux's fdisk -l saw the computer:

root@nextcom01:~# fdisk -l

Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0f8004b1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       19456   156280288+   7  HPFS/NTFS

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x8f8004b1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1         249     2000061   83  Linux
/dev/sdb2   *         250         747     4000185   82  Linux swap / Solaris
/dev/sdb3   *         748        3237    20000925   83  Linux
/dev/sdb4            3238       19457   130287150    5  Extended
/dev/sdb5            3238        4482    10000431   83  Linux
/dev/sdb6            4483        6972    20000893+  83  Linux
/dev/sdb7            6973        7221     2000061   83  Linux
/dev/sdb8            7222       19457    98285638+  83  Linux

Disk /dev/sdc: 479.9 GB, 479965741056 bytes
255 heads, 63 sectors/track, 58352 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0f800000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1       58352   468712408+  a5  FreeBSD

Here is the GRUB menu I got working:
$ grep -v ^# /boot/grub/menu.lst 
default 0
timeout 10

title Ubuntu 8.04, kernel 2.6.24-16-generic
root (hd0,0)
kernel /boot/vmlinuz-2.6.24-16-generic root=UUID=a3bc8e2b-0678-440d-877f-cecedce8fa9b ro quiet splash
initrd /boot/initrd.img-2.6.24-16-generic
quiet

title Ubuntu 8.04, kernel 2.6.24-16-generic (recovery mode)
root (hd0,0)
kernel /boot/vmlinuz-2.6.24-16-generic root=UUID=a3bc8e2b-0678-440d-877f-cecedce8fa9b ro single
initrd /boot/initrd.img-2.6.24-16-generic

title Ubuntu 8.04, memtest86+
root (hd0,0)
kernel /boot/memtest86+.bin
quiet

title Other operating systems:
root

title Microsoft Windows XP Professional
root (hd2,0)
savedefault
map (hd0) (hd2)
map (hd2) (hd0)
chainloader +1

title FreeBSD 7.0
root (hd1,a)
savedefault
chainloader +1

I'll probably resize the Windows partition and add a D: drive. I just noticed I devoted the whole drive to C: during installation.

Update: I wasn't able to use the version of GParted available through Ubuntu (0.3.5 I think) to resize the C: partition but I did use the latest stable liveCD (0.3.6-7) to resize C: and create E: (D: was already the optical drive).

New Hakin9 Released

The latest issue of Hakin9 has been released. Several articles look interesting, including Javascript Obfuscation Techniques by David Sancho and an interview with Marcus Ranum. Hakin9 briefly interviewed Harlan Carvey and me. I've uploaded the one page of the interview if you'd like to read it.

First Issue of BSD Magazine Released

I received a copy of the new BSD Magazine yesterday by air mail from Poland, and I have to say it looks pretty cool. It contains an article I wrote explaining how to install Sguil 0.7.0 on FreeBSD 7.0. At the time I used a CVS version of Sguil and FreeBSD 7.0-BETA4, but the article is still relevant.

One caution: I discovered a bug in MySQL, which I logged as Optimizer does table scan for select count(*) w/5.1.22, .23, not 5.0.51, 5.1.11. You will encounter this bug if you follow the instructions in my magazine article. The work-around is to use MySQL 5.0.51a instead of the 5.1.22 shown in the magazine.
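
If you want to check whether your MySQL build is affected before loading a lot of data, looking at the query plan for the count is a quick sanity check. This is a hypothetical illustration, not something from the article; the database and table names are placeholders:

$ mysql -u sguil -p -e "EXPLAIN SELECT COUNT(*) FROM event\G" sguildb

On an affected 5.1.x build the plan reports a full table scan (type: ALL) for the count, while 5.0.51a answers the count without scanning the whole table.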

Dru Lavigne does a nice job detailing the magazine's table of contents.

Wednesday, 23 April 2008

NoVA Sec Meeting 1930 Thursday 24 April 2008

The next NoVA Sec meeting will take place at 1930 (7:30 pm) on Thursday 24 April 2008 at Fishnet Security:

13454 Sunrise Valley Dr. Suite 230
Herndon, VA 20171
703.793.1440

Aaron Walters from Volatile Systems will discuss memory forensics.

Thank you to Fishnet and Aaron for their last-minute cooperation! I'm cross-posting this notice to notify as many people as possible in the day before the meeting.

Thursday, 17 April 2008

CloudSecurity.org

What a great idea for a blog -- CloudSecurity.org:

This blog is dedicated to “Cloud Computing” from an IT security perspective.

Cloud Computing is a nebulous term covering an array of technologies and services including Grid Computing, Utility Computing, Software as a Service (SaaS), Storage in the Cloud and Virtualization. There is no shortage of buzzwords and definitions differ depending on who you talk to.

The common theme is that computing takes place ‘in the cloud’ - outside of your organisation's network.

Semantics aside, there is a much bigger question: what does it all mean from an IT security perspective?


One day (during my working career, I am positive) we will all either 1) be cloud customers or 2) work in the cloud. I am glad to see someone take a stand now to try to understand what that means from a security perspective.

You might also find Craig's other blog -- SecurityWannabe -- to be interesting. He did an interview with one of my Three Wise Men, Ross Anderson, to mark the publication of the likely candidate for Best Book Bejtlich Read in 2008: Security Engineering, 2nd Ed.

Tuesday, 15 April 2008

Looking for Security-Assessor Friendly, Debian Dedicated Server

I'm looking for a dedicated server company that could provide a Debian environment suitable for running VMware Server. As a bonus it would be helpful to contract with a company that permits authorized outbound network scanning.

As an alternative, I may try colocation. I am looking for a box for security testing, and VMware may not be suitable. I may need a box that can run Xen, for example.

If you have any recommendations for dedicated server or colocation providers, please leave a comment or email me directly -- taosecurity at gmail dot com. Companies situated close to northern Virginia would be excellent. Thank you.

Monday, 14 April 2008

Run Apps on Cisco ISR Routers

Earlier this month we joked that the Sguil project was acquired by Cisco, such that Sguil would be integrated into Cisco platforms. Cisco routers already run Tcl, but now thanks to Cisco's new Application eXtension Platform, other possibilities are developing. According to Optimize Branch Footprint with Application Integration, Cisco says:

  • Linux-based integration environment with downloadable Software Development Kit (SDK)

  • Multiple applications support with the ability to segment and guarantee CPU, memory, and disk resources

  • Certified libraries to implement C, Python, Perl, and Java applications

  • Supported by Cisco 1841, 2800, and 3800 Series Integrated Services Routers


Sun used to say The Network is the Computer. Cisco now states The Network as a Platform. In other words, why deploy another server or appliance if you can just run it on your Cisco router?

I am unsure how this will play out. I figure Cisco just wanted to add to the confusion caused by virtualization with their own take on consolidating platforms. At some point I see one giant box (labelled Skynet probably) with a massive antenna to which we all connect our dumb terminals via wireless.

I'd like to get a Cisco 2800 series ISR router to try this out... donations are welcome. :)

Remote Installation of the FreeBSD Operating System without a Remote Console

This looks interesting: Remote Installation of the FreeBSD Operating System without a Remote Console. I read about it on the author's blog. Daniel credits Colin Percival's Depenguinator with the idea, but he uses Martin Matuška's mfsBSD (memory file system) to create a FreeBSD image that can be written to a live remote system's hard drive, then booted and run from memory to allow full OS installation. I intend to give this a try, but if anyone beats me to it please let me know how it worked for you.
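
As I understand it, the overall flow looks something like the following. This is only a rough sketch with illustrative image, host, and device names, not the exact procedure from the article:

# copy a prebuilt mfsBSD image to the remote box and write it over the boot disk
scp mfsbsd-7.0-i386.img root@remotehost:/tmp/
ssh root@remotehost
dd if=/tmp/mfsbsd-7.0-i386.img of=/dev/sda bs=1M    # destroys everything on that disk
reboot
# the machine comes back up running FreeBSD entirely from memory;
# ssh in again, then partition, newfs, and install FreeBSD onto the real disk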

Sunday, 13 April 2008

Aaron Turner and Michael Assante on Freedom of the Cyber Seas

Thanks to Nick Selby I learned of a sequel to the great historical security paper Infrastructure Protection in the Ancient World. Michael Assante is back, joined by another security vet, Aaron Turner, discussing Freedom of the Cyber Seas. The authors compare the threat of naval piracy during the Jefferson administration with the current digital threat. Prior to Jefferson, US policy was to pay protection money to stop pirates seizing US goods.

Opposing John Adams' pirate payment policy, Jefferson championed the slogan coined by U.S. Representative Robert Goodloe Harper in 1798: "Millions for defense, not one cent for tribute." Jefferson was also a proponent of the Mare Liberum or "Freedom of the seas" doctrine first documented in international law by Dutch jurist Hugo Grotius in 1609. Freedom of the seas was of supreme importance to the success of the United States. If America could not deliver its goods and conduct free trade, the country could not survive economically.

Following his inauguration in 1801, Jefferson translated his anti-tribute rhetoric into policy by refusing to meet the Bashaw's demand for $225,000 from the new administration. The Bashaw declared war on the United States and cut down the flagpole flying the Stars and Stripes in front of the U.S. Consulate in Tripoli. Jefferson responded by sending a group of American warships to defend U.S. interests in the Mediterranean. From 1801 to 1805, U.S. Navy and Marine units engaged Barbary forces on both land and sea.


This is a great victory for the anti-piracy movement, and in theory I agree that there are lessons for digital security. I wrote about modern pirates in my post Pirates in the Malacca Strait because I believe in Taking the Fight to the Enemy. However, the way Jefferson's war ended will not work for digital security:

Finally, four years of hostilities culminated in the Battle of Derna, during which American forces routed the Tripolitans and forced the Barbary States to agree to a peace treaty, which was signed in Tripoli on June 10, 1805. The First Barbary War was the debut of American military forces' capability to project a U.S. president's policy beyond his own borders. (emphasis added)

This campaign worked because Jefferson got a set of state actors to sign a peace treaty. I don't see how we can do that with digital threats, from criminals to economic spies to nation state actors. Prosecutors have been fighting organized crime in the US for over a century. Companies have always competed in plain sight and in hidden areas. Intelligence actions have also been a constant throughout history.

To have a chance at success, I think our strategy needs to differentiate according to the threat. We'll have to pursue an anti-crime strategy for the criminals, a counter-business intelligence strategy for the economic spies, and a counter-intelligence strategy for the foreign intel services.

I agree with the following:

The first step would be for the United States to develop a consistent policy that articulates America's commitment to assuring the free navigation of the "cyber seas." Perhaps most critical to the success of that policy will be a future president's support for efforts that translate rhetoric to actions--developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens' constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.

It would be ironic if the Air Force Cyber Command became the force that patrolled and defended the "cyber seas". The Navy is too busy taking over the traditional Joint world from the Army to be able to counter the Air Force's cyber march.

Solera V2P Tap

It looks like Solera Networks built a virtual tap, as I hoped someone would. I mentioned the idea to Solera when I visited them last year, so I'm glad to see it realized. I told them it would be helpful to create a way for virtual switches to export traffic from the VM environment to a physical environment, so that an NSM sensor could watch traffic just as it would when connected to a physical tap.

This picture describes what it does:

You can read more in this news post and product description. You can download it here. The V2P Tap requires ESX Server, which I do not run. If someone with ESX Server downloads the V2P Tap, please let me know how it works for you.

Friday, 11 April 2008

More Aggressive Network Self-Defense

Some of you might remember this book from my 2005 review. I thought of it after reading Security Guru Gives Hackers a Taste of Their Own Medicine. From the article:

Malicious hackers beware: Computer security expert Joel Eriksson might already own your box.

Eriksson, a researcher at the Swedish security firm Bitsec, uses reverse-engineering tools to find remotely exploitable security holes in hacking software. In particular, he targets the client-side applications intruders use to control Trojan horses from afar, finding vulnerabilities that would let him upload his own rogue software to intruders' machines.

He demoed the technique publicly for the first time at the RSA conference Friday.


You might remember a similar story from Def Con 2005:

New research released at the DefCon conference suggests that not only is it important to apply patches to fix security flaws in commonly used computer software, but that patch installation is important for the very tools hackers and security professionals frequently use to break into (or test the security of) computer networks.

According to new findings by the venerable hacker ninjas known as the Shmoo Group, some of the most popular tools used by hackers and security professionals to infiltrate and test the security of targeted networks contain serious flaws that defenders could use to turn the tables on hackers.


Three years ago in my post about ANSD I wrote:

I disagree with the strike-back idea, as I believe it steps over the line into vigilante justice.

I'm less sure about that now. In the three years that have passed, security has gotten worse, government ability to deter and/or defeat intruders has not improved, and intruders have become more sophisticated. If we continue to sit on our hands waiting for the cavalry to arrive, it will be too late. (It already is too late for most companies anyway; they're owned.)

Disruption of the command-and-control mechanisms used to control compromised hosts is not something I recommend for everyone, but it would certainly push some attackers off-balance. They would suddenly start to incur some of the same costs that defenders spend on trying to develop more secure software. I think it's time for some of us to consider these offensive techniques.

Incidentally, the ActiveResponse.org site I mentioned in 2005 appears to be collecting links to papers and studies on active response.

Argus 3.0 Released

I just posted that my latest Snort Report covered Argus 3.0. Those of you who like to wait for release-grade software should be happy. This week Carter Bullard published Argus 3.0, as announced on the Argus mailing list, more than two years after I posted Argus 3.0 Will Be Released Soon. This is great news and I look forward to learning more about the new features in this powerful application.

Snort Report 14 Posted

My 14th Snort Report titled Network session data analysis with Snort and Argus has been posted. The article doesn't talk about Snort (despite the title -- not mine!) but it does discuss Argus, the network session tool developed by Carter Bullard. From the start of the article:

This edition of the Snort Report departs from the standard format by introducing a data format and data collecting tool that can work alongside Snort. The data format is session data, and the tool is Argus 3.0.

Why session data?

The Snort intrusion detection system can identify suspicious and malicious activity by inspecting network traffic. Snort makes a judgment based on its analytical capabilities and notifies the operator of its decision by generating an alert. I call the output of this collect-inspect-report process "alert data."

While this is a good and necessary methodology, it has one important flaw. In most configurations, Snort is not told to report on what it sees if the traffic in question is deemed to be "normal." One might consider this aspect of Snort to be a benefit. Why generate an alert if the traffic is "normal" and not suspicious or malicious?

No alerting system can perfectly identify all suspicious or malicious activity. In many cases it's simply not possible -- especially on a packet-by-packet basis -- to identify a packet or stream as being worthy of an operator's attention. In those cases it makes sense to keep a log of the traffic. Recording traffic or characteristics of traffic for later analysis has recently been labeled retrospective network analysis (RNA), not to be confused with Sourcefire's Real-time Network Awareness. Others call recording traffic in this manner "network forensics," but that implies a degree of care and evidence handling that exceeds the methodology I present here.

When you collect data about traffic that Snort didn't consider to be suspicious or malicious, you have the opportunity to look back (hence the term "retrospective") to see what happened during an incident. How do you know to look back? Perhaps you receive a tip from law enforcement. Maybe a client reports odd activity. Or you perform a manual investigation and realize you'd like to know as much as possible about the network traffic of a certain host. In all of these situations, Snort might not have provided any clue that something was amiss.

Despite my attention to Snort in this series, I never deploy Snort as a stand-alone tool. I always supplement Snort with additional data sources. One of the most important supplementary data sources I collect is session data.
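
To give a flavor of how little is involved in collecting session data, here is a minimal sketch using Argus 3.0; the interface and file names are illustrative, and the article covers the full setup:

# record session (flow) data from an interface to a file
argus -i em0 -w /nsm/argus/argus.raw
# later, print one line per flow from the stored records
ra -n -r /nsm/argus/argus.raw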


In my 15th Snort Report, already submitted to the publisher, I explain why IDS was never "dead." You might want to hear Marty Roesch's views on the subject in this video from RSA, where he also discusses Snort 3.0.

BusinessWeek on The New E-spionage Threat

I'd like to head off any more messages to me telling me to look at the following: The New E-spionage Threat, the cover story for this week's issue of BusinessWeek. I recommend also listening to the podcast, which is 18:23 long and a good resource for decision makers with iPods.

Friday, 04 April 2008

OpenPacket.org 1.0 Is Live

Nearly three years after the initial post describing the idea, I am happy to report that OpenPacket.org 1.0 is ready for public use, free of charge.

The mission of OpenPacket.org is to provide quality network traffic traces to researchers, analysts, and other members of the digital security community. One of the most difficult problems facing researchers, analysts, and others is understanding traffic carried by networks. At present there is no central repository of traces from which a student of network traffic could draw samples. OpenPacket.org will provide one possible solution to this problem.

Analysts looking for network traffic of a particular type can visit OpenPacket.org, query the OpenPacket.org capture repo for matching traces, and download those packets in their original format (e.g., Libpcap). The analyst will be able to process and analyze that traffic using tools of their choice, like Tcpdump, Snort, Ethereal, and so on.
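
For example, once a trace is downloaded, reviewing it takes nothing more than pointing your tools at the file. A minimal illustration, with placeholder file and configuration names:

$ tcpdump -n -r openpacket_sample.pcap
$ snort -r openpacket_sample.pcap -c /etc/snort/snort.conf -l /tmp/snort_logs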

Analysts who collect their own traffic will be able to submit it to the OpenPacket.org database after they register.

Anonymous users can download any trace that's published. Only registered users can upload. This system provides a level of accountability for trace uploads.

Our moderators will review the trace to ensure it does not contain any sensitive information that should not be posted publicly. Besides appearing on the site, a published trace is also announced via this published trace RSS feed.

If you have any doubt regarding the publication of a trace, do not try to submit it. When moderators are unsure of the nature of a trace, we will reject it. OpenPacket.org is not a vehicle for publishing enterprise data as contained in network traffic.

I would like to thank all the people who submitted suggestions and did feature testing via the openpacket-devel mailing list. If you have issues regarding usage of the site, consider subscribing to the openpacket-users mailing list or post to the OpenPacket.org Forums.

As time permits I will probably post more on how to use OpenPacket.org strictly on the OpenPacket Blog. I will minimize cross-posting to TaoSecurity Blog and OpenPacket Blog.

I save my final thanks for Sharri Parsell, our Web developer, and JJ Cummings for hosting OpenPacket.org. Without your work we would not have a site!

Review of Visible Ops Security Posted

Amazon.com just posted my four star review of Visible Ops Security by Gene Kim, Paul Love, and George Spafford. From the review:

I reviewed Visible Ops (VO) in August 2005, and I provided commentary on a draft of Visible Ops Security (VOS) to co-author Gene Kim. I liked VO, with a few caveats that apply to both VO and VOS. I have mixed feelings on VOS because the book seems more about preparations and less about operations. Security operations (SO) obviously include integration with developers and IT staff, but SO also requires action in the face of attack. If VOS is supposed to be about SO, it should address trying to prevent compromise *and* what to do when prevention fails.

Review of Economics and Strategies of Data Security

Dan Geer was kind enough to send me a copy of his new book Economics and Strategies of Data Security, published by his employer, Verdasys. The book is exceptionally well written and packed with the sorts of insights that make Dan one of my Three Wise Men. I'd like to present a few excerpts here, partially for my own easy reference but also because they might be useful to you. I recommend that anyone who reacts violently to these ideas try reading the book. It will take only an hour or two and you can vet your response against the full text, in context, and not these snippets.

In theory, there is no difference between theory and practice, but, in practice, there is. (prior to introduction)

That's why I dislike speculation on the effectiveness of security measures and prefer collecting evidence and performing tests.

[These changes to our computing models imply] that data must either become self-protecting (massive amounts of encryption and the conversion of passive data objects into miniature program fragments), or the endpoints where services and users come together will have to be very well instrumented indeed. We find the latter more plausible... [W]e can no longer locate the whole in whole, only the parts remain locatable and barely that. The sum -- that which is greater than the parts -- comes together evanescently and on demand, and therefore it is at the point of use that any protections have to be done. (pp 18-19)

I agree. Note that moving from the idea of "protecting data" to implementing it requires making some choices. I like the emphasis on visibility too.

Since the total workload for information security professionals is proportional to the cumulative sum of all attack vectors yet invented, but the total work factor for the attack side is proportional to the cost of creating a new attack tool, the professionalization of the attack class punctures the existing security equilibrium from a moderately symmetric one to a highly asymmetric one where the advantage is structurally more favorable to the attackers. (pp 30-31, emphasis added)

This is why defenders are losing, especially as we are tasked to do "more with less" as "multitalented specialists."

To protect intellectual property you must model the attacker as an insider and prevention must be your only goal because secret data is never unrevealed thus any loss of it is never mitigable. (p 32)

I think this falls in the "nice in theory" category. Thinking of the attacker as an insider makes sense if you define insider as someone who has assumed trusted status by virtue of their unauthorized access to enterprise resources. I don't think you can equate an external party with an insider unless that external party is a former employee or working with an employee, thereby leveraging true insider knowledge and not just network position.

As far as prevention being the only goal, prevention eventually fails. So, that's obviously everyone's goal, but it has never been attainable and never will be. Getting closer to it is best, I agree.

This change in accounting standards, if it transpires, would make a very important statement to those of us who worry with data security, viz., that to be an accounted-for asset means there is a value that must be associated with it and that as a balance sheet item the Boards of Directors of all listed firms would answer to misuse of the corporate asset that data represents... [O]nce data has a value and once that value appears on the balance sheet, then the interplay between the Boards of Directors and the CEOs of this world will include, amongst the other valuations to be protected and to be grown, the valuation of data... For the business side, the most important realization is that data is rising as a fraction of total corporate wealth. (pp 43-44, 147)

This section's discussion of data as a goodwill asset could really change the rules of the game. I recommend reading it closely.

[To secure an enterprise] let's presume that we are not starting from a known state. In such a situation, we likely do not know how data moves and our first action would be to start recording how data moves across the board so as to build a model of data movement in lieu of a model of total data state... This focus on the anomalous is precisely what you do when you don't know everything but you do know something... A data surveillance regime set to "record" only, i.e., not to intervene but merely to watch, is a first step...

[R]eal knowledge of how data moves is very much not the norm, and thus the first priority for the firm is to get a handle on what is normal. Of course, knowing what is of value, such as through a formal data classification exercise, is the gold standard, and that should be the preferred long-term outcome...

[W]hen you know nothing, permit-all is the only option.

When you know something, default-permit is what you can and should do.

When you know everything, default-deny becomes possible, and only then.


Because of the special characteristics of data, if you don't watch it then you won't have it.
(pp 47-48, 51)

That is pure gold. Monitor first, like Bruce said? Of course.

[P]roving a negative is impossible except in the case where all possible alternatives are known and each is examined... "Prove a negative" in this context means to be able to show a skeptical party that such and such a thing did not happen... As a matter of science, to prove that something did not happen you must have every place where it could happen under surveillance...

The way it will pay you is that it, and it alone, will enable you to say "I can prove that X did not happen because I have records of everything that did happen, and X is not in there."
(pp 53, 76, 77-78)

This reminds me of my last point in Are You Secure? Prove It.

[I]f you want to get ahead of the threat you have to either invest more than the opponent does, you have to be a fortune teller, or you have to understand that when you are losing a game you cannot afford to lose, then you have to change the rules. (p 71)

Funny, I don't think IBM qualifies for any of those three.

[T]he cost of protection... is the tax you elect to pay in the absence of an event, and the cost of cleaning up failures that may occur if you elect not to pay the tax... "How much is everyone else spending?"... [I]f you are an outlier in that distribution then someone will ask why you are... [I]t is often better to look at data security not as a tax but as an investment... Nobody thinks of an investment and imagines perfection; no, they imagine (hope) that for a certain outlay there will be a corresponding return. Investment is a risk management practice that taxation never is; it trades one downside for another, and it is about odds. (pp 79-81)

Note Dan is not saying "investing" in security makes money. He says the uncertainty of the outlay is the controlling factor.

[T]he issue in data security is that our failure modes are not the random bad luck of physical breakage or discoordination between parts of the enterprise. Our opponents are sentient, and that -- sentient opponents -- makes all the difference... If a product does not have sentient opponents, then it is not a security product. (p 81)

Intelligent adversary...Intelligent adversary...Intelligent adversary...

[E]conomics favor an accountability model focused on the monitoring of information use rather than the gatekeeping of information access... Security that gets in the way is security that is circumvented, but an accountability system lets things go forward that must go forward. (pp 108-109)

This does not exactly square with "prevention as the only goal," so I agree with it.

If you like these excerpts, you'll like Dan's book!

Wednesday, 02 April 2008

Scanless PCI, Hurray

Some time ago, I mentioned something about PCI and its credibility. In short, I was asking whether all those PCI-certified companies are safe from attacks just because they are PCI certified. Today we witnessed something better: more cost effective, faster, less intrusive, and the best part? It does not cost a single cent compared to HackerSafe or Qualys, unless you subscribe for additional services. I have not personally registered for the service, but I guess it will be much more proficient with the current PCI standards. The setup is simple: just copy and paste the code to your site and that will do it. Check out

http://www.scanlesspci.com/



The Hacka Man

Detection, Response, and Forensics Article in CSO

I wrote an article for CSO Online titled Computer Incident Detection, Response, and Forensics. It's online now, and it should appear in the next print edition as well. From the beginning of the article:

2008 is a special year for the digital security community. Twenty years have passed since the Morris Worm brought computer security to the attention of the wider public, followed by the formation of the Computer Emergency Response Team/Coordination Center (CERT/CC) to help organizations detect, prevent and respond to security incidents. Ten years have passed since members of the L0pht security research group told Congress they could disable the Internet in 30 minutes. Five years have passed since the SQL Slammer worm, which was the high point of automated, mindless malware. The Internet, and digital security, have certainly changed during this period.

The only constant, however, is exploitation. For the last twenty years intruders have made unauthorized access to corporate, educational, government, and military systems a routine occurrence. During the last ten years structured threats have shifted their focus from targets of opportunity (any exposed and/or vulnerable asset) to targets of interest (specific high-value assets). The last five years have shown that no one is safe, with attackers exploiting client-side vulnerabilities to construct massive botnets while pillaging servers via business logic flaws.


Read more here.

Tuesday, 01 April 2008

Sguil Project Acquired by Cisco

Three years ago I posted Cisco Routers Run Tcl; I had no idea where that development could lead. Last month when I posted Sguil 0.7.0 Released, I wanted to say more about the release, but I couldn't -- until now. I am happy to report the following.

Cisco Announces Agreement to Acquire Sguil™ Open Source Security Monitoring Project

Acquisition Furthers Cisco’s Vision for Integrated Security Products

SAN JOSE, Calif., and LONGMONT, Colo., April 1st, 2008 – Cisco and the Sguil™ project today announced an agreement for Cisco to acquire the Sguil™ project, a leading Open Source network security solution. With hundreds of installations world-wide, Sguil™ is the de facto reference implementation for the Network Security Monitoring (NSM) model. Sguil™-based NSM will enable Cisco’s customer base to more efficiently collect and analyze security-related information as it traverses their enterprise networks. This acquisition will help Cisco to cement its reputation as a leader in the Open Source movement while at the same time furthering its long-held vision of integrating security into the network infrastructure.

Under terms of the transaction, Cisco has acquired the Sguil™ project and related trademarks, as well as the copyrights held by the five principal members of the Sguil™ team, including project founder Robert "Bamm" Visscher. Cisco will assume control of the open source Sguil™ project including the Sguil.net domain, web site and web site content and the Sguil™ Sourceforge project page. In addition, the Sguil™ team will remain dedicated to the project as Cisco employees, continuing their management of the project on a day-to-day basis.

To date, Sguil™ has been developed primarily in the Tcl scripting language, support for which is already present inside many of Cisco’s routers and switches. The new product, to be known as “Cisco Embedded Monitoring Solution (CEMS)”, will be made available first in Cisco’s carrier-grade products in 3Q08, with support being phased into the rest of the Cisco product line by 4Q09. Linksys-branded devices will follow thereafter, though the exact deployment schedule has yet to be announced.

“We’re extremely pleased to announce this deal,” said Cisco’s Chief Security Product Manager Cletus F. Simmons. “For some time, our customers have told us that our existing security monitoring products did not extend far enough into their network infrastructure layer. Not only was it sometimes difficult to intercept and monitor the traffic, but there were often political problems at the customer site with deploying our Intrusion Detection Systems, as management had heard several years ago that they were ‘dead’. Now, with Sguil™ integrated into all their network devices, they’ll have no choice!”

Although the financial details of the agreement have not been announced, Sguil™ developer Robert Visscher will become the new VP of Cisco Rapid Analysis Products for Security. “This deal means a lot to the Sguil™ project and to me personally,” Visscher explains. “Previously, we had to be content with simply being the best technical solution to enable intrusion analysts to collect and analyze large amounts of data in an extraordinarily efficient manner. But now, we’ll have the additional advantage of the world’s largest manufacturer of networking gear shoving it down their customers’ throats! We will no longer have to concern ourselves with mere technical excellence. Instead, I can worry more about which tropical island to visit next, and which flavor daiquiri to order. You know, the important things.”

About Cisco Systems

Cisco, (NASDAQ: CSCO), is the worldwide leader in networking that transforms how people connect, communicate and collaborate. Information about Cisco can be found at http://www.cisco.com. For ongoing news, please go to http://newsroom.cisco.com.

About Sguil™

Sguil™ is the leading Network Security Monitoring (NSM) framework. It is built for network security analysts by network security analysts. Sguil’s main component is an intuitive GUI that provides access to a wide variety of security related information, including real-time IDS alerts, network session database and full packet captures. Sguil™ was written by Robert “Bamm” Visscher, who was apparently too cheap to buy a book on Java or C.


I can't wait to see how well Sguil performs on Cisco routers. Stay tuned!