Wednesday, August 31, 2005

Changes Ahead for FreeBSD LiveCDs

There's plenty of activity in FreeBSD-land these days. Colin Percival has become the new FreeBSD Security Officer. FreeBSD 6.0-BETA3 is available, and we might see 6.0-RELEASE by late September.

I just learned of a new FreeBSD LiveCD by Matt Olander called BSDLive, which fits on a business card CD (media sleeve available). This is a great advocacy item. I booted the .iso in VMWare and saw it runs FreeBSD 5.4-RELEASE-p6. It boots into X.org 6.8.2.

I will be glad when the results of the logo contest are announced!

One cannot talk about FreeBSD LiveCDs without mentioning FreeSBIE. Unfortunately, the last official release happened in December. Sguil 0.5.3 is only one day younger! However, a look at the FreeSBIE mailing list shows that Dario Freni is busy working on integrating FreeSBIE into the main FreeBSD source tree. I do not think we will see this in 6.0, but perhaps 6-STABLE will have it shortly after 6.0.

Frenzy is an alternative FreeBSD LiveCD that I have not yet tried. Frenzy 0.3 is based on FreeBSD 5.2.1 (fairly old), but a version based on 5.4 appears to be coming soon.

I look forward to seeing FreeSBIE integrated into the source tree, as that will undoubtedly make it easier to create FreeBSD LiveCDs.

Tuesday, August 30, 2005

Interview with Def Con CTF Winning Team Member Vika Felmetsger

Earlier this month I congratulated the Def Con Capture The Flag winners from Giovanni Vigna's team. One of the contestants, Vika Felmetsger, was kind enough to answer questions about her experience and the role she played on team Shellphish. I thought I would publish Vika's thoughts in the hopes that she could provide an example of how one becomes a serious security practitioner.


Richard (R): What is your experience with security, and what are your interests?

Vika (V): I am starting my second year as a computer science Ph.D. student at UCSB, where I work as a research assistant in the Reliable Software Group (RSG).

Everybody in the group works on various computer security areas and my current focus is web application security. Even though now security is a part of my everyday life, I am still pretty new to this area.

As an undergraduate student at UCSB I learned some security basics; however, my real introduction to practical security, and hacking in particular, came last fall when I took "Network Security and Intrusion Detection," a class taught by my graduate advisor, Prof. Giovanni Vigna.

In this class I learned various techniques that can be used to break the security of computer systems, how to detect attacks, and how to protect a system against possible attacks.

Most importantly, as a part of the classwork, every student was able to apply the learned techniques to write actual exploits to attack various vulnerabilities in real programs within a testbed network.

Also, during the class, I participated in two Capture The Flag (CTF) exercises (which are organized every year by Prof. Vigna) where, together with other students in the class, I could practice attacking other systems as well as defending my team's system. As a result, after that class, I had the background necessary to further develop my hacking skills on my own as well as be able to work on various security problems.

Later I was very lucky to be involved in setting up the UCSB International CTF which was organized by Prof. Vigna on June 10th, 2005. This provided me with a valuable experience being on the organizers' side and helped me to improve my system administration, networking, and network traffic analysis skills.

R: How did you join team Shellphish?

V: Hmmm, I did not really join the team ... Everybody in the RSG is a member of the Shellphish team :-).

R: Did you have a specific role on the team? If yes, can you describe it?

V: During the DefCon CTF I was a "human IDS." I was analyzing (using scripts and manually) network traffic in real time looking for attacks on our system. This helped the team to discover many successful attacks on our system, find out which particular vulnerabilities were exploited, patch the system, and even reuse some of the attacks against the other teams.

[Note: Against sophisticated intruders, only human analysts can prevail.]

R: What was it like to compete at Def Con? Did it meet your expectations?

V: I was dreaming about competing at DefCon the whole year and it certainly met my best expectations! :-) I don't have enough words to describe the feeling that I had sitting 3 days straight in front of the computer when I was absolutely consumed by the game. That is something everybody should experience for him/herself ;-).

I was very lucky to be a part of such an amazing team, to work together with the people whom I highly respect and from whom I have so many things to learn. What can be better?

When we came to DefCon this year, we did not care that much about winning, we simply wanted to enjoy ourselves doing the things that everybody in the team is fascinated with. And, it certainly worked out perfectly!

R: Do you plan to compete next year?

V: Of course.

R: What advice could you give to those who might like to compete, or have skills like yours?

V: Well, I am probably not the best person to give advice right now because I still have a long way to go myself, but if you ask ;-) ...

Knowing theory is not enough, you need to practice everything that you read about hacking or security (I don't mean attacking real systems, of course ;-).

There are many ways to do it, for example, install known vulnerable software on your own machine and write an exploit for it.

Also, even if you don't think that you have enough skills to actually compete at Defcon, sign up for the quals anyway and try it for yourself.

From my own experience, I can say that I learned many practical things from this year's quals, not to mention that it was incredibly fun :-). Also, what I plan to work on now is improving my scripting skills, which are very important when competing in real time.


Thanks to Vika for responding to my questions.

If you like these sorts of interviews, let me know. I plan to incorporate these sorts of stories into the TaoSecurity Podcast, when I get time to launch it.

Request for Help with OpenPacket.org

Earlier this month I announced work on OpenPacket.org, a free site providing quality network traffic traces to researchers, analysts, and other members of the digital security community.

We are looking for help in two areas:

  1. Open source content management systems (CMS) experience: We believe we will use a CMS to accept, moderate, and present traffic captures to users. We need help planning and deploying a CMS that will meet our needs.

  2. Open source database experience: We will use an open source database like MySQL or PostgreSQL, whichever is compatible with the CMS we choose. We need help planning and deploying a database schema, and we will need guidance on configuring the database properly. Most of the OpenPacket.org crew has database experience as it relates to supporting intrusion detection sensors, but storing and retrieving the sorts of data we have in mind is probably outside our daily routine.


We have ideas for additional OpenPacket.org functionality, but providing ways to accept, moderate, and present traces in Libpcap format is the primary goal of our first version of OpenPacket.org.

If you are interested in helping with either subject, please email richard at taosecurity dot com.

If you have any comments, as always they are welcome here. Thank you.

Monday, August 29, 2005

How Do You Use Taps?

How do you use taps? Specifically, do any of you use Net Optics taps? If yes, I would like to speak with you through email. I'm interested in your thoughts on any of these subjects:

  • How did you justify buying these products?

  • Did you encounter any installation issues?

  • How are you using taps?

  • What alternatives did you consider?

  • Did taps help you learn more about any intrusions, or help you prevent or mitigate intrusions?


I appreciate any feedback you might have. Please email richard at taosecurity dot com. Thank you.

Speaking at Net Optics Think Tank on 21 September

I will be speaking at the next Net Optics Think Tank at the Hilton Santa Clara in Santa Clara, CA on 21 September 2005. I will discuss network forensics, with a preview of material in my next two books, Real Digital Forensics and Extrusion Detection: Security Monitoring for Internal Intrusions. I had a good time speaking at the last Think Tank, where I met several blog readers.

Sunday, August 28, 2005

Real Threat Reporting

In an environment where too many people think that flaws in SSH or IIS are "threats," (they're vulnerabilities), it's cool to read a story about real threats. Nathan Thornbourgh's story in Time, The Invasion Of The Chinese Cyberspies (And the Man Who Tried to Stop Them), examines Titan Rain, a so-called "cyberespionage ring" first mentioned by Bradley Graham in last week's Washington Post.

The Time story centers on Shawn Carpenter, an ex-Navy and now ex-Sandia National Laboratories security analyst. The story says:

"As he had almost every night for the previous four months, he worked at his secret volunteer job until dawn, not as Shawn Carpenter, mid-level analyst, but as Spiderman—the apt nickname his military-intelligence handlers gave him—tirelessly pursuing a group of suspected Chinese cyberspies all over the world. Inside the machines, on a mission he believed the U.S. government supported, he clung unseen to the walls of their chat rooms and servers, secretly recording every move the snoopers made, passing the information to the Army and later to the FBI.

The hackers he was stalking, part of a cyberespionage ring that federal investigators code-named Titan Rain, first caught Carpenter's eye a year earlier when he helped investigate a network break-in at Lockheed Martin in September 2003. A strikingly similar attack hit Sandia several months later, but it wasn't until Carpenter compared notes with a counterpart in Army cyberintelligence that he suspected the scope of the threat. Methodical and voracious, these hackers wanted all the files they could find, and they were getting them by penetrating secure computer networks at the country's most sensitive military bases, defense contractors and aerospace companies."

I read this and thought, "Whoa, this guy is saying too much. Game over for him." Then I read this:

"[T]he Army passed Carpenter and his late-night operation to the FBI. He says he was a confidential informant for the FBI for the next five months. Reports from his cybersurveillance eventually reached the highest levels of the bureau's counterintelligence division, which says his work was folded into an existing task force on the attacks. But his FBI connection didn't help when his employers at Sandia found out what he was doing. They fired him and stripped him of his Q clearance, the Department of Energy equivalent of top-secret clearance. Carpenter's after-hours sleuthing, they said, was an inappropriate use of confidential information he had gathered at his day job. Under U.S. law, it is illegal for Americans to hack into foreign computers.

Carpenter is speaking out about his case, he says, not just because he feels personally maligned—although he filed suit in New Mexico last week for defamation and wrongful termination. The FBI has acknowledged working with him: evidence collected by TIME shows that FBI agents repeatedly assured him he was providing important information to them. Less clear is whether he was sleuthing with the tacit consent of the government or operating as a rogue hacker. At the same time, the bureau was also investigating his actions before ultimately deciding not to prosecute him."

Now I understand why Time has all these details!

I would like more technical clarification of this point:

"When he uncovered the Titan Rain routers in Guangdong, he carefully installed a homemade bugging code in the primary router's software. It sent him an e-mail alert at an anonymous Yahoo! account every time the gang made a move on the Net. Within two weeks, his Yahoo! account was filled with almost 23,000 messages, one for each connection the Titan Rain router made in its quest for files."

What does this mean? It sounds like Carpenter took control of the routers and then, what?

I cite this story because it talks about how sophisticated threats operate:

"Carpenter had never seen hackers work so quickly, with such a sense of purpose. They would commandeer a hidden section of a hard drive, zip up as many files as possible and immediately transmit the data to way stations in South Korea, Hong Kong or Taiwan before sending them to mainland China. They always made a silent escape, wiping their electronic fingerprints clean and leaving behind an almost undetectable beacon allowing them to re-enter the machine at will. An entire attack took 10 to 30 minutes."

That's how professionals work.

Saturday, August 27, 2005

Teaching Pentagon Security Analysts with Special Ops Security

Prior to attending the IAM class this week, I spent two days teaching security analysts from the Pentagon with instructors from Special Ops Security. (The class was four days, but I was only present for the first two.) I think we offered some unique perspectives on security. Steve Andres, author of Security Sage's Guide to Hardening the Network Infrastructure, spoke about hardening network infrastructure on day one. I taught network security monitoring on day two, with hands-on labs. Erik Birkholz, author of Special Ops: Host and Network Security for Microsoft, Unix, and Oracle, taught methods to attack Windows systems on day three. On day four, SQL Server Security author Chip Andrews concluded with Web application security.

In addition to getting a copy of Erik's book, class attendees also received individually numbered challenge coins. This was Steve's idea. A challenge coin is usually a unit-specific coin that military members should carry at all times. The reasons why are documented at the previous link. As one might expect with the military, an excuse to buy a drink is usually involved. (The same goes for wearing a hat backwards, and so on.)

My coin is pictured here. Through a total act of good karma, Steve gave me coin 41. He didn't know that 41 was my favorite number (aside from the "94" that my USAFA training burned into my brain). I use 41 on my hockey jerseys since it was the number I was given on my high school cross-country team. Thanks Steve, and Special Ops Security! We'll most likely teach this multi-disciplinary course again. Contact me via richard at taosecurity dot com if you're interested.

Thoughts on NSA IAM Course

Today I finished the NSA INFOSEC Assessment Methodology (IAM) class taught by two great instructors from EDS and hosted in the beautiful Nortel PEC building in Fairfax, VA. I attended because the rate offered by EDS through my local ISSA-NoVA chapter was an incredible bargain. I did not realize prior to the class that NSA posts the exact slides used to teach the course online.

The course was much more applicable to my line of work than I realized. I've decided to apply the methodology to the assessments I perform on customer network security monitoring / intrusion detection / prevention operations. Rather than use my own methodology, I plan to use the IAM system to perform hands-off assessments of the operations customers conduct to detect intrusions. I will be performing one of these assessments in the near future, so I look forward to applying lessons from IAM to this consulting work.

I am scheduled to attend the two-day INFOSEC Evaluation Methodology (IEM) class next month through ISSA-NoVA again. The IEM is a hands-on affair where technical means are used to discover vulnerabilities.

What the CISSP Should Be

Today I saw a new comment on my criticism of the ISC2's attempt to survey members on "key input into the content of the CISSP® examination." Several of you have asked what I would recommend the Certified Information Systems Security Professional (CISSP) exam should cover. I have a very simple answer: NIST SP 800-27, Rev. A (.pdf).

This document, titled Engineering Principles for Information Technology Security (A Baseline for Achieving Security), is almost exactly what a so-called "security professional" should know. The document presents 33 "IT Security Principles," divided into 6 categories. These principles represent sound security theories. For future reference and to facilitate discussion, here are those 33 principles.

  1. Security Foundation


    • Principle 1. Establish a sound security policy as the “foundation” for design

    • Principle 2. Treat security as an integral part of the overall system design.

    • Principle 3. Clearly delineate the physical and logical security boundaries governed by associated security policies.

    • Principle 4. Ensure that developers are trained in how to develop secure software.


  2. Risk Based


    • Principle 5. Reduce risk to an acceptable level. [Note: It does not say "eliminate risk;" smart.]

    • Principle 6. Assume that external systems are insecure. ["External" here means systems not under your control.]

    • Principle 7. Identify potential trade-offs between reducing risk and increased costs and decrease in other aspects of operational effectiveness. [The wording is poor. The idea is to identify situations where information owners decide to accept risks in order to satisfy other operational requirements.]

    • Principle 8. Implement tailored system security measures to meet organizational security goals.

    • Principle 9. Protect information while being processed, in transit, and in storage.

    • Principle 10. Consider custom products to achieve adequate security.

    • Principle 11. Protect against all likely classes of "attacks."


  3. Ease of Use


    • Principle 12. Where possible, base security on open standards for portability and interoperability.

    • Principle 13. Use common language in developing security requirements. [In other words, definitions matter.]

    • Principle 14. Design security to allow for regular adoption of new technology, including a secure and logical technology upgrade process.

    • Principle 15. Strive for operational ease of use.


  4. Increase Resilience


    • Principle 16. Implement layered security (Ensure no single point of vulnerability).

    • Principle 17. Design and operate an IT system to limit damage and to be resilient in response.

    • Principle 18. Provide assurance that the system is, and continues to be, resilient in the face of expected threats.

    • Principle 19. Limit or contain vulnerabilities.

    • Principle 20. Isolate public access systems from mission critical resources (e.g., data, processes, etc.).

    • Principle 21. Use boundary mechanisms to separate computing systems and network infrastructures.

    • Principle 22. Design and implement audit mechanisms to detect unauthorized use and to support incident investigations. [In other words, from the network side, this means network security monitoring.]

    • Principle 23. Develop and exercise contingency or disaster recovery procedures to ensure appropriate availability.


  5. Reduce Vulnerabilities


    • Principle 24. Strive for simplicity.

    • Principle 25. Minimize the system elements to be trusted.

    • Principle 26. Implement least privilege. [Note: The text also recommends "separation of duties."]

    • Principle 27. Do not implement unnecessary security mechanisms.

    • Principle 28. Ensure proper security in the shutdown or disposal of a system.

    • Principle 29. Identify and prevent common errors and vulnerabilities.


  6. Design with Network in Mind


    • Principle 30. Implement security through a combination of measures distributed physically and logically.

    • Principle 31. Formulate security measures to address multiple overlapping information domains.

    • Principle 32. Authenticate users and processes to ensure appropriate access control decisions both within and across domains.

    • Principle 33. Use unique identities to ensure accountability.



Given these principles, the next step is to devise practices or techniques for each. For example, Principle 26 states "Implement least privilege." Practices or techniques include (but are not limited to) the following, which represent my own thoughts; NIST does not go to this level of detail:

  • Create groups which provide functions needed to meet an operational requirement.

  • Operate mechanisms which allow temporary privilege escalation to accomplish specific tasks.

  • Assign systems administrators the primary task of administering systems. Assign security operators the primary task of auditing system use.


I recommend the exam not delve deeper into specific implementations or tools. One could imagine what those would be, however. Here are examples from FreeBSD; again, these are my thoughts:

  • Use the group functionality and assign privileges as required. (Windows might provide a better example, given the number of groups installed by default and their variety of privileges.)

  • Use sudo to execute commands as another (presumably more powerful) user.

  • Configure system logging through syslog and export logs to one or more remote, secure logging hosts under the control and review of the security team. Consider enabling process accounting via acct. Also consider implementing Mandatory Access Controls. (A short sketch of these steps follows this list.)
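
To make these concrete, here is a minimal sketch of the least privilege practices above on a FreeBSD host. The group name, sudoers entry, and loghost are illustrative assumptions, not recommendations from NIST or any particular guide.

# create a group for one operational function
pw groupadd logadmins

# grant that group a single privileged command; add a line like this via visudo
#   %logadmins ALL = (root) /usr/bin/tail -f /var/log/auth.log

# forward all syslog messages to a remote loghost controlled by the security team
echo '*.*     @loghost.example.com' >> /etc/syslog.conf
/etc/rc.d/syslogd restart

# enable process accounting via acct
touch /var/account/acct
accton /var/account/acct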


I do not think an exam like the CISSP should delve as deep as implementations or tools. Staying at the levels of theory/principle and techniques/practices is vendor-neutral, more manageable, and less likely to become obsolete as technologies change.

While I may not be happy with all of NIST's principles, they are much more representative of what the CISSP should address. As a bonus, this NIST publication already exists, and the sorts of people who haggle over principles like these tend to gravitate toward documentation from .gov institutions. Furthermore, one of the better CISSP exam prep guides references the older version of SP 800-27: The CISSP Prep Guide: Mastering the CISSP and ISSEP Exams, 2nd Edition, by Ronald L. Krutz and Russell Dean Vines. In fact, the exact chapter mentioning 800-27 principles (albeit the 2001 versions) is online (.pdf).

A Google search of cissp 800-27 only yields 48 hits, meaning not too many people are making the link. Krutz and Vines have, which is a great start.

What do you think?

Friday, August 26, 2005

Great Reporting by Brian Krebs

During the Mike Lynn affair I found Brian Krebs' reporting to be invaluable. Now he has provided an excellent story on the arrest of the Zotob and Mytob worm authors. I recommend you read the story linked from Brian's blog. Highlights include:

"Both of the suspects' nicknames can be found in the original computer programming code for Zotob, according to the FBI and Microsoft...

The author of the original Blaster worm remains at large, and Microsoft has offered a $250,000 bounty for information leading to the arrest and conviction of that person...

[E]vidence indicates Ekici paid Essebar to develop the worms, which the two used for financial gain...

[T]he two men are alleged to have forwarded financial information stolen from victims' computers to a credit card fraud ring.

[P]olice who raided Essebar's home found a computer that contained the original programming instructions for the first version of the Zotob worm."

I am glad to see action against a different leg of the risk triad, namely threats. It's no use to only address vulnerabilities if the threats who exploit those vulnerabilities are free to constantly develop innovative new attacks.

Ryan Naraine also wrote a good article called Inside Microsoft's Zotob Situation Room.

Incidentally, Andy Sullivan of Reuters is another great "old media" reporter. He's written about Def Con and other issues.

BSD Certification Group Publishes Certification Roadmap

Yesterday the BSD Certification Group published the Certification Roadmap (.pdf). I realize I have been beaten by Slashdot on this story, but I have been either teaching or in training all week! (More on that when I have time -- I return to class tomorrow.) From the press release:

"The BSD Certification Group has decided that the associate level certification, followed by the professional level certification, will be rolled out in 2006. The associate certification targets those with light to moderate skills in system administration and maps to the Junior SAGE Job Description. The professional level certification is for those with stronger skills in BSD system usage and administration and maps to the Intermediate/Advanced SAGE Job Description."

Having participated in the internal voting process for this certification, I am pleased to see a two-cert approach. We will start with the junior cert; "the test activation goal for the associate level certification is April 5, 2006."

Thursday, August 25, 2005

BBC News Understands Risk

This evening I watched a story on BBC News about the problem of bird flu. Here is the story broken down in proper risk assessment language.

  • Two assets are at risk: human health and bird health. We'll concentrate on birds in this analysis. Healthy birds are the asset we wish to protect.

  • The threat is wild migratory birds infected by bird flu.

  • The threat uses an exploit, namely bird flu itself.

  • The vulnerability possessed by the asset and exploited by the threat is lack of immunity to bird flu.

  • A countermeasure to reduce the asset's exposure to the threat is keeping protected birds indoors, away from their wild counterparts.

  • The risk is infection of domesticated birds by wild birds. All infected birds must be killed.


The TV story I watched contained this quote by reporter Tom Heap:

"The lesson learned from foot-and-mouth [disease, which ravaged Europe several years ago] is to do your best to keep the disease out, but assume that will fail. Be ready to tackle any outbreak to prevent an epidemic."

Let's replace certain terms with the security counterparts:

"The lesson learned from the last time we were compromised is to do your best to keep intruders out, but assume that will fail. Be ready to respond to any intrusion to prevent complete compromise of the organization."

This is the power of using proper terminology. Lessons from other scientific fields can be applied to our own problems, and we avoid re-inventing the wheel.

Short History of Worms

I found Ryan Naraine's article From Melissa to Zotob to be a good summary of popular worms of the last few years.

I remember Melissa as a real wake-up call for the community. It hit on a Friday night, and the following Saturday morning my (soon-to-be) wife and I were getting engagement photos taken. My commanding officer called during the photo session and said all officers were being recalled to the AFCERT to "fight" the worm. That was an interesting weekend!

A comment in the latest SANS NewsBites by editor Rohit Dhamankar on Zotob makes a good point:

"The time from vulnerability announcement to release of [the Zotob] worm was one of the shortest seen in recent times. Patch announced August 9th (Tuesday); exploit code posted publicly August 11th (Thursday); worm started to hit on August 13th (Saturday).

[These] worms spread over 139/tcp or 445/tcp, ports that cannot be firewalled without breaking some functionality in [a] Windows environment. That means that even a single infected laptop brought inside an enterprise will infect all the other machines. Multiple intrusion prevention systems, as ubiquitous as switches, need to become as integral to networks."

In other words, some form of traffic inspection that filters for illegitimate traffic must be performed on every switch port to which a Windows system is connected. This is an argument for so-called "security switches." It is also an argument for hosts to be able to defend themselves.

Network Security Operations Class Discount for ISSA-NoVA Members

Are you a member of ISSA-NoVA? Would you like to attend my public Network Security Operations class next month, at Nortel PEC in Fairfax, VA from Tuesday 27 September through Friday 30 September? If so, I'm offering a one-time discount for you.

ISSA-NoVA members who sign up and pay for the class no later than Friday 16 September can attend the class for $1995 -- a $1000 discount. Contact me at richard at taosecurity dot com if you're interested, and visit my training page for more details on this 4-day, hands-on, technical class.

Monday, August 22, 2005

Request for Lab Ideas

I previously announced my four day Network Security Operations class. I have planned some of the labs for the class, but I thought you might have ideas regarding the sorts of hands-on activities you would want to try.

The class consists of four days, covering network security monitoring, network incident response, and network forensics. Days one, two, and three each offer small labs at regular intervals to reinforce the lecture material. Day four is entirely lab-based.

One of my goals is to give each student his or her own environment for analysis. I am considering a mix of real, jailed, and virtual environments. The activities students want to try will drive how I implement the student work environment. For example, using my GSX server I believe I can support 16 simultaneous VMs. A single FreeBSD install might be able to support many more jails on its own. Real hardware could be problematic, but I might be able to use Soekris systems. VMs are attractive because they offer snapshot features, whereas real hardware needs to be re-imaged.
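
If I go the jail route, the setup would look roughly like the classic jail(8) procedure sketched below; the directory, hostname, and address are placeholders, and each student would get a separate directory populated the same way.

# build the world once, then populate one jail directory per student
cd /usr/src
make buildworld
D=/jails/student1
mkdir -p $D
make installworld DESTDIR=$D
cd /usr/src/etc && make distribution DESTDIR=$D
mount -t devfs devfs $D/dev

# launch a shell in the jail: jail <path> <hostname> <ip> <command>
jail $D student1.example.com 192.168.2.51 /bin/sh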

I don't intend to provide each student his or her own laptop. I prefer each student to bring a laptop to the class, and SSH from the laptop to his or her own environment. Alternatively, if VMWare GSX is used on the class server, the student could connect using the VMWare Virtual Console. That requires adding code to the student's own laptop (which needs to be running Linux or Windows), which I would prefer to avoid.

Another option involves building a custom live CD, perhaps using FreeSBIE. Each student could run a local FreeBSD instance on his or her laptop. I foresee problems with inadequate laptops, unrecognized hardware, and limited learning scenarios. That's still an option though.

I have been trying to imagine the sorts of activities I would want to try in a class covering these topics. I want students to try a wide variety of network analysis tools, like Tcpdump, Tethereal, Snort, Tcpflow, Ngrep, Flowgrep, Flow-tools, Argus, Tcpdstat, Capinfos, and so on. These can be implemented (especially when reading from saved Libpcap traces) fairly simply.
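
As a sketch of what those small labs might look like, each tool can be pointed at the same saved trace; the capture file name below is just a placeholder.

capinfos sample.lpc                           # capture file summary
tcpdstat sample.lpc                           # protocol breakdown statistics
tcpdump -n -r sample.lpc 'tcp port 80'        # filtered packet headers
tethereal -n -q -r sample.lpc -z conv,tcp     # TCP conversation summary
tcpflow -r sample.lpc                         # rebuild TCP sessions into files
ngrep -q -I sample.lpc 'USER|PASS' 'port 21'  # search payloads for FTP credentials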

If I want to provide a more exotic environment, implementation becomes more difficult. For example, I would like to let each student experiment with Sguil. Should students be able to run tools that sniff live traffic in promiscuous mode? I'm also considering a section that describes how to set up a caged server using Pf. Implementing a bridging firewall setup to build a cage presents all sorts of issues.

Perhaps analysis is more important. In that case, deciphering network traffic might be the focus. That is easier to implement than creating a dynamic network environment. I am concerned that VMWare might not support an open (non-switched) network conducive to sniffing.

I've set a limit of 15 students per class for my private classes. However, when I teach at USENIX, I could have 30 or more students. Although I do not teach an all-lab day at USENIX, my other classes (NSM, NIR, NF) could have hands-on components if I plan them to accommodate large groups.

When I taught at Foundstone we provided every student his or her own Dell laptop, and the labs centered on students trying to break into laptop target ranges. Eventually we replaced the laptop targets with VMs.

So, what sorts of lab activities would you want to see in a class on NSM, NIR, and NF? What have you seen other classes do, and what did you like? I appreciate your feedback.

Air Force Personnel Database Owned

According to this Air Force Times story, personnel data for "about 33,300 officers and 19 airmen" was remotely accessed. The records include "Social Security numbers... marital status, number of dependents, date of birth, race/ethnic origin (if declared), civilian educational degrees and major areas of study, school and year of graduation, and duty information for overseas assignments or for routinely sensitive units."

The story quotes an Air Force spokesman:

"'Basically, we had an unauthorized user gain access to a single user account by stealing a password,' said Lt. Col. John Clarke, chief of the Systems Operations Division at the Air Force Personnel Center. 'Then they went in and accessed member information on roughly 33,000 military members.'"

I would like to know how a "single user account" was able to query records on 33,000 people. If this account belonged to a normal user (i.e., an Air Force member), some serious problem allowed that single account to look at other members' records. Alternatively, the user account could have belonged to someone with privileges to review records.

It sounds like my old unit helped with the response:

"Personnel officials went to the 8th Air Force network operations center for help and called in the network security experts at the Air Intelligence Agency. They also brought in the Air Force Office of Special Investigations and legal specialists."

Windows Remote Administration Options

This morning I worked with several remote administration tools on my Windows Server 2003 system. First I enabled the native Remote Desktop (aka Terminal Services) capability via My Computer -> Properties -> Remote tab.

At this point I am only letting administrator connect remotely. Since administrator can connect remotely by default once the service is activated, I didn't need to make any other changes. Once Remote Desktop is listening, it will appear active on port 3389 TCP.

To access the Windows server remotely from Unix using the RDP protocol, I use Rdesktop. It's available in the FreeBSD ports tree as net/rdesktop. I like the option to change screen geometry, e.g., 'rdesktop -g 80% 192.168.2.2'.

To access the RDP server from my Windows 2000 laptop, I installed the MSRDPCLI.EXE package.

Next I tried RealVNC. This program has client and server components. I installed the entire package on the Windows server. Setup is fairly simple, and the server should be configured to accept clients that enter a pre-defined password. RealVNC starts two listening services. One can access a Web page with a Java-enabled VNC service on port 5800 TCP. The default native VNC server listens on port 5900 TCP.

To access the Windows server remotely from Unix using the VNC protocol, I use vncviewer, packaged with RealVNC. It's available in the FreeBSD ports tree as net/vnc. To send ctrl-alt-delete through VNC, I like to hit the F8 key to bring up a VNC menu, through which I select "Send ctrl-alt-del".
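
A quick way to confirm from the Unix side which of these remote access listeners are actually up is to scan the three ports mentioned above (the address is my lab server's):

# RDP listens on 3389/tcp; RealVNC's Java web access and native server use 5800 and 5900/tcp
nmap -p 3389,5800,5900 192.168.2.2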

There is an important difference between using RDP and VNC on Windows Server 2003. RDP is a Terminal Services application, which allows multiple users to remotely interact with the server. VNC takes control of the physical desktop. For example, I was able to run one RDP instance and then use VNC to connect to the server. Neither session was interrupted by the other. Also, RDP seems to be more efficient and responsive, although I have not sought to tweak VNC.

Note that (thanks to comments from blog readers) I found Rdesktop allows one to "attach to console" if passed the -0 switch, like this:

rdesktop -0 192.168.2.2

This eliminates the need to use VNC, in my opinion.

I noticed that I could not get VMWare to start a guest OS when I was logged in to the Windows Server 2003 box using RDP (not attached to the console). I could start a guest when logged in using VNC. I typically set a screen geometry on the command line because VNC doesn't appear to have a menu option for it, e.g. 'vncviewer -geometry 1024x768'.

With rdesktop -0, I can attach to the console and hence start VMs properly over RDP.

Finally I installed OpenSSH. I considered installing the SSH for Windows package, which is a stripped-down Cygwin version. Since that program had not been updated in over a year (i.e., it was still at version 3.8.1), I decided to use the complete Cygwin version. Nicholas Fong's guide was extremely helpful.

I followed all of his instructions. Remember to add a new environment variable where 'variable name' is CYGWIN and 'variable value' is ntsec tty. Also add ';c:\cygwin\bin' to the PATH.

I noticed a different set of prompts when I ran 'ssh-host-config', so here is how I proceeded.

$ ssh-host-config
Generating /etc/ssh_host_key
Generating /etc/ssh_host_rsa_key
Generating /etc/ssh_host_dsa_key
Generating /etc/ssh_config file
Privilege separation is set to yes by default since OpenSSH 3.3.
However, this requires a non-privileged account called 'sshd'.
For more info on privilege separation read /usr/share/doc/openssh/README.privsep.

Should privilege separation be used? (yes/no) yes
Warning: The following function requires administrator privileges!
Should this script create a local user 'sshd' on this machine? (yes/no) yes
Generating /etc/sshd_config file

Warning: The following functions require administrator privileges!

Do you want to install sshd as service?
(Say "no" if it's already installed as service) (yes/no) yes

You appear to be running Windows 2003 Server or later. On 2003 and
later systems, it's not possible to use the LocalSystem account
if sshd should allow passwordless logon (e. g. public key authentication).
If you want to enable that functionality, it's required to create a new
account 'sshd_server' with special privileges, which is then used to run
the sshd service under.

Should this script create a new local account 'sshd_server' which has
the required privileges? (yes/no) yes

Please enter a password for new user 'sshd_server'. Please be sure that
this password matches the password rules given on your system.
Entering no password will exit the configuration. PASSWORD=obscured

User 'sshd_server' has been created with password 'obscured'.
If you change the password, please keep in mind to change the password
for the sshd service, too.

Also keep in mind that the user sshd_server needs read permissions on all
users' .ssh/authorized_keys file to allow public key authentication for
these users!. (Re-)running ssh-user-config for each user will set the
required permissions correctly.

Which value should the environment variable CYGWIN have when
sshd starts? It's recommended to set at least "ntsec" to be
able to change user context without password.
Default is "ntsec". CYGWIN=

The service has been installed under sshd_server account.
To start the service, call `net start sshd' or `cygrunsrv -S sshd'.

Host configuration finished. Have fun!

Administrator@moog ~
$ net start sshd
The CYGWIN sshd service is starting.
The CYGWIN sshd service was started successfully.

Using SSH to access Windows is probably the most bandwidth-efficient of these remote access options.

ssh administrator@192.168.2.2
administrator@192.168.2.2's password:
Last login: Mon Aug 22 06:20:58 2005 from 192.168.2.5

Administrator@moog ~
$ df -h
Filesystem Size Used Avail Use% Mounted on
C:\cygwin\bin 9.8G 5.1G 4.7G 53% /usr/bin
C:\cygwin\lib 9.8G 5.1G 4.7G 53% /usr/lib
C:\cygwin 9.8G 5.1G 4.7G 53% /
c: 9.8G 5.1G 4.7G 53% /cygdrive/c
g: 60G 53G 7.3G 88% /cygdrive/g
h: 70G 50G 20G 73% /cygdrive/h

Administrator@moog ~
$ who
sshd_server tty0 Aug 22 06:20 (MOOG)
Administrator tty1 Aug 22 06:56 (192.168.2.5)

It appears the 'who' command is not aware of an existing RDP session.

I was hoping I would find an easy way to configure Windows through a serial port. Unfortunately, Windows is not Unix. The closest approximation I found was Emergency Management Services (EMS), which requires motherboard support. EMS is definitely not as simple as modifying /etc/ttys.
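
For comparison, here is what "modifying /etc/ttys" amounts to on FreeBSD; a serial console login is a one-line change plus a signal to init, while stock Windows offers nothing similar without EMS-capable hardware.

# /etc/ttys: enable a login getty on the first serial port at 9600 bps
ttyd0   "/usr/libexec/getty std.9600"   vt100   on secure

# tell init to reread /etc/ttys
kill -HUP 1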

I'm concerned with remote administration because this Windows server is in my lab. Also, I am trying to imagine the best way to change the system's IP address remotely, or at least in a headless situation. For example: I bring my Shuttle Windows server to class. The box has no monitor. I decide I need to change the system's IP address to use DHCP in a classroom setting, or to set a new static IP address. Using my laptop, I connect to the server (using one of these methods) and change the IP.

I plan to use one or more of these netsh commands. I can run these as a batch file or through the scheduler.

netsh interface ip show config

netsh interface ip set address name="Local Area Connection" static 192.168.0.100 255.255.255.0 192.168.0.1 1

netsh -c interface dump > c:\location1.txt

netsh -f c:\location1.txt

netsh interface ip set address "Local Area Connection" dhcp

netsh interface ip set dns "Local Area Connection" static 192.168.0.200

netsh interface ip set dns "Local Area Connection" dhcp

These are not the actual commands I would run, only examples of what is possible.

From a Windows system, one other option exists: PsExec. For example:

C:\Program Files\pstools>psexec \\192.168.2.2 -u administrator -p "password" cmd.exe /c cmd.exe

PsExec v1.58 - Execute processes remotely
Copyright (C) 2001-2005 Mark Russinovich
Sysinternals - www.sysinternals.com

Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\WINDOWS\system32>cd
C:\>ipconfig

Windows IP Configuration

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 192.168.2.2
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.2.1


Does anyone have any suggestions?

Sunday, August 21, 2005

NYCBSDCon

If you're near New York City, you might want to check out NYCBSDCon on 17 September 2005. The New York City BSD User Group is organizing the one-day event. Speakers on the agenda include Dru Lavigne, Michael Lucas, and Marshall Kirk McKusick. I believe I will attend since the drive from DC isn't too bad.

Comments on Network Anomaly Detection System Article

I was asked to comment on Paul Proctor's new article in the August 2005 Information Security magazine, titled A Safe Bet?. Paul is an analyst at Gartner now, but years ago he wrote an excellent book -- The Practical Intrusion Detection Handbook, which I reviewed five years ago.

Paul's article introduces network anomaly detection systems, shortened to the wonderful acronym NADS. Paul describes NADS thus:

"NADS are designed to analyze network traffic with data gathered from protocols like Cisco Systems's NetFlow, Juniper's cFlow or sources that support the sFlow standard. Data is correlated directly from packet analysis; and the systems use a combination of anomaly and signature detection to alert network and security managers of suspicious activity, and present a picture of network activity for analysis and response."

I find Paul's opinions to be sound:

"Despite vendor claims to the contrary, NAD is primarily an investigative technology. While it has the potential to detect zero-day and other stealthy attacks, confidence in its results remains a problem in enabling automated response mechanisms.

This isn't unlike the early versions of IDS/IPS products, which weren't reliable enough to enable automated responses. In this light, NAD is best used to detect, investigate and manually address suspected incidents and problems...

NADS may not be able to automatically detect and block with the confidence of an IPS signature, but neither can an IDS/IPS help an organization if the enabled signature set misses something."

I am glad to see someone defending a product for its investigative value and not for its preventative value. It appears someone else realizes that prevention eventually fails, anyway.

Paul also says:

"NAD devices are powerful knowledge tools for expert network operations people with enterprise-specific contextual knowledge. These systems can help enterprises learn about the traffic and behavior of their network."

That's exactly right. NADS improve network situational awareness. However:

"Even though they can catch detailed events, such as a new service opening up, a new protocol appearing or a new machine connecting to the network, these events are too common to have value in larger enterprises.

NADS shine where obvious behaviors — like when a worm-infected machine spewing attack traffic or a DoS attack — are under way."

Here is the true root of the problem. If one cannot define normal network behavior, perhaps due to the size of the network or an inherently dynamic nature, then a NADS won't be much help. In those cases, it will only detect "obvious behaviors," for which existing detection and prevention systems may be adequate.
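
Even without a commercial NADS, the underlying idea of building a picture of normal traffic can be approximated with open source flow tools. Here is a minimal sketch, assuming NetFlow exports are already being collected by flow-capture; the directory and date are placeholders.

# summarize a day of NetFlow records by source/destination IP pair, sorted descending
flow-cat /var/flows/2005-08-21/* | flow-stat -f 10 -S 3

# summarize the same records by destination port to spot new or unusual services
flow-cat /var/flows/2005-08-21/* | flow-stat -f 5 -S 2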

Paul concludes the article by recognizing the importance of skilled operators:

"The value these systems offer for addressing more subtle behavior is dependent upon the knowledge and experience of the operator. Under the right circumstances, NADS provide a wealth of network behavior information (protocols, ports, services, throughput, latency, etc.) that can be used to understand what's really going on in your network."

This is another reason why network security analysts are not going to lose their jobs. Networks are only becoming more complex. There is no chance that an expert network or security administrator can be coded into a software appliance. If IPv6 is widely deployed, the need for skilled operators will only grow.

Friday, August 19, 2005

Excellent Anti-DDoS Story

If you haven't read How a Bookmaker and a Whiz Kid Took On an Extortionist — and Won, you're in for a treat. I stumbled across this today, and remembered reading it several months ago. I realized I never blogged the story. The technician at the heart of the story is Barrett Lyon, who began the Opte Project.

His company Prolexic takes an innovative approach to surviving DDoS attacks. He seems to redirect traffic aimed at his clients, filter out the attack traffic, and then send the remainder to the intended recipients. I imagine he employs some creative routing to do it. If Barrett notices this blog entry via the graphic at left, which I'm pulling from his site, maybe he'll share a few comments with us?

Thoughts on SANS .edu Security Debate

The 10 August 2005 issue of the SANS NewsBites newsletter featured this comment by John Pescatore:

"There has [sic] been a flood of universities acknowledging data compromises and .edu domains are one of the largest sources of computers compromised with malicious software. While the amount of attention universities pay to security has been rising in the past few years, it has mostly been to react to potential lawsuits do [sic] to illegal file sharing and the like - universities need to pay way more attention to how their own sys admins manage their own servers."

I agree with John's assessment, except for the last phrase that implies university sys admins "need to pay way more attention" to security. From my own view of the world, a lot of university system administrators read TaoSecurity Blog, attend my classes (especially USENIX), and read my books. I believe the fault lies with professors and university management who generally do not care about security and are unwilling to devote the will and resources to properly secure .edu networks.

The 17 August 2005 newsletter features a letter to the editor signed by eleven .edu security analysts. They take exception to Mr. Pescatore's comments. SANS is requesting comments on that letter. Here is my take on a few excerpts.

The letter states:

"Many of these schools are complex and most security implementations typically used at a corporate or government level don't fit a university model because a broader range of network activities is permitted on university networks, in large part due to a much more limited set of policies and controls compared to government and commercial entities."

The "broader range of network activities" is part of the problem. Most .edu networks apply very little inbound access control and hardly any outbound access control. (Sometimes that is reversed; one .edu I worked with implemented zero inbound control and single outbound control denying TFTP!)

Do .edu networks think the corporate world does not support a wide variety of protocols and services? I recently finished a traffic threat assessment for a client. I was surprised to see the number of protocols in use that I did not immediately recognize. This is no different from a .edu, except the .com had taken steps to restrict use of those protocols and services to defined partners. "I can't define who will access my data," a .edu might reply. If that is the case, the .edu has decided that anyone in the world can access potentially sensitive data. (See the section below on the "tenth planet" to read the consequences of that stance.) In reality, the .edu is saying "it's too difficult" to define who should access data. That's a cop-out.

The "limited set of policies and controls" is not the fault of the administrators. It is the fault of management who refuse to reign in professors, or to force them to accept responsibility for operating insecure systems. If a professor is a prolific researcher, he or she is often given a "pass" to run whatever infrastructure he or she needs for research purposes. While research is obviously important, the professors and staff should realize that lack of security jeopardizes their research. How would they feel to know that a team of competing researchers, or even corporate spies, were stealing the next breakthrough in gene therapy from research systems?

We already know that so-called "tenth planet" discoverer Michael Brown was forced to rush his announcement for fear that "hackers" would reveal his work. I heard Mr. (Dr.?) Brown on NPR's Science Friday a few weeks ago, and he confirmed the story. He and his colleagues preferred to give an orderly press conference to inform the world of their discovery. Instead, Mr. Brown decided to rush the process. He feared a "hacker" would provide information on how to find the tenth planet to amateur astronomers, who might then take credit for its discovery! Security is not an inconvenience; it's a necessity.

The letter continues:

"Many times, the tools to secure these environments don't exist and changing the culture in these heterogeneous environments to one which promotes secure computing is very difficult."

Actually, all of the tools to secure a .edu exist. Almost all of them exist in open source form, too. Ten years ago this might not have been the case, but today one can employ open source countermeasures that in some cases exceed their commercial counterparts. The array of network-centric security capabilities offered by OpenBSD, for example, is amazing. Firewall? Pf. VPN? IPSec. Secure remote access? OpenSSH. Centralized time synchronization? OpenNTPD. I could continue at the host level if one needed a reliable platform for hosting Web sites, handling email, etc.
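
As a small illustration, moving a subnet from default-permit to default-deny with Pf takes only a few lines. This is a sketch; the interface name and server address are assumptions.

# /etc/pf.conf: default deny inbound, allow web and ssh to one server, allow outbound
ext_if = "fxp0"
webserver = "192.0.2.10"

block in on $ext_if all
pass in on $ext_if proto tcp to $webserver port { 80 443 } keep state
pass in on $ext_if proto tcp to $webserver port 22 keep state
pass out on $ext_if keep state

# load the ruleset
pfctl -f /etc/pf.conf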

The tools exist, but the managerial will to implement them does not.

The letter continues:

"Our overall approach to our networking is about promoting research and information sharing and our security architecture needs to take that into account. Many schools uphold the concept of the 'End-to-End' nature of the original Internet for both research and communication of ideas. These ideas on full connectivity have merit and cannot be dismissed because the nature of faculty research or inter-university collaboration might rely on unfettered access to the Internet. The concept of a DMZ is not feasible for many schools compared to many in government and business which cannot live without one."

Immense multi-national organizations foster information sharing and research. While they admittedly are not perfect, many enterprises manage to maintain better security than .edu's. The "end-to-end" Internet is a myth to which too many people cling. That model may have worked when the Internet was a private network, but "end-to-end" today places no barriers between your system and anyone else in the world with an IP address.

The majority of hosts are not designed, configured, or deployed in a self-defending manner. Hosts that cannot protect themselves must be supported by additional security resources. Even if a system could be operated independently (e.g., an OpenBSD server), without any network-based access control, this is not a tenable defensive model. The .edu world needs to understand that defense-in-depth is one of the best ways to compensate for weak host software, potential misconfiguration, and aggressive intruders.

Finally, "the concept of a DMZ" is not feasible for many organizations, not just .edu's. Security zones, which group hosts of similar security requirements, are now the best way to offer network-centric access control and monitoring.

What are your thoughts?

Thursday, August 18, 2005

Windows Server 2003 x64 Enterprise Edition

I managed to install the Windows Server 2003, Enterprise x64 Edition (64-bit) trial on my Shuttle SB81P. The only component that wasn't recognized natively was the BCM5751 NetXtreme Gigabit Ethernet Controller for Desktops. I used the Windows Server 2003 (AMD x86-64) driver to get the NIC working. Luckily, my FreeBSD dmesg output recognized this NIC accurately:

bge0: mem 0xd0000000-0xd000ffff irq 16 at device 0.0 on pci1
miibus0: on bge0
brgphy0: on miibus0
brgphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseTX, 1000baseTX-FDX, auto
bge0: Ethernet address: 00:30:1b:b6:96:75

The first time I booted Windows, I saw this message:



This was an interesting take on the idea of host-centric security. Microsoft could have started with no listening services, and let an administrator decide what to enable. Instead, Microsoft starts services by default, but blocks remote access to them until they are patched. This is a step in the right direction, but I am not happy with the underlying security model.

I decided to apply patches, after which I got this report:



This reminded me of the sign that said "Abandon all hope, ye who enter." At least that message should have had a red X or similar!

I installed IIS and then a 15-day trial version of VMWare GSX Server 3.2. I managed to install a FreeBSD 5.4 guest OS using 4 GB of HDD and 64 MB of RAM. I plan to push this box to see how many concurrent guests it can accommodate.

FreeBSD on Shuttle SB81P

I bought a new Shuttle SB81P to use as a VMWare GSX server in my Network Security Operations class. I bought the system to provide VMWare images which students could independently manipulate. This will make the class more hands-on without requiring much investment on the student's part. All I will ask is that the student brings a laptop with a Secure Shell client. If the student wants to directly interact with the VM, he or she can install the VMWare Virtual Console for Windows or Linux on a laptop. I cannot use FreeBSD to host GSX Server. I intend to try the Windows Server 2003, Enterprise x64 Edition (64-bit) trial version. If it works as promised I will buy the Standard Edition from a vendor like NewEgg.com.

Here are the Shuttle specifications:

  • Shuttle SB81P Intel Socket T(LGA775) Intel Pentium 4/Celeron INTEL 915G Barebone from NewEgg.com

  • Intel Pentium 4 640 Prescott 800MHz FSB 2MB L2 Cache LGA 775 EM64T Processor from NewEgg.com

  • 2x1GB 184-pin DIMM DDR PC3200 RAM from Crucial.com

  • Two Western Digital Raptor WD740GD 74GB 10,000 RPM 8MB Cache Serial ATA150 Hard Drives from NewEgg.com

  • NEC Black IDE DVD Burner Model ND-3540A from NewEgg.com

  • NEC Black 1.44MB 3.5" Internal Floppy Drive from NewEgg.com

  • Shuttle PF60 XPC Carrying Case from NewEgg.com


Since the NYCBUG dmesg submission system isn't working, here is my dmesg output from the FreeBSD/amd64 port of FreeBSD 6.0-BETA2.
Here is how long it took to build a kernel:

--------------------------------------------------------------
>>> Kernel build for GENERIC completed on Thu Aug 18 07:07:04 EDT 2005
--------------------------------------------------------------
705.110u 308.834s 16:06.24 104.9% 4141+2880k 5402+3462io 185pf+0w
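
To run the same sort of timing on another box, something like the following should work; the u/s summary line above is the format printed by csh's time builtin.

# time a GENERIC kernel build from a source tree in /usr/src
cd /usr/src
time make buildkernel KERNCONF=GENERIC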

The CPU supports hyper-threading. Notice the C column in top output.

last pid: 23591; load averages: 0.08, 0.63, 0.60 up 0+00:32:16 07:09:38
22 processes: 1 running, 21 sleeping
CPU states: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 9552K Active, 165M Inact, 191M Wired, 52K Cache, 213M Buf, 1579M Free
Swap: 4096M Total, 4096M Free

PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
468 richard 1 96 0 29356K 4012K select 0 0:02 0.00% sshd
392 root 1 96 0 9352K 3336K select 0 0:00 0.00% sendmail
464 root 1 4 0 29384K 3984K sbwait 0 0:00 0.00% sshd
254 root 1 96 0 3532K 1088K select 0 0:00 0.00% syslogd

Here are OpenSSL speed and nbench test results. I ran these tests after reading this thread.
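
If you want to run the same sorts of tests yourself, something like the following should work; this is a sketch, and the nbench package name is my assumption about the ports collection:

# OpenSSL ships with the base system
openssl speed md5 sha1 rsa
# nbench comes from the ports collection
pkg_add -r nbench
nbench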

Wednesday, August 17, 2005

New Issue of (IN)SECURE Magazine Features Book, Blog

Mirko Zorz of Help Net Security emailed to tell me that Issue 3 (.pdf) of (IN)SECURE Magazine is available for download. The new issue has a few kind words about my first book and this blog. It also features an interview with Michal Zalewski, a discussion of so-called Unified Threat Management (UTM) "solutions" (groan), and other helpful articles.

Comment on Draft NIST Publications

While reading the blog of Keith Jones I learned of a variety of new draft NIST pubs that are open for comment from the general public. You may want to review one or more to provide feedback. I found the following drafts interesting (all are .pdf):

  • 800-40 Version 2, Creating a Patch and Vulnerability Management Program

  • 800-86, Guide to Computer and Network Data Analysis: Applying Forensic Techniques to Incident Response (which mentions my first book -- thanks)

  • 800-83, Guide to Malware Incident Prevention and Handling

  • 800-81, Secure Domain Name System (DNS) Deployment Guide


Check their Web site for comment deadlines.

Tuesday, August 16, 2005

National Vulnerability Database

I learned today the National Vulnerability Database (NVD) has replaced the old NIST ICAT system. The NVD describes itself this way:

"NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard."

There's a link to a workload index, whose URL includes the term "threatindex" (groan). On that page we read:

"Workload Index Information

This index calculates the number of important vulnerabilities that information technology security operations staff are required to address each day. The higher the number, the greater the workload and the greater the general threat represented by the vulnerabilities."

I think the last sentence should instead read:

"The higher the number, the greater the workload and the greater the general risk represented by the vulnerabilities."


I am not sure what the Open Source Vulnerability Database (OSVDB) thinks of the NVD. There is a blog posting about NVD, but no commentary by OSVDB members. I think the OSVDB needs to remain as a place that is independent of US government control. If a truly severe vulnerability is found, who is more likely to publish it first -- nvd.nist.gov or www.osvdb.org?

On a note related to vulnerabilities, here is a list of vulnerability or attack description projects.

These are papers on related subjects:

Monday, August 15, 2005

Routing Enumeration

One of the cooler sections in Extreme Exploits covers ways to learn about a target network by looking at routes to its netblocks. I showed a few ways to use this data two years ago, but here's a more recent example.

Let's say I want to find out more about the organization hosting the Extreme Exploits Web site. First I resolve the hostname to an IP address.

host www.extremeexploits.com
www.extremeexploits.com has address 69.16.147.21

Now I use whois to locate the owner's netblock.

whois 69.16.147.21
Puregig, Inc. PUREGIG1 (NET-69-16-128-0-1)
69.16.128.0 - 69.16.191.255
VOSTROM Holdings, Inc. PUREGIG1-VOSTROM1 (NET-69-16-147-0-1)
69.16.147.0 - 69.16.147.255

# ARIN WHOIS database, last updated 2005-08-14 19:10
# Enter ? for additional hints on searching ARIN's WHOIS database.

Now I telnet to a route server and make queries about this netblock.
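
Public route servers accept plain telnet sessions; route-views.oregon-ix.net is one well-known example, though the session below uses a different server:

telnet route-views.oregon-ix.net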

route-server.phx1>sh ip bgp 69.16.147.0
BGP routing table entry for 69.16.147.0/24, version 84120350
Bestpath Modifiers: always-compare-med, deterministic-med
Paths: (2 available, best #2)
Not advertised to any peer
22822 11588, (received & used)
67.17.64.89 from 67.17.81.24 (67.17.81.24)
Origin IGP, metric 0, localpref 300, valid, internal
Community: 3549:4044 3549:30840 22822:4012 22822:9120
Originator: 67.17.80.225, Cluster list: 0.0.0.11
22822 11588, (received & used)
67.17.64.89 from 67.17.80.251 (67.17.80.251)
Origin IGP, metric 0, localpref 300, valid, internal, best
Community: 3549:4044 3549:30840 22822:4012 22822:9120
Originator: 67.17.80.225, Cluster list: 0.0.0.11

I learn a few details:

  • The announced prefix for this network is truly a /24, as shown by "BGP routing table entry for 69.16.147.0/24"

  • The origin AS for 69.16.147.0/24 is 11588, and its upstream provider is AS 22822. (The AS path is read right-to-left, so in "22822 11588" the rightmost AS originated the route.)

Now I want to find out if any other networks belong to this AS.

route-server.phx1>sh ip bgp regexp _11588$
BGP table version is 97334640, local router ID is 67.17.81.28
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path
* i63.78.12.0/22 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i69.16.128.0/19 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i69.16.147.0/24 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i69.16.187.0/24 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i69.16.191.0/24 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i140.99.96.0/19 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i208.247.17.0 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i209.50.48.0/20 67.17.64.89 0 300 0 22822 11588 i
*>i 67.17.64.89 0 300 0 22822 11588 i
* i209.50.56.0/21 67.17.64.89 0 300 0 22822 11588 i
Network Next Hop Metric LocPrf Weight Path
*>i 67.17.64.89 0 300 0 22822 11588 i

We could then run queries on the new networks to learn more about them, e.g.:

whois 63.78.12.0
UUNET Technologies, Inc. UUNET63 (NET-63-64-0-0-1)
63.64.0.0 - 63.127.255.255
ElDorado Sales, Inc. UU-63-78-12 (NET-63-78-12-0-1)
63.78.12.0 - 63.78.15.255

# ARIN WHOIS database, last updated 2005-08-14 19:10
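
Rather than querying each of those networks by hand, you could script the whois lookups. Here is a minimal Bourne shell sketch using only the prefixes the route server reported above:

#!/bin/sh
# Run whois on each network announced by AS 11588, per the route server output above.
for net in 63.78.12.0 69.16.128.0 69.16.147.0 69.16.187.0 69.16.191.0 \
    140.99.96.0 208.247.17.0 209.50.48.0 209.50.56.0
do
    whois $net
done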

One final cool tool: Victor has a project called Pwhois that provides prefix query information:

whois -h whois.pwhois.org 69.16.147.21
IP: 69.16.147.21
Origin-AS: 11588
Prefix: 69.16.147.0/24
AS-Path: 3356 11588
Cache-Date: 1122289900

I am a real newbie with this BGP and AS stuff. If anyone wants to comment (Trevor, Nate, etc.), I would appreciate it.

Review of Extreme Exploits Posted

Amazon.com just posted my four-star review of Extreme Exploits: Advanced Defenses Against Hardcore Hacks. From the review:

"I read Extreme Exploits because the content looked intriguing and I am familiar with applications written by lead author Victor Oppleman. The back cover states the book is "packed with never-before-published advanced security techniques," but I disagree with that assessment. While I found all of the content helpful, between 1/3 and 1/2 of it is probably available in older books -- including several by publisher McGraw-Hill/Osborne. Nevertheless, I find the strength of the network infrastructure security sections powerful enough to recommend Extreme Exploits."

This is a cool book, but it is clear the publisher is trying to position it with a catchy title that doesn't necessarily reflect the contents. The book is mostly defensive in nature, but it does show information-gathering techniques used by more sophisticated intruders.

You may recognize author Victor Oppleman as the developer of Layer Four Traceroute. I look forward to his next book, arriving next summer: The Secrets to Carrier Class Network Security.

Sunday, August 14, 2005

Updating FreeBSD Perl Using Packages

I detest having to upgrade core FreeBSD packages like Perl that are relied upon by so many other applications. All of my systems are old and dog slow, so I tend to install software on FreeBSD using its native package system. For example, before installing a package, I set this environment variable:

setenv PACKAGESITE ftp://ftp6.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/Latest/

Replace '6' with the number of the mirror closest to you.

That command tells pkg_add not to use the default RELEASE packages, but to look for the latest STABLE packages. Those packages are built by the FreeBSD ports cluster and are kept fairly current.

The problem with such a system is that the packages may get ahead of my upgrade plans. For example, if my system is running Perl 5.8.6_2 and the ports cluster is building packages that look for Perl 5.8.7, I will eventually run into trouble.
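
To check which Perl package a system actually has installed, pkg_info is sufficient; the grep pattern below is just one way to narrow the listing:

pkg_info | grep '^perl-'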

That happened this weekend. I installed security/metasploit, which was built as a package for Perl 5.8.7. While Metasploit ran fine, it could not use an SSL module to download updates. Apparently the way Metasploit's msfupdate tool invoked Perl checked for Perl 5.8.7, while I had 5.8.6 installed.

I had a second problem with dns/dnswalk. It wouldn't run at all, because the package I installed relied on Perl 5.8.7 and again I had 5.8.6 installed.

I decided to bite the bullet and update Perl. This is usually a huge pain because all the applications which rely on Perl have to be updated too.

I found this in /usr/ports/UPDATING:

20050624:
AFFECTS: users of lang/perl5.8
AUTHOR: tobez@FreeBSD.org

lang/perl5.8 has been updated to 5.8.7. You should update everything
depending on perl. The easiest way to do that is to use
perl-after-upgrade script supplied with lang/perl5.8. Please see
its manual page for details.

perl-after-upgrade sounded interesting. I found this online man page by the author (he also has a blog), and this tip by Dru Lavigne. I started following Dru's advice by running 'portupgrade -rR perl' on one system. After a while I got discouraged because it was taking too long. Maybe there was an alternative?

I decided I would just force a deinstallation of Perl 5.8.6_2, and then install Perl 5.8.7 from package. I would follow with the perl-after-upgrade script.

In other words:

pkg_deinstall -f perl
setenv PACKAGESITE ftp://ftp6.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/Latest/
pkg_add -r perl
perl-after-upgrade
perl-after-upgrade -f

The relevant items are found when running perl-after-upgrade:

# perl-after-upgrade
atk-1.9.1: 0 moved, 0 modified, 0 adjusted
desktop-file-utils-0.10_2: 0 moved, 0 modified, 0 adjusted
dnswalk-2.0.2: 0 moved, 1 modified, 0 adjusted
...edited...
imake-6.8.2: 0 moved, 0 modified, 0 adjusted
irssi-0.8.9_3: 16 moved, 1 modified, 21 adjusted
libcroco-0.6.0_1: 0 moved, 0 modified, 0 adjusted
...edited...
xpdf-3.00_6: 0 moved, 0 modified, 0 adjusted
---
Fixed 2 packages (16 files moved, 2 files modified)
Skipped 113 packages

**** In addition, please pay attention to the following:
The /usr/local/bin/irssi binary would be modified, make sure it works

--- Repeating summary:
Fixed 2 packages (16 files moved, 2 files modified)
Skipped 113 packages

Notice perl-after-upgrade found two troublesome applications: irssi and dnswalk. Running perl-after-upgrade again with the -f switch commits the changes.

Once I completed this process, I found that irssi worked but dnswalk still gave an error. I found my package database had a stale dependency. I eventually decided to remove dnswalk and its dependencies, and then reinstall the package. It worked fine after that. I was also able to get Metasploit to update its modules via SSL.

In any case, I believe I successfully navigated a Perl upgrade without having to compile any source code. If anyone cares to share comments, I would appreciate them.

By the way, I usually upgrade all of my ports using 'portupgrade -varRPP' after setting the PACKAGESITE variable. The PP switch tells portupgrade to use only packages. Any leftover ports that aren't available as packages I have to upgrade without the PP switch.
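
Concretely, that sequence looks something like this (a sketch of the usual flow, not a canned script):

setenv PACKAGESITE ftp://ftp6.freebsd.org/pub/FreeBSD/ports/i386/packages-5-stable/Latest/
portupgrade -varRPP
# anything still outdated lacks a package, so build those from source
portupgrade -varR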

Plug and Play Worm in Wild

The SANS ISC is reporting that a worm which exploits the Plug and Play (PnP) vulnerability described by MS05-039 is in the wild. The F-Secure Blog reports the worm is called Zotob. The Microsoft bulletin lists three mitigating factors:

  • On Windows XP Service Pack 2 and Windows Server 2003 an attacker must have valid logon credentials and be able to log on locally to exploit this vulnerability. The vulnerability could not be exploited remotely by anonymous users or by users who have standard user accounts. However, the affected component is available remotely to users who have administrative permissions.

  • On Windows XP Service Pack 1 an attacker must have valid logon credentials to try to exploit this vulnerability. The vulnerability could not be exploited remotely by anonymous users. However, the affected component is available remotely to users who have standard user accounts.

  • Firewall best practices [e.g., blocking SMB ports] and standard default firewall configurations can help protect networks from attacks that originate outside the enterprise perimeter. Best practices recommend that systems that are connected to the Internet have a minimal number of ports exposed.


Frank Knobbe is writing rules for the worm, which can be found by watching changes to the Bleeding Snort CVS interface for the all.rules file. Search for MS05-039 or 2002185, the rule SID.

His latest rule as of this posting is:

# Created 2005/08/14 by Frank Knobbe in response to first information posted on ISC
alert tcp any any -> any 1024:65535 (msg:"BLEEDING-EDGE Possible MS05-039 PnP worm infection";
flow:established,to_server; content:"get winpnp.exe"; depth:200; nocase;
reference:url,isc.sans.org/diary.php?date=2005-08-14; classtype:trojan-activity;
sid:2002185; rev:2;)

That rule watches for a compromised victim retrieving a copy of the worm via FTP from the infecting machine. Who says intrusion detection or full content monitoring is dead in an "age of encryption"? Remember the phases of compromise:

  1. Reconnaissance

  2. Exploitation

  3. Escalation

  4. Consolidation

  5. Pillage


During steps 3 and 4, the intruder can't expect the tools he needs (like an encrypted transport tool such as scp) to already be on the victim. Hence the intruder uses FTP, TFTP, etc. These are good reasons to remove such client programs from production servers if possible. Escalation is the process of moving from user privileges to root privileges, if the exploitation phase doesn't yield root immediately. Consolidation is the process of installing back doors, retrieving tools, or taking other actions to establish control of the victim. In the case of a worm, consolidation is the means by which the worm replicates itself.

SANS ISC is also releasing rules that appear to concentrate on the initial exploitation, not the propagation of the worm via FTP.
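
Whichever rules you deploy, you can check that they load and fire by replaying a capture offline. This is a minimal sketch, assuming Snort 2.x with the new rule added to a rules file that snort.conf includes; the capture filename is hypothetical:

snort -c /usr/local/etc/snort/snort.conf -r zotob-sample.pcap -A console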

You can test the vulnerability of your systems via controlled exploitation using this Metasploit module. The worm may be based on this exploit by houseofdabus.
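
For those using the Metasploit Framework 2.x console, the session would look roughly like this; the module and payload names are my assumption and may differ in your copy of the Framework, and the target address is just a placeholder:

./msfconsole
use ms05_039_pnp
set RHOST 192.168.1.10
set PAYLOAD win32_bind
exploit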

Incidentally, while writing this post, I came across the new OpenRCE.org (Open Reverse Code Engineering) site. I also found that the Security Forest Exploit Tree CVS Interface is up, and that site has started a blog.

Friday, August 12, 2005

Ethernet to Your ISP

Today I was chatting in the #snort-gui channel on irc.freenode.net, and someone (who shall remain anonymous) mentioned that his ISP provides Ethernet connectivity. This surprised me, because my previous employer had DS3 circuits. Tapping a DS3 connection requires specialized gear (a dedicated DS3 tap), but access to Ethernet is more readily available.

How many of you have Ethernet connectivity to your ISP?

The reason I ask is that many monitoring deployments place the wire access device (e.g., a tap) between the border router and your firewall. If you have Ethernet to your ISP, you could place the tap in front of your border router. This scenario would provide visibility into traffic addressed to the Internet-facing interface of your border router. You could monitor for attacks against the router without having to tap a T1 or DS3 connection.

LinuxWorld Sguil Presentation Online

David Bianco of Vorant Network Security posted his LinuxWorld presentation Open Source Network Security Monitoring With Sguil (.pdf). David provides a great overview of Sguil, how to use it, and its benefits. On the Sguil improvement front, lead developer Bamm Visscher has moved his family to Colorado and will settle in his new house next week. Expect to see Sguil 0.6.0 later this summer as Bamm's new work environment settles down.

Steve Riley on 802.1X Flaw

This is not a Microsoft issue, but I learned of it through a Microsoft Security Newsletter feature called 802.1X on Wired Networks Considered Harmful by Steve Riley. He claims to have written about this subject in his book Protect Your Windows Network: From Perimeter to Data, but he believes the issue merits greater attention. Cutting past the introduction to 802.1X, Steve writes:

"[T]here’s a fundamental flaw in wired, 802.1X that seriously reduces its effectiveness at keeping out rogue machines...

[I]t authenticates only at the establishment of a connection. Once a supplicant authenticates and the switch port opens, further communications between the supplicant and the switch aren’t authenticated. This creates a situation in which it’s possible for an attacker to join the network. (Thanks to Svyatoslav Pidgorny, Microsoft MVP for security, for showing me this vulnerability.)

Setting up the attack does require physical access to the network. An attacker needs to disconnect a computer (let’s call this the “victim”) from its 802.1X-protected network switch port, connect a hub to the port, connect the victim to the hub, and connect an attack computer (which we’ll call the “shadow”) to the hub. This is trivially easy if the attacker is physically inside your facility and if your Ethernet jacks are accessible. Or the attacker could connect an unmanaged access point to the hub and then conduct the attack from your parking lot. (Of course, the attacker could try to hide by disabling this AP’s SSID broadcast.)

The brief disconnection of the victim from the network won’t interfere with the attack’s success. When the victim computer is reconnected, it authenticates to the switch again. It doesn’t matter that a hub is in the way now, because a hub is little more than a wire with ports in it. Electrically, the victim is still connected to the switch.

Next the attacker configures the shadow computer’s MAC and IP addresses to be the same as those on the victim computer. A little network sniffing will quickly reveal this. The attacker also configures a host firewall to drop all inbound traffic that isn’t a reply to communications that it initiated.

Now, here’s why 802.1X on a wired network really is insufficient. After the victim computer has authenticated and the switch port is open, the attacker can connect to resources on the protected network. This is because there is no per-packet authentication of the traffic once the port is open. Since the shadow computer has the same MAC and IP addresses as the victim computer, from the point of view of the switch it appears only as if there’s a single computer connected to the port.

802.1X’s lack of follow-on per-packet authentication creates the situation for this man-in-the-middle attack I’ve just described. 802.1X only authenticates the connection; it assumes all traffic that’s flowing over the connection is legitimate. This assumption is 802.1X’s fundamental flaw."

Steve then addresses what happens when two computers try to use the same IP and MAC address while connected to the switch. If the "shadow" PC sends a SYN to a remote system, both victim and shadow PCs will see a SYN-ACK reply. The victim PC should reply with a RST (not RST ACK) because it won't be expecting an unsolicited SYN-ACK. Steve makes an interesting point, though:

"If the victim computer is running a firewall that drops unsolicited inbound SYN-ACKs, which most do, the victim won’t process the received SYN-ACK in step 2 and therefore won’t send the RST to the server. The rest of the above sequence won’t happen and the shadow computer can have complete access to the protected network.

This is the only instance I know of where a personal firewall on a computer can reduce the security of the rest of the network! Of course this is no reason not to deploy personal firewalls; their benefits strongly outweigh the likelihood of this attack actually happening."

I noticed Steve says the remote server, upon receiving a RST from the victim PC, will reply with "a RST-ACK (acknowledging the received RST and sending its own), which both the shadow and the victim receive."

This should not be the case. RFC 793 says a RST should not be sent in response to a RST. I don't know if Steve tested this in his own lab, since his description seems to mimic the page by Svyatoslav Pidgorny he mentions.

The bottom line appears to be that using 802.1X is a bad idea if an intruder has physical access to a port where a legitimate system is connected. 802.1X on wireless networks is not susceptible to this problem.