Saturday, 28 February 2009

Using Responsible Person Records for Asset Management

Today, while spending some time at the book store with my family, I decided to peruse a copy of Craig Hunt's TCP/IP Network Administration, which covers the BIND DNS software. I've been thinking about my post Asset Management Assistance via Custom DNS Records, and in the book I noticed the following:



"Responsible Person" record? That sounds perfect. I found RFC 1183 from 1990 introduced these.

I decided to try setting up these records on a VM running FreeBSD 7.1 and BIND 9. The VM had IP 172.16.99.130 with gateway 172.16.99.2. I followed the example in Building a Server with FreeBSD 7.

First I made changes to named.conf as shown in this diff:

# diff /var/named/etc/namedb/named.conf /var/named/etc/namedb/named.conf.orig
132c132
< // zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
---
> zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
274,290d273
< zone "example.com" {
< type master;
< file "master/example.com";
< allow-transfer { localhost; };
< allow-update { key rndc-key; };
< };
<
< zone "99.16.172.in-addr.arpa" {
< type master;
< file "master/example.com.rev";
< allow-transfer { localhost; };
< allow-update { key rndc-key; };
< };
< key "rndc-key" {
< algorithm hmac-md5;
< secret "4+IlE0Z/oHoHok9EnVwkUw==";
< };

To generate the last section I ran the following. (On FreeBSD, /etc/namedb is a symlink to /var/named/etc/namedb, so both paths reach the same files.)

# rndc-confgen -a
wrote key file "/etc/namedb/rndc.key"
# cat rndc.key >> named.conf

Next I created /var/named/etc/namedb/master/example.com:

# cat example.com
$TTL 3600

example.com. IN SOA host.example.com. root.example.com. (

1 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
86400 ) ; Minimum TTL

;DNS Servers
example.com. IN NS host.example.com.

;Machine Names
host.example.com. IN A 172.16.99.130
gateway.example.com. IN A 172.16.99.2

;Aliases
www IN CNAME host.example.com.

;MX Record
example.com. IN MX 10 host.example.com.

;RP Record
host.example.com. IN RP taosecurity.email.com. sec-con.example.com.
gateway.example.com. IN RP networkteam.email.com. net-con.example.com.

;TXT Record
sec-con.example.com. IN TXT "Richard Bejtlich"
sec-con.example.com. IN TXT "Employee ID 1234567890"
sec-con.example.com. IN TXT "Northern VA office"
net-con.example.com. IN TXT "Network Admin"
net-con.example.com. IN TXT "Group ID 0987"
net-con.example.com. IN TXT "DC office"

Then I created /var/named/etc/namedb/master/example.com.rev:
# cat example.com.rev 
$TTL 3600

99.16.172.in-addr.arpa. IN SOA host.example.com. root.example.com. (

1 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
86400 ) ; Minimum TTL

;DNS Servers
99.16.172.in-addr.arpa. IN NS host.example.com.

;Machine IPs
1 IN RP networkteam.email.com. net-con.example.com.
2 IN PTR gateway.example.com.
130 IN PTR host.example.com.
130 IN PTR www.example.com.

;RP Record
2 IN RP networkteam.email.com. net-con.example.com.
13 IN RP taosecurity.email.com. sec-con.example.com.

If you didn't catch my omission, I'll point it out near the end of the post.

Finally I edited /etc/resolv.conf so it pointed only to 127.0.0.1, and restarted named:

# /etc/rc.d/named restart
Stopping named.
Starting named.
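
Incidentally, BIND 9 ships with named-checkconf and named-checkzone, which catch configuration and zone file syntax errors before you restart. A quick sanity check looks like this (paths match the files above; output along these lines indicates success):

# named-checkconf /var/named/etc/namedb/named.conf
# named-checkzone example.com /var/named/etc/namedb/master/example.com
zone example.com/IN: loaded serial 1
OK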

Now I was able to query the name server.

# dig @127.0.0.1 version.bind chaos txt | grep version.bind
; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 version.bind chaos txt
;version.bind. CH TXT
version.bind. 0 CH TXT "9.4.2-P2"
version.bind. 0 CH NS version.bind.
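
As an aside, answering version.bind queries with the real version string is often undesirable on a production server. BIND lets you override it in named.conf; a minimal fragment (the string itself is arbitrary):

options {
        // add to the existing options { } block; any static string masks the real version
        version "not disclosed";
};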

Let's do zone transfers for the forward and reverse zones.

# dig @127.0.0.1 axfr example.com.

; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 axfr example.com.
; (1 server found)
;; global options: printcmd
example.com. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
example.com. 3600 IN MX 10 host.example.com.
example.com. 3600 IN NS host.example.com.
gateway.example.com. 3600 IN RP networkteam.email.com. net-con.example.com.
gateway.example.com. 3600 IN A 172.16.99.2
host.example.com. 3600 IN RP taosecurity.email.com. sec-con.example.com.
host.example.com. 3600 IN A 172.16.99.130
net-con.example.com. 3600 IN TXT "Network Admin"
net-con.example.com. 3600 IN TXT "Group ID 0987"
net-con.example.com. 3600 IN TXT "DC office"
sec-con.example.com. 3600 IN TXT "Richard Bejtlich"
sec-con.example.com. 3600 IN TXT "Employee ID 1234567890"
sec-con.example.com. 3600 IN TXT "Northern VA office"
www.example.com. 3600 IN CNAME host.example.com.
example.com. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
;; Query time: 41 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 1 04:22:57 2009
;; XFR size: 15 records (messages 1, bytes 480)

# dig @127.0.0.1 axfr 99.16.172.in-addr.arpa.

; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 axfr 99.16.172.in-addr.arpa.
; (1 server found)
;; global options: printcmd
99.16.172.in-addr.arpa. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
99.16.172.in-addr.arpa. 3600 IN NS host.example.com.
1.99.16.172.in-addr.arpa. 3600 IN RP networkteam.email.com. net-con.example.com.
13.99.16.172.in-addr.arpa. 3600 IN RP taosecurity.email.com. sec-con.example.com.
130.99.16.172.in-addr.arpa. 3600 IN PTR host.example.com.
130.99.16.172.in-addr.arpa. 3600 IN PTR www.example.com.
2.99.16.172.in-addr.arpa. 3600 IN RP networkteam.email.com. net-con.example.com.
2.99.16.172.in-addr.arpa. 3600 IN PTR gateway.example.com.
99.16.172.in-addr.arpa. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
;; Query time: 27 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 1 04:26:36 2009
;; XFR size: 9 records (messages 1, bytes 380)

Now let's pretend we have a security incident involving 172.16.99.2, and we want to know who owns it. Let's query for RP records.

VirtualBSD# host -t rp 172.16.99.2
2.99.16.172.in-addr.arpa domain name pointer gateway.example.com.

Ok, I see that I get a PTR record for 172.16.99.2. I can look for an RP record for that hostname.

# host -t rp gateway.example.com.
gateway.example.com has RP record networkteam.email.com. net-con.example.com.

That worked. I see the email address for the Responsible Person is networkteam@email.com (the first dot in the mailbox field stands in for the @), and I also get a pointer to an associated TXT record. I query for that next.

# host -t txt net-con.example.com.
net-con.example.com descriptive text "Network Admin"
net-con.example.com descriptive text "Group ID 0987"
net-con.example.com descriptive text "DC office"

Great, I have some additional details on the network team.

What if I try 172.16.99.130?

# host -t rp 172.16.99.130
130.99.16.172.in-addr.arpa domain name pointer www.example.com.
130.99.16.172.in-addr.arpa domain name pointer host.example.com.

# host -t RP www.example.com.
www.example.com is an alias for host.example.com.
host.example.com has RP record taosecurity.email.com. sec-con.example.com.

# host -t TXT sec-con.example.com.
sec-con.example.com descriptive text "Richard Bejtlich"
sec-con.example.com descriptive text "Employee ID 1234567890"
sec-con.example.com descriptive text "Northern VA office"

How about 172.16.99.1?

# host -t rp 172.16.99.1
1.99.16.172.in-addr.arpa has no PTR record

That was the error in the example.com.rev file I posted earlier. Or is it an error? Maybe not:

# host -t rp 1.99.16.172.in-addr.arpa
1.99.16.172.in-addr.arpa has RP record networkteam.email.com. net-con.example.com.

If we query for the IP in in-addr.arpa format, we can find a RP record. So, it's possible to have IPs without hostnames in your DNS and still have RP records. You just need to know how to ask for them.
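
Chaining these queries by hand gets old quickly. Below is a minimal sketch of a helper script written against the zones above; the script itself is hypothetical, but every dig invocation is standard:

#!/bin/sh
# rp-lookup.sh (hypothetical) -- walk IP -> PTR -> RP -> TXT
IP=$1
REV=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
# Use the PTR target if one exists; otherwise fall back to the in-addr.arpa
# name, which handles IPs that have RP records but no hostnames. head -1
# picks one PTR when several exist (as with 172.16.99.130 above).
NAME=$(dig @127.0.0.1 +short -x "$IP" | head -1)
[ -z "$NAME" ] && NAME="$REV"
RP=$(dig @127.0.0.1 +short "$NAME" RP)
echo "RP for $IP ($NAME): $RP"
# The second field of an RP record names the TXT record holding contact details.
TXTNAME=$(echo "$RP" | awk '{print $2}')
[ -n "$TXTNAME" ] && dig @127.0.0.1 +short "$TXTNAME" TXT

Run against 172.16.99.1, it falls back to the in-addr.arpa name and still finds the network team, matching the queries above.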

I think this is really promising. At the very least, a DNS admin responsible for hosts in a certain subnet could add RP records, like the one for 172.16.99.1, for every host. This would probably work best for servers, but it should be possible to extend it to hosts with dynamic DNS assignments.

Incidentally, RP records do not seem very popular on the Internet. If you find any in the wild, please let me know.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Sample Lab from TCP/IP Weapons School 2.0 Posted

Several of you have asked me to explain the difference between TCP/IP Weapons School (TWS), which I first taught at USENIX Security 2006, and TCP/IP Weapons School 2.0 (TWS2), which I first taught at Black Hat DC 2009 Training last week. This post will explain the differences, with an added bonus.


  1. I have retired TWS, the class I taught from 2006-2008. I am only teaching TWS2 for the foreseeable future.

  2. TWS2 is a completely new class. I did not reuse any material from TWS, my older Network Security Operations class, or anything else.

  3. TWS2 offers zero slides. Students receive three handouts and a DVD. The handouts include an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide. The DVD contains a virtual machine with all the tools and evidence needed to complete the labs, along with the network and memory evidence as stand-alone files.

  4. TWS2 is heavily lab-focused. I've been teaching professionally since 2002, and I've recognized that students prefer doing to staring and maybe listening! Everyone who leaves TWS2 has had hands-on experience investigating computer incidents in an educational environment.

  5. TWS2 is designed for beginner-to-intermediate attendees. Some advanced people will like the material too, although I can't promise to please everyone. I built the class so that the newest people can learn by trying the labs, falling back on the teacher's guide (which they receive) if they need extra assistance. More advanced students are free to complete the labs any way they see fit, preferably without looking at the teacher's guide until the labs are done. This system worked really well in DC last week.

  6. TWS2 uses multiple forms of evidence. Solving the labs relies heavily on the network traffic provided with each case, but some questions can only be answered by reviewing Snort alerts, or session data, or system logs provided via Splunk, or even memory captures analyzed with tools like Volatility or whatever else the student brings to the case.

  7. TWS2 comes home with the student and teaches an investigative mindset. Unlike classes that dump a pile of slides on you, TWS2 essentially delivers a book in courseware form. I use (*gasp*) whole sentences, even paragraphs, to describe how to solve labs. By working the labs the student learns how to be an investigator, rather than just watching or listening to investigative theories. I am using the same material to teach analysts on my team how to detect and respond to intrusions.


To provide a better sense of the class, I've posted materials from one of the labs at http://www.taosecurity.com/tws2_blog_sample_28feb09a.zip. The .zip contains the student workbook for the case, the teacher's guide for the case, and the individual network trace file for the case. There is no way for me to include the 4 GB compressed VM that students receive, but by reviewing this material you'll get some idea of the nature of this class.

My next session of TCP/IP Weapons School 2.0 will take place in Amsterdam on 14-15 April 2009 at Black Hat Europe 2009. Seats are already filling.

The last sessions of the year will take place in Las Vegas on 25-26 and 27-28 July 2009 at Black Hat USA 2009. Registration for training at that location will open this week, I believe.

I am not teaching the class publicly anywhere else in 2009. I do not offer private classes to anyone, except internally within GE (and those are closed to the public).

If you have any questions on these classes, please post them here. Thank you.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Friday, 27 February 2009

Inputs vs Outputs, or Why Controls Are Not Sufficient

I have a feeling my post Consensus Audit Guidelines Are Still Controls is not going to be popular in certain circles. While tidying the house this evening I came across my 2007 edition of the Economist's Pocket World in Figures. Flipping through the pages I found many examples of inputs (think "control-compliant") vs outputs (think "field-assessed").

I'd like to share some of them with you in an attempt to better communicate the ideas in my last post.

  • Business creativity and research


    • Input(s): Total expenditures on research and development, % of GDP

    • Output(s): Number of patents granted (per X people)


  • Education


    • Input(s): Education spending, % of GDP; school enrolment

    • Output(s): Literacy rate


  • Life expectancy, health, and related categories


    • Input(s): Health spending, % of GDP; population per doctor; number of hospital beds per citizen; (also add in air quality, drinking and smoking rates, etc.)

    • Output(s): Death rates; infant mortality; and so on...


  • Crime and punishment


    • Input(s): Total police per X population

    • Output(s): Crime rate



Is this making sense?


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Consensus Audit Guidelines Are Still Controls

Blog readers know that I think FISMA Is a Joke, FISMA Is a Jobs Program, and if you fought FISMA Dogfights you would always die in a burning pile of aerial debris.

Now we have the Consensus Audit Guidelines (CAG) published by SANS. You can ask two questions: 1) is this what we need? and 2) is it at least a step in the right direction?

Answering the first question is easy. You can look at the graphic I posted to see that CAG is largely another set of controls. In other words, this is more control-compliant "security," not field-assessed security. Wait, you might ask, doesn't the CAG say this?

What makes this document effective is that it reflects knowledge of actual attacks and defines controls that would have stopped those attacks from being successful. To construct the document, we have called upon the people who have first-hand knowledge about how the attacks are being carried out.

That excerpt means that CAG defines defensive activities that are believed to be effective by various security practitioners. I am not doubting that these practitioners are smart. I am not doubting their skills. What I am trying to say is that implementing the controls in CAG does not tell you the score of the game. CAG is all about inputs. After implementing CAG you still do not know any outputs. In other words, you apply controls (the "X"), but what is the outcome (the "Y")? The controls may or may not be wonderful, but if you are control-compliant you do not have the information produced by field-assessed security.

Does anyone really think we do not have controls already? The CAG itself shows how it maps against NIST SP 800-53 Rev 3 controls. Five are shown below as an example.



For example, looking at CAG, how many of these strike you as something you didn't already know about?

Critical Controls Subject to Automated Measurement and Validation:

  1. Inventory of Authorized and Unauthorized Hardware

  2. Inventory of Authorized and Unauthorized Software

  3. Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers

  4. Secure Configurations of Network Devices Such as Firewalls and Routers

  5. Boundary Defense

  6. Maintenance and Analysis of Complete Security Audit Logs

  7. Application Software Security

  8. Controlled Use of Administrative Privileges

  9. Controlled Access Based On Need to Know

  10. Continuous Vulnerability Testing and Remediation

  11. Dormant Account Monitoring and Control

  12. Anti-Malware Defenses

  13. Limitation and Control of Ports, Protocols and Services

  14. Wireless Device Control

  15. Data Leakage Protection


Additional Critical Controls (not directly supported by automated measurement and validation):

  1. Secure Network Engineering

  2. Red Team Exercises

  3. Incident Response Capability

  4. Data Recovery Capability

  5. Security Skills Assessment and Training to Fill Gaps



Don't get me wrong. If you are not implementing these controls already, you should do so. That will still not tell you the score of the game. If you want to see exactly what I proposed, I differentiated between control-compliant "security" and field-assessed security in my post Controls Are Not the Solution to Our Problem.

So, to answer my second question: CAG is a step in the right direction away from FISMA, but it doesn't change the game, especially if you are already implementing NIST guidance.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Wednesday, 25 February 2009

Asset Management Assistance via Custom DNS Records

In my post Black Hat DC 2009 Wrap-Up, Day 2 I mentioned enjoying Dan Kaminsky's talk. His thoughts on the scalability of DNS made an impression on me. I thought about the way the Team Cymru Malware Hash Registry returns custom DNS responses for malware researchers, for example. In this post I am interested in knowing if any blog readers have encountered problems similar to the ones I will describe next, and if so, whether you used (or could use) DNS to help mitigate them.

When conducting security operations to detect and respond to incidents, my team follows the CAER approach. Escalation is always an issue, because it requires identifying a responsible party. If you operate a defensible network it will be inventoried and claimed, but getting to that point is difficult.

The problem is this: you have an IP address, but how do you determine the owner? Ideally you have access to a massive internal asset database, but the problems of maintaining such a system can be daunting. The more sites, departments, businesses, etc. in play, the more difficult it is to keep necessary information in a single database. Even a federated system runs into problems, since there must be a way to share information, submit queries, keep data current, and so on.

Dan made a key point during his talk: one of the reasons DNS scales so well is that edge organizations maintain their own records, without having to constantly notify the core. Also, anyone can query the system, and get results from the (presumably) right source.

With this in mind, would it make sense to internally deploy custom DNS records that identify asset owners?

In other words:

  1. Mandate by policy that all company assets must be registered in the internal company DNS.

  2. Add extensions of some type that provide information like the following, at a minimum:


    • Asset owner name and/or employee number

    • Owning business unit

    • Date record last updated


  3. Periodically survey a statistical sample of IP addresses observed via network monitoring, to determine whether their custom DNS records exist and to validate that they are accurate.


These points assume that there is already a way to associate an employee name or number with a contact method such as email address and/or phone number, as would be the case with a Global Address List.
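
For concreteness, here is how such records might look using the RP/TXT pairing from RFC 1183 (the same approach explored in the RP post above). This is only a sketch; every name and field format below is hypothetical, since there is no standard for the metadata itself:

; hypothetical asset-ownership records -- names and field formats are illustrative
ws1234.internal.example.com.       IN A   10.1.2.3
ws1234.internal.example.com.       IN RP  jdoe.internal.example.com. ws1234-owner.internal.example.com.
ws1234-owner.internal.example.com. IN TXT "Owner: J. Doe, employee 123456"
ws1234-owner.internal.example.com. IN TXT "BU: Widgets Division"
ws1234-owner.internal.example.com. IN TXT "Updated: 2009-02-25"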

Is anyone doing this? If not, do you have ideas for identifying asset owners when the scale of the problem is measured in the hundreds of thousands?


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Tuesday, 24 February 2009

HD Moore on the Necessity of Disclosure

HD Moore posted a great defense of full disclosure in his article The Best Defense is Information on the latest Adobe vulnerability.

The strongest case for information disclosure is when the benefit of releasing the information outweighs the possible risks. In this case, like many others, the bad guys already won. Exploits are already being used in the wild and the fact that the rest of the world is just now taking notice doesn't mean that these are new vulnerabilities. At this point, the best strategy is to raise awareness, distribute the relevant information, and apply pressure on the vendor to release a patch.

Adobe has scheduled the patch for March 11th. If you believe that Symantec notified them on February 12th, this is almost a full month from news of a live exploit to a vendor response. If the vendor involved was Microsoft, the press would be tearing them apart right now. What part of "your customers are being exploited" do they not understand?



Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Buck Surdu and Greg Conti Ask "Is It Time for a Cyberwarfare Branch?"

The latest issue of the Information Assurance Technology Analysis Center's IANewsletter features "Army, Navy, Air Force, and Cyber -- Is It Time for a Cyberwarfare Branch of [the] Military?" by COL John "Buck" Surdu and LTC Gregory Conti. I found these excerpts enlightening.

The Army, Navy, and Air Force all maintain cyberwarfare components, but these organizations exist as ill-fitting appendages that attempt to operate in inhospitable cultures where technical expertise is not recognized, cultivated, or completely understood. The services have developed effective systems to build traditional leadership and management skills. They are quite good at creating the best infantrymen, pilots, ship captains, tank commanders, and artillerymen, but they do little to recognize and develop technical expertise. As a result, the Army, Navy, and Air Force hemorrhage technical talent, leaving the Nation’s military forces and our country under-prepared for both the ongoing cyber cold war and the likelihood of major cyberwarfare in the future...

The skill sets required to wage cyberwar in this complex and ill-defined environment are distinct from waging kinetic war. Both the kinetic and non-kinetic are essential components of modern warfare, but the status quo of integrating small cyberwarfare units directly into the existing components of the armed forces is insufficient...

The cultures of today’s military services are fundamentally incompatible with the culture required to conduct cyberwarfare... The Army, Navy, and Air Force are run by their combat arms officers, ship captains, and pilots, respectively. Understandably, each service selects leaders who excel at conducting land, sea, and air battles and campaigns. A deep understanding and respect for cyberwarfare by these leaders is uncommon.

To understand the culture clash evident in today’s existing militaries, it is useful to examine what these services hold dear -- skills such as marksmanship, physical strength, and the ability to jump out of airplanes and lead combat units under enemy fire. Accolades are heaped upon those who excel in these areas. Unfortunately, these skills are irrelevant in cyberwarfare...

The culture of each service is evident in its uniforms. Consider the awards, decorations, badges, patches, tabs, and other accoutrements authorized for wear by each service. Absent is recognition for technical expertise. Echoes of this ethos are also found in disadvantaged assignments, promotions, school selection, and career progression for those who pursue cyberwarfare expertise, positions, and accomplishments...

Evidence to back these assertions is easy to find. From a recent service academy graduate who desired more than anything to become part of a cyberwarfare unit but was given no other option than to leave the service after his initial commitment, to the placement of a service’s top wireless security expert in an unrelated assignment in the middle of nowhere, to the PhD whose mission was to prepare PowerPoint slides for a flag officer -- tales of skill mismanagement abound...

[W]e are arguing that these cultures inhibit (and in some cases punish) the development of the technical expertise needed for this new warfare domain.... Only by understanding the culture of the technical workforce can a cyberwarfare organization hope to succeed... High-and-tight haircuts, morning physical training runs, rigorously enforced recycling programs, unit bake sales, and second-class citizen status are unlikely to attract and retain the best and brightest people.


I agree with almost all of this article. When I left the Air Force in early 2001, I was the 31st of the last 32 eligible company grade officers in the Air Force Information Warfare Center to separate from the Air Force rather than take a new nontechnical assignment. The only exception was a peer who managed to grab a job at NSA. The other 31 all left to take technical jobs in industry because we didn't want to become protocol officers in Guam or logistics officers in a headquarters unit.

Please read the whole article before commenting, if you choose to do so. I selected only a few points but there are others.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Monday, 23 February 2009

More Information on CNCI

In response to my post Black Hat DC 2009 Wrap-Up, Day 1, a commenter shared a link to a Fairfax Chamber of Commerce briefing by Boeing on the Comprehensive National Cybersecurity Initiative (CNCI) that I last mentioned in FCW on Comprehensive National Cybersecurity Initiative. I've extracted a few slides below to highlight several points.

The first slide I share shows abbreviated definitions for Computer Network Defense, Computer Network Exploitation, and Computer Network Attack. These mirror what I cited in China Cyberwar, or Not? in late 2007.



The second slide supports what I said in my Predictions for 2008 post: Expect greater military involvement in defending private sector networks. Notice DNI and DoJ are said to be "authorized to conduct domestic intrusion detection," and DNI and DoD are allowed "involvement with domestic networks."



The three-phase approach is displayed next. Note mentions of deployment of sensors, counter-intrusion plans, and deterrence.



Finally, this slide lists the seven "emphasis areas" for the new program.



Thanks to the anonymous commenter for directing me to this public link.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

VirtualBSD: FreeBSD 7.1 Desktop in a VM

Want to try FreeBSD 7.1 in a comfortable, graphical desktop, via a VMware VM? If your answer is yes, visit www.virtualbsd.info and download their 1.5 GB VM. I tried it last night and got it working with VMware Server 1.0.8 by making the following adjustments:

Edit VirtualBSD.vmx to say

#virtualHW.version = "6"
virtualHW.version = "4"

and VirtualBSD.vmdk to say

#ddb.virtualHWVersion = "6"
ddb.virtualHWVersion = "4"

and you will be able to use the VM on VMware Server 1.0.8.
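
If you'd rather script the change than edit by hand, something like this works (FreeBSD's sed needs the empty backup suffix after -i); note it rewrites the lines in place rather than keeping the commented originals:

sed -i '' 's/virtualHW.version = "6"/virtualHW.version = "4"/' VirtualBSD.vmx
sed -i '' 's/ddb.virtualHWVersion = "6"/ddb.virtualHWVersion = "4"/' VirtualBSD.vmdk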


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Sunday, 22 February 2009

Black Hat Briefings Justify Supporting Retrospective Security Analysis

One of the tenets of Network Security Monitoring, as repeated in Network Monitoring: How Far?, is to collect as much data as you can, given legal, political, and technical means (and constraints), because that approach gives you the best chance to detect and respond to intrusions. The Black Hat Briefings always remind me that such an approach makes sense. Having left the talks, I now have a set of techniques I can use to mine my logs and related data sources for evidence of past attacks.

Consider these examples:

  • Given a set of memory dumps from compromised machines, search them using the Snorting Memory techniques for activity missed when those dumps were first collected.

  • Review Web proxy logs for the presence of IDN in URIs (see the sketch after this list).

  • Query old BGP announcements for signs of past MITM attacks.
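
For the proxy log review, punycode-encoded IDN labels carry a distinctive "xn--" prefix, which makes a first pass trivial. A sketch, assuming Squid-style native access logs (the log path and field positions are assumptions):

# count requests whose URLs contain punycode (IDN) labels, by client and URL
grep -i 'xn--' /var/log/squid/access.log | awk '{print $3, $7}' | sort | uniq -c | sort -rn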


You get the idea. The key concept is that none of us are smart enough to know how a certain set of advanced threats are exploiting us right now, or how they exploited us in the past. Once we get a clue to their actions, we can mine our security evidence for indicators of that activity. When we find signs of malicious activity we can focus our methods and expand our view until we have a better idea of the scope of an incident.

This strategy is the only one that has ever worked for digital intrusion victims who are constrained to purely defensive operations. A better alternative, as outlined in The Best Cyber Defense, is to conduct aggressive counterintelligence to find out what the enemy knows about you. Since that tactic is out of scope for the vast majority of us, we should adopt a mindset, toolset, and tactics that enable retrospective security analysis -- the ability to review past evidence for indicators of modern attacks.

If you only rely on your security products to produce alerts of any type, or blocks of any type, you will consistently be "protected" from only the most basic threats. Advanced threats know how to evade many defenses because they test and hone their techniques before deploying them in the wild.

NSM has always implemented retrospective security analysis, but the idea applies to a wide variety of security evidence.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Black Hat DC 2009 Wrap-Up, Day 2

This is a follow-up to Black Hat DC 2009 Wrap-Up, Day 1.

  • I started day two with Dan Kaminsky. I really enjoyed his talk. I am not sure how much of it was presented last year, since I missed his presentation in Las Vegas. However, I found his comparison of DNS vs SSL infrastructures illuminating. The root name servers are stable, dependable, centrally coordinated, and guaranteed to be around in ten years. We know what root name servers to trust, and we can add new hosts to our domains without requesting permission from a central authority. Contrast that with certificate authorities. They have problems, cannot all be trusted, and come and go as their owning companies change. We do not always know what CAs to trust, but we must continuously consult them whenever we change infrastructure.

    Dan asked "are we blaming business people when really our engineering is poor?" I thought that was a really interesting question. Imagine that instead of being a security engineer, you're a housing engineer. Which of the following display poor engineering?



    It should be clear that you can't answer that question just by looking at the product of the engineering process. You have to consider a variety of constraints, external factors, and so on. The fact that so much of the Internet is broken says nothing about engineering, because engineering is seldom done for engineering's sake: engineering always serves another master, often a business mission.

  • After Dan I saw Prajakta Jagdale explain problems with application code written in Flash. I should not have been surprised to see Flash .swf files containing hardcoded usernames and passwords. Didn't we talk about this 10 years ago for generic Web pages? Show me any new feature-rich programming environment and you can probably find the same generic design and implementation flaws of a decade ago.

  • I watched some of Paul Wouters' talk on defending DNS, but the poor guy was really sick and the talk was boring. I had to leave early for a work call anyway.

  • Earl Zmijewski from Renesys gave one of my two favorite talks of the conference. He explains how to detect BGP Man-in-the-Middle attacks, described in this Renesys blog post. Earl's investigative method was impressive, and the majority of his talk involved describing how he developed a methodology to identify potential BGP MITM attacks. One clue appears in the diagram below, where it is unusual for a low-level player like Pilosoft to appear to be carrying traffic between two bigger players.



    Earl emphasized that routing is based on trust. There is really no way to validate that routes received via BGP are legitimate. (Note: With 270,000 routes in the global BGP tables, there are 45,000 updates per minute on a slow day. On Monday when AS 47868 decided to torpedo the Internet, updates arrived at 4 million per minute.) Individual BGP-speaking routers don't really need to know entire paths to route; paths are really used to drop routes via loop detection. (Path lengths are used to select routes, however.)

    The key to identifying BGP MITM is to realize that although the vast majority of the Internet will be fooled by an artificial route during a BGP MITM attack, a legitimate path must be maintained in order for the attacker to get intercepted traffic to its ultimate intended destination. By comparing routes seen across the Internet for a victim AS with routes seen by the legitimate path, one can identify BGP MITM attacks. You can look for other hints, like violations of the valley property shown below.



    I recommend reading the blog post and linked slides for more information.

  • David Litchfield's talk on Oracle forensics was interesting. Oracle is like a file system unto itself, so you can bring the same mindset to analyzing the variety of files Oracle uses during operation. This evidence is present by default.

  • I concluded the Briefings with Peter Silberman from Mandiant. His blog post describes his talk, which involved converting Snort signatures into strings for searching memory on victim systems. This technique can be used to discover remnants of attacks in system memory, or evidence of malware still resident in memory. His implementation relies on XPath for anyone wishing to write new signatures, a system I am not yet familiar with.


Overall I found the talks very informative and balanced across a variety of issues, from the CPU level all the way up to BGP.

Looking ahead, the Black Hat Europe 2009 speakers list looks much different, and I hope to be able to see at least some of the talks after I teach there.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Black Hat DC 2009 Wrap-Up, Day 1

I taught the first edition of TCP/IP Weapons School 2.0 at Black Hat DC 2009 Training in Arlington, VA last week to 31 students. Thanks to Steve Andres from Special Ops Security and Joe Klein from Command Information for helping as teaching assistants, and to Ping Look and the whole Black Hat staff for making the class successful.

I believe the class went well and I am looking forward to teaching at Black Hat Europe 2009 Training in April. Very soon I will post a sample lab from the class on this blog so you can get a feeling for this class, since it is completely new and totally slide-free.

I hope to blog a little more now that the class is done. I spent the vast majority of my free time over the last three months preparing the new class, only completing the coursework three days before the class, then printing the books and burning the DVDs myself. I expect preparations for Amsterdam and eventually Las Vegas to be easier.

After my training I attended Black Hat DC 2009 Briefings. Without doubt, Black Hat is the best security conference I attend. Jeff Moss and company consistently put the best players on the field, year after year. Just as I wrote a Black Hat DC 2008 Wrap-Up, I'd like to do the same for 2009. You can access slide decks, and in some cases, video recordings, of the Briefings here.

  • I started the Briefings with Paul Kurtz, who emphasized 1) increased intelligence community (IC) involvement in our industry; 2) explicit cyber weapon development; and 3) defining authority for a "cyber Katrina." The IC needs to contribute attribution to the cyber picture, similar to its role in counter-terrorism actions. Attribution facilitates deterrence, a topic which I will address independently later. If you object to the "militarization of cyber space," Paul's answer is simple: "Too late."

    The US needs a) a strategy to defeat adversaries with cyber weapons; b) a governance model for command and control; c) treaties with allies; d) more open discussion, unlike the CNCI; and e) hack-back authority to trace attacks technically as an alternative or enhancement to IC attribution techniques or to disable malicious systems.

  • Moxie Marlinspike reiterated his SSL Basic Constraints vulnerability (CVE-2002-0862) from 2002.



    He then outlined ways to degrade SSL encryption by modifying traffic that links to HTTPS sites (via his "sslstrip" tool), along with clever ways to abuse Internationalized Domain Names (IDN). Steve Andres advised I configure Firefox using about:config -> network.IDN_show_punycode = true to always cause IDN sites to render in Punycode -- e.g. as http://www.xn--mnchhausen-9db.at/ and not http://www.münchhausen.at/ . I found it interesting that Moxie tested his SSL techniques by running a Tor exit node. See Dan Kaminsky's post for some good commentary.

  • I watched some of Michael Muckin's talk on Windows Vista Security Internals, but I checked out early to meet some friends for lunch away from the conference.

  • I returned to see Joanna Rutkowska and Rafal Wojtczuk continue to abuse Intel. I was struck by Joanna's comment questioning the wisdom of addressing vulnerabilities in System Management Mode (SMM) by writing a new SMM Transfer Monitor (STM): if the SMM has vulnerabilities that an STM must monitor, what ensures the STM doesn't have vulnerabilities that require monitoring? Details are posted on their Invisible Things blog, including a paper, and not just slides.

    I must really praise them for writing a paper on this subject, using full sentences and paragraphs. After attending The Best Single Day Class Ever last year, I have made a point to congratulate anyone who resists the temptation to consider PowerPoint as a legitimate means of communication, especially as their sole means of communication. The next time you doubt your ability to write a paper instead of a PowerPoint slide, remember that Joanna and Rafal aren't even native English speakers, and they managed to describe their work in a paper!

    Their presentation raised interesting issues regarding engaging security researchers. Invisible Things Lab researches two types of security problems: design flaws and implementation flaws. Intel (and other vendors) provide both. Intel wants ITL to sign an NDA before it will share details of its designs. ITL prefers not to be bound by NDA. Clearly, Intel's latest initiative suffers severe design flaws. By not engaging ITL, Intel has wasted many man-months of research and implementation. Was that a price worth paying just because ITL would not sign the NDA? If Intel is serious about security, it needs to work around this legal and intellectual property problem. If it only cares about security theater, it can pretend to care about security while bringing a flawed product to market.

  • I really enjoyed Michael Sutton's talk on vulnerabilities and exposures of persistent Web browser storage. He outlined issues with the four methods listed in this figure.



    I was interested in hearing how one could perform persistent client-side cross-site scripting by inserting malicious JavaScript into a user's cookies. An intruder could perform a similar attack, called client-side SQL injection, against the databases maintained by Gears and HTML 5 implementations.

  • I finished day one by attending Adam Laurie's discussion of satellite hacking. I was most impressed by his application of visualization to the problem of deciding what channels were worth observing.


I'll wrap up day two shortly.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Thursday, 19 February 2009

Thoughts on Air Force Blocking Internet Access

Last year I wrote This Network Is Maintained as a Weapon System, in response to a story on Air Force blocks of blogging sites. Yesterday I read Air Force Unplugs Bases' Internet Connections by Noah Shachtman:

Recently, internet access was cut off at Maxwell Air Force Base in Alabama, because personnel at the facility "hadn't demonstrated — in our view at the headquarters — their capacity to manage their network in a way that didn't make everyone else vulnerable," [said] Air Force Chief of Staff Gen. Norton Schwartz.

I absolutely love this. While in the AFCERT I marvelled at the Marine Corps' willingness to take the same actions when one of their sites did not take appropriate defensive actions.

Let's briefly describe what needs to be in place for such an action to occur.

  1. Monitored. Those who wish to make a blocking decision must have some evidence to support their action. The network subject to cutoff must be monitored so that authorities can justify their decision. If the network to be cut off is attacking other networks, the targets of the attacks should also be monitored and use their data to justify action.

  2. Inventoried. The network to be cut off must be inventoried. The network must be understood so that a decision to block gateways A and B doesn't leave unknown gateways C and D free to continue conducting malicious activity.

  3. Controlled. There must be a way to implement the block.

  4. Claimed. The authorities must know the owners of the misbehaving network and be able to contact them.

  5. Command and Control. The authorities must be able to exercise authority over the misbehaving network.


You might notice the first four items are the first four elements of my Defensible Network Architecture 2.0 of a year ago.

Number five is very important. Those deciding to take blocking action must be able to exercise a block despite objections by the site. The site is likely to use terms like "mission critical," "business impact," "X dollars per hour," etc. The damage caused by leaving the malicious network able to attack the rest of the enterprise must exceed the impact of lost network connectivity to the misbehaving network.

It is usually much easier to wrap impact around a network outage than it is to determine the cost of sustaining and suffering network attacks. Loss of availability is usually easier to measure than losses of confidentiality or integrity. The easiest situation is one where downtime confronts downtime, i.e., cutting off a misbehaving site will allow its targets to restore their networks. This would be true of a malicious site conducting a DoS attack against others; terminating the offending site denies its network availability but restores the victim's. That is why sites are most likely to allow network cutoffs when rogue code in one site is aggressively scanning or DoS'ing a target, resulting in the target losing services.

Does your enterprise have a policy that allows cutting off misbehaving subnets?


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Sunday, 15 February 2009

Back from Bro Workshop

Last week I attended the Bro Hands-On Workshop 2009. Bro is an open source network intrusion detection and traffic characterization program with a lineage stretching to the mid-1990s. I finally met Vern Paxson in person, which was great. I've known who Vern was for about 10 years but never met him or heard him speak.

I first covered Bro in The Tao of Network Security Monitoring in 2004 with help from Chris Manders. About two years ago I posted Bro Basics and Bro Basics Follow-Up here. I haven't used Bro in production but after learning more about it in the workshop I would be comfortable using some of Bro's default features.

I'm not going to say anything right now about using Bro. I did integrate Bro analysis into most of the cases in my all-new TCP/IP Weapons School 2.0 class at Black Hat this year. If TechTarget clears me for writing again in 2009 I will probably write some Bro articles for Traffic Talk.



Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Tuesday, 10 February 2009

Last Day to Register Online for TCP/IP Weapons School 2.0 in DC

Black Hat was kind enough to invite me back to teach a new 2-day course at Black Hat DC 2009 Training on 16-17 February 2009 at the Hyatt Regency Crystal City in Arlington, VA. This class, completely new for 2009, is called TCP/IP Weapons School 2.0. This is my only scheduled class on the east coast of the United States in 2009.

The short description says:

This hands-on, lab-centric class by Richard Bejtlich focuses on collection, detection, escalation, and response for digital intrusions.

Is your network safe from intruders? Do you know how to find out? Do you know what to do when you learn the truth? If you need answers to these questions, TCP/IP Weapons School 2.0 (TWS2) is the Black Hat course for you. This vendor-neutral, open source software-friendly, reality-driven two-day event will teach students the investigative mindset not found in classes that focus solely on tools. TWS2 is hands-on, lab-centric, and grounded in the latest strategies and tactics that work against adversaries like organized criminals, opportunistic intruders, and advanced persistent threats.


Online registration ends 11 Feb, and appears to restart onsite on 16 Feb.

If you've attended previous classes, even the original TCP/IP Weapons School, this class is all new and you're definitely welcome back. We still have a few seats left. Thank you.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

New Online Packet Repository

As of a few weeks ago I am no longer involved with OpenPacket.org. One of the reasons is a great new online packet repository sponsored and run by Mu Dynamics called Pcapr. I've had an account there for a few months, but it looks like the site is now open to the general public. Check it out -- there are a lot of cool features already.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Thursday, 05 February 2009

Benefits of Removing Administrator Access in Windows

I think most security people advocate removing administrator rights for normal Windows users, but I enjoy reading even a cursory analysis of this "best practice" as published by BeyondTrust and reported by ComputerWorld. From the press release:

BeyondTrust’s findings show that among the 2008 Microsoft vulnerabilities given a "critical" severity rating, 92 percent shared the same best practice advice from Microsoft to mitigate the vulnerability: "Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights." This language, found in the "Mitigating Factors" portion of Microsoft’s security bulletins, also appears as a recommendation for reducing the threat from nearly 70 percent of all vulnerabilities reported in 2008.

Other key findings from BeyondTrust’s report show that removing administrator rights will better protect companies against the exploitation of:

* 94 percent of Microsoft Office vulnerabilities reported in 2008
* 89 percent of Internet Explorer vulnerabilities reported in 2008
* 53 percent of Microsoft Windows vulnerabilities reported in 2008.

I'd like to take this a step further. Let's compare a system operated by a user with no administrator rights -- but no antivirus -- against a system operated by an administrator *with* antivirus. I believe the no administrator rights system would survive more often, albeit not without some failures. Anyone know of a study like that?


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

More on Weaknesses of Models

I read the following in the Economist:

Edmund Phelps, who won the Nobel prize for economics in 2006, is highly critical of today’s financial services.

"Risk-assessment and risk-management models were never well founded," he says. "There was a mystique to the idea that market participants knew the price to put on this or that risk.

But it is impossible to imagine that such a complex system could be understood in such detail and with such amazing correctness... the requirements for information... have gone beyond our abilities to gather it."


This is absolutely the problem I mentioned in Are the Questions Sound? and Wall Street Clowns and Their Models. Phelps could easily be describing information security models.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Tuesday, 03 February 2009

Notes on Installing Sguil Using FreeBSD 7.1 Packages

It's been a while since I've looked at the Sguil ports for FreeBSD, so I decided to see how they work.

In this post I will talk about installing a Sguil sensor and server on a single FreeBSD 7.1 test VM using packages shipped with FreeBSD 7.1.

To start with, the system had no packages installed.

After running pkg_add -vr sguil-sensor, I watched what was added to the system. I'm only going to document what I found interesting.

The sguil-sensor-0.7.0_2 package installed the following into /usr/local.

x bin/sguil-sensor/log_packets.sh
x bin/sguil-sensor/example_agent.tcl
x bin/sguil-sensor/pcap_agent.tcl
x bin/sguil-sensor/snort_agent.tcl
x etc/sguil-sensor/example_agent.conf-sample
x etc/sguil-sensor/pcap_agent.conf-sample
x etc/sguil-sensor/snort_agent.conf-sample
x etc/sguil-sensor/log_packets.conf-sample
x share/doc/sguil-sensor <- multiple files, omitted here
x etc/rc.d/example_agent
x etc/rc.d/pcap_agent
x etc/rc.d/snort_agent

Note that you have to copy each sample configuration to its live name and edit it before starting the corresponding agent:

pcap_agent.conf-sample -> pcap_agent.conf (pcap_agent.tcl, started via rc.d/pcap_agent)
log_packets.conf-sample -> log_packets.conf (log_packets.sh, run via cron)
snort_agent.conf-sample -> snort_agent.conf (snort_agent.tcl, started via rc.d/snort_agent)
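
Something like the following stages all three configs at once (paths from the package listing above):

# copy the sample agent configs to their live names before editing
cd /usr/local/etc/sguil-sensor
for f in pcap_agent log_packets snort_agent; do
        cp ${f}.conf-sample ${f}.conf
done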

Also, as noted in the configuration options, PADS and SANCP are not installed by default, so the package doesn't include them:

===> The following configuration options are available for sguil-sensor-0.7.0_2:
SANCP=off (default) "Include sancp sensor"
PADS=off (default) "Include pads sensor"
===> Use 'make config' to modify these settings


The snort-2.8.2.1_1 package installed the following.

x man/man8/snort.8.gz
x bin/snort
x etc/snort/classification.config-sample
x etc/snort/gen-msg.map-sample
x etc/snort/reference.config-sample
x etc/snort/sid-msg.map-sample
x etc/snort/snort.conf-sample
x etc/snort/threshold.conf-sample
x etc/snort/unicode.map-sample
x src/snort_dynamicsrc/bitop.h
x src/snort_dynamicsrc/debug.h
x src/snort_dynamicsrc/pcap_pkthdr32.h
x src/snort_dynamicsrc/preprocids.h
x src/snort_dynamicsrc/profiler.h
x src/snort_dynamicsrc/sf_dynamic_common.h
x src/snort_dynamicsrc/sf_dynamic_meta.h
x src/snort_dynamicsrc/sf_dynamic_preproc_lib.c
x src/snort_dynamicsrc/sf_dynamic_preproc_lib.h
x src/snort_dynamicsrc/sf_dynamic_preprocessor.h
x src/snort_dynamicsrc/sf_snort_packet.h
x src/snort_dynamicsrc/sf_snort_plugin_api.h
x src/snort_dynamicsrc/sfghash.h
x src/snort_dynamicsrc/sfhashfcn.h
x src/snort_dynamicsrc/sfsnort_dynamic_detection_lib.c
x src/snort_dynamicsrc/sfsnort_dynamic_detection_lib.h
x src/snort_dynamicsrc/str_search.h
x src/snort_dynamicsrc/stream_api.h
x lib/snort/dynamicengine/libsf_engine.so
x lib/snort/dynamicengine/libsf_engine.so.0
x lib/snort/dynamicengine/libsf_engine.la
x lib/snort/dynamicengine/libsf_engine.a
x lib/snort/dynamicrules/lib_sfdynamic_example_rule.so
x lib/snort/dynamicrules/lib_sfdynamic_example_rule.so.0
x lib/snort/dynamicrules/lib_sfdynamic_example_rule.la
x lib/snort/dynamicrules/lib_sfdynamic_example_rule.a
x lib/snort/dynamicpreprocessor/lib_sfdynamic_preprocessor_example.a
x lib/snort/dynamicpreprocessor/lib_sfdynamic_preprocessor_example.la
x lib/snort/dynamicpreprocessor/lib_sfdynamic_preprocessor_example.so
x lib/snort/dynamicpreprocessor/lib_sfdynamic_preprocessor_example.so.0
x lib/snort/dynamicpreprocessor/libsf_dcerpc_preproc.a
x lib/snort/dynamicpreprocessor/libsf_dcerpc_preproc.la
x lib/snort/dynamicpreprocessor/libsf_dcerpc_preproc.so
x lib/snort/dynamicpreprocessor/libsf_dcerpc_preproc.so.0
x lib/snort/dynamicpreprocessor/libsf_dns_preproc.a
x lib/snort/dynamicpreprocessor/libsf_dns_preproc.la
x lib/snort/dynamicpreprocessor/libsf_dns_preproc.so
x lib/snort/dynamicpreprocessor/libsf_dns_preproc.so.0
x lib/snort/dynamicpreprocessor/libsf_ftptelnet_preproc.a
x lib/snort/dynamicpreprocessor/libsf_ftptelnet_preproc.la
x lib/snort/dynamicpreprocessor/libsf_ftptelnet_preproc.so
x lib/snort/dynamicpreprocessor/libsf_ftptelnet_preproc.so.0
x lib/snort/dynamicpreprocessor/libsf_smtp_preproc.a
x lib/snort/dynamicpreprocessor/libsf_smtp_preproc.la
x lib/snort/dynamicpreprocessor/libsf_smtp_preproc.so
x lib/snort/dynamicpreprocessor/libsf_smtp_preproc.so.0
x lib/snort/dynamicpreprocessor/libsf_ssh_preproc.a
x lib/snort/dynamicpreprocessor/libsf_ssh_preproc.la
x lib/snort/dynamicpreprocessor/libsf_ssh_preproc.so
x lib/snort/dynamicpreprocessor/libsf_ssh_preproc.so.0
x lib/snort/dynamicpreprocessor/libsf_ssl_preproc.a
x lib/snort/dynamicpreprocessor/libsf_ssl_preproc.la
x lib/snort/dynamicpreprocessor/libsf_ssl_preproc.so
x lib/snort/dynamicpreprocessor/libsf_ssl_preproc.so.0
x share/examples/snort/classification.config-sample <- copied to classification.config
x share/examples/snort/create_db2
x share/examples/snort/create_mssql
x share/examples/snort/create_mysql
x share/examples/snort/create_oracle.sql
x share/examples/snort/create_postgresql
x share/examples/snort/gen-msg.map-sample <- copied to gen-msg.map
x share/examples/snort/reference.config-sample <- copied to reference.config
x share/examples/snort/sid-msg.map-sample <- copied to sid-msg.map
x share/examples/snort/snort.conf-sample <- copied to snort.conf
x share/examples/snort/threshold.conf-sample <- copied to threshold.conf
x share/examples/snort/unicode.map-sample <- copied to unicode.map
x share/doc/snort <- multiple files, omitted here
x etc/rc.d/snort

These are the configuration options for Snort.

===> The following configuration options are available for snort-2.8.2.2_2:
DYNAMIC=on (default) "Enable dynamic plugin support"
FLEXRESP=off (default) "Flexible response to events"
FLEXRESP2=off (default) "Flexible response to events (version 2)"
MYSQL=off (default) "Enable MySQL support"
ODBC=off (default) "Enable ODBC support"
POSTGRESQL=off (default) "Enable PostgreSQL support"
PRELUDE=off (default) "Enable Prelude NIDS integration"
PERPROFILE=off (default) "Enable Performance Profiling"
SNORTSAM=off (default) "Enable output plugin to SnortSam"
===> Use 'make config' to modify these settings

I'm glad dynamic plugin support is enabled, but disappointed to see performance profiling disabled. The --enable-timestats option isn't available via the port at all, apparently.

The FreeBSD port/package can't ship with rules, so you need to download your own rules from Sourcefire, along with any Emerging Threats rules you might want to enable. You then need to edit the snort.conf file to account for your HOME_NET and rule preferences.
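
At a minimum that edit usually looks something like this (the values below are placeholders for your environment):

# /usr/local/etc/snort/snort.conf -- placeholder values
var HOME_NET [192.168.1.0/24]
var EXTERNAL_NET !$HOME_NET
# point RULE_PATH at wherever you unpacked the downloaded rules
var RULE_PATH /usr/local/etc/snort/rules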

The barnyard-sguil-0.2.0_5 package installed the following.

x bin/barnyard
x etc/barnyard.conf-sample <- copied to etc/barnyard.conf by the port
x share/doc/barnyard <- multiple files, omitted here
x etc/rc.d/barnyard

I noticed the barnyard.conf only contained

output sguil

Usually we need something like this:

output sguil: sensor_name sensornamegoeshere

When done, the following packages are installed:

tao# pkg_info
barnyard-sguil-0.2.0_5 An output system for Snort (patched for sguil)
mysql-client-5.0.67_1 Multithreaded SQL database (client)
pcre-7.7_1 Perl Compatible Regular Expressions library
sguil-sensor-0.7.0_2 Sguil is a network security monitoring program
snort-2.8.2.1_1 Lightweight network intrusion detection system
tcl-8.4.19,1 Tool Command Language
tclX-8.4_1 Extended TCL
tcltls-1.6 SSL extensions for TCL; dynamicly loadable

Because I want this test system to host the Sguil server too, I decided to move to that phase of the testing.

Before adding the sguil-server package, I need to install MySQL server 5.0. This is due to the configuration options:

===> The following configuration options are available for sguil-server-0.7.0_2:
MYSQL50=off (default) "Install mysql50 server"
===> Use 'make config' to modify these settings

I assume this is the case because the port maintainer prefers running MySQL on one system and the Sguil server on another.

Therefore, I install MySQL server 5.0 using pkg_add -vr mysql50-server.

Next I stopped MySQL via /usr/local/etc/rc.d/mysql stop. This is critical for the next step in the process.

I installed sguil-server next via pkg_add -vr sguil-server.

The sguil-server-0.7.0_2 package installed the following.

x bin/archive_sguildb.tcl
x bin/incident_report.tcl
x bin/sguild
x etc/sguil-server/autocat.conf-sample
x etc/sguil-server/sguild.access-sample
x etc/sguil-server/sguild.conf-sample
x etc/sguil-server/sguild.email-sample
x etc/sguil-server/sguild.queries-sample
x etc/sguil-server/sguild.reports-sample
x etc/sguil-server/sguild.users-sample
x lib/sguil-server/SguildAccess.tcl
x lib/sguil-server/SguildAutoCat.tcl
x lib/sguil-server/SguildClientCmdRcvd.tcl
x lib/sguil-server/SguildConnect.tcl
x lib/sguil-server/SguildCreateDB.tcl
x lib/sguil-server/SguildEmailEvent.tcl
x lib/sguil-server/SguildEvent.tcl
x lib/sguil-server/SguildGenericDB.tcl
x lib/sguil-server/SguildGenericEvent.tcl
x lib/sguil-server/SguildHealthChecks.tcl
x lib/sguil-server/SguildLoaderd.tcl
x lib/sguil-server/SguildMysqlMerge.tcl
x lib/sguil-server/SguildPadsLib.tcl
x lib/sguil-server/SguildQueryd.tcl
x lib/sguil-server/SguildReportBuilder.tcl
x lib/sguil-server/SguildSendComms.tcl
x lib/sguil-server/SguildSensorAgentComms.tcl
x lib/sguil-server/SguildSensorCmdRcvd.tcl
x lib/sguil-server/SguildTranscript.tcl
x lib/sguil-server/SguildUtils.tcl
x share/sguil-server/create_ruledb.sql
x share/sguil-server/create_sguildb.sql
x share/sguil-server/migrate_event.tcl
x share/sguil-server/migrate_sancp.tcl
x share/sguil-server/sancp_cleanup.tcl
x share/sguil-server/update_0.7.tcl
x share/sguil-server/update_sguildb_v5-v6.sql
x share/sguil-server/update_sguildb_v6-v7.sql
x share/sguil-server/update_sguildb_v7-v8.sql
x share/sguil-server/update_sguildb_v8-v9.sql
x share/sguil-server/update_sguildb_v9-v10.sql
x share/sguil-server/update_sguildb_v10-v11.sql
x share/sguil-server/update_sguildb_v11-v12.sql
x share/doc/sguil-server/CHANGES
x share/doc/sguil-server/FAQ
x share/doc/sguil-server/INSTALL
x share/doc/sguil-server/INSTALL.openbsd
x share/doc/sguil-server/LICENSE.QPL
x share/doc/sguil-server/OPENSSL.README
x share/doc/sguil-server/TODO
x share/doc/sguil-server/UPGRADE
x share/doc/sguil-server/USAGE
x share/doc/sguil-server/sguildb.dia
x etc/rc.d/sguild

What came next was very interesting. The port maintainer created a script to help set up the server. I'll show relevant excerpts.

Running pre-install for sguil-server-0.7.0_2..
This sguild install script creates a "turnkey" install
of sguild, including configuing the database and conf files
and user accounts so that sguild can be started immediately.

You may have already done all this (especially if this is an upgrade)
and may not be interested in iterating through cert creation and
everything else that the script does.

This portion of the script creates user and group accounts named "sguil".
Would you like to opt out of this portion of the install script
n
==> Pre-installation configuration of sguil-server-0.7.0_2
User 'sguil' create successfully.
sguil:*:1002:1002::0:0:User &:/home/sguil:/usr/sbin/nologin
...edited...
Running post-install for sguil-server-0.7.0_2..
This sguild install script creates a "turnkey" install
of sguild, including configuing the database and conf files
and user accounts so that sguild can be started immediately.

You may have already done all this (especially if this is an upgrade)
and may not be interested in iterating through cert creation and
everything else that the script does.

Would you like to opt out of the entire install script
and configure sguild manually yourself?
n
There are a few things that need to be done to complete the install.
First, you need to create certs so that the ssl connections between server and
sensors will work, you need to create the database, the account to access it and
the tables for the database and you need to create the directories where all the
data will be stored. (You will also need to edit the conf files for your setup.)


If you haven't already done this, I can do it for you now.
Would you like to create certs now? (y for yes, n for no)
y
Creating /usr/local/etc/sguil-server/certs ....
First we need to create a password-protected CA cert.

(The Common Name should be the FQHN of your squil server.)
Generating a 1024 bit RSA private key
.....++++++
.......................................++++++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:VA
Locality Name (eg, city) []:M
Organization Name (eg, company) [Internet Widgits Pty Ltd]:T
Organizational Unit Name (eg, section) []:O
Common Name (eg, YOUR name) []:R
Email Address []:o

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:sguil
An optional company name []:
Now we need to create the actual certificate for your server.
Signature ok
subject=/C=US/ST=VA/L=M/O=T/OU=O/CN=R/emailAddress=o
Getting CA Private Key
Enter pass phrase for privkey.pem:
Finally, we need to move the certs to the '/usr/local/etc/sguil-server/certs}' directory
and clean up the port directory as well.
mv: rename /a/ports/security/sguil-server/sguild.key to /usr/local/etc/sguil-server/certs/sguild.key:
No such file or directory
mv: rename /a/ports/security/sguil-server/sguild.pem to /usr/local/etc/sguil-server/certs/sguild.pem:
No such file or directory
rm: /a/ports/security/sguil-server/CA.pem: No such file or directory
rm: /a/ports/security/sguil-server/privkey.pem: No such file or directory
rm: /a/ports/security/sguil-server/sguild.req: No such file or directory
rm: /a/ports/security/sguil-server/file.sr1: No such file or directory

Those errors happen because the script was written with the assumption that it would be run from a ports build, not a package installation. I emailed the port maintainer to see if the problem can be fixed.

Is the installation of mysql brand new and unaltered?
By default, when mysql is installed, it creates five accounts.
None of those accounts are protected by passwords. That needs to be corrected.
The five accounts are:
root@localhost
root@127.0.0.1
root@tao.taosecurity.com
@localhost
@tao.taosecurity.com
I can remove all of the accounts except root@localhost (highly recommended)
and I can set the password for the root@localhost account. (If you get an error
don't worry about it. The account may not have been created to begin with.
Would you like me to do that now?
y
Enabling mysql in /etc/rc.conf and starting the server.....
It appears that mysql is already enabled!

The mysql pid is ....
Starting mysql.
Deleting users from mysql......
All done deleting.......
What would you like root@localhost's password to be?
root
Would you like to bind mysql to localhost so it only listens on that address?

y
The mysql pid is 1694.....
Stopping mysql.
Waiting for PIDS: 1694.
Starting mysql.
Would you like to create the database to store all nsm data?

y
NOTE: If you're upgrading, you do NOT want to do this! You want to upgrade.
./+INSTALL: cannot open /work/a/ports/security/sguil-server/work/sguil-0.7.0/server/sql_scripts/create_sguildb.sql:
No such file or directory

This error is similar to the previous ones: the script again looks for files in the ports work directory. I reported it to the port maintainer as well.

Would you like to create a user "sguild@localhost" for database access?

y
Please enter the password that you want to use for the sguild account.

sguil
Creating account for sguild with access to sguildb.....
Would you like to create the data directory and all its subdirectories?

y
What do you want the name of the main directory to be?
(Be sure to include the full path to the directory - e.g. /var/nsm)
/var/nsm
The main directory will be named '/var/nsm'.
Creating /var/nsm ....
Creating /var/nsm/archives ....
Creating /var/nsm/rules ....
Creating /var/nsm/load ....
Would you like to enable sguild in /etc/rc.conf?

y
Writing to /etc/rc.conf....

If the sguild.conf file does not exist, I will create and edit it now.

Preparing to edit the sguild.conf file......
You still need to review all the conf files and configure sguil
per your desired setup before starting sguild. Refer to the port docs in
/usr/local/share/doc/sguil-server before proceeding.

Right now, all the conf files except sguild.conf are set to the defaults.
...edited...

That was the last of the required user input. The script's final step advises the user of other required changes.

***********************************
* !!!!!!!!!!! WARNING !!!!!!!!!!! *
***********************************

PLEASE NOTE: If you are upgrading from a previous version,
read the UPGRADE doc (in /usr/local/share/doc/sguil-server) before proceeding!!!
Some noteworthy changes in version 0.7.0:
SSL is now required for server, sensor and client.
The sguild.conf and sguild.email files have changed.
You MUST run the upgrade_0.7.tcl script to clean up and
prepare the database before running the new version. BE SURE
TO BACK UP YOUR DATABASE BEFORE PROCEEDING!!!

If you had existing config files in /usr/local/etc/sguil-server
they were not overwritten. If this is a first time install, you
must copy the sample files to the corresponding conf file and
edit the various config files for your site. See the INSTALL
doc in /usr/local/share/doc/sguil-server for details. If this is an upgrade, replace
your existing conf file with the new one and edit accordingly.

The sql scripts for creating database tables were placed in
the /usr/local/share/sguil-server/ directory. PLEASE
NOTE: LOG_DIR is not set by this install. You MUST create the
correct LOG_DIRS and put a copy of the snort rules you use in
LOG_DIR/rules.

The sguild, archive_sguildb.tcl and incident_report.tcl scripts
were placed in /usr/local/bin/. The incident_report.tcl
script is from the contrib section. There is no documentation
and the script's variables must be edited before it is used.

A startup script, named sguild.sh was installed in
/usr/local/etc/rc.d/. To enable it, edit /etc/rc.conf
per the instructions in the script.

NOTE: Sguild now runs under the sguil user account not root!
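
As an aside, the upgrade path implied by the update_sguildb_*.sql files shown earlier is to apply them in sequence. A sketch of one such step, assuming the sguildb database created by the install script (and, as the warning says, back up first):

tao# mysql -u root -p -D sguildb < /usr/local/share/sguil-server/update_sguildb_v11-v12.sql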

At the end of the process the system had these packages installed.

tao# pkg_info
barnyard-sguil-0.2.0_5 An output system for Snort (patched for sguil)
mysql-client-5.0.67_1 Multithreaded SQL database (client)
mysql-server-5.0.67_1 Multithreaded SQL database (server)
mysqltcl-3.05 TCL module for accessing MySQL databases based on msqltcl
p0f-2.0.8 Passive OS fingerprinting tool
pcre-7.7_1 Perl Compatible Regular Expressions library
sguil-sensor-0.7.0_2 Sguil is a network security monitoring program
sguil-server-0.7.0_2 Sguil is a network security monitoring program
snort-2.8.2.1_1 Lightweight network intrusion detection system
tcl-8.4.19,1 Tool Command Language
tclX-8.4_1 Extended TCL
tcllib-1.10_1 A collection of utility modules for Tcl
tcltls-1.6 SSL extensions for TCL; dynamicly loadable
tcpflow-0.21_1 A tool for capturing data transmitted as part of TCP connec

If I wanted to go from here to actually run the Sguil server, I would have to manually create the database and certificates. Once the script is fixed I shouldn't have to do that.
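
The database half of that manual work is straightforward, since the package drops the schema into /usr/local/share/sguil-server. Something like this sketch should do it, using the sguildb database name the install script expects:

tao# mysql -u root -p -e "CREATE DATABASE sguildb"
tao# mysql -u root -p -D sguildb < /usr/local/share/sguil-server/create_sguildb.sql

For the certificate half, the OPENSSL.README installed in /usr/local/share/doc/sguil-server should cover generating the sguild.key and sguild.pem files that the failed mv commands above were trying to put in place.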

The major configuration issue that remains is making sure data is written to sensible locations. This primarily means storing pcap data on a partition large enough to accommodate it, and locating the database on a partition that can handle growing tables.
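
A quick sanity check before going live, for example:

tao# df -h /var/nsm /var/db/mysql

If /var turns out to be too small, one option is to relocate the data directory to a larger partition and symlink it back, e.g. mv /var/nsm /nsm && ln -s /nsm /var/nsm. (The path /var/db/mysql is the FreeBSD port's default MySQL data directory.)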

I think it should be clear at this point that the easiest way to try Sguil is to use NSMNow. I recommend that only for demo installations, although you can tweak the installation to put components in the locations you prefer.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Monday, 02 February 2009

Data Leakage Protection Thoughts

"Data Leakage Protection" (DLP) appears to be the hot product everybody wants. I was asked to add to the SearchSecurity section I wrote two years ago, but I'm not really interested. I mentioned "extrusion" over five years ago in What Is Extrusion Detection?

This InformationWeek story had an interesting take:

What constitutes DLP? Any piece of backup software, disk encryption software, firewall, network access control appliance, virus scanner, security event and incident management appliance, network behavior analysis appliance--you name it--can be loosely defined as a product that facilitates DLP.

For the purposes of this Rolling Review, we will define enterprise DLP offerings as those that take a holistic, multitiered approach to stopping data loss, including the ability to apply policies and quarantine information as it rests on a PC (data in use), as it rests on network file systems (data at rest), and as it traverses the LAN or leaves the corporate boundary via some communication protocol (data in motion).

Locking down access to USB ports or preventing files from being printed or screen-captured isn't enough anymore; organizations require true content awareness across all channels of communication and across all systems.


Wow. Cue a giant product rebranding effort. "Yes, we do DLP!!"

I tried to capture my concerns in the following two figures.

I usually approach security issues from the point of view of a security analyst, meaning someone who has operational responsibilities. I don't just deploy security infrastructure. I don't just keep the security infrastructure functioning. I am usually the person who has to do something with the output of the security infrastructure.

In this respect I can see the world in two states: 1) block/filter/deny or 2) inspect and log.
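
To make the two states concrete, consider this pf-flavored caricature, where $protected_host is a placeholder macro and the rules are purely illustrative:

block drop quick proto tcp from any to $protected_host port 445
pass log all

The first rule produces nothing an analyst must act on; the second turns every permitted connection into a logged event that someone has to review.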

As a security analyst, B/F/D is generally fairly simple. Once a blocking decision is made, I don't care that much. Sure, you might want to know why someone tried something that resulted in a B/F/D condition, but since the target is unaffected I don't really care.

Consider this diagram.


As a security analyst, inspect and log is much more complicated. Nothing is blocked, but I am told that a suspicious or malicious activity was permitted. Now I really need to know when someone successfully completes an act that results in a permitted yet inspected and logged condition, because the target could be negatively affected.

Consider this diagram.


Some might naively assume that the solution to this problem is to forget inspection and logging and simply block/filter/deny everything. Good luck trying that in an operational setting! How often do we hear about so-called "IPS" running in passive mode? How many fancy "DLP" products are running now in alert-only mode?

At the risk of discussing too many topics at once, let me also contribute this: is it just me, or are we security people continuously giving ground to the adversary? In other words:

  1. Let's stop them at our firewall.

  2. Well, we have to let some traffic through. Our IPS will catch the bad guy.

  3. Shoot, that didn't work. Ok, when the bad guy tries to steal our data the DLP system will stop him.

  4. Darn, DLP is for "stopping stupid." At least when the bad guy gets the data back to his system our Digital Rights Management (DRM) will keep him from reading it. (Right.)


I guess my thoughts on DLP can be distilled to the following.

  1. DLP is "workable" (albeit of dubious value) if you run it solely in a B/F/D mode.

  2. As soon as you put DLP in inspect and log mode, you need to hire an army of analysts to make sense of the output.

  3. The amount of asset understanding needed to run DLP in either mode is likely to be incredibly large, unless you scope it so narrowly that I would question why you bought a new product to enforce such a policy.

  4. DLP is not going to stop anyone who is not stupid.


Is anyone else hearing demand for DLP, and what are you saying?


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.