Saturday, 28 February 2009

Using Responsible Person Records for Asset Management

Today while spending some time at the bookstore with my family, I decided to peruse a copy of Craig Hunt's TCP/IP Network Administration, which covers the BIND DNS software. I've been thinking about my post Asset Management Assistance via Custom DNS Records, and in the book I noticed the following:



"Responsible Person" record? That sounds perfect. I found RFC 1183 from 1990 introduced these.

I decided to try setting up these records on a VM running FreeBSD 7.1 and BIND 9. The VM had IP 172.16.99.130 with gateway 172.16.99.2. I followed the example in Building a Server with FreeBSD 7.

First I made changes to named.conf as shown in this diff:

# diff /var/named/etc/namedb/named.conf /var/named/etc/namedb/named.conf.orig
132c132
< // zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
---
> zone "16.172.in-addr.arpa" { type master; file "master/empty.db"; };
274,290d273
< zone "example.com" {
< type master;
< file "master/example.com";
< allow-transfer { localhost; };
< allow-update { key rndc-key; };
< };
<
< zone "99.16.172.in-addr.arpa" {
< type master;
< file "master/example.com.rev";
< allow-transfer { localhost; };
< allow-update { key rndc-key; };
< };
< key "rndc-key" {
< algorithm hmac-md5;
< secret "4+IlE0Z/oHoHok9EnVwkUw==";
< };

To generate the last section I ran the following:

# rndc-confgen -a
wrote key file "/etc/namedb/rndc.key"
# cat rndc.key >> named.conf

Next I created /var/named/etc/namedb/master/example.com:

# cat example.com
$TTL 3600

example.com. IN SOA host.example.com. root.example.com. (

1 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
86400 ) ; Minimum TTL

;DNS Servers
example.com. IN NS host.example.com.

;Machine Names
host.example.com. IN A 172.16.99.130
gateway.example.com. IN A 172.16.99.2

;Aliases
www IN CNAME host.example.com.

;MX Record
example.com. IN MX 10 host.example.com.

;RP Record
host.example.com. IN RP taosecurity.email.com. sec-con.example.com.
gateway.example.com. IN RP networkteam.email.com. net-con.example.com.

;TXT Record
sec-con.example.com. IN TXT "Richard Bejtlich"
sec-con.example.com. IN TXT "Employee ID 1234567890"
sec-con.example.com. IN TXT "Northern VA office"
net-con.example.com. IN TXT "Network Admin"
net-con.example.com. IN TXT "Group ID 0987"
net-con.example.com. IN TXT "DC office"

Then I created /var/named/etc/namedb/master/example.com.rev:
# cat example.com.rev 
$TTL 3600

99.16.172.in-addr.arpa. IN SOA host.example.com. root.example.com. (

1 ; Serial
10800 ; Refresh
3600 ; Retry
604800 ; Expire
86400 ) ; Minimum TTL

;DNS Servers
99.16.172.in-addr.arpa. IN NS host.example.com.

;Machine IPs
1 IN RP networkteam.email.com. net-con.example.com.
2 IN PTR gateway.example.com.
130 IN PTR host.example.com.
130 IN PTR www.example.com.

;RP Record
2 IN RP networkteam.email.com. net-con.example.com.
13 IN RP taosecurity.email.com. sec-con.example.com.

If you caught my omission, I'll point it out near the end of the post.
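Before restarting named, both files can be sanity-checked with named-checkzone from the BIND 9 tools; something like the following (using the paths above) will catch most syntax mistakes:

# named-checkzone example.com /var/named/etc/namedb/master/example.com
# named-checkzone 99.16.172.in-addr.arpa /var/named/etc/namedb/master/example.com.rev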

Finally I edited /etc/resolv.conf so it pointed only to 127.0.0.1, and restarted named:

# /etc/rc.d/named restart
Stopping named.
Starting named.
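For reference, the resolv.conf edit amounts to a single nameserver line (a sketch; add a search line if you also want short names to resolve):

# cat /etc/resolv.conf
nameserver 127.0.0.1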

Now I was able to query the name server.

# dig @127.0.0.1 version.bind chaos txt | grep version.bind
; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 version.bind chaos txt
;version.bind. CH TXT
version.bind. 0 CH TXT "9.4.2-P2"
version.bind. 0 CH NS version.bind.

Let's do zone transfers for the forward and reverse zones.

# dig @127.0.0.1 axfr example.com.

; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 axfr example.com.
; (1 server found)
;; global options: printcmd
example.com. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
example.com. 3600 IN MX 10 host.example.com.
example.com. 3600 IN NS host.example.com.
gateway.example.com. 3600 IN RP networkteam.email.com. net-con.example.com.
gateway.example.com. 3600 IN A 172.16.99.2
host.example.com. 3600 IN RP taosecurity.email.com. sec-con.example.com.
host.example.com. 3600 IN A 172.16.99.130
net-con.example.com. 3600 IN TXT "Network Admin"
net-con.example.com. 3600 IN TXT "Group ID 0987"
net-con.example.com. 3600 IN TXT "DC office"
sec-con.example.com. 3600 IN TXT "Richard Bejtlich"
sec-con.example.com. 3600 IN TXT "Employee ID 1234567890"
sec-con.example.com. 3600 IN TXT "Northern VA office"
www.example.com. 3600 IN CNAME host.example.com.
example.com. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
;; Query time: 41 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 1 04:22:57 2009
;; XFR size: 15 records (messages 1, bytes 480)

# dig @127.0.0.1 axfr 99.16.172.in-addr.arpa.

; <<>> DiG 9.4.2-P2 <<>> @127.0.0.1 axfr 99.16.172.in-addr.arpa.
; (1 server found)
;; global options: printcmd
99.16.172.in-addr.arpa. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
99.16.172.in-addr.arpa. 3600 IN NS host.example.com.
1.99.16.172.in-addr.arpa. 3600 IN RP networkteam.email.com. net-con.example.com.
13.99.16.172.in-addr.arpa. 3600 IN RP taosecurity.email.com. sec-con.example.com.
130.99.16.172.in-addr.arpa. 3600 IN PTR host.example.com.
130.99.16.172.in-addr.arpa. 3600 IN PTR www.example.com.
2.99.16.172.in-addr.arpa. 3600 IN RP networkteam.email.com. net-con.example.com.
2.99.16.172.in-addr.arpa. 3600 IN PTR gateway.example.com.
99.16.172.in-addr.arpa. 3600 IN SOA host.example.com. root.example.com. 1 10800 3600 604800 86400
;; Query time: 27 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sun Mar 1 04:26:36 2009
;; XFR size: 9 records (messages 1, bytes 380)

Now let's pretend we have a security incident involving 172.16.99.2 and we want to know who owns it. Let's query for RP records.

VirtualBSD# host -t rp 172.16.99.2
2.99.16.172.in-addr.arpa domain name pointer gateway.example.com.

OK, I see that for 172.16.99.2 the host command returns a PTR record rather than an RP record. I can look for an RP record for that hostname instead.

# host -t rp gateway.example.com.
gateway.example.com has RP record networkteam.email.com. net-con.example.com.

That worked. The Responsible Person's email address is networkteam@email.com (in the RP mailbox field the first dot stands in for the @), and the second field names where to find a TXT record. I query for that next.

# host -t txt net-con.example.com.
net-con.example.com descriptive text "Network Admin"
net-con.example.com descriptive text "Group ID 0987"
net-con.example.com descriptive text "DC office"

Great, I have some additional details on the network team.

What if I try 172.16.99.130?

# host -t rp 172.16.99.130
130.99.16.172.in-addr.arpa domain name pointer www.example.com.
130.99.16.172.in-addr.arpa domain name pointer host.example.com.

# host -t RP www.example.com.
www.example.com is an alias for host.example.com.
host.example.com has RP record taosecurity.email.com. sec-con.example.com.

# host -t TXT sec-con.example.com.
sec-con.example.com descriptive text "Richard Bejtlich"
sec-con.example.com descriptive text "Employee ID 1234567890"
sec-con.example.com descriptive text "Northern VA office"

How about 172.16.99.1?

# host -t rp 172.16.99.1
1.99.16.172.in-addr.arpa has no PTR record

That was the omission in the example.com.rev file I posted earlier. Or is it really an error? Maybe not:

# host -t rp 1.99.16.172.in-addr.arpa
1.99.16.172.in-addr.arpa has RP record networkteam.email.com. net-con.example.com.

If we query for the IP in in-addr.arpa format, we can find an RP record. So it's possible to have IP addresses without hostnames in your DNS and still have RP records. You just need to know how to ask for them.

I think this is really promising. At the very least, a DNS admin responsible for hosts in a certain subnet could add RP records, like the one for 172.16.99.1, for every host. This would probably work best for servers, but it should be possible to extend it to hosts with dynamic DNS assignments.
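To automate this kind of triage, a short script could chain the lookups: query the in-addr.arpa name directly for RP (so hosts without PTR records still yield an owner), turn the mailbox field into an email address, and then pull the contact TXT records. Here is a rough /bin/sh sketch using dig against the local server; it assumes the zones above, an unescaped mailbox local part, and no error handling:

#!/bin/sh
# rp-lookup.sh -- rough sketch: find the Responsible Person for an IPv4 address
# by querying the local named (127.0.0.1) built in the example above.
IP="$1"

# Build the in-addr.arpa name by reversing the octets.
REV=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')

# Query that name directly for RP; this works even when no PTR record exists.
RP=$(dig @127.0.0.1 +short "$REV" RP)
MBOX=$(echo "$RP" | awk '{print $1}')
TXTNAME=$(echo "$RP" | awk '{print $2}')

# The mailbox field encodes an email address: the first label is the local part.
EMAIL=$(echo "$MBOX" | sed 's/\./@/; s/\.$//')
echo "Responsible Person: $EMAIL"

# Free-form contact details live in TXT records at the second name.
dig @127.0.0.1 +short "$TXTNAME" TXT

Running it against 172.16.99.2 or 172.16.99.1 should return the same contacts and TXT strings shown above.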

Incidentally, RP records do not seem very popular on the Internet. If you find any in the wild, please let me know.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Sample Lab from TCP/IP Weapons School 2.0 Posted

Several of you have asked me to explain the difference between TCP/IP Weapons School (TWS), which I first taught at USENIX Security 2006, and TCP/IP Weapons School 2.0 (TWS2), which I first taught at Black Hat DC 2009 Training last week. This post will explain the differences, with an added bonus.


  1. I have retired TWS, the class I taught from 2006-2008. I am only teaching TWS2 for the foreseeable future.

  2. TWS2 is a brand-new class. I did not reuse any material from TWS, my older Network Security Operations class, or anything else.

  3. TWS2 offers zero slides. Students receive three handouts and a DVD. The handouts include an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide. The DVD contains a virtual machine with all the tools and evidence needed to complete the labs, along with the network and memory evidence as stand-alone files.

  4. TWS2 is heavily lab-focused. I've been teaching professionally since 2002, and I've recognized that students prefer doing to staring and maybe listening! Everyone who leaves TWS2 has had hands-on experience investigating computer incidents in an educational environment.

  5. TWS2 is designed for beginner-to-intermediate attendees. Some advanced people will like the material too, although I can't promise to please everyone. I built the class so that the newest people could learn by trying the labs, but follow the teacher's guide (which they receive) if they need extra assistance. More advanced students are free to complete the labs any way they see fit, preferably never looking at the teacher's guide until the labs are done. This system worked really well in DC last week.

  6. TWS2 uses multiple forms of evidence. Solving the labs relies heavily on the network traffic provided with each case, but some questions can only be answered by reviewing Snort alerts, or session data, or system logs provided via Splunk, or even memory captures analyzed with tools like Volatility or whatever else the student brings to the case.

  7. TWS2 comes home with the student and teaches an investigative mindset. Unlike classes that dump a pile of slides on you, TWS2 essentially delivers a book in courseware form. I use (*gasp*) whole sentences, even paragraphs, to describe how to solve labs. By working the labs the student learns how to be an investigator, rather than just watching or listening to investigative theories. I am using the same material to teach analysts on my team how to detect and respond to intrusions.


To provide a better sense of the class, I've posted materials from one of the labs at http://www.taosecurity.com/tws2_blog_sample_28feb09a.zip. The .zip contains the student workbook for the case, the teacher's guide for the case, and the individual network trace file for the case. There is no way for me to include the 4 GB compressed VM that students receive, but by reviewing this material you'll get some idea of the nature of this class.

My next session of TCP/IP Weapons School 2.0 will take place in Amsterdam on 14-15 April 2009 at Black Hat Europe 2009. Seats are already filling.

The last sessions of the year will take place in Las Vegas on 25-26 and 27-28 July 2009 at Black Hat USA 2009. Registration for training at that location will open this week, I believe.

I am not teaching the class publicly anywhere else in 2009. I do not offer private classes to anyone, except internally within GE (and those are closed to the public).

If you have any questions on these classes, please post them here. Thank you.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Friday, 27 February 2009

Inputs vs Outputs, or Why Controls Are Not Sufficient

I have a feeling my post Consensus Audit Guidelines Are Still Controls is not going to be popular in certain circles. While tidying the house this evening I came across my 2007 edition of the Economist's Pocket World in Figures. Flipping through the pages I found many examples of inputs (think "control-compliant") vs outputs (think "field-assessed").

I'd like to share some of them with you in an attempt to better communicate the ideas in my last post.

  • Business creativity and research


    • Input(s): Total expenditures on research and development, % of GDP

    • Output(s): Number of patents granted (per X people)


  • Education


    • Input(s): Education spending, % of GDP; school enrolment

    • Output(s): Literacy rate


  • Life expectancy, health, and related categories


    • Input(s): Health spending, % of GDP; population per doctor; number of hospital beds per citizen; (also add in air quality, drinking and smoking rates, etc.)

    • Output(s): Death rates; infant mortality; and so on...


  • Crime and punishment


    • Input(s): Total police per X population

    • Output(s): Crime rate



Is this making sense?


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Consensus Audit Guidelines Are Still Controls

Blog readers know that I think FISMA Is a Joke, FISMA Is a Jobs Program, and if you fought FISMA Dogfights you would always die in a burning pile of aerial debris.

Now we have the Consensus Audit Guidelines (CAG) published by SANS. You can ask two questions: 1) is this what we need? and 2) is it at least a step in the right direction?

Answering the first question is easy. You can look at the graphic I posted to see that CAG is largely another set of controls. In other words, this is more control-compliant "security," not field-assessed security. Wait, you might ask, doesn't the CAG say this?

What makes this document effective is that it reflects knowledge of actual attacks and defines controls that would have stopped those attacks from being successful. To construct the document, we have called upon the people who have first-hand knowledge about how the attacks are being carried out.

That excerpt means that CAG defines defensive activities that various security practitioners believe to be effective. I am not doubting that these practitioners are smart. I am not doubting their skills. What I am trying to say is that implementing the controls in CAG does not tell you the score of the game. CAG is all about inputs. After implementing CAG you still do not know any outputs. In other words, you apply controls (an "X"), but what is the outcome (the "Y")? The controls may or may not be wonderful, but if you are control-compliant you do not have the information produced by field-assessed security.

Does anyone really think we do not have controls already? The CAG itself shows how it maps against NIST SP 800-53 Rev 3 Controls. Five are shown below as an example.



For example, looking at CAG, how many of these strike you as something you didn't already know about?

Critical Controls Subject to Automated Measurement and Validation:

  1. Inventory of Authorized and Unauthorized Hardware

  2. Inventory of Authorized and Unauthorized Software

  3. Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers

  4. Secure Configurations of Network Devices Such as Firewalls and Routers

  5. Boundary Defense

  6. Maintenance and Analysis of Complete Security Audit Logs

  7. Application Software Security

  8. Controlled Use of Administrative Privileges

  9. Controlled Access Based On Need to Know

  10. Continuous Vulnerability Testing and Remediation

  11. Dormant Account Monitoring and Control

  12. Anti-Malware Defenses

  13. Limitation and Control of Ports, Protocols and Services

  14. Wireless Device Control

  15. Data Leakage Protection


Additional Critical Controls (not directly supported by automated measurement and validation):

  1. Secure Network Engineering

  2. Red Team Exercises

  3. Incident Response Capability

  4. Data Recovery Capability

  5. Security Skills Assessment and Training to Fill Gaps



Don't get me wrong. If you are not implementing these controls already, you should do so. That will still not tell you the score of the game. If you want to see exactly what I proposed, I differentiated between control-compliance "security" and field-assessed security in my post Controls Are Not the Solution to Our Problem.

So, to answer my second question, CAG is a step in the right direction away from FISMA. It doesn't change the game, especially if you are already implementing NIST guidance.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Wednesday, 25 February 2009

Asset Management Assistance via Custom DNS Records

In my post Black Hat DC 2009 Wrap-Up, Day 2 I mentioned enjoying Dan Kaminsky's talk. His thoughts on the scalability of DNS made an impression on me. I thought about the way the Team Cymru Malware Hash Registry returns custom DNS responses for malware researchers, for example. In this post I am interested in knowing if any blog readers have encountered problems similar to the ones I will describe next, and if so, whether you did or could use DNS to help mitigate them.

When conducting security operations to detect and respond to incidents, my team follows the CAER approach. Escalation is always an issue, because it requires identifying a responsible party. If you operate a defensible network it will be inventoried and claimed, but getting to that point is difficult.

The problem is this: you have an IP address, but how do you determine the owner? Ideally you have access to a massive internal asset database, but the problems of maintaining such a system can be daunting. The more sites, departments, businesses, etc. in play, the more difficult it is to keep necessary information in a single database. Even a federated system runs into problems, since there must be a way to share information, submit queries, keep data current, and so on.

Dan made a key point during his talk: one of the reasons DNS scales so well is that edge organizations maintain their own records, without having to constantly notify the core. Also, anyone can query the system, and get results from the (presumably) right source.

With this in mind, would it make sense to internally deploy custom DNS records that identify asset owners?

In other words:

  1. Mandate by policy that all company assets must be registered in the internal company DNS.

  2. Add extensions of some type that provide information like the following, at a minimum (see the sketch below):


    • Asset owner name and/or employee number

    • Owning business unit

    • Date record last updated


  3. Periodically, statistically survey IP addresses observed via network monitoring to determine if their custom DNS records exist and validate that they are accurate


These points assume that there is already a way to associate an employee name or number with a contact method such as email address and/or phone number, as would be the case with a Global Address List.
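As a rough illustration of point 2, the records for a single asset could reuse the standard RP and TXT types rather than inventing new ones. All names and values below are made up:

laptop1234.corp.example.        IN  A    10.1.2.3
laptop1234.corp.example.        IN  RP   jsmith.corp.example. owner-laptop1234.corp.example.
owner-laptop1234.corp.example.  IN  TXT  "Owner: J. Smith, employee 123456"
owner-laptop1234.corp.example.  IN  TXT  "Business unit: Widgets Division"
owner-laptop1234.corp.example.  IN  TXT  "Record updated: 2009-02-25"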

Is anyone doing this? If not, do you have ideas for identifying asset owners when the scale of the problem is measured in the hundreds of thousands?


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Tuesday, 24 February 2009

HD Moore on the Necessity of Disclosure

HD Moore posted a great defense of full disclosure in his article The Best Defense is Information on the latest Adobe vulnerability.

The strongest case for information disclosure is when the benefit of releasing the information outweighs the possible risks. In this case, like many others, the bad guys already won. Exploits are already being used in the wild and the fact that the rest of the world is just now taking notice doesn't mean that these are new vulnerabilities. At this point, the best strategy is to raise awareness, distribute the relevant information, and apply pressure on the vendor to release a patch.

Adobe has scheduled the patch for March 11th. If you believe that Symantec notified them on February 12th, this is almost a full month from news of a live exploit to a vendor response. If the vendor involved was Microsoft, the press would be tearing them apart right now. What part of "your customers are being exploited" do they not understand?



Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.

Buck Surdu and Greg Conti Ask "Is It Time for a Cyberwarfare Branch?"

The latest issue of the Information Assurance Technology Analysis Center's IANewsletter features "Army, Navy, Air Force, and Cyber -- Is It Time for a Cyberwarfare Branch of [the] Military?" by COL John "Buck" Surdu and LTC Gregory Conti. I found these excerpts enlightening.

The Army, Navy, and Air Force all maintain cyberwarfare components, but these organizations exist as ill-fitting appendages that attempt to operate in inhospitable cultures where technical expertise is not recognized, cultivated, or completely understood. The services have developed effective systems to build traditional leadership and management skills. They are quite good at creating the best infantrymen, pilots, ship captains, tank commanders, and artillerymen, but they do little to recognize and develop technical expertise. As a result, the Army, Navy, and Air Force hemorrhage technical talent, leaving the Nation’s military forces and our country under-prepared for both the ongoing cyber cold war and the likelihood of major cyberwarfare in the future...

The skill sets required to wage cyberwar in this complex and ill-defined environment are distinct from waging kinetic war. Both the kinetic and non-kinetic are essential components of modern warfare, but the status quo of integrating small cyberwarfare units directly into the existing components of the armed forces is insufficient...

The cultures of today’s military services are fundamentally incompatible with the culture required to conduct cyberwarfare... The Army, Navy, and Air Force are run by their combat arms officers, ship captains, and pilots, respectively. Understandably, each service selects leaders who excel at conducting land, sea, and air battles and campaigns. A deep understanding and respect for cyberwarfare by these leaders is uncommon.

To understand the culture clash evident in today’s existing militaries, it is useful to examine what these services hold dear -- skills such as marksmanship, physical strength, and the ability to jump out of airplanes and lead combat units under enemy fire. Accolades are heaped upon those who excel in these areas. Unfortunately, these skills are irrelevant in cyberwarfare...

The culture of each service is evident in its uniforms. Consider the awards, decorations, badges, patches, tabs, and other accoutrements authorized for wear by each service. Absent is recognition for technical expertise. Echoes of this ethos are also found in disadvantaged assignments, promotions, school selection, and career progression for those who pursue cyberwarfare expertise, positions, and accomplishments...

Evidence to back these assertions is easy to find. From a recent service academy graduate who desired more than anything to become part of a cyberwarfare unit but was given no other option than to leave the service after his initial commitment, to the placement of a service’s top wireless security expert in an unrelated assignment in the middle of nowhere, to the PhD whose mission was to prepare PowerPoint slides for a flag officer -- tales of skill mismanagement abound...

[W]e are arguing that these cultures inhibit (and in some cases punish) the development of the technical expertise needed for this new warfare domain.... Only by understanding the culture of the technical workforce can a cyberwarfare organization hope to succeed... High-and-tight haircuts, morning physical training runs, rigorously enforced recycling programs, unit bake sales, and second-class citizen status are unlikely to attract and retain the best and brightest people.


I agree with almost all of this article. When I left the Air Force in early 2001, I was the 31st of the last 32 eligible company grade officers in the Air Force Information Warfare Center to separate from the Air Force rather than take a new nontechnical assignment. The only exception was a peer who managed to grab a job at NSA. The other 31 all left to take technical jobs in industry because we didn't want to become protocol officers in Guam or logistics officers in a headquarters unit.

Please read the whole article before commenting, if you choose to do so. I selected only a few points but there are others.


Richard Bejtlich is teaching new classes in Europe in 2009. Register by 1 Mar for the best rates.