Wednesday, June 30, 2010

Digital Forensics Magazine


I just learned of a new resource for digital forensics practitioners -- Digital Forensics Magazine. They just published their third issue. This appears to be a high quality publication with authors like Mark D. Rasch (The Fourth Amendment: Cybersearches, Particularity and Computer Forensics), Solera's Steve Shillingford (It's Not About Prevention), and others. Check it out!

Friday, June 25, 2010

Comments on Sharkfest Presentation Materials

I saw that presentations from Sharkfest 2010 are now posted. This is the third year that CACE Technologies has organized this conference. I've had conflicts each of the last three years, but I think I need to reserve the dates for 2011 when they are available. In this post I wanted to mention a few slides that looked interesting.

Jasper Bongertz presented Wireshark vs the Cloud (.pdf). I reviewed this presentation to see if anyone is doing something novel regarding monitoring Cloud environments. In the slide at right you see his first option is to install a monitoring tool inside a VM. That's standard.
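
His first option amounts to running a sniffer inside the guest itself. Here is a minimal sketch, assuming a Linux guest with tcpdump installed; the interface name, file sizes, and path are placeholders, not anything from the presentation:

# Option 1: capture inside the guest VM itself.
# Rotate through ten 100 MB files so the guest's disk doesn't fill up.
sudo tcpdump -i eth0 -s 0 -C 100 -W 10 -w /var/tmp/guest-capture.pcap

The obvious trade-off is that the evidence lives and dies with the guest: a compromised or deleted VM takes its own capture with it.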

In the next slide you see his second option is to select a link upstream from the VM server and tap that line. That's standard too. I know of some cloud providers who use this strategy and then filter the results. You will likely need some robust equipment, depending on how active the link is.

In the last slide you see that future options include ensuring that the virtual switch in the VM server provides instrumentation options. From my limited understanding this should be the case with expensive solutions like the Cisco Nexus 1000v, but I don't have any personal experience with that. Any comments from blog readers?

I also wanted to mention SPAN Out of the Box (.pdf) by John He of Dualcomm Technology. In his presentation he advocates replacing a tap with a switch used only for port mirroring, as shown in the slide at left. He's mainly trying to compete on price, since his "USB Powered 5-Port Gigabit Desktop Switch with Port-Mirroring & PoE Pass-Through" sells for $139.95 on his Web site. I'll ask Mr. He if I can get a demo switch to see how well it works.

Dealing with Security Instrumentation Failures

I noticed three interesting blog posts that address security instrumentation failures.

First, security software developer Charles Smutz posted Flushing Out Leaky Taps:

How many packets does your tapping infrastructure drop before ever reaching your network monitoring devices? How do you know?

I’ve seen too many environments where tapping problems have caused network monitoring tools to provide incorrect or incomplete results. Often these issues last for months or years without being discovered, if ever...

One thing to keep in mind when worrying about loss due to tapping is that you should probably solve, or at least quantify, any packet loss inside your network monitoring devices before you worry about packet loss in the taps. You need to have strong confidence in the accuracy of your network monitoring devices before you use data from them to debug loss by your taps. Remember, in most network monitoring systems there are multiple places where packet loss is reported...

I’m not going to discuss in detail the many things that can go wrong in getting packets from your network to a network monitoring tool... I will focus largely on the resulting symptoms and how to detect, and to some degree, quantify them. I’m going to focus on two very common cases: low volume packet loss and unidirectional (simplex) visibility.


Read Charles' post to learn ways he deals with these issues.
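
As one hedged illustration of the "quantify loss in your monitoring devices first" advice, you can at least read the drop counters your capture stack already exposes. The interface name below is a placeholder and assumes a Linux-based sensor:

# Driver and kernel statistics for the sniffing interface
ethtool -S eth0 | grep -i drop
ip -s link show eth0

# tcpdump reports its own loss when it exits:
# "N packets captured, N received by filter, N dropped by kernel"
sudo timeout 60 tcpdump -n -i eth0 -w /dev/null

These counters only cover loss at the sensor itself; loss in the tap or SPAN infrastructure will not show up here and requires the kinds of checks Charles describes.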

Next I'd like to point to this post by the West Point Information Technology Operations Center on Misconfiguration Issue of NSA SPAN Port:

Thanks to the input we have already received on the 2009 CDX dataset, we have identified an issue in the way the NSA switch was configured. Specifically, we believe the span port from which our capture node was placed was configured for unidirectional listening. This resulted in our capture node only "hearing" received traffic from the red cell.

Doh. This is a good reminder to test your captures, as Charles recommends!

Finally, Alec Waters discusses weaknesses in SIEMs in his post Si(EM)lent Witness:

[H]ow can we convince someone that the evidence we are presenting is a true and accurate account of a given event, especially in the case where there is little or no evidence from other sources...

[D]idn’t I say that vendors went to great lengths to prevent tampering? They do, but these measures only protect the information on the device already. What if I can contaminate the evidence before it’s under the SIEM’s protection?

The bulk of the information received by an SIEM box comes over UDP, so it’s reasonably easy to spoof a sender’s IP address; this is usually the sole means at the SIEM’s disposal to determine the origin of the message. Also, the messages themselves (syslog, SNMP trap, netflow, etc.) have very little provenance – there’s little or no sender authentication or integrity checking.

Both of these mean it’s comparatively straightforward for an attacker to send, for example, a syslog message that appears to have come from a legitimate server when it’s actually come from somewhere else.

In short, we can’t be certain where the messages came from or that their content is genuine.


Read Alec's post for additional thoughts on the validity of messages sent to SIEMs.
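
To make the provenance problem concrete, here is a hedged illustration; the collector address, hostname, and log content are invented. Nothing in this exchange authenticates the "web01" that appears inside the message:

# Forge an RFC 3164-style syslog message and send it to a collector on UDP/514.
# Priority 38 = facility auth (4) * 8 + severity info (6).
echo '<38>Jun 20 18:45:01 web01 sshd[4242]: Accepted password for root from 203.0.113.50 port 51515 ssh2' | nc -u -w1 siem.example.com 514

Unless the SIEM can tie messages to an authenticated transport, it has little basis for trusting either the claimed sender or the content.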

Thursday, June 24, 2010

CloudShark, Another Packet Repository in the Cloud

I've been interested in online packet tools for several years, dating back to my idea for OpenPacket.org, then continuing with Mu Dynamics' cool site Pcapr.net, which I profiled in Traffic Talk 10.

Yesterday I learned of CloudShark, which looks remarkably similar to Wireshark but appears as a Web application.

I generated the picture at right by downloading a trace showing FTP traffic from pcapr.net, then uploading it to CloudShark. Apparently CloudShark renders the trace by invoking Tshark, then building the other Wireshark-like components separately. You can access the trace at this link. CloudShark says:

While the URLs to your decode session are not publicly shared, we make no claims that your data is not viewable by other CloudShark users. For now, if you want to protect sensitive data in your capture files, don't use CloudShark.

Using Tshark is pretty clever, though it exposes the CloudShark back end to the variety of vulnerabilities that get fixed with every new Wireshark release. This is the same concern I had with OpenPacket.org, which limited that site's effectiveness. Incidentally, I have nothing to do with OpenPacket.org now, although there have been rumors that the site will get some attention at some point.
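
I don't know exactly how CloudShark drives Tshark, but a back end of that general shape could plausibly look like the following; the paths and options are my guesses, not CloudShark's actual implementation:

# Hypothetical server-side rendering of an uploaded capture.
# One-line-per-packet summaries for the packet list pane:
tshark -r /uploads/ftp-session.pcap > /decodes/ftp-session.txt
# Full protocol tree as PDML (XML) for the packet detail pane:
tshark -r /uploads/ftp-session.pcap -T pdml > /decodes/ftp-session.xml

Whatever the exact invocation, the dissection is done by the same code that ships with Wireshark, which is why the vulnerability concern above applies to the back end as well.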

For comparison's sake, I took a screen capture of the same FTP pcap as rendered by Pcapr.net. Personally I think it's a great idea to use a front end that everyone should understand -- i.e., something that looks like Wireshark.

At this point I think CloudShark is more of a novelty and maybe an educational tool. It would be cool if various packet capture repositories joined forces, but I don't see that happening.

Monday, June 21, 2010

All Aboard the NSM Train?

It was with some small amusement that I read the following two press releases recently:

First, from May, NetWitness® and ArcSight Partner to Provide Increased Network Visibility:

NetWitness, the world leader in advanced threat detection and real-time network forensics, announced certification by ArcSight (NASD: ARST) of compliance with its Common Event Format (CEF) standard. ArcSight CEF certification ensures seamless interoperability and support between NetWitness’ industry-leading threat management solution and ArcSight’s security information and event management (SIEM) platform.

Let me parse the market-speak. This is another indication that an ArcSight user can click on an event in the SIM console and access network traffic captured by NetWitness.
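
For readers who have not seen CEF, it is just a pipe-delimited header followed by key=value extensions. The event below is invented purely for illustration and does not come from either product:

CEF:0|NetWitness|Informer|1.6|alert|Suspicious outbound session|7|src=10.1.1.5 dst=203.0.113.9 spt=49200 dpt=443 proto=TCP

Certification essentially means NetWitness emits events in this common shape, so ArcSight can parse them without a custom connector.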

Second, from June, Solera Networks™ and Sourcefire™ Announce Partnership:

Solera Networks, a leading network forensics products and services company today announced its partnership with Sourcefire, Inc. (Nasdaq:FIRE), the creators of SNORT® and a leader in intelligent Cybersecurity solutions. Solera Networks can now integrate its award-winning network forensics technology directly into Sourcefire’s event analysis. The integration enhances Sourcefire’s packet analysis functionality to include full session capture, which provides detailed forensics for any security event. The partnership enables swift incident response to any security event and provides full detail in the interest of understanding “what happened before and after a security event?”

Martin Roesch, founder and CTO of Sourcefire, said: “There is a powerful advantage in being able to see the full content of every attack on your network. Network forensics from Solera Networks complements Sourcefire’s IPS and RNA products by letting you see everything that led up to and followed a successful prevention of an attack.”


This press release is a little clearer. This is an indication that a Sourcefire user can click on an event in the Sourcefire console and access network traffic captured by Solera.

This second development is interesting on a personal level, because it shows that Network Security Monitoring (NSM) has finally been accepted by the developer (Marty Roesch) of what is regarded as the most popular intrusion detection system (Snort).

In other words, after over eight years of evangelizing the need to collect NSM data (at its core, full content, session, statistical, and alert data) in order to detect and respond to intrusions, we see Sourcefire partnering with Solera to pair full content network traffic with Snort alert data. It's almost enough to bring a tear to my eye. "Yo Adrian! I did it!"

Mike Cloppert on Defining APT Campaigns

Please stop what you're doing and read Mike Cloppert's latest post Security Intelligence: Defining APT Campaigns. Besides very clearly and concisely explaining how to think about APT activity, Mike includes some original Tufte-esque figures to demonstrate APT attribution and moving up the kill chain.

Sunday, June 20, 2010

Full Disclosure for Attacker Tools

The idea of finding vulnerabilities in tools used by attackers is not new. It's part of the larger question of aggressive network self defense that I first discussed here in 2005 when reviewing a book of that title. (The topic stretches back to 2002 and earlier, before this blog was born.) If you follow my blog's offense label you'll see other posts, such as More Aggressive Network Self Defense that links to an article describing Joel Eriksson's vulnerability research into Bifrost and other remote access trojans.

What's a little more interesting now is seeing Laurent Oudot releasing 13 security advisories for attacker tools. Laurent writes:

For example, we gave (some of) our 0days against known tools like Sniper Backdoor, Eleonore Exploit Pack, Liberty Exploit Pack, Lucky Exploit Pack, Neon Exploit Pack, Yes Exploit Pack...

If you're not familiar with these sorts of tools, see an example described by Brian Krebs at A Peek Inside the ‘Eleonore’ Browser Exploit Kit.

Why release these advisories?

It's time to have strike-back capabilities for real, and to have alternative and innovative solutions against those security issues.

I agree with the concept, but not necessarily with releasing "advisories" for attacker tools. Laurent claims these are "0days," which would imply the developers of these attacker tools did not know about the vulnerabilities. By publishing advisories, Laurent has now told those developers what to fix. Assuming "customers" heed the advisories and update their software, this process has denied security researchers and others who conduct counter-intruder operations access to attacker sites. This is tactically counterproductive from a white hat point of view.

On the other hand, developers of these attacker tools might already know about the vulnerabilities, and might have already patched them. In this case, publishing advisories is more about creating some publicity for Laurent's new company and for his talk last week. (Did anyone see it?)

I like the idea of taking the fight to the enemy. Security researchers are already penetrating attacker systems to infiltrate botnet command and control servers and do other counter-intruder operations. These activities increase the black hat cost to conduct intrusions, and the more resources the attackers have to divert to defending their own infrastructure, the fewer resources they can direct at compromising victims.

However, disclosing details of vulnerabilities in attacker tools is not likely to work in the white hats' favor. White hats are bound by restrictions like laws and rules that black hats routinely break. Announcement of a vulnerability in the Eleonore exploit kit is not going to unleash a wave of activity against black hats the way an announcement of a vulnerability in Internet Explorer does against its users. The few researchers and others wearing white hats will probably learn little from a public announcement, thanks to their own independent research, while mass-targeting attackers (who historically are not great developers themselves) will disproportionately benefit from the disclosure.

What do you think? Should white hat researchers publish security advisories for black hat tools?

Saturday, June 19, 2010

Argus!!!

I have been reading Real Digital Forensics and came across the recommended use of Argus ("Audit Record Generation and Utilization System"). Argus provides fast, wide, and deep network analysis of pcap files. It took me some time to compile it and start to make sense of it, although there is a relevant and clever wiki page and a good collection of recent articles describing research, university, and real-world use. My discussion below concerns Argus's auditing functionality.

Argus converts your pcap file into a compressed argus-format file that carries every piece of session information an inquisitive NSM analyst could possibly want from a network trace, and the client tools add capabilities such as time slices, TCP options, anonymization, geolocation, and graphing. Here are some basic examples I walked myself through. The first step is to write the pcap file to an argus file using 'argus'.

/usr/local/sbin/argus -d -r 08Mar1142PST2010.in.1268074842 -w 08Mar1142PST2010.in.1268074842.argus

Next I use 'ra' (read argus) to read the record data. You can specify fields and BPF-style filters. Here I append a filter ('ip proto 6') to match only TCP traffic (compare 'grep TCP /etc/protocols'):
  
ra -n -r 08Mar1142PST2010.in.1268074842.argus - ip proto 6 | less
19:08:09.660222 e s tcp 207.44.254.106.56813 -> 192.168.0.12.3246 3 186 REQ
19:12:01.707471 e tcp 204.236.155.168.12200 -> 192.168.0.12.3246 1 60 REQ
19:32:55.259094 e tcp 204.236.155.168.12200 -> 192.168.0.12.3246 1 60 REQ
19:33:44.995964 e tcp 221.192.199.35.12200 -> 192.168.0.12.8000 1 60 REQ
19:34:36.506022 e tcp 221.192.199.35.12200 -> 192.168.0.12.80 1 60 REQ
19:53:52.914418 e tcp 204.236.155.168.12200 -> 192.168.0.12.3246 1 60 REQ

Here I specify source address, destination port and connection state fields with the '-s' option and sort the result by source address and destination port before using 'uniq -c' to rank those fields.

ra -n -s saddr dport state -r 08Mar1142PST2010.in.1268074842.argus - ip proto 6 | sort -k1,2 -nr | uniq -c | sort -nr | less
149 221.195.73.86 8000 REQ
100 192.168.0.12 80 ACC
81 222.45.112.59 2479 REQ
80 222.45.112.59 8085 REQ
80 222.45.112.59 3246 REQ
76 204.236.155.168 3246 REQ

I am using 'rasort' to do something similar here, but appending grep to keep only those source addresses with a connected (CON) state.

 rasort -n -s saddr dport state -r 08Mar1142PST2010.in.1268074842.argus - ip proto 6 | sort -k1 -nr | uniq -c | sort -nr | grep CON | less
14 74.125.19.19 19412 CON
14 74.125.19.17 20073 CON
13 85.13.200.108 19216 CON
13 85.13.200.108 19024 CON
13 74.125.19.83 19145 CON
13 74.125.19.83 18961 CON

I am not quite clear on when to use 'rasort' versus 'ra' with sort and uniq appended. There is also 'ratop'. It may take some time to sort out the best scripts for top talkers. Like 'ra', 'rasort' lets me include specific fields (the -s switch) and then specify the field(s) to sort by (the -m switch). I am still appending 'uniq -c | sort -r'.

rasort -s saddr dport proto bytes stat -m dport saddr  -r 08Mar1142PST2010.in.1268074842.argus | grep -v -f file | uniq -c | sort -r | less

149 221.195.73.86 8000 tcp 60 REQ
81 222.45.112.59 2479 tcp 60 REQ
80 222.45.112.59 8085 tcp 60 REQ
80 222.45.112.59 3246 tcp 60 REQ
76 204.236.155.168 3246 tcp 60 REQ
76 222.45.112.59 9415 tcp 60 REQ


So here I apply a BPF filter for dst port 22 and the '-z' switch to see TCP state changes:
  
rasort -nn -s saddr dport proto bytes state -m dport saddr -z -r 08Mar1142PST2010.in.1268074842.argus - dst port 22 | uniq -c | sort -nr

3 125.141.195.190 22 6 62 s
3 114.202.247.235 22 6 62 s
3 58.217.255.103 22 6 62 s
3 97.163.189.33 22 6 62 s
2 94.158.184.183 22 6 62 s
2 61.151.246.140 22 6 62 s
 
Argus, baby!! Fast, wide and deep!!

Monday, June 14, 2010

Can Someone Do the Afghanistan Math?

I'm sure most of you have read the NY Times story U.S. Identifies Vast Mineral Riches in Afghanistan:

The United States has discovered nearly $1 trillion in untapped mineral deposits in Afghanistan, far beyond any previously known reserves and enough to fundamentally alter the Afghan economy and perhaps the Afghan war itself, according to senior American government officials...

Instead of bringing peace, the newfound mineral wealth could lead the Taliban to battle even more fiercely to regain control of the country...

The mineral deposits are scattered throughout the country, including in the southern and eastern regions along the border with Pakistan that have had some of the most intense combat in the American-led war against the Taliban insurgency.


I'd like to make two points.

First, I see dollars and a security problem. Can someone do the Afghanistan math? In other words, how much should be spent on security in Afghanistan in order to yield a worthwhile "return on investment"?

Second, this sounds like a "Road House" scenario, like I described four years ago in my post Return on Security Investment. I've always been troubled by these sorts of scenarios, meaning I'm not exactly sure how to think about them. I believe there are two general sorts of security scenarios to consider:

  1. An environment has transitioned from a "secure" state to a "nonsecure" state due to intruder activity, and the security team wants to promote a return to the "secure" state

  2. An environment suffers a "nonsecure" state, and the security team wants to promote a transition to a "secure" state


Most digital security work is Type 1, meaning an enterprise (presumably) begins intruder-free, transitions to a nonsecure state due to intruder activity, and the security team works to return to a secure state. That is a loss prevention exercise, where the security team seeks to preserve the value of the business activity but doesn't add to the value of the business activity.

Scenarios like Road House and Afghanistan's mineral wealth appear to me to be Type 2, meaning chaos reigns, and by spending resources a security team can produce a real "return on investment" by enabling a business activity that was previously not possible.

What do blog readers think?

the 'find' command for security...Part I

These are some meditations on using the *NIX 'find' command for security...

These are very quick ways of finding the 'last access' time on every file. 'stat -x' is the OpenBSD syntax. The file named 'file' used with grep -f contains:
File:
Access:

for i in `find /`; do echo $i `stat -x $i | grep "Access"`;done
find  / | xargs stat -x | grep -f file | tr -d "[\042]"

On Linux or Cygwin:

for i in `find /cygdrive/C/Security`; do echo $i `stat $i | grep "Access" | grep -v Gid`;done
/cygdrive/C/Security Access: 2010-06-14 15:58:04.293000000 -0700
/cygdrive/C/Security/.ImplementingSecurityDuringWebDesign.txt.swp Access: 2009-12-08 18:33:47.445000000 -080
/cygdrive/C/Security/.PapersToAuthor.txt.swo Access: 2009-12-08 18:30:19.533000000 -0800
/cygdrive/C/Security/.PapersToAuthor.txt.swp Access: 2009-12-07 12:23:46.045000000 -0800

find /cygdrive/C/Security | xargs stat | grep -f file | grep -v Gid: | tr -d "[\042]"

File: `/cygdrive/C/Security/004.log'
Access: 2010-05-17 11:57:27.217000000 -0700
File: `/cygdrive/C/Security/05.13.10.log'
Access: 2010-05-13 11:47:53.292000000 -0700
File: `/cygdrive/C/Security/05.14.10.log'
Access: 2010-05-14 09:27:55.329000000 -0700
....

Now I am looking at ways to use the find command per user. The purpose of this experiment is to understand why I get such different results from commands that would seemingly return the same result set, only with more detail...

find / -user rferrisx
find / -exec ls -l {} \; | awk '$3=="rferrisx" {print $3" "$9}'
find / -user rferrisx -exec ls -lhuS {} \; | awk '{print $3" "$5" "$9}'


bash-4.0# find / -user rferrisx
/home/rferrisx
/home/rferrisx/.ssh
/home/rferrisx/.ssh/authorized_keys
/home/rferrisx/.Xdefaults
/home/rferrisx/.cshrc
/home/rferrisx/.login
/home/rferrisx/.mailrc
/home/rferrisx/.profile
/home/rferrisx/.Xauthority
/dev/ttyp0

bash-4.0# find / -exec ls -l {} \; | awk '$3=="rferrisx" {print $3" "$9}'
rferrisx rferrisx
rferrisx .Xauthority
rferrisx .Xdefaults
rferrisx .cshrc
rferrisx .login
rferrisx .mailrc
rferrisx .profile
rferrisx .ssh
rferrisx authorized_keys
rferrisx /home/rferrisx/.ssh/authorized_keys
....

bash-4.0# find / -user rferrisx -exec ls -lhuS {} \; | awk '{print $3" "$5" "$9}'
root 28.6M 08Mar1142PST2010.in.1268074842
root 18.2M 08Mar1137PST2010.out.1268074837
root 2.7M 08Mar1142PST2010.in.log
root 1.6M 08Mar1142PST2010.in.p0f
root 258K 08Mar1137PST2010.out.log
root 154K 08Mar1137PST2010.out.p0f
rferrisx 773B .cshrc
rferrisx 512B .ssh
rferrisx 398B .login
rferrisx 218B .profile
...



Saturday, June 12, 2010

Light Bulbs Slowly Illuminating at NASA?

I've seen a few glimmers of hope appearing in the .gov space recently, so I wanted to note them here.

Linda Cureton in her NASA CIO blog said:

We have struggled in the area of cyber security because of our belief that we are able to obtain this ideal state called – secure. This belief leads us to think for example, that simply by implementing policies we will generate the appropriate actions by users of technology and will have as a result a secure environment. This is hardly the truth. Not to say that policies are worthless, but just as the 55 mph speed limit has value though it does not eliminate traffic fatalities, the policies in and of themselves do not eliminate cyber security compromises.

Army General Keith Alexander, the nation's first military cyber commander, described situational awareness as simply knowing what systems' hackers are up to. He goes on to say that with real-time situational awareness, we are able to know what is going on in our networks and can take immediate action.

In addition to knowing our real-time state, we need to understand our risks and our threat environment... It is through an understanding of the state of our specific environment and the particular risks and threats we face where we can take the right actions to produce the results that we need.


Well said. Will anyone else pay attention?

Friday, June 11, 2010

NITRD: "You're going the wrong way!"

If you remember the great 1980s movie "Planes, Trains, and Automobiles," the title of this post will make sense. When Steve Martin and John Candy are driving down the wrong side of the highway, another motorist yells "You're going the wrong way!" The deluded pair reply, "How do they know where we're going?"

I am starting to feel like the motorist yelling "You're going the wrong way!" -- and I'm yelling it at Federal research efforts like the Federal Networking and Information Technology Research and Development (NITRD) Program. This program describes itself thusly:

The NITRD Program is the primary forum by which the US Government coordinates its unclassified networking and information technology (IT) research and development (R&D). Fourteen Federal agencies, including all of the large science and technology agencies, are formal members of the NITRD Program, whose combined 2010 networking and IT R&D budgets totaled more than $4 billion.

This program proposes three Federal Cybersecurity Game-change R&D Themes:

  1. Tailored Trustworthy Spaces: Tailored Trustworthy Spaces (TTS) provide flexible, adaptive, distributed trust environments that can support functional and policy requirements arising from a wide spectrum of activities in the face of an evolving range of threats. A TTS recognizes the user’s context and evolves as the context evolves. The user chooses to accept the protections and risks of a tailored space, and the attributes of the space must be expressible in an understandable way to support informed choice and must be readily customized, negotiated and adapted.

    The scientific challenge of tailored spaces is to provide the separation, isolation, policy articulation, negotiation, and requisite assurances to support specific cyber sub-spaces.

  2. Moving Target: Research into Moving Target (MT) technologies will enable us to create, analyze, evaluate, and deploy mechanisms and strategies that are diverse and that continually shift and change over time to increase complexity and cost for attackers, limit the exposure of vulnerabilities and opportunities for attack, and increase system resiliency. The characteristics of a MT system are dynamically altered in ways that are manageable by the defender yet make the attack space appear unpredictable to the attacker.

    MT strategies aim to substantially increase the cost of attacks by deploying and operating networks and systems in a manner that makes them less deterministic, less homogeneous, and less static.

  3. Cyber Economic Incentives: Cybersecurity practices lag behind technology. Solutions exist for many of the threats introduced by casual adversaries, but these solutions are not widely used because incentives are not aligned with objectives and resources are not correctly allocated. Secure practices must be incentivized if cybersecurity is to become ubiquitous, and sound economic incentives need to be based on sound metrics, processes that enable assured development, sensible and enforceable notions of liability and mature cost/risk analysis methods.


This is lovely. Great. However, if you're going to spend $4 billion, why not focus on better operations? The problem with this endeavor is that it is driven by researchers. This is my personal opinion, but researchers do not know what is happening inside real enterprises. Researchers reply, "How do they know where we're going?" I know where they are going because I see these sorts of R&D efforts, and I don't see them addressing the real problems in the enterprise.

Harlan Carvey always makes this point, and he is right: many enterprises are not conducting counter-intrusion operations at the level that is required for modern defense. We don't need output from a research project to be yet another aspect of digital security that is not designed, built, or run properly in the IT environment.

Thursday, June 10, 2010

June 2010 Hakin9 Magazine Published


The new June 2010 Hakin9 has been published in .pdf form. It looks like they replaced the registration-based download with a link straight to the .pdf -- nice. The article Testing Flash Memory Forensic Tools – part two looks interesting, and I always like reading whatever Matt Jonkman writes. Check it out -- it's free!

"Untrained" or Uncertified IT Workers Are Not the Primary Security Problem

There's a widespread myth damaging digital security policy making. As with most security myths it certainly seems "true," until you spend some time outside the policy making world and think at the level where real IT gets done.

The myth is this: "If we just had a better trained and more professional IT corps, digital security would improve."

This myth is the core of the story White House Commission Debates Certification Requirements For Cybersecurity Pros. It says in part:

A commission set up to advise the Obama administration on cybersecurity policy is considering recommending certification and training for federal IT security employees and contractors.

The Commission on Cybersecurity for the 44th Presidency, which in December 2008 issued its Securing Cyberspace for the 44th Presidency report to Congress, is currently working on a sequel to that report, due sometime in late June or early July. The commission, made up of a who's who of experts and policy-makers, is debating strategies for building and developing a skilled cybersecurity workforce for the U.S., as well as issues surrounding an international cybersecurity strategy and online authentication...

[R]egulated entities, such as critical infrastructure firms, also would likely fall within its scope.


My opinion? This is a jobs program for security training and certification companies.

(Disclaimer: I still teach TCP/IP Weapons School four times per year for Black Hat, and I organize the Incident Detection Summit for SANS. I've also held the CISSP since 2001. Whether this makes you more or less inclined to listen to me is up to you!)

So what's the problem? Isn't training good for everyone?

In a world of exploding Federal budgets, every new spending proposal should be carefully examined. In the words of the article:

[M]andating certifications could be a bit limiting -- and expensive -- for the feds. "I don't know if the government has that kind of money lying around." Certification courses can cost thousands of dollars per person, for example.

Here's my counter-proposal that will be cheaper, more effective, and still provide a gravy train for the trainers and certifiers:

Train Federal non-IT managers first.

What do I mean? Well, do you really think the problem with digital security involves people on the front lines not knowing what they are supposed to do? In my opinion, the problem is management, which remains largely ignorant of the modern security environment. If management truly understood the risks in their environment, they would be reallocating existing budgets to train their workforce to better defend their agencies.

Let's say you still think the problem is that people on the front lines do not know what they are supposed to do. Whose fault is that? Easy: management. A core responsibility of management is to organize, train, and equip their teams to do their jobs. In other words, in agencies where IT workers may not be qualified, I guarantee their management is failing their responsibilities.

So why not still start with training IT workers? Simple: worker gets trained, returns to job, the following conversation occurs:

Worker to boss: "Hey boss, I just learned how terrible our security is. We need to do X, Y, and Z, and stop listening to vendors A, B, and C, and hire people 1, 2, and 3, and..."

Boss to worker: "Go paint a rock."


Instead of spending money first on IT workers, educate their management, throughout the organization, on the security risks in their public and private lives. Unleash competent Blue and Red teams on their agencies, perform some tactical security monitoring, and then bring the results to a class where attendees sign a waiver saying their own activity is subject to monitoring. During the class shock the crowd by showing how insecure their environment is, how the instructors know everyone's Facebook and banking logins, and how they could cause professional and personal devastation for every attendee and their agency.

We need to help managers understand how dangerous the digital world is and let them allocate budgets accordingly.

Wednesday, June 9, 2010

Publicly Traded Companies Read This Blog

I think some publicly traded companies read this blog! Ok, maybe I'm dreaming, but consider the story After Google hack, warnings pop up in SEC filings by Robert McMillan:

Five months after Google was hit by hackers looking to steal its secrets, technology companies are increasingly warning their shareholders that they may be materially affected by hacking attempts designed to take valuable intellectual property.

In the past few months Google, Intel, Symantec and Northrop Grumman -- all companies thought to have been targets of a widespread spying operation -- have added new warnings to their U.S. Securities and Exchange Commission filings informing investors of the risks of computer attacks...

Google warned that it could lose customers following a breach, as users question the effectiveness of its security. "Because the techniques used to obtain unauthorized access, disable or degrade service, or sabotage systems change frequently and often are not recognized until launched against a target, we may be unable to anticipate these techniques or to implement adequate preventative measures," the company said in the filing.

Google's admission that it had been targeted put a public spotlight on a problem that had been growing for years: targeted attacks, known to security professionals as the advanced persistent threat (APT)...


So how do I know they read my blog? Check out my February 2008 post Justifying Digital Security via 10-K Risk Factors:

Perhaps digital security could try aligning itself with the risk factors in the company 10-K?

More directly, check out my May 2009 post President Obama's Real Speech on Cyber Security:

We will work with Congress to establish a national breach disclosure law, and we will require publicly traded companies to outline digital risks in their annual 10-K filings.

Well, the President didn't say that (I did), but thankfully companies are not waiting around for President Obama to be a real information security leader.

Sunday, June 6, 2010

Simple Questions, Difficult Answers

Recently I had a discussion with one of the CISOs in my company. He asked a simple question:

"Can you tell me when something bad happens to any of my 100 servers?"

That's a very reasonable question. Don't get hung up on the wording. If it makes you feel better, replace "something bad happens to" with "an intruder compromises," or any other wording that conveys the question in a way you like.

It's a simple question, but the answer is surprisingly difficult. Let's consider the factors that affect answering this question.

  • We need to identify the servers.
    • We will almost certainly need IP addresses.
      • How many IP addresses does each server have?
      • What connectivity does each IP address provide?
      • Are they IPv4, IPv6, both?
      • Are they static or dynamic? (Servers should be static, but that is unfortunately not universal.)
    • We will probably need hostnames.
      • How many hostnames does each server have?
      • What DNS entries exist?
      • Extrapolate from the IP questions above to derive related hostname questions.
    • We will need to identify server users and owners to separate authorized activity from unauthorized activity, if possible.
  • What is the function and posture of each server?
    • Is the server Internet-exposed? Internally exposed? A combination? Something different?
    • How is the server used? What sort of services does it provide, at what load?
    • What is considered normal server activity? Suspicious? Malicious?
  • What data can we collect and analyze to detect intrusion?
    • Can we see network traffic?
      • Do we have instrumentation in place to collect data for the servers in question?
      • Can we see network traffic involving each server interface?
      • Is some or all of the traffic encrypted?
      • Does the server use obscure protocols?
      • What volume of data do we need to analyze?
      • What retention period do we have for this data?
      • What laws, regulations, or other restrictions affect collecting and analyzing this data?
    • Can we collect host and application logs?
      • Do we have instrumentation in place to collect data for the servers in question?
      • Are the logs standard? Nonstandard? Obscure? Binary?
      • Are the logs complete? Useful?
      • What volume of data do we need to analyze?
      • What retention period do we have for this data?
      • What laws, regulations, or other restrictions affect collecting and analyzing this data?
    • Is the collection and analysis process sufficient to determine when an intrusion occurs?
      • Is the data sufficiently helpful?
      • Are our analysts sufficiently trained?
      • Do our tools expose the data for analysis in an efficient and effective manner?
      • Do analysts have a point of contact for each server knowledgeable in the server's operations, such that the analyst can determine if activity is normal, suspicious, or malicious?

I'll stop there. I'm not totally satisfied with what I wrote, but you should have a sense of the difficulty associated with answering this CISO's question.
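
Even the first bullet above, identifying the servers and their addresses, usually turns into a script before it turns into an answer. Here is a hedged sketch, assuming a plain list of hostnames in a file called servers.txt and working forward DNS:

#!/bin/sh
# First pass at "identify the servers": resolve each name and flag failures.
while read host; do
  a=$(dig +short "$host" A | tr '\n' ' ')
  aaaa=$(dig +short "$host" AAAA | tr '\n' ' ')
  if [ -n "$a$aaaa" ]; then
    echo "$host $a$aaaa"
  else
    echo "$host NO-DNS-ENTRY" >&2
  fi
done < servers.txt

The script is the easy part; the questions about exposure, normal activity, retention, and analyst knowledge are where the real work lives.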

Furthermore, at what number is this process likely to yield results in your organization, and at what number will it fail? Can it be done for 1 server? 10? 100? 1,000? 10,000? 100,000?

Friday, June 4, 2010

Reminder for Incident Responders

I found this post [Dailydave] How to pull a dinosaur out of a hat in 2010 by Dave Aitel to contain two warnings for incident responders:

I do know that reliably owning Wireshark on Windows 7 is priceless.

and

So many otherwise very cautious people don't realize that RDP is like giving your passwords away to the remote machine. So we had to write a trojan that stole the passwords as people RDP'd in and we installed it for demos on various client sites.


The first is a reminder that intruders sometimes practice counter-forensics, i.e., attacking defensive tools. In fact, the post I just linked from 2007 mentions Wireshark vulnerabilities. Some things never change.

The second is a reminder that gaining remote access to suspected intrusion victims is a risky gambit. If you suspect a system is compromised, and you connect to it, expect trouble. This applies across the spectrum of intruders, from mindless malware to advanced persistent threat. Your best bet is to gather as much evidence as possible without ever touching the victim, if possible. Since you can't trust the victim to report in a trustworthy manner anyway, this has always been sound advice.
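
As a hedged example of gathering evidence without touching the victim: if you already collect session data with something like Argus (discussed elsewhere on this blog), you can profile the suspect host entirely from the sensor. The file path and victim address below are placeholders:

# Every session involving the suspected victim, read from the sensor's data,
# without sending a single packet to the box itself.
ra -n -r /nsm/argus/today.argus - host 192.0.2.10

# Who the victim talked to, with destination ports, ranked by frequency.
ra -n -s daddr dport -r /nsm/argus/today.argus - src host 192.0.2.10 | sort | uniq -c | sort -nr | head -20

Nothing you learn this way depends on the compromised host telling you the truth.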

As a bonus, Dave throws in the following:

My favourite latest is the NGINX remote exploit which works even when you don't expect it to!

This reminds me that many intruders use Nginx to host their Web-based C2 servers. If you want to practice aggressive incident response, you may consider attacking that infrastructure yourself. Intruders tend not to be the best defenders.

Wednesday, June 2, 2010

time stamping windows directory and file names

This is something I have blogged about before, but I thought it worth posting again. Special characters need to be eliminated to create a time stamp that can be used as a Windows file name. The 'date' program in Unix has a number of very useful options for this. The Windows cmd shell is more limited. This is what I use:

:: rtime.cmd
@echo off

set realdate=%date:/=.%
set realdate=%realdate:* =%
set realtime=%time::=.%
set realtime=%realtime:* =%
set timestamp=%realdate%.%realtime%
echo %timestamp%

This command script uses 'variable substitution' from the set command to remove special characters (e.g. ':' and '/') that are unacceptable in Windows file or directory names. This line:
set timestamp=%realdate%.%realtime%


can be changed as needed for more CSV-compatible logging:
set timestamp="%realdate%","%realtime%"


Once cached, it runs pretty fast and is suitable for lightweight logging:

$ time /cygdrive/C/Security/rtime.cmd
06.02.2010.11.04.05.99

real    0m0.202s
user    0m0.015s
sys     0m0.031s

$ time /cygdrive/C/Security/rtime.cmd
06.02.2010.11.04.12.65

real    0m0.062s
user    0m0.000s
sys     0m0.015s

$ time /cygdrive/C/Security/rtime.cmd
06.02.2010.11.04.14.68

real    0m0.062s
user    0m0.000s
sys     0m0.015s