Sunday, 30 November 2008

Craig Balding Podcast on Cloud Security

I noticed Craig Balding's post Podcast: Cloud Computing, Software Development, Testing and Security, so I just listened to all three segments. Readers of this blog may choose to concentrate on the third segment, Cloud computing's effect on application security. Craig is a thought leader on cloud security so I enjoy hearing his ideas.


Richard Bejtlich is teaching new classes in DC and Europe in 2009. Register by 1 Jan and 1 Feb, respectively, for the best rates.

Tuesday, 25 November 2008

Splunk on FreeBSD 7.0

Although there is not a version of Splunk compiled natively for FreeBSD 7.0, I was told to try using Splunk 3.4.1 on FreeBSD 7.0 via FreeBSD's compat6x libraries.

I did the following:

freebsd70:/usr/local/src# pkg_add -v splunk-3.4.1-45588-freebsd-6.1-intel.tgz
Requested space: 106458852 bytes, free space: 1565927424 bytes in
/var/tmp/instmp.HhNhQk
Running pre-install for splunk-3.4.1-45588-freebsd-6.1-intel..
extract: Package name is splunk-3.4.1-45588-freebsd-6.1-intel
extract: CWD to /opt
extract: /opt/splunk/README.txt
extract: /opt/splunk/bin/btool
extract: /opt/splunk/bin/bunzip2
...edited...
extract: /opt/splunk/splunk-3.4.1-45588-FreeBSD-i386-manifest
extract: CWD to .
Running post-install for splunk-3.4.1-45588-freebsd-6.1-intel..
----------------------------------------------------------------------
Splunk has been installed in:
/opt/splunk

To start Splunk, run the command:
/opt/splunk/bin/splunk start

To use the Splunk Web interface, point your browser at:
http://freebsd70.localdomain:8000

Complete documentation is at http://www.splunk.com/r/docs
----------------------------------------------------------------------
Attempting to record package into /var/db/pkg/splunk-3.4.1-45588-freebsd-6.1-intel..
Package splunk-3.4.1-45588-freebsd-6.1-intel registered in
/var/db/pkg/splunk-3.4.1-45588-freebsd-6.1-intel

If you try to start Splunk at this point you'll get an error like the following:

freebsd70:/usr/local/src# /opt/splunk/bin/splunk start
/libexec/ld-elf.so.1: Shared object "libc.so.6" not found, required by "splunk"

To fix the problem I installed the compat6x package:

freebsd70:/usr/local/src# pkg_add -vr ftp://ftp.freebsd.org/pub/FreeBSD/ports/i386/
packages-7.0-release/misc/compat6x-i386-6.3.602114.200711.tbz
scheme: [ftp]
user: []
password: []
host: [ftp.freebsd.org]
port: [0]
document: [/pub/FreeBSD/ports/i386/packages-7.0-release/misc/
compat6x-i386-6.3.602114.200711.tbz]
---> ftp.freebsd.org:21
looking up ftp.freebsd.org
connecting to ftp.freebsd.org:21
<<< 220 ftp.FreeBSD.org NcFTPd Server (licensed copy) ready.
>>> USER anonymous
<<< 331 Guest login ok, send your complete e-mail address as password.
>>> PASS analyst@freebsd70.localdomain
<<< 230-You are user #147 of 800 simultaneous users allowed.
<<< 230-
<<< 230 Logged in anonymously.
>>> PWD
<<< 257 "/" is cwd.
>>> CWD pub/FreeBSD/ports/i386/packages-7.0-release/misc
<<< 250 "/pub/FreeBSD/ports/i386/packages-7.0-release/misc" is new cwd.
>>> MODE S
<<< 200 Mode okay.
>>> TYPE I
<<< 200 Type okay.
setting passive mode
>>> PASV
<<< 227 Entering Passive Mode (62,243,72,50,214,227)
opening data connection
initiating transfer
>>> RETR compat6x-i386-6.3.602114.200711.tbz
<<< 150 Data connection accepted from 24.126.62.67:61531; transfer starting for compat6x-
i386-6.3.602114.200711.tbz (3164256 bytes).
Fetching ftp://ftp.freebsd.org/pub/FreeBSD/ports/i386/packages-7.0-release/misc/compat6x-
i386-6.3.602114.200711.tbz...x +CONTENTS
x +COMMENT
...edited...
extract: CWD to /usr/local
extract: /usr/local/libdata/ldconfig/compat6x
extract: CWD to .
Running mtree for compat6x-i386-6.3.602114.200711..
mtree -U -f +MTREE_DIRS -d -e -p /usr/local >/dev/null
Attempting to record package into /var/db/pkg/compat6x-i386-6.3.602114.200711..
Package compat6x-i386-6.3.602114.200711 registered in
/var/db/pkg/compat6x-i386-6.3.602114.200711

*******************************************************************************
* *
* Do not forget to add COMPAT_FREEBSD6 into *
* your kernel configuration (enabled by default). *
* *
* To configure and recompile your kernel see: *
* http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/kernelconfig.html *
* *
*******************************************************************************
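
Before retrying Splunk, you can optionally confirm that the 6.x compatibility libraries are now visible to the runtime linker. Here is a minimal sketch in Python (any scripting language would do); it assumes that FreeBSD's 'ldconfig -r' lists the hinted shared libraries, and that a missing libc.so.6 means the compat6x package is not registered yet:

import subprocess

# 'ldconfig -r' on FreeBSD dumps the hinted shared libraries, including
# anything registered from /usr/local/lib/compat by the compat6x package.
hints = subprocess.run(["ldconfig", "-r"], capture_output=True, text=True).stdout

if "libc.so.6" in hints:
    print("libc.so.6 is registered; Splunk should find it")
else:
    print("libc.so.6 missing; check that compat6x installed and ldconfig ran")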

Then I could start Splunk:

freebsd70:/usr/local/src# /opt/splunk/bin/splunk start
Splunk Free Software License Agreement
...edited...
Do you agree with this license? [y/n]: y
Copying '/opt/splunk/etc/myinstall/splunkd.xml.cfg-default'
to '/opt/splunk/etc/myinstall/splunkd.xml'.
Copying '/opt/splunk/etc/openldap/ldap.conf.default'
to '/opt/splunk/etc/openldap/ldap.conf'.
Copying '/opt/splunk/etc/modules/distributedSearch/config.xml.default'
to '/opt/splunk/etc/modules/distributedSearch/config.xml'.
/opt/splunk/etc/auth/audit/private.pem
/opt/splunk/etc/auth/audit/public.pem
/opt/splunk/etc/auth/audit/private.pem generated.
/opt/splunk/etc/auth/audit/public.pem generated.

/opt/splunk/etc/auth/audit/private.pem
/opt/splunk/etc/auth/audit/public.pem
/opt/splunk/etc/auth/audit/private.pem generated.
/opt/splunk/etc/auth/audit/public.pem generated.


This appears to be your first time running this version of Splunk.
Validating databases...
Creating /opt/splunk/var/lib/splunk/audit/thaweddb
Creating /opt/splunk/var/lib/splunk/blockSignature/thaweddb
Creating /opt/splunk/var/lib/splunk/_internaldb/thaweddb
Creating /opt/splunk/var/lib/splunk/fishbucket/thaweddb
Creating /opt/splunk/var/lib/splunk/historydb/thaweddb
Creating /opt/splunk/var/lib/splunk/defaultdb/thaweddb
Creating /opt/splunk/var/lib/splunk/sampledata/thaweddb
Creating /opt/splunk/var/lib/splunk/splunkloggerdb/thaweddb
Creating /opt/splunk/var/lib/splunk/summarydb/thaweddb
Validated databases: _audit, _blocksignature, _internal, _thefishbucket,
history, main, sampledata, splunklogger, summary

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Verifying configuration. This may take a while...
Finished verifying configuration.
Checking index directory...
Verifying databases...
Verified databases: _audit, _blocksignature, _internal, _thefishbucket,
history, main, sampledata, splunklogger, summary

Checking index files
All index checks passed.
All preliminary checks passed.
Starting splunkd...
Starting splunkweb.../opt/splunk/share/splunk/certs does not exist. Will create
Generating certs for splunkweb server
Generating a 1024 bit RSA private key
..................................++++++
.............................................++++++
writing new private key to 'privkeySecure.pem'
-----
Signature ok
subject=/CN=freebsd70.localdomain/O=SplunkUser
Getting CA Private Key
writing RSA key

Splunk Server started.

The Splunk web interface is at http://freebsd70.localdomain:8000

I was then able to connect to the Splunk Web interface, add a directory (/var/log) to monitor, and access results.

Documentation for FreeBSD installation is also available. Thanks Splunk!



Monday, 24 November 2008

Defining the Win

In March I posted Ten Themes From Recent Conferences, which included the following:

Permanent compromise is the norm, so accept it. I used to think digital defense was a cycle involving resist -> detect -> respond -> recover. Between recover and the next attack there would be a period where the enterprise could be considered "clean." I've learned now that all enterprises remain "dirty" to some degree, unless massive and cost-prohibitive resources are directed at the problem.

We can not stop intruders, only raise their costs. Enterprises stay dirty because we can not stop intruders, but we can make their lives more difficult. I've heard of some organizations trying to raise the $ per MB that the adversary must spend in order to exfiltrate/degrade/deny information.‏
(emphasis added)

Since then I've grappled with this idea of how to define the win. If you used to define the win as detecting and ejecting all intruders from your enterprise, you are going to be perpetually disappointed (unless your enterprise is sufficiently small). Are there alternative ways to define the win if you have to accept permanent compromise as the norm? The following are a few ideas, credited where applicable.

The first two come from my post Intellectual Property: Develop or Steal, but I repost them here for easy reference.

  1. Information assurance (IA) is winning, in a broad sense, when the cost of stealing intellectual property via any means is more expensive than developing that intellectual property independently. Nice idea, but probably too difficult to measure.

  2. IA is winning, in a narrow sense, when the cost of stealing intellectual property via digital means is more expensive than stealing that data via nontechnical means (such as human agents placed inside the organization). Still difficult to measure, but might be estimated using red teaming/adversary simulation/penetration testing.

  3. IA is winning when detection operations can see the adversary's actions. This relates to Bruce Schneier's classic advice to Monitor First. The more mature answer is next.

  4. IA is winning when incident responders can anticipate the adversary's next target. I credit Kevin Mandia with this idea. I like it because it shows that complex enterprises will always have vulnerabilities and will always be targeted, but a sufficiently mature detection and response operation will at least be able to guess the intruder's next move. You can even test this by keeping a track record.

  5. IA is winning when the time to detect and remediate has been reduced to B. Insert your own value there. You can track your progress from time A to time B, as in the sketch following this list.

  6. IA is winning when your enterprise security integrity assessments show less than D percent of your assets are compromised. You can track progress from C percent to D percent over time. This leads to the more mature version which follows.

  7. IA is winning when your enterprise intrusion debt is reduced to F. You can measure intrusion debt as you like and take steps to reduce it from E to F.
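
To make items 5 and 6 concrete, here is a minimal sketch of how you might track those two numbers over time; the record formats and figures are invented for illustration:

def mean_days_to_remediate(durations_in_days):
    """Metric 5: average time from detection to remediation, in days."""
    return sum(durations_in_days) / len(durations_in_days)

def percent_compromised(assets_assessed, assets_compromised):
    """Metric 6: share of assessed assets found to be compromised."""
    return 100.0 * assets_compromised / assets_assessed

# Track these quarter over quarter: is time B lower than time A, and is
# percentage D lower than percentage C?
print(mean_days_to_remediate([14, 30, 7]))     # this quarter's "time A": 17.0 days
print(percent_compromised(200, 9))             # this quarter's "percent C": 4.5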


Does anyone else have ideas on how to define the win?



Live Incident Map

I think this is fascinating: a map depicting naval piracy.



One of the most interesting aspects of this map is that it concerns commercial entities (i.e. ships carrying cargo) and anyone can quickly learn the fate of each vessel. It's a giant incident map for 2008. Previous years (2007, 2006) are also available.

The closest equivalent for digital security is probably the narrative of the Breach Blog and similar sites.

Only when we can openly talk about this problem and share lessons learned can we improve. We still need a National Digital Security Board.



Sunday, 23 November 2008

Digital Asset Scorecards

Last month I reviewed Raffael Marty's great book Applied Security Visualization. Recently I've been considering ways to describe systems in my environment using visual means instead of text. I decided to try sharing the following visualization, which I call a Digital Asset Scorecard. I've created a zipped .ppt explaining this idea, but I'll share it here as well.

The Digital Asset Scorecard for a single system is shown below. As you will see shortly, each cell of the box is color-coded depending on its state. Here I use blue and tan to separate categories of elements.

The blue section began as a 4 x 4 table. I merged certain cells as a way to show that some elements (like Assurance) are more important than others (like Base, aka Baselined). These choices are completely subjective; you could change them, remove them, add them, and so on.



On a single slide I can show 16 systems. The choice of a 4 x 4 arrangement is deliberate; it's a /28. This will make sense later.



I've done some sample color-coding to show how this might appear on a security or operational dashboard of some type. This network is mostly green, which we intuitively know is "good."



Here I've introduced some problems, which appear as less green.



This subnet has some severe problems.



If you reduce the size by 75% you can now arrange systems on a 16 x 16 basis. Now you're depicting an entire /24.



I conclude with a few other ideas.



I'm not sure if I will end up trying to develop a system at work that implements these ideas. It might be possible to create a front-end that accepts feeds from a variety of sources in order to populate the color-coded cells.
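
As a thought experiment, here is a minimal sketch of what such a front-end might do to populate one system's cells. The state-to-color mapping is invented, and the element names other than Assurance and Base are hypothetical:

# Map each element's reported state to a cell color on the scorecard grid.
COLORS = {"good": "green", "warning": "yellow", "bad": "red", None: "gray"}

def scorecard(element_states):
    """Return element -> color pairs for one system's scorecard cells."""
    return {name: COLORS.get(state, "gray") for name, state in element_states.items()}

# A /28 holds 16 hosts, so 16 of these scorecards fill one 4 x 4 slide.
host = {"Assurance": "good", "Base": "warning", "Inventory": None}
print(scorecard(host))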

Please let me know if I've re-invented someone's wheel or if you have some ideas. I could point to Raffy's sections on Audit Data Visualization or Business Process Monitoring as being similar already.



Reading on Justifying Security Operations

My post Managing Security in Economic Downturns mentioned wrapping everything in metrics to justify your security operation. I decided to peruse the past proceedings of the Workshop on the Economics of Information Security for ideas.

I was mostly interested in works explaining how to show value derived from security operations. (Remember value is mainly or exclusively cost avoidance.) I am really interested in knowing how much it costs to maintain and defend an information infrastructure vs what it costs to exploit it. I found the following to be previous work in related areas.

You may also remember my review of Managing Cyber-Security Resources: A Cost-Benefit Analysis. It is good background reading.



Friday, 21 November 2008

NASA v China

Yesterday Businessweek posted a fascinating and lengthy report titled Network Security Breaches Plague NASA. This part will sound familiar to many readers.

By early 1999 the volume of intrusions had grown so worrisome that Thomas J. Talleur, the most senior investigator specializing in cyber-security in the Inspector General's office at NASA, wrote a detailed "network intrusion threat advisory..."

Talleur, now 59, retired in December 1999, frustrated that his warnings weren't taken more seriously. Five months after his advisory was circulated internally, the Government Accountability Office, the investigative arm of Congress, released a public report reiterating in general terms Talleur's concerns about NASA security. But little changed, he says in an interview. "There were so many intrusions and hackers taking things we had on servers, I felt like the Dutch boy with his finger in the dike," he explains, sitting on the porch of his home near Savannah, Ga. On whether other countries are behind the intrusions, he says: "State-sponsored? God, it's been state-sponsored for 15 years!"


The article mentions China and the Russians.

Speaking of China, yesterday's story coincides with a press release on the Annual Report to Congress of the U.S.-China Economic and Security Review Commission titled U.S. – China commission cites Chinese cyber attacks, authoritarian rule, and trade violations as impediments to U.S. economic and national security interests.



Don't Fight the Future

Digital security practitioners should fight today's battles while preparing for the future. I don't know what that future looks like, and neither does anyone else. However, I'd like to capture a few thoughts here. This is a mix of what I think will happen, plus what I would like to see happen. If I'm lucky (or good) the future will reflect these factors, for which I am planning.

A few caveats: I don't have an absolute time factor for these, and I'm not considering these my "predictions for 2009." This is not an endorsement of the Jericho Forum. I think it makes sense to plan for the environment I will describe next because it will be financially attractive, but not necessarily universally security-enhancing (or even smart).

  1. Virtual Private Network (VPN) connections will disappear. For many readers this is nothing groundbreaking, but bring up the possibility with a networking team and they stare in bewilderment. Is there any reason why a remote system needs to have a simulated connection, using all available protocols, to a corporate network? Some of you might limit the type of connection to certain protocols, but why not just expose those protocols directly to the outside world and avoid the VPN altogether?

  2. Intranets will disappear. This is the next step when you architect for situations where VPNs are no longer needed. What's the purpose of an Intranet if you expose all the corporate applications to the outside world? The Intranet essentially becomes a giant local ISP. That seems ripe for outsourcing. How many of you sit in a company office connected to someone else's network, perhaps using 3G, but still check your email or browse the Web? It's happening now.

  3. Every device might be able to talk to every other device. This restores the dream of "end-to-end connectivity" destroyed by NAT, firewalls, and other "middleboxes." IPv6 seems to be gaining some ground, at least in mindshare in the Western world and definitely on the ground in the Far East. "End-to-end" is a core idea of IPv6, but it scares me. Isolation is one of the few defensive measures that works in many intrusion scenarios.

  4. Preferably, only authorized applications will talk to other authorized applications. This is one way to deal with the previous point. It's more complicated to implement, but it will make me sleep better. I would like the ability to configure how my endpoint talks to the world, and how the world talks to it, disabling functionality I don't need rather than relying on some network-based filtering or blocking mechanism. It is a travesty that I have to use some aspects of Microsoft SMB for business functions, yet must generally allow any SMB traffic unless I am willing to run a host-based layer 7 firewall (aka "IPS").

  5. Every device must protect itself. This one really pains me, and I think it's the greatest risk. This one is going to happen no matter how much security people protest. Again, it's already happening. Mobile devices are increasingly exposed to each other, with the owners completely at the mercy of the service provider. For me, this is an operational reality for which we must build in visibility and failure planning. We can't just assume everything will be okay, because prevention eventually fails. I'll say more on that later.

  6. Devices will often have to report their own status, but preferably to a central location. Again, scary. It means that if an endpoint is exploited, the best you're likely to get from it is a last-gasp log event as it reports something odd. After that a skilled intruder will make the endpoint appear as if nothing is wrong. At least if centralized logging is a core component you'll have that log as an indicator. However, past that point the endpoint cannot be trusted to report its state. This is happening more and more as mobile devices move from monitored connections (say a company network) to open ones (like wireless providers or personal broadband links).

  7. As fast, high-bandwidth wireless becomes ubiquitous, smart organizations will design platforms to rely on centralized remote storage and protection of critical data. For certain types of data, we have to hope that our varied mobile devices act as little more than terminals to cloud-hosted, well-mannered information stores. The more data we keep centrally, the less persistent it needs to be on end devices, and therefore the less exposed it can be. Central data is easier to deduplicate, back up, archive, classify, inventory, e-discover, retain, destroy, and manage.


I called this post "don't fight the future" because I think these developments will transpire. The model they represent is financially more attractive to people who don't put security first, which is every decision maker I've met. This isn't necessarily a bad thing, but it does mean we security practitioners should be making plans for this new world.



Managing Security in Economic Downturns

You don't need to read this blog for news on the global economic depression. However, several people have asked me what it means for security teams, especially when Schneier Agrees: Security ROI is "Mostly Bunk". No one can generate cash by running a security team; the best we can do is save money. If your security team generates cash, you're either an MSSP, a collection agency of some sort (these do exist, believe it or not!), in need of being spun off, or not accounting for all of your true costs.

Putting the ROI debate aside, these are tough economic times. Assuming we can all stay employed, we might be able to work the situation to our advantage. Nothing motivates management like a financial argument. See if one or more of the following might work to your advantage, because of the downturn.

  1. Promote centralization and consolidation. The more large organizations I've joined, consulted for, or met, the more I see that successful ones have centralized, consolidated security teams. There's simply not enough skilled security personnel to protect us, and spreading the talent across large organizations leaves too many gaps. Think of the pockets of talent distributed across your own company, and how their skills could be applied organization-wide if properly positioned. If head counts are threatened, make a play for creating a single central group that helps the whole company and bring the best talent into that team.

  2. Convert business security leaders into local experts/consultants. If you work within a large company, your individual business leaders may not like seeing their local staff join a larger company-wide organization. However, those who remain in the business should now be free to focus on what is unique about their business, instead of the minutiae of managing anti-virus, firewalls, patches, and other "traditional" security measures that are absolutely vanilla functions which could be outsourced overseas in a heartbeat. What's more valuable, a security leader who can run an AV console, configure a firewall, and apply a patch, or one who can advise their business CEO on the risks, regulations, and realities of operating in their individual realm? Notice I said leader and not technician. Technicians do the routine tasks I mentioned and are ripe for outsourcing; don't cling to that role unless you want to be replaced by a Perl script.

  3. Advocate standardization where it makes sense. For example, is it really necessary to have more than one "gold image" for your common desktop/laptop user? Why develop your own image when the Federal government is doing all the work for you with the Federal Desktop Core Configuration? Turn the team that creates your own image into a much smaller one that tweaks the FDCC, and redeploy the personnel where you need them.

  4. Cut through bureaucracy and authority barriers with a financial knife. This one really bugs me. How many incident responders out there lose time, effectiveness, and data because 1) you don't know who owns a victim computer; 2) finding someone who owns the computer takes time; 3) getting permission to do something about the victim requires more time? You can probably make a case for reduced help desk costs, fewer support personnel, and faster/more accurate/cheaper incident response if you gain the authority to perform remote live response and/or forensics on any platform required, minus some accepted and reasonable exclusion list. This requires 1) good inventory management; 2) forensic agent pre-deployment or administrator credentials to deploy an agent or scripts as necessary; and 3) mature processes and trained people to execute.

  5. Simplify and build visibility in. An example comes from my post Feds Plan to Reduce, Then Monitor. What's cheaper than 1) identifying all your gateways; 2) devising a plan to reduce that number; and 3) building visibility in? Step 1 takes some effort, step 2 might strain your network architects, and step 3 could require new monitoring platforms. However, when done, you're spending less money on gateways, less time scoping intrusions, and less resources on scrambling during incident response because you know all the ways in and out of your organization -- and you can see what is happening. This is a no-brainer.

  6. Move data, not people. This is the principle I mentioned in Green Security. I'm sure your travel budget is being cut. Why fly a security person around the world when, if you achieve the goals in step 4, you can move the data instead? And, if you're building visibility in, you have more data available and don't need to scramble for it.

  7. Wrap everything in metrics. This one is probably the most painful, but it's definitely necessary. If you can't justify your security spending, you're more likely to be cut in a downturn. This doesn't mean "security ROI." What it does mean is showing why your approach is better than the alternatives, with "better" usually meaning (but not always) "cheaper." It can be difficult to capture finances in our field, but I have some ideas. One is intrusion debt. If you've recently hired any outside consultants to assist with security work, their invoices provide a ton of metrics opportunities. (You have a tangible cost that you wish to avoid by taking steps X, Y, and Z in the future.) Metrics can also justify team growth, which is the next step out of the downturn. Be ready!


If you have any ideas, please post them here. I think this is an important topic. Thank you.



Tips for PSIRTs

If your company sells software, you probably need to have a Product Security Incident Response Team (PSIRT). The PSIRT should act as the single point of contact for any user of your product to report and coordinate security problems with your software product.

Examples of PSIRTs include:

I think you can tell how seriously a company takes security by whether it promotes its PSIRT, obscures its existence, or does not operate one at all. Try comparing Oracle to Cisco, for example.

If you're looking to start a PSIRT, Chad Dougherty's Recommendations to vendors for communicating product security information post on the CERT blog is a great start.



Snort Report 21 Posted

My 21st Snort Report titled Understanding Snort's Unified2 output has been posted. From the article:

Welcome to the 21st edition of the Snort Report! In July 2007 I described Snort's Unified output, first released in July 2001 with Snort 1.8.0. Unified output allows Snort to write sets of data to a sensor's hard drive. Writing to the hard drive, instead of performing database inserts, allows Snort to operate faster and minimize packet loss.

Unified2 output first appeared in Snort 2.8.0, released in September 2007.
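
If you want to poke at the on-disk format yourself, here is a minimal sketch based on my understanding of the layout: each unified2 record begins with a 32-bit type and a 32-bit length in network byte order, followed by that many bytes of record data. Check the Snort documentation before relying on it; the spool file name below is hypothetical.

import struct
from collections import Counter

def iter_unified2_records(path):
    """Walk the (type, data) records in a unified2 spool file."""
    with open(path, "rb") as spool:
        while True:
            header = spool.read(8)
            if len(header) < 8:                                # end of file or partial write
                break
            rec_type, rec_len = struct.unpack(">II", header)   # network byte order
            yield rec_type, spool.read(rec_len)

# Count records by type in a hypothetical spool file.
print(Counter(rec_type for rec_type, _ in iter_unified2_records("snort.u2.1227000000")))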


I came across this comparison of Unified and Unified2 format at SecurixLive.com but didn't get to include it in my article.

If you're worried about the Barnyard2 implementation at SecurixLive having licensing issues, the author is addressing those as we speak; he did not intend to cause any trouble. So, I am looking forward to seeing greater adoption of Unified2 formats once solutions like those in my article are tested.



Thursday, 20 November 2008

Intellectual Property: Develop or Steal

I found the article Internet thieves make big money stealing corporate info in USA Today to be very interesting.

In the past year, cybercriminals have begun to infiltrate corporate tech systems as never before. Knowing that some governments and companies will pay handsomely for industrial secrets, data thieves are harvesting as much corporate data as they can, in anticipation of rising demand...

Elite cybergangs can no longer make great money stealing and selling personal identity data. Thousands of small-time, copycat data thieves have oversaturated the market, driving prices to commodity levels. Credit card account numbers that once fetched $100 or more, for instance, can be had for $10 or less, says Gunter Ollmann, chief security strategist at IBM ISS, IBM's tech security division.

Who buys stolen business data? Brett Kingstone, founder of Super Vision International (now Nexxus Lighting), an Orlando-based industrial lighting manufacturer, knows the answer all too well. In 2000, an intruder breached Super Vision's public-facing website and probed deep enough to snatch secrets behind the company's patented fiber-optic technology.

That intelligence made its way into the hands of a Chinese entrepreneur, Samson Wu. In his book, The Real War Against America, Kingstone recounts how Wu obtained Super Vision's detailed business plans, built a new Chinese factory from scratch and began mass marketing low-priced counterfeit lighting fixtures, complete with warranties referring complaints to Super Vision.

"They had an entire clone of our manufacturing facility," says Kingstone, who won a civil judgment against Wu. "What took us $10 million and 10 years to develop, they were able to do for $1.4 million in six months..."

In the past nine months, data thieves have stepped up attacks against any corporation with weak Internet defenses. The goal: harvest wide swaths of data, with no specific buyer yet in mind, according to security firm Finjan...

"Cybercriminals are focusing on data that can be easily obtained, managed and controlled in order to get the maximum profit in a minimum amount of time," says Ben-Itzhak.

Researchers at RSA, the security division of tech systems supplier EMC, have been monitoring deals on criminal message boards. One recent solicitation came from a buyer offering $50 each for e-mail addresses for top executives at U.S. corporations...

Meanwhile, corporations make it all too easy, say tech security experts and law enforcement officials.
(emphasis added)

We know amateurs study cryptography; professionals study economics, and this explains why. $1.4 million over six months vs $10 million over 10 years makes theft the more attractive proposition for those outside the law.

I'm often asked how we should think about "winning" our current cyber conflicts. I like to consider two metrics.

  1. Information assurance is winning, in a broad sense, when the cost of stealing intellectual property via any means is more expensive than developing that intellectual property independently.

  2. Information assurance is winning, in a narrow sense, when the cost of stealing intellectual property via digital means is more expensive than stealing that data via nontechnical means (such as human agents placed inside the organization).


Number 1 is preferred when you consider your organization as a whole. Number 2 is preferred if you only care about making IP theft the problem of your physical security organization! Obviously I prefer number 1 if possible, but number 2 is more achievable in the medium to long term.

This echoes the comment I made in Ten Themes from Recent Conferences:

We can not stop intruders, only raise their costs. Enterprises stay dirty because we can not stop intruders, but we can make their lives more difficult. I've heard of some organizations trying to raise the $ per MB that the adversary must spend in order to exfiltrate/degrade/deny information.‏



Tuesday, 11 November 2008

Laid-off Sys Admin Story Makes My Point

I read this great story by Sharon Gaudin titled Laid-off sysadmin arrested for threatening company's servers:

A systems administrator was arrested in New Jersey today for allegedly trying to extort money and even good job references out of a New York-based mutual fund company that had just laid him off...

Viktor Savtyrev, of Old Bridge, N.J., was arrested at his home Monday morning. He faces two charges under the federal cyberextortion statute...

Late in the morning of Thursday, Nov. 6, Savtyrev allegedly used a Gmail account to e-mail the company's general counsel and three other employees, saying he was "not satisfied with the terms" of his severance, according to FBI Special Agent Gerald Cotellesse in the complaint. Savtyrev allegedly threatened to cause extensive damage to the company's computer servers if it would not increase his severance pay, extend his medical coverage and provide "excellent" job references.

The sysadmin also threatened to alert the media after attacking the server.


Now, I know many of you are saying "See! The insider threat is so terrible!" I look at this story and think the opposite. This story exemplifies the point I made in Of Course Insiders Cause Fewer Security Incidents. If the potential intruder in this case had been an adversary in East Slobovia, the victim company would have no recourse. The bad guy could take whatever action he wanted because no one could touch him.

Because the potential intruder was an insider, the victim company knew who he was, where he lived, and could enlist law enforcement help to arrest him.

Like I also said in the previous post:

However, as I've said elsewhere, insiders will always be better informed and positioned to cause the most damage to their victims. They know where to hurt, how to hurt, and may already have all the access they need to hurt their victim.

This is another strike against those who believe in vulnerability-centric security. No company has air-tight defenses, so even if you do a good job revoking access from ex-employees they still can strike back. At least when they are former insiders you have a chance of putting them out of commission by striking at the threat, not patching more holes.



Marcus Ranum on Network Security

I liked this interview with Marcus Ranum titled Marcus Ranum on Network Security:

Q: In your opinion, what is the current weakest link in the network security chain that will need to be dealt with next year and beyond?

MJR: There are two huge problems: Software development and network awareness. The software development aspect is pretty straightforward. Very few people know how to write good code and even fewer know how to write secure code. Network awareness is more subtle. All through the 1990s until today, organizations were building massive networks and many of them have no idea whatsoever what's actually out there, which systems are crucial, which systems hold sensitive data, etc.

The 1990s were this period of irrational exuberance from a security standpoint - I think we are going to be paying the price for that, for a long time indeed. Not knowing what's on your network is going to continue to be the biggest problem for most security practitioners...

The real best practices have been the same since the 1970s: know where your data is, who has access to what, read your logs, guard your perimeter, minimize complexity, reduce access to "need only" and segment your networks. Those are the practices and techniques that result in real security.
(emphasis added)

One way to begin this process is to hire an Enterprise Visibility Architect with the authority to figure out what is happening inside the organization.



BGPMon on BGP Table Leak by Companhia de Telecomunicacoes do Brasil Central

Last month I posted BGPMon.net Watches BGP Announcements for Free. I said:

I created an account at BGPMon.net and decided to watch for route advertisements for Autonomous System (AS) 80, which corresponds to the 3.0.0.0/8 network my company operates. The idea is that if anyone decides to advertise more specific routes for portions of that net block, and the data provided to BGPMon.net by the Réseaux IP Européens (RIPE) Routing Information Service (RIS) notices the advertisements, I will get an email.

Well, that started happening last night:


You Receive this email because you are subscribed to BGPmon.net.
For more details about these updates please visit:
http://bgpmon.net/showupdates.php

====================
Possible Prefix Hijack (Code: 11)
1 number of peer(s) detected this updates for your prefix 3.0.0.0/8:
Update details: 2008-11-11 01:55 (UTC)
3.0.0.0/8
Announced by: AS16735 (Companhia de Telecomunicacoes do Brasil Central)
Transit AS: 27664 (CTBC Multimídia)
ASpath: 27664 16735

I got four more updates, the last at 2008-11-11 02:59 (UTC).

These alerts indicated that AS16735 (Companhia de Telecomunicacoes do Brasil Central) was advertising routes for my company's 3.0.0.0/8 netblock. That's not good.

When I saw that I initially assumed we were the only ones affected. Early today I read Prefix hijack by AS16735 on the BGPMon blog stating the following:

Between 01:55 UTC and 02:15 267947 distinct prefixes were originated from AS16735 (Companhia de Telecomunicacoes do Brasil Central), hence a full table ‘leak’. After that more updates were detected. The last hijack update originated by AS16735 was received at 03:07 UTC. So the ‘hijack’ was there for about 75 minutes As far as I can see the only RIS collector who saw this hijack was the one in Sao Paulo, Brazil (PTTMetro-SP), there it was seen by a few RIS peers.

This means that Companhia de Telecomunicacoes do Brasil Central advertised routes for the whole Internet. It was a mistake; no one does that on purpose.
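
The check itself is simple to reproduce. Here is a minimal sketch of the logic, assuming a simplified view of an update (just a prefix and an origin AS); BGPmon.net and the RIS collectors obviously work with much richer data than this:

import ipaddress

MONITORED = {"3.0.0.0/8": 80}                   # monitored prefix -> expected origin AS

def looks_like_hijack(announced_prefix, origin_as):
    """Flag an announcement that overlaps a monitored prefix from the wrong origin AS."""
    announced = ipaddress.ip_network(announced_prefix)
    for prefix, expected_as in MONITORED.items():
        if announced.overlaps(ipaddress.ip_network(prefix)) and origin_as != expected_as:
            return True
    return False

print(looks_like_hijack("3.0.0.0/8", 16735))    # True: the event described above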

The NANOG mailing list has a thread on this event if you want to see what others reported.

A look at the RIPE AS Dashboard for AS 27664, a transit AS, shows the spike in BGP updates per minute caused by this event.



Unfortunately, I do not see one for AS 16735, the culprit here. Good work BGPMon!



Bejtlich Teaching at Black Hat Europe 2009

Black Hat was kind enough to invite me back to teach a new 2-day course at Black Hat Europe 2009 Training on 14-15 April 2009 at the Mövenpick City Centre in Amsterdam, Netherlands. This class, completely new for 2009, is called TCP/IP Weapons School 2.0. This is my only scheduled class outside the United States in 2009.

The short description says:

This hands-on, lab-centric class by Richard Bejtlich focuses on collection, detection, escalation, and response for digital intrusions.

Is your network safe from intruders? Do you know how to find out? Do you know what to do when you learn the truth? If you need answers to these questions, TCP/IP Weapons School 2.0 (TWS2) is the Black Hat course for you. This vendor-neutral, open source software-friendly, reality-driven two-day event will teach students the investigative mindset not found in classes that focus solely on tools. TWS2 is hands-on, lab-centric, and grounded in the latest strategies and tactics that work against adversaries like organized criminals, opportunistic intruders, and advanced persistent threats.


Registration is now open. Black Hat set the four price points and deadlines for registration:

  1. Early: Ends Feb 1

  2. Regular: Ends Mar 1

  3. Late: Ends Apr 1

  4. Onsite: Apr 14


Please join me in Amsterdam next year for TCP/IP Weapons School 2.0. If you've attended previous classes, even TCP/IP Weapons School, this course is all new material and you're definitely welcome back. Note, however, that this is the same class as the one I teach in DC in February 2009. Thank you.

Bejtlich Teaching at Black Hat DC 2009 Training

Black Hat was kind enough to invite me back to teach a new 2-day course at Black Hat DC 2009 Training on 16-17 February 2009 at the Hyatt Regency Crystal City in Arlington, VA. This class, completely new for 2009, is called TCP/IP Weapons School 2.0. This is my only scheduled class on the east coast of the United States in 2009.

The short description says:

This hands-on, lab-centric class by Richard Bejtlich focuses on collection, detection, escalation, and response for digital intrusions.

Is your network safe from intruders? Do you know how to find out? Do you know what to do when you learn the truth? If you need answers to these questions, TCP/IP Weapons School 2.0 (TWS2) is the Black Hat course for you. This vendor-neutral, open source software-friendly, reality-driven two-day event will teach students the investigative mindset not found in classes that focus solely on tools. TWS2 is hands-on, lab-centric, and grounded in the latest strategies and tactics that work against adversaries like organized criminals, opportunistic intruders, and advanced persistent threats.


Registration is now open. Black Hat set the four price points and deadlines for registration:

  1. Early: Ends Jan 1

  2. Regular: Ends Feb 1

  3. Late: Ends Feb 11

  4. Onsite: Feb 16


Please join me in the DC area next year for TCP/IP Weapons School 2.0. If you've attended previous classes, even TCP/IP Weapons School, this course is all new material and you're definitely welcome back. Thank you.

Monday, 10 November 2008

Securix-NSM 1.0 Released

Yesterday I read A successor is born... Securix-NSM 1.0. Securix-NSM is a Debian-based live CD that is the fastest way I've ever seen for a new user to try Sguil. All you have to do is download the 280 MB .iso, boot it, and follow the quick start documentation.

Those steps are basically:

  1. Open a terminal.

  2. Execute 'sudo nsm start'.

  3. Double-click on the Sguil client icon.

  4. Log into Sguil.



To test Sguil, I executed 'apt-get install lynx' then visited www.testmyids.com. In the screenshot you'll see the default Sguil installation generated two alerts. I was able to generate a transcript and launch Wireshark. However, SANCP session records did not appear to be inserted into the database although SANCP was running.

I suggest trying Securix-NSM if you'd like to try using Sguil but have no experience setting it up.

Sunday, 09 November 2008

2nd Issue of BSD Magazine

I recently received a copy of the 2nd issue of BSD Magazine. This edition has a heavy OpenBSD focus, which is nice considering OpenBSD 4.4 was released last week. I have it on good authority that the next issue of the magazine will focus on NetBSD and be available in December. When I can say more I will post details on my blog.

Friday, 07 November 2008

Fast Money's Transparency and Digital Security

This evening I was very happy to attend a live taping of CNBC's Fast Money program in Washington, DC. Several years ago my wife and I saw a live taping of CNN's old Crossfire program, but this event took place in a huge hall with over 2,000 audience members.

Before the broadcast Fast Money host Dylan Ratigan addressed us and shared his thoughts on current economic conditions. He said that a lack of transparency was a fundamental problem on Wall Street and in Washington, DC. He stated he is on a crusade to obtain from those in power the information investors and citizens need to make sound decisions.

This point resonated with me. Looking at the financial wreckage around us, I remembered my post Bankers: Welcome to Our World. I wondered if I might have to write a post where bankers tell digital security people "welcome to our world." In other words, what bubbles of false security have we encouraged thanks to low security spending, lack of management interest, and lack of visibility? (The financial equivalents might be low interest rates, poor oversight, and off-balance-sheet activities.)

In my post General Chilton on the Cyber Fight I used this language:

Imagine that you defer that cost by not detecting and responding to the intrusion. Perhaps the intruder is stealthy. Perhaps you detect the attack but cannot respond for a variety of reasons. The longer the intrusion remains active, I would argue, the more debt one builds.

For my keynote at the 2008 SANS Forensics and IR Summit I coined the term intrusion debt to describe the costs I outlined in my Chilton post. (Slides from my SANS talk are here. [.pdf])
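
One crude way to put a number on intrusion debt is to treat every day an intrusion remains active and unremediated as accruing a carrying cost. A minimal sketch, with invented figures:

from datetime import date

DAILY_COST = 1000      # assumed cost per intrusion-day; substitute your own estimate

def intrusion_debt(detection_dates, today):
    """Sum the estimated debt accrued by intrusions still awaiting remediation."""
    return sum((today - detected).days * DAILY_COST for detected in detection_dates)

# Two active intrusions, detected 1 Oct and 1 Nov, measured on 7 Nov 2008.
print(intrusion_debt([date(2008, 10, 1), date(2008, 11, 1)], date(2008, 11, 7)))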

When does that intrusion debt become too great? How many CEOs/CIOs/CTOs/CISOs/CSOs will look at the digital wreckage of an incident and wonder "why didn't we see this happening?"

Current and Future White House v China

To continue my "v China" series of blog posts, I note the following:

Chinese hack into White House network:

Chinese hackers have penetrated the White House computer network on multiple occasions, and obtained e-mails between government officials, a senior US official told the Financial Times.

On each occasion, the cyber attackers accessed the White House computer system for brief periods, allowing them enough time to steal information before US computer experts patched the system.

US government cyber intelligence experts suspect the attacks were sponsored by the Chinese government because of their targeted nature. But they concede that it is extremely difficult to trace the exact source of an attack beyond a server in a particular country.

”We are getting very targeted Chinese attacks so it stretches credulity that these are not directed by government-related organisations,” said the official.

The official said the Chinese cyber attacks had the hallmarks of the “grain of sands” approach taken by Chinese intelligence, which involves obtaining and pouring through lots of - often low-level - information to find a few nuggets.

Some US defence companies have privately warned about attacks on their systems, which they believe are attempts to learn about future weapons systems.

The National Cyber Investigative Joint Task Force [apparently an FBI-led group], a new unit established in 2007 to tackle cyber security, detected the attacks on the White House. But the official stressed that the hackers had only accessed the unclassified computer network, not the more secure classified network.


So that's the current administration. On to the next:

Obama, McCain campaigns' computers hacked for policy data:

Computers at the headquarters of the Barack Obama and John McCain campaigns were hacked during the campaign by a foreign entity looking for future policy information, a source with knowledge of the incidents confirms to CNN.

Sources say McCain campaign computers were hacked around the same time as those of Obama's campaign.

Workers at Barack Obama's headquarters first thought there was a computer virus.

The source said the computers were hacked mid-summer by either a foreign government or organization.

Another source, a law enforcement official familiar with the investigation, says federal investigators approached both campaigns with information the U.S. government had about the hacking, and the campaigns then hired private companies to mitigate the problem.

U.S. authorities, according to one of the sources, believe they know who the foreign entity responsible for the hacking is, but refused to identify it in any way, including what country.

The source, confirming the attacks that were first reported by Newsweek, said the sophisticated intrusions appeared aimed at gaining information about the evolution of policy positions in order to gain leverage in future dealings with whomever was elected.

The FBI is investigating, one of the sources confirmed to CNN.


This is the Golden Age for incident detection and response. Where are all the prevention advocates? How about the insider threat fans? Sorry, it's all about detecting and responding to external threats.

Thursday, 06 November 2008

Defining Security Event Correlation

This is my final post discussing security event correlation (SEC) for now. (When I say SEC I do not mean the Simple Event Correlator [SEC] tool.)

Previously I looked at some history regarding SEC, showing that the ways people thought about SEC really lacked rigor. Before describing my definition of SEC, I'd like to state what I think SEC is not. So, in my opinion -- you may disagree -- SEC is not:

  1. Collection (of data sources): Simply putting all of your log sources in a central location is not correlation.

  2. Normalization (of data sources): Converting your log sources into a common format, while perhaps necessary for correlation (according to some), is not correlation.

  3. Prioritization (of events): Deciding what events you most care about is not correlation.

  4. Suppression (via thresholding): Deciding not to see certain events is not correlation.

  5. Accumulation (via simple incrementing counters): Some people consider a report that one has 100 messages of the same type to be correlation. If that is really correlation, I think your standards are too low. Counting is not correlation.

  6. Centralization (of policies): Applying a single policy to multiple messages, while useful, is not correlation itself.

  7. Summarization (via reports): Generating a report -- again helpful -- by itself is not correlation. It's counting and sorting.

  8. Administration (of software): Configuring systems is definitely not correlation.

  9. Delegation (of tasks): Telling someone to take action based on the above data is not correlation.


So what is correlation? In my last post I cited Greg Shipley, who said if the engine sees A and also sees B or C, then it will go do X. That seems closer to what I consider security event correlation. SEC has a content component (what happened) and a temporal component (when did it happen). Using those two elements you can accomplish what Greg says.

I'd like to offer the following definition, while being open to other ideas:

Security event correlation is the process of applying criteria to data inputs, generally of a conditional ("if-then") nature, in order to generate actionable data outputs.
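
To make the definition concrete, here is a minimal sketch of a single rule in that spirit: a content test (event A followed by B or C from the same host) plus a temporal test (within 60 seconds), producing an actionable output. The event fields and the 60-second window are invented for illustration:

from datetime import datetime, timedelta

WINDOW = timedelta(seconds=60)

def correlate(events):
    """Yield action X when A is followed by B or C on the same host within WINDOW."""
    last_a = {}                                   # host -> time A was last seen
    for event in sorted(events, key=lambda e: e["time"]):
        if event["type"] == "A":
            last_a[event["host"]] = event["time"]
        elif event["type"] in ("B", "C"):
            seen = last_a.get(event["host"])
            if seen is not None and event["time"] - seen <= WINDOW:
                yield {"action": "X", "host": event["host"], "time": event["time"]}

events = [
    {"host": "10.1.1.5", "type": "A", "time": datetime(2008, 11, 6, 9, 0, 0)},
    {"host": "10.1.1.5", "type": "C", "time": datetime(2008, 11, 6, 9, 0, 30)},
]
print(list(correlate(events)))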

So what about the nine elements listed above? They all seem important. Sure, but they are not correlation. They are functions of a Security Information and Event Management (SIEM) program, with correlation as one component. So, add correlation as item 10, and I think those 10 elements encompass SIEM well. This point is crucial:

SIEM is an operation, not a tool.

You can buy a SIEM tool but you can't buy a SIEM operation. You have to build a SIEM operation, and you may (or may not) use a SIEM to assist you.

Wait, didn't Raffy say SIM is dead? I'll try to respond to that soon. For now let me say that the guiding principle for my own operation is the following:

Not just more data; the right data -- fast, flexible, and functional.

Tuesday, 04 November 2008

Response to Marcus Ranum HITB Cyberwar Talk

Many readers have been asking me to comment on Marcus Ranum's keynote titled Cyberwar is Bullshit at Hack In The Box Security Conference 2008 - Malaysia. (What a great conference; I think we are seeing the Asia-Pacific area really grow its digital security community. You can access the conference materials here. I'd like to point out my friend CS Lee spoke about NSM at the event.)

The article Don’t waste funds preparing for cyberwars summarized Marcus' talk as follows:

The billions of dollars spent on researching cyberwarfare can be put to better use because cyberwar is never going to be as effective as conventional war, said an IT ­security expert.

Marcus Ranum, chief security officer of Tenable Network Security said cyberattacks aren’t a good force multiplier in an actual war.

Many people, he said, talk about cyberspace as if it can be a new form of battlefield but this is not possible because you can’t occupy and hold cyberspace as you would a piece of enemy territory.

Ranum was speaking at HiTBSecConf 2008 here this week.

He said trying to overcome another country via cyberspace is impossible unless you also have a huge army that can defeat its forces in conventional warfare.

A small country, even with an army of hackers on its side, is never going to be able to defeat a big country with an extensive land, air and sea military force by attacking through the Internet.


If you search my blog for the term cyberwar you'll find plenty of posts, but let me try to summarize my thoughts.

In September 2007 I wrote China Cyberwar, or Not?:

DoD Joint Publication 3-13, Information Operations, differentiates between two sorts of offensive information operations.

  1. Computer Network Exploitation. Enabling operations and intelligence collection capabilities conducted through the use of computer networks to gather data from target or adversary automated information systems or networks. Also called CNE.

  2. Computer Network Attack. Actions taken through the use of computer networks to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves. Also called CNA.


You can think of CNE as spycraft, and CNA as warfare. In the physical world, the former is always occurring; the latter is hopefully much rarer. I would place all of the publicly reported activity from the last few months in the CNE category.


I'd like to add a third category not mentioned in the information operations doctrine: cybercrime. In Marcus' talk, he separates adversary action into cybercrime, cyberterror, cyberespionage, and cyberwar. I don't explicitly break out terrorism because I consider it a criminal issue, and not a military issue.

Marcus's cyberespionage and cyberwar categories relate to my points about Computer Network Exploitation and Computer Network Attack, respectively.

Marcus' slides say "packets don't hold ground." The question is whether that matters. Aircraft don't hold ground either. However, no army wants to operate without air supremacy or at least air superiority overhead. (Ask the Georgians if you doubt this.) Would you rather be able to conduct CNE, or not? If yes, why?

Combatant commanders approach the problem this way. If you're Stormin' Norman Schwarzkopf in 1991, and you want to remove the Iraqi army from Kuwait, you'll want to blind the Iraqi radar grid. If you can do so electronically instead of risking the life of a pilot or running down your missile stocks, would you want to? Most commanders I knew wanted to be 100% sure that their decision would work. Not all warfare is about holding ground.

I think the major problem with the cyberwar discussion is the idea that a real conflict could be a purely cyber conflict. This is wrong. I don't think the early air pioneers expected their role to involve purely aerial warfare. Each method of combat has been integrated into the overall ugly fabric of war. So, I don't think "cyberwar is bullshit," but I'm guessing neither does Marcus if you discuss it in the proper context.

Monday, 03 November 2008

Response to "Air Force Aims to 'Rewrite Laws of Cyberspace'"

Given my recent posts like Whither Air Force Cyber? I felt the need to comment on Noah Shachtman's story Air Force Aims to 'Rewrite Laws of Cyberspace':

The Air Force is fed up with a seemingly endless barrage of attacks on its computer networks from stealthy adversaries whose motives and even locations are unclear. So now the service is looking to restore its advantage on the virtual battlefield by doing nothing less than rewriting the "laws of cyberspace."

Four years ago I wrote Thoughts on the United States Air Force Computing Plans:

I was asked my thoughts on the US Air Force's new computing deal with Microsoft. In short, Microsoft will provide core server software, maintenance and upgrade support, and Dell will supply more than 525,000 Microsoft desktop Windows and Office software licenses to the Air Force...

So instead of taking a serious look at the root cause of its patching and exploitation costs (both financial and in mission impact), the Air Force sought a better deal from the vendor producing flawed software. This is sad. TechWorld's Ellen Messmer wrote "The US Air Force has had enough of Microsoft's security problems. But rather than switch to an alternative, it has struck a deal with CEO Steve Ballmer for a specially configured version of Windows..."

Had the Air Force decided to break away from Microsoft, the other services would have definitely taken notice. In fact, corporate America would have taken notice.


I followed a few months later with As Always, .gov and .mil Fight the Last War:

The US Office of Management and Budget's Karen Evans reportedly likes the US Air Force's plans to "deliver standardized and securely configured Microsoft software throughout the service..."

This approach is fighting the last war, since it relies on running hundreds of thousands of personal computers with general purpose operating systems. All of these systems will still need applications installed, and those apps and the OS will have to be patched, updated, etc.


Here we are staring at 2009 and the Air Force is still being 0wned. So much for the bold Microsoft strategy! Apparently the Air Force has taken a cue from my blog post Change the Plane by seeking to "rewrite laws of cyberspace."

Unfortunately, the Air Force and anyone else who seeks a vulnerability-centric security program needs to realize that the only way to win purely by playing defense is to be different. Being different means you force the adversary to expend time and resources on attacking you. Right now it's cheap for an adversary to develop a single Word 0-day and sell it to someone attacking .mil, or .edu, or .com, or anyone else running Office. However, if you really want to attack the Air Force, and they use AFOffice on AFOS (maybe on AF CPU), you have to develop new ways to steal their data. That's probably not cheap.

Unfortunately for the Air Force and others adopting a defense-by-diversity strategy, being different costs money. The whole reason the defense and intel communities adopted COTS (Commercial Off The Shelf) platforms was to save money. The Air Force and anyone else who pursues a vulnerability-centric security posture should weigh the total costs of COTS vs GOTS (Government Off The Shelf). I bet when you factor in security costs, COTS doesn't look so attractive anymore.

The Best Cyber-Defense...

I've previously posted Taking the Fight to the Enemy and Taking the Fight to the Enemy, Revisited. I agreed with sentiments like the following, quoted in my posts:

The best defense against cyberattacks on U.S. military, civil and commercial networks is to go on the offensive, said Marine Gen. James Cartwright, commander of the Strategic Command (Stratcom), March 21 in testimony to the House Armed Services Committee.

“History teaches us that a purely defensive posture poses significant risks,” Cartwright told the committee. He added that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests...”


I found this idea echoed in the book Enemies: How America's Foes Steal Our Vital Secrets--and How We Let It Happen by Bill Gertz, which I mentioned in Counterintelligence: Worse Than Security?. The author argues that the best way to protect a nation's intelligence from enemies is to attack the adversary's intelligence services. In other words, conduct aggressive counterintelligence to find out what the enemy knows about you. When you know what the enemy knows about you, you fight a more informed battle. You may even be able to alter his perception of you, and avoid a fight altogether.

I think Joe Stewart's latest post, Tracking Gimmiv, illustrates this point very well. Joe isn't a .mil or .gov operative, so he can't bomb anyone or put them in jail. He can conduct research operations, however, to learn the truth about the enemy's capabilities. Joe writes:

On October 23, 2008, Microsoft released an out-of-cycle emergency patch for a flaw in the Windows RPC code. The reason for this unusual occurrence was the discovery of a “zero-day” exploit being used in the wild by a worm (or trojan, depending on how you look at it). The announcement of a new remote exploit for unpatched Windows systems always raises tension levels among network administrators. The fact that this one was already being used by a worm evoked flashbacks of Blaster and Sasser and other previous threats that severely impacted the networked world.

But, unlike these past worms, Gimmiv turned out to have infected scarcely any networks at all...

Because of some mistakes made by the author(s) of Gimmiv, third parties were able to download the logfiles of the Gimmiv control server. Although most of the data in the logs is AES-encrypted, we were able to find the key hardcoded in the Gimmiv binary and decrypt the data.

Although it has been reported that Gimmiv is a credential-stealing trojan, this functionality is actually not used - the gathered data is never sent. What is sent is simply basic system information, such as the Windows version, IP and MAC address, Windows install date/time and the default system locale. Using this data we were able to track exactly how many computers had been infected prior to October 23rd (after this time infection counts are somewhat skewed due to malware researchers all over the world investigating Gimmiv). As it turns out, only around 200 computers were infected since the time Gimmiv was actively deployed on September 29, 2008...

Additionally, a zip file left behind on one of the control servers contained Korean characters in the compressed folder name. For these two reasons, we believe Gimmiv’s author is probably from South Korea.
(emphasis added)


Joe took the fight to the enemy. This is what most malware researchers do; they infiltrate the adversary's systems to figure out what is happening. This isn't a task for novices, but it does yield excellent results.
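
To make the log-decryption step concrete, here is a minimal sketch in Python of what that kind of analysis can look like. Everything specific in it (the key, the CBC mode with a prepended IV, the padding, and the file name) is a hypothetical stand-in, since Joe's post does not publish Gimmiv's actual scheme; only the general idea of recovering a hardcoded key from the binary and then decrypting the captured server logs comes from his write-up.

# Hypothetical sketch: decrypt a captured control-server log using a key
# recovered from the malware binary. Requires PyCrypto or PyCryptodome.
# The key, cipher mode, record layout, and file name below are placeholders,
# not Gimmiv's real scheme.
from Crypto.Cipher import AES

RECOVERED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # placeholder key

def decrypt_log(path):
    with open(path, "rb") as f:
        blob = f.read()
    iv, ciphertext = blob[:16], blob[16:]             # assume a prepended 16-byte IV
    cipher = AES.new(RECOVERED_KEY, AES.MODE_CBC, iv)
    plaintext = cipher.decrypt(ciphertext)
    return plaintext.rstrip(b"\x00")                  # naive zero-padding removal

if __name__ == "__main__":
    print(decrypt_log("gimmiv_server_log.bin").decode("latin-1"))

In a real case the researcher would first confirm the mode, key length, and record format by examining the binary; the point is simply that once the hardcoded key is in hand, the "encrypted" logs offer the adversary no further protection.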

Joe's work isn't strictly counterintelligence, since he is probably not opposing a foreign intelligence service. Speaking of counterintelligence, I noticed this August article New Unit of DIA Will Take the Offensive On Counterintelligence about the Defense Counterintelligence and Human Intelligence Center:

The Defense Intelligence Agency's newly created Defense Counterintelligence and Human Intelligence Center is going to have an office authorized for the first time to carry out "strategic offensive counterintelligence operations," according to Mike Pick, who will direct the program...

In strategic offensive counterintelligence operations, a foreign intelligence officer is the target, and the main goals most often are "to gather information, to make something happen . . . to thwart what the opposition is trying to do to us and to learn more about what they're trying to get from us," [Toby] Sullivan [director of counterintelligence for James R. Clapper Jr., the Undersecretary of Defense for Intelligence] said.
(emphasis added)

I found that the transcript of the news conference contained this section mentioning cyber:

Q: Could you talk about the threats that you guys are sort of arrayed against? I’m thinking China has got to be high on your list. They seem to be in the news a lot for particularly defense technology, espionage. And I’m wondering where you fit into the whole cyber initiative that seems to be – so could you just talk about those and other things that you’re particularly focused on?

MR. SULLIVAN: The cyber initiative – there are other parts of the department that are responsible for protecting the IT systems of the department. The counterintelligence role in that – and we do have a role – is to provide some analysis and then, quite frankly, from an offensive capability, it provides us another venue to perhaps engage the enemy. But we don’t have a role in protecting the systems, if you will. There are other folks in the department that do that. As far as the threats, we had the Cold War threats and we have the today threats. There hadn’t been a whole lot of change over the last 20 or 30 years.


It will be interesting to (not) see how this new organization develops.

Snort Report 20 Posted

My 20th Snort Report titled Using Snort 2.8.3 to inspect HTTP traffic has been posted. From the article:

Solution provider takeaway: Solution providers will learn new features in Snort 2.8.3 to improve the granularity of inspecting HTTP traffic.

Welcome to the 20th edition of the Snort Report! In July, we described new features in Snort 2.8.2 and how to identify them when compared to Snort 2.8.0 and intervening releases. Since then, Snort 2.8.2.1, 2.8.2.2 and 2.8.3 have arrived. In this issue of the Snort Report, we'll use the previously explained techniques to learn what's new in Snort 2.8.3, and then try those techniques ourselves.
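
For readers who want a taste before reading the article, the granularity in question comes from HTTP-aware rule options. The rule below is my own illustrative sketch, not one from the article, and it assumes the http_header content modifier described in the Snort 2.8.3 release notes:

# Hypothetical local rule: restrict a content match to the HTTP header buffer
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"SKETCH suspicious User-Agent in HTTP header"; \
  flow:to_server,established; content:"User-Agent|3A| EvilClient"; http_header; nocase; \
  classtype:trojan-activity; sid:1000001; rev:1;)

Constraining the match to the header buffer, rather than searching the whole packet payload, is the sort of precision the article walks through.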