Showing posts with label law.

Thursday, 09 August 2012

DOJ National Security Division Pursuing Cyber Espionage

I just read Justice Department trains prosecutors to combat cyber espionage by Sari Horowitz, writing for the Washington Post. The article makes several interesting points:

Confronting a growing threat to national security, the Justice Department has begun training hundreds of prosecutors to combat and prosecute cyber espionage and related crimes, according to senior department officials.

The new training is part of a major overhaul following an internal review that pinpointed gaps in the department’s ability to identify and respond to potential terrorist attacks over the Internet and to the rapidly growing crime of cyber espionage, the officials said, describing it for the first time.

In recent weeks, Justice has begun training more than 300 lawyers in Washington and nearly 100 more across the country in the legal and technical skills needed to confront the increase in cyber threats to national security...

Under the reorganization, teams of specialized lawyers within NSD in Washington will work with other agencies, the military and companies facing cyber intrusions. They will develop protocols for the intelligence community and federal agents in how to deal with private companies that are victims of cyber attacks. The issues revolve around how to build possible prosecutions within guidelines covering information sharing, privacy and civil liberties.

At least one prosecutor in each of the 94 U.S. attorney’s offices around the country has been designated and will be trained to gather evidence and prosecute cyber espionage and similar Internet-related cases.

This is very interesting if the focus is truly on cyber espionage cases. DOJ prosecutes physical espionage cases routinely (albeit with difficulty, due to the nature of the laws). Cyber espionage cases are almost never pursued. Working with private companies will be key to this problem, and that aspect is mentioned specifically in the article.

Let's see what happens!

Wednesday, 25 November 2009

Tort Law on Negligence

If any lawyers want to contribute to this, please do. In my post Shodan: Another Step Towards Intrusion as a Service, some comments claim "negligence" as a reason why intruders aren't really to blame. I thought I would share this case from Tort Law, page 63:

In Stansbie v Troman [1948] 2 All ER 48 the claimant, a householder, employed the defendant, a painter. The claimant had to be absent from his house for a while and he left the defendant working there alone. Later, the defendant went out for two hours leaving the front door unlocked. He had been warned by the claimant to lock the door whenever he left the house.

While the house was empty someone entered it by the unlocked front door and stole some of the claimant's possessions. The defendant was held liable for the claimant's loss for, although the criminal action of a third party was involved, the possibility of theft from an unlocked house was one which should have occurred to the defendant.


So, the painter was liable. However, that doesn't let the thief off the hook. If the police find the thief, they will still arrest, prosecute, and incarcerate him. The painter won't serve part of the thief's jail time, even though the painter was held liable in this case. So, even in the best case scenario for those claiming "negligence" for vulnerable systems, it doesn't diminish the intruder's role in the crime.

Wednesday, 10 October 2007

Be the Caveman Lawyer

A few weeks ago I recommended that security people at least Be the Caveman and perform basic adversary simulation / red teaming. Now I read Australia's top enterprises hit by laymen hackers in less than 24 hours:

A penetration test of 200 of Australia's largest enterprises has found severe network security flaws in 79 percent of those surveyed.

The tests, undertaken by University of Technology Sydney (UTS), saw 25 non-IT students breach security infrastructure and gain root or administration level access within the networks of Australia's largest companies, using hacking tools freely available on the Internet.

The students - predominately law practitioners - were given 24 hours to breach security infrastructure on each site and were able to access customer financial details, including confidential insurance information, on multiple occasions.

High-level business executives from the companies surveyed, rather than IT staff, were informed of the tests so the "day-to-day network security" of businesses could be tested.
(emphasis added)

Again, my advice is simple, but now it is modified. Be the Caveman Lawyer.

One other point from the article:

Most of the 21 percent of companies who passed the penetration tests owed their success to freeware Intrusion Detection Systems (IDSs), according to Ghosh.

Snort was mentioned earlier in the article. That means you can be a Cheap Caveman Lawyer and prepare for common threats.
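To give a flavor of what "preparing for common threats" with a freeware IDS looks like, here is a minimal Snort-style rule. This is purely illustrative; the SID and message are placeholders I chose, not rules from the article or any distributed ruleset.

```
alert icmp $EXTERNAL_NET any -> $HOME_NET any (msg:"ICMP echo request - possible ping sweep"; itype:8; sid:1000001; rev:1;)
```

A rule this simple fires on every inbound ping, so in practice you would tune or threshold it, but it shows the basic header (action, protocol, source, direction, destination) and option (`msg`, detection criteria, `sid`/`rev`) structure that free tools like Snort use.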

Tuesday, 19 June 2007

More on Enterprise Data Centralization

I'd like to respond to a few comments to my post Enterprise Data Centralization. The first paragraph includes the following:

However, I haven't written about a natural complement to thin client computing -- enterprise data centralization. In this world, the thin client is merely a window to a centralized data store (sufficiently implemented according to business continuity processes and methods like redundancy, etc.).

The parenthetical part is my answer to those who think my "centralization" plan means building the Mother of All Storage Servers/Networks. Please. Do you think I would really advocate that? That parenthetical is my shorthand for saying I do NOT mean to build the Mother of All Storage Servers/Networks.

Instead, I envision something similar to the way Google operates. One of you used Google as an example of data decentralization. Sure, the data is decentralized at the level of bits on media, but it's exceptionally centralized where it matters -- the user interface. I can access all of my Google-related content through one portal. If my data needed to be explored for ediscovery purposes, all you need is my Google login. Easy. That's the kind of centralization I'm talking about.

That explanation should also calm those who think I'm building the Mother of All Targets; i.e., nuke the primary and secondary data centers and the whole company is dead. Again, you're thinking at the level of bits and media. I'm thinking in terms of a single interface to all company data.

Now you might be thinking that what I'm advocating isn't all that special. Consider this: do you have a single place to go for all of your company data? If you do, that is awesome. I doubt that it's the case for most of us, however. Unfortunately, we have to move in that direction if we wish to meet legal business requirements.

Christopher Hoff used the term "agile" several times in his good blog post. I think "agile" is going to be thrown out the window when corporate management is staring at $50,000 per day fines for not being able to produce relevant documents during ediscovery. When a company loses a multi-million dollar lawsuit because the judge issued an adverse inference jury instruction, I guarantee data will be centralized from then forward.

The May 2007 ISSA Journal features a great article titled E-discovery: Implications of FRCP Changes on IT Risk Management by Bradley J. Schaufenbuel. It includes this excerpt:

Adverse inference jury instruction: If electronic evidence is not produced in a timely manner, a judge may instruct the jury to assume that the missing evidence would have been adverse to the party that failed to produce it. This will greatly diminish this party’s chances of legal success.

Two highly visible examples include Zubulake v. UBS Warburg and Coleman v. Morgan Stanley. The defendant financial institutions in both lawsuits lost their cases due to their failure to adequately produce e-mail evidence, and the resulting assumption that evidence was willfully destroyed or withheld. Laura Zubulake, a former UBS employee, was awarded $29 million in 2005 in her sexual discrimination lawsuit.

And billionaire Ronald Perelman was awarded $1.45 billion in 2005 based on his claim that Morgan Stanley defrauded him in the 1998 sale of his company, camping goods manufacturer Coleman.


Email provides a good example of a place to start centralizing data. Look at the trouble the White House has created in the story House Report Shows White House Officials Sent Thousands of Official Emails Using Outside Accounts.

It's fine to be advocating Google Gears and all these other Web 2.0 applications and systems. There's one force in the universe that can slap all that down, and that's corporate lawyers. If you disagree, who do you think has a greater influence on the CEO: the CTO or the corporate lawyer? When the lawyer is backed by stories of lost cases, fines, and maybe jail time, what hope does a CTO with plans for "agility" have?

Incidentally, I wouldn't be promoting centralization if I thought it was impossible. Centralization was a word in the first sentence the GE CTO said to me during our first meeting.

Thursday, 19 April 2007

CALEA Mania

CALEA is the Communications Assistance for Law Enforcement Act. I wrote about CALEA three years ago in Excellent Coverage of Wiretapping:

CALEA requires telecommunications carriers to allow law enforcement "to intercept, to the exclusion of any other communications, all wire and electronic communications carried by the carrier" and "to access call-identifying information," among other powers.

A lot has happened since then. Basically, all facilities-based broadband access providers and interconnected VoIP service providers must be CALEA-compliant by 14 May 2007. This means a lot of companies, of all sizes, are scrambling to deploy processes and tools to collect information in accordance with the law, as well as filing the right reports with the FCC.

If you're affected by CALEA I don't think you'll learn much from this post. However, those who do not work for ISPs might like to know a little bit about what is happening. (Note: I am not personally affected, so this post is based on some research I did this morning.) This post CALEA Mediation provides a lot of details and links, and the Wikipedia entry is good (as long as no one makes crazy changes). WISPA's mailing lists have carried several extended threads on CALEA compliance for wireless ISPs. The definitive blog on CALEA appears to be Demystifying Lawful Intercept and CALEA, by Scott Coleman, Director of Marketing for Lawful Intercept at SS8 Networks.

What started me looking at CALEA again was the story Solera Networks' CALEA Compliance Device, which talked about this Solera Networks appliance. The article mentioned OpenCALEA, which was new to me.

I checked out OpenCALEA via SVN from its OpenCALEA Google code site. Jesse Norell was helpful in #calea on irc.freenode.net. I installed the code on two FreeBSD 6.x boxes, cel433 (the "sensor") and poweredge (the box a Fed might use to collect data).

First I started a collector on the "Fed" box.

poweredge:/usr/local/opencalea_rev38/bin# ./lea_collector -t /tmp/cmii.txt
-u richard -f /tmp/cmc.pcap

Next I started a "tap" on the sensor to watch port 6667 traffic.

cel433:/usr/local/opencalea_rev38/bin# ./tap -x x -y y -z z -f "port 6667"
-i dc0 -d 10.1.13.2 -c

As I typed traffic in an IRC channel on a connection watched by the tap...

13:25 < helevius> This is another CALEA test

...the tap sent traffic to the Fed box.

13:26:28.795644 IP cel433.taosecurity.com.62576 >
poweredge.taosecurity.com.6666: UDP, length 265
0x0000: 4500 0125 80ca 0000 4011 cdf8 0a01 0a02 E..%....@.......
0x0010: 0a01 0d02 f470 1a0a 0111 44ce 7800 0000 .....p....D.x...
0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0080: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0090: 0000 0000 0000 0000 0000 0000 3230 3037 ............2007
0x00a0: 2d30 342d 3139 5431 373a 3236 3a32 382e -04-19T17:26:28.
0x00b0: 3430 3600 015c 22aa c200 02b3 0acd 5e08 406..\".......^.
0x00c0: 0045 0000 64c3 8f40 003f 0635 8245 8fca .E..d..@.?.5.E..
0x00d0: 1c8c d3a6 0380 331a 0b4f bb43 bfc4 6a95 ......3..O.C..j.
0x00e0: e080 187f ffe4 cc00 0001 0108 0a52 0b91 .............R..
0x00f0: ad05 c1a5 e150 5249 564d 5347 2023 736e .....PRIVMSG.#sn
0x0100: 6f72 742d 6775 6920 3a54 6869 7320 6973 ort-gui.:This.is
0x0110: 2061 6e6f 7468 6572 2043 414c 4541 2074 .another.CALEA.t
0x0120: 6573 740d 0a est..
13:26:28.795810 IP cel433.taosecurity.com.54296 >
poweredge.taosecurity.com.6667: UDP, length 423
0x0000: 4500 01c3 80cb 0000 4011 cd59 0a01 0a02 E.......@..Y....
0x0010: 0a01 0d02 d418 1a0b 01af 3d00 7900 0000 ..........=.y...
0x0020: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0030: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0040: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0050: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0060: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0070: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0080: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0090: 0000 0000 0000 0000 0000 0000 7a00 0000 ............z...
0x00a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00b0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00c0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00e0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x00f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0100: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0110: 0000 0000 0000 0000 0000 0000 3230 3037 ............2007
0x0120: 2d30 342d 3139 5431 373a 3236 3a32 382e -04-19T17:26:28.
0x0130: 3430 3678 0000 0000 0000 0000 0000 0000 406x............
0x0140: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0150: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0160: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0170: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0180: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x0190: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x01a0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0x01b0: 0000 00bf 0080 0508 1cca 8f45 03a6 d38c ...........E....
0x01c0: 8033 1a .3.

The traffic on port 6666 UDP is the content and the traffic on port 6667 UDP is a connection record of some kind.
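The delivery model above (a tap pushing UDP datagrams to a listening collector) can be sketched in a few lines of Python. This is illustrative only: it round-trips one datagram over loopback on an ephemeral port rather than binding the observed 6666/6667 ports, and the payload is a stand-in, not OpenCALEA's actual wire format.

```python
import socket

def make_collector_socket():
    """Bind a UDP socket the way a collector listens for tap output."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
    return s, s.getsockname()[1]

# The real tap sends content on one port (6666 here) and CMII records
# on another (6667); this demo uses a single ephemeral port.
collector, port = make_collector_socket()
tap = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tap.sendto(b"example content payload", ("127.0.0.1", port))
data, addr = collector.recvfrom(65535)
print(data)
```

A production collector would of course run two such listeners in a loop, writing the content stream to a pcap file and the identifying records to a text log, as lea_collector does with `-f` and `-t`.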

After shutting down the tap and collector, I checked the files the collector created.

poweredge:/usr/local/opencalea_rev38/bin# cat /tmp/cmii.txt
x, y, z, 2007-04-19T17:26:28.406, 69.143.202.28, 69.143.202.28, 32819, 6656
x, y, z, 2007-04-19T17:26:28.514, 140.211.166.3, 140.211.166.3, 6667, 32768
x, y, z, 2007-04-19T17:26:34.195, 140.211.166.3, 140.211.166.3, 6667, 32768
x, y, z, 2007-04-19T17:26:34.196, 69.143.202.28, 69.143.202.28, 32819, 6656
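The cmii.txt records above look like simple comma-separated fields. As a sketch, they could be parsed like this; note the field meanings (three operator-supplied tags matching the tap's -x/-y/-z flags, an ISO timestamp, two IPs, and two ports) are my inference from the dump, not a documented format.

```python
def parse_cmii(line):
    """Parse one collector CMII record (field mapping is an assumption
    inferred from the observed output, not a documented format)."""
    x, y, z, ts, ip_a, ip_b, port_a, port_b = (p.strip() for p in line.split(","))
    return {
        "tags": (x, y, z),              # tap's -x/-y/-z identifiers
        "time": ts,                     # ISO timestamp
        "endpoint_a": (ip_a, int(port_a)),
        "endpoint_b": (ip_b, int(port_b)),
    }

rec = parse_cmii("x, y, z, 2007-04-19T17:26:28.406, "
                 "69.143.202.28, 69.143.202.28, 32819, 6656")
print(rec["endpoint_a"])
```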

CMII is Communications Identifying Information. Here's the content, which is saved in libpcap format.

poweredge:/usr/local/opencalea_rev38/bin# tcpdump -n -r /tmp/cmc.pcap -X
reading from file /tmp/cmc.pcap, link-type EN10MB (Ethernet)
13:26:28.406000 IP 69.143.202.28.32819 > 140.211.166.3.6667:
P 1337672639:1337672687(48) ack 3295319520 win 32767

0x0000: 4500 0064 c38f 4000 3f06 3582 458f ca1c E..d..@.?.5.E...
0x0010: 8cd3 a603 8033 1a0b 4fbb 43bf c46a 95e0 .....3..O.C..j..
0x0020: 8018 7fff e4cc 0000 0101 080a 520b 91ad ............R...
0x0030: 05c1 a5e1 5052 4956 4d53 4720 2373 6e6f ....PRIVMSG.#sno
0x0040: 7274 2d67 7569 203a 5468 6973 2069 7320 rt-gui.:This.is.
0x0050: 616e 6f74 6865 7220 4341 4c45 4120 7465 another.CALEA.te
0x0060: 7374 0d0a st..


Jesse told me there's a lot of work to be done with this open source suite. The idea is to give businesses that can't afford a commercial CALEA solution the option of open source.

I plan to keep an eye on the OpenCALEA mailing list and try new versions as they are released.

Thursday, 05 April 2007

Monitoring and Investigation Lessons

Thanks to 27B Stroke 6 I learned that cybercriminal Jerome Heckenkamp (sorry Kevin, he's no "superhacker") will stay a criminal. The U.S. 9th Circuit Court of Appeals refused to overturn Heckenkamp's conviction. According to this DoJ announcement:

Mr. Heckenkamp's sentence results from his guilty pleas in January 2004 to two counts of gaining unauthorized access into a computer and recklessly causing damage, in violation of 18 U.S.C. §§ 1030(a)(5)(B). In pleading guilty, Mr. Heckenkamp admitted that he gained unauthorized access to eBay computers during February and March 1999. Using this unauthorized access, Mr. Heckenkamp admitted that he defaced an eBay Web page using the name "MagicFX," and that he installed "trojan" computer programs - or programs containing malicious code masked inside apparently harmless programs - on the eBay computers that secretly captured usernames and passwords that Mr. Heckenkamp later used to gain unauthorized access into other eBay computers.

Mr. Heckenkamp also admitted that he gained unauthorized access to Qualcomm computers in San Diego in late 1999 using a computer from his dorm room at the University of Wisconsin-Madison. Once he gained this unauthorized access, Mr. Heckenkamp admitted that he installed multiple "trojans" programs which captured usernames and passwords he later used to gain unauthorized access into more Qualcomm computers.


The new court decision involves the Qualcomm intrusion. The source of the intrusion was traced to UW-Madison, where network investigator Jeffrey Savoy discovered that Heckenkamp's machine was attacking Qualcomm. Essentially, Savoy logged into Heckenkamp's machine to validate that it was the machine in question, and then contacted the authorities to physically visit Heckenkamp's dorm room.

I found these excerpts from the ruling (.pdf) to be noteworthy:

The government does not dispute that Heckenkamp had a subjective expectation of privacy in his computer and his dormitory room, and there is no doubt that Heckenkamp’s subjective expectation as to the latter was legitimate and objectively reasonable...

We hold that he also had a legitimate, objectively reasonable expectation of privacy in his personal computer...

The salient question is whether the defendant’s objectively reasonable expectation of privacy in his computer was eliminated when he attached it to the university network. We conclude under the facts of this case that the act of attaching his computer to the network did not extinguish his legitimate, objectively reasonable privacy expectations...

However, privacy expectations may be reduced if the user is advised that information transmitted through the network is not confidential and that the systems administrators may monitor communications transmitted by the user...

In the instant case, there was no announced monitoring policy on the network. To the contrary, the university’s computer policy itself provides that “[i]n general, all computer and electronic files should be free from access by any but the authorized users of those files. Exceptions to this basic principle shall be kept to a minimum and made only where essential to . . . protect the integrity of the University and the rights and property of the state.”

When examined in their entirety, university policies do not eliminate Heckenkamp’s expectation of privacy in his computer. Rather, they establish limited instances in which university administrators may access his computer in order to protect the university’s systems. Therefore, we must reject the government’s contention that Heckenkamp had no objectively reasonable expectation of privacy in his personal computer, which was protected by a screensaver password, located in his dormitory room, and subject to no policy allowing the university actively to monitor or audit his computer usage.
(emphasis added)

Wow, so far it's looking good for Jerome. So what happened?

Although we conclude that Heckenkamp had a reasonable expectation of privacy in his personal computer, we conclude that the search of the computer was justified under the “special needs” exception to the warrant requirement. Under the special needs exception, a warrant is not required when “ ‘special needs, beyond the normal need for law enforcement, make the warrant and probable-cause requirement impracticable.’ ”

If a court determines that such conditions exist, it will “assess the constitutionality of the search by balancing the need to search against the intrusiveness of the search..."

Here, Savoy provided extensive testimony that he was acting to secure the Mail2 server, and that his actions were not motivated by a need to collect evidence for law enforcement purposes or at the request of law enforcement agents. This undisputed evidence supports Judge Jones’s conclusion that the special needs exception applied.

The integrity and security of the campus e-mail system was in jeopardy... Under these circumstances, a search warrant was not necessary because Savoy was acting purely within the scope of his role as a system administrator. Under the university’s policies, to which Heckenkamp assented when he connected his computer to the university’s network, Savoy was authorized to “rectif[y] emergency situations that threaten the integrity of campus computer or communication systems[,] provided that use of accessed files is limited solely to maintaining or safeguarding the system.”

Savoy discovered through his examination of the network logs, in which Heckenkamp had no reasonable expectation of privacy, that the computer that he had earlier blocked from the network was now operating from a different IP address, which itself was a violation of the university’s network policies.

This discovery, together with Savoy’s earlier discovery that the computer had gained root access to the university’s Mail2 server, created a situation in which Savoy needed to act immediately to protect the system.


That is fascinating. Because administrator Savoy sought to protect university resources when he logged into Heckenkamp's computer, Savoy's search was justified. Also, Heckenkamp had no expectation of privacy over network logs, which also traced Heckenkamp's computer to Qualcomm.

This may be one small step towards taking the fight to the enemy, but please be aware of the extremely limited nature of this event. I recommend reading the whole ruling (it's only 13 pages) for details.

Update: In Jennifer Granick's story she notes that Savoy logged into Heckenkamp's computer as user "temp" with password "temp," based on credentials found in a file on his mail server.

Friday, 30 March 2007

Full Content Monitoring as a Wiretap

I received the following question today:

When installing Sguil, what legal battles have you fought/won about full packet capture and its vulnerability to open records requests from outside parties? I am getting concerns, from various management, regarding the legal ramifications of the installation of a system similar to Sguil in the state government arena. Do you have any advice for easing their worries? I know how important full data capture is to investigating incidents, and I consider it of paramount importance to the security of our state that we do so. Are there any legal precedents that can be cited?

Before I say anything else it is important to realize I am not a lawyer, I don't play one on YouTube, and I recommend you consult your lawyer rather than listen to anything I might say.

With that out of the way, I have written about wiretaps a few times before. Let me get these generic wiretapping issues out of the way before addressing the question specifically.

The pertinent Federal law is 18 U.S.C. §2511.

A great place to look for commentary and precedents on digital security issues is Orin Kerr's Computer Crime Case Updates. This search for wiretap may or may not be helpful.

Finally, for recent commentary by a lawyer (but not your lawyer), I recommend Sysadmins, Network Managers, and Wiretap Law (.pdf slides) by Alex Muentz. These notes from his LISA 2006 talk are helpful too.

I think the key element of the question originally posed was full packet capture and its vulnerability to open records requests from outside parties. It sounds like the question asker is worried about discoverability of full content data. I touched on this briefly in The Revolution Will Be Monitored.

My answer to this problem is what I would consider both practical and technically limiting: do not store more full content data than you need. For any modern production network, capturing and storing days or weeks of full content traffic can be an expensive proposition. For example, in one client location I have about 200 GB of space available for full content storage. That space allows me to save a little more than 10 days of full content, even with fairly draconian BPFs limiting what is stored. If for some reason I needed to produce that data to management or attorneys, I could only provide the last 10 days of information. If the event in question occurred prior to that period, I just don't have it.

I do know of some locations that operate massive storage area networks to save TBs of full content. I do not advocate that for anyone but the most specialized of clients. I do recommend collecting the amount of full content (if possible, legally and technically) that works for your investigative window. For example, if you have a requirement to review your alert and session data such that you are never more than 5 days past an event of interest, you might want to save 7 days of full content. From an investigation point of view, more is always better. From a practical point of view, it might be too costly.
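The sizing logic above reduces to simple arithmetic. Here is a back-of-the-envelope sketch using the figures from the post (200 GB holding a little more than 10 days implies roughly 20 GB of filtered full content per day; the exact daily rate is my inference):

```python
def retention_days(storage_gb, daily_gb):
    """Days of full content a given storage budget can hold."""
    return storage_gb / daily_gb

# ~20 GB/day of filtered full content, inferred from 200 GB / ~10 days
daily_rate_gb = 200 / 10
print(retention_days(200, daily_rate_gb))  # 10.0
```

Running the same function against your own capture rate (measure a few representative days of filtered traffic) tells you whether your storage budget actually covers your required investigative window.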

Remember that any network data collection should be considered a wiretap. Full content is the form of network data that most resembles a wiretap.

With respect to session data, I recommend saving as much of that as possible. In practical terms it comes down to the amount of space you're willing to devote to database files. At the same client I am collecting as many sessions as I can, without filters. 30 days of such session data is producing about 20 GB of uncompressed MySQL table files. As you can see I can store many more days of session data as compared to full content data. That means much more session data is discoverable. I might choose to limit storage of that session data to meet whatever guidance corporate legal counsel might provide.
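To see why so much more session data than full content is discoverable, compare the windows a single 200 GB budget buys at the two rates quoted in this post (full content at roughly 20 GB/day versus session data at 20 GB per 30 days; both rates are site-specific observations, not general constants):

```python
budget_gb = 200

# Full content: 200 GB held ~10 days in the example above -> ~20 GB/day.
full_content_days = budget_gb * 10 / 200   # 10.0

# Session data: 30 days produced ~20 GB of MySQL tables -> ~0.67 GB/day.
session_days = budget_gb * 30 / 20         # 300.0

print(full_content_days, session_days)
```

The same storage covers roughly 30 times more calendar time as session data, which is exactly why its retention is bounded by legal guidance rather than disk space.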

Session data is like pen register/trap and trace data, because it does not reveal content. I still treat it like a wiretap, but it probably does not meet the same standards.

Event data, i.e., IDS alerts, takes so little space as not to require any real storage consideration (compared to full content and session data). Therefore, the primary limiting factor is legal and policy, not technical.

I think anyone who really wants a better answer would do well to check out Prof. Kerr's list, and potentially ask him. Alex Muentz would be another good resource.

Wednesday, 21 March 2007

When Lawsuits Attack

I haven't said anything about the intrusions affecting TJX until now because I haven't felt the need to contribute to this company's woes. Today I read TJX Faces Suit from Shareholder:

The Arkansas Carpenters Pension Fund owns 4,500 shares of TJX stock, and TJX denied its request to access documents outlining the company's IT security measures and its response to the data breach.

The shareholder filed the lawsuit in Delaware's Court of Chancery Monday afternoon under a law permitting shareholders to sue for access to corporate documents in certain cases, The Associated Press reported. The pension fund wants the records to see whether TJX's board has been doing its job in overseeing the company's handling of customer data, the news agency said.


Imagine having your security measures and incident response procedures laid bare for everyone to see. (It's possible there might not be anything to review!) How would your policies and procedures fare?

The following sounds like many incidents I've investigated.

The TJX breach was worse than first thought, TJX officials recently admitted. The company initially believed that attackers had access to its network between May 2006 and January 2007. However, the ongoing investigation has turned up evidence that the thieves also were inside the network several other times, beginning in July 2005.

Originally the company was thought to have been compromised for nine months, but now the window could begin almost a year earlier. The question is whether this is evidence of compromise by another group or the same group. In either case the company's security posture looks terrible.

The sad part about this sort of incident is that most if not all of the preventative systems TJX might have applied are worthless for response and forensics. I'm guessing TJX is relying on host-centric forensics, like analysis of MAC times of files and other artifacts on victim servers, to scope the incident. I bet TJX is paying hundreds of thousands of dollars in investigative consulting right now, beyond the damage to their brand and other technical and financial recovery costs.

Hopefully these lawsuits will shed some light on TJX's security practices so other companies can learn from their mistakes. This is the sort of incident that my future National Digital Security Board would do well to investigate and report.

Tuesday, 07 November 2006

When Laws Aren't Enough

CIO Magazine published The Global State of Information Security 2006. The story contained what I consider to be some fairly disappointing results.

Complacency, it seems, abounds. A large proportion of security execs admitted they're not in compliance with regulations that specifically dictate security measures their organization must undertake or risk stiff sanctions, up to and including prison time for executives. Some of these regulations—such as California's security breach law, the Health Insurance Portability and Accountability Act (HIPAA), and non-U.S. laws such as the European Union Data Privacy Directive—have been around for years...

The information security discipline still suffers from the fundamental problem of making a business value case for security. Security is still viewed and calculated as a cost, not as something that could add strategic value and therefore translate into revenue or even savings.
(emphasis added)

No one spends money on insurance because it "adds strategic value." At best security spending can produce "savings," i.e. avoid losses.

Perhaps the problem is ignorant management?

From 2003 to 2005, the percentage of survey respondents saying they had fewer than 10 negative information security incidents in the past year remained steady. But this year, we included the option to answer that you do not know how many negative security incidents occurred. This year, nearly one-third of respondents admitted that they do not know how many breaches or unauthorized access events occurred within their organizations.

To a certain extent, that's understandable. Attacks can be hard to identify, and networks can be extensive. What's less comprehensible is that a significant portion of respondents said they have not installed some of the most rudimentary network safeguards. Only one-third of respondents have put in place patch management tools or monitor user activity. Less than half use intrusion detection software or monitor log files (the two best methods organizations can employ to detect breaches) and even fewer use intrusion prevention tools. Surprisingly, more than 20 percent of respondents don't even have a network firewall.


Let's assume these managers are not simply being brutally honest, i.e., recognizing that it can be impossible to know of every incident. Instead, I assume they are admitting they just don't have the tools and tactics to measure incidents. That's disappointing.

There is some hope in certain industries.

Companies in the financial services sector—banks, insurance companies, investment firms—are more likely to employ a CSO than other industries. Security budgets in the financial sector are typically a bigger slice of the IT budget as a whole and increase at a faster rate than in other sectors. That may be because financial services companies are more likely to link security policies and spending to business processes. These companies are proactive, instituting formal information security processes such as log file monitoring and periodic penetration tests. More of their employees follow company security policies. Not surprising, financial services companies also have deployed more information security technology gadgets, such as intrusion detection and encryption tools, and identity management solutions.

It's obvious, therefore, that financial services organizations are far more likely—almost twice as likely, in fact—to have an overall strategic security plan in place. Consequently, they reported fewer financial losses, less network downtime and fewer incidents of stolen private information than any other vertical.

The reason for all this is also obvious. The product in the financial services industry is money, and money is the prime target of cybercriminals, including organized crime, insiders and even terrorists. Protecting the money is the industry's most critical concern. The past few years have seen a sharp increase in cybercrime (phishing, identity theft, extortion and spyware, to name a few). Anytime a security executive can demonstrate to top executives that investing in security can protect and increase shareholder value, he will be more likely to convince the boardroom to make that investment and make security a strategic part of the organization.

Financial services companies are more likely than enterprises in other industries to use ROI to measure the effectiveness of security investments (29 percent versus an average of 25 percent), and they also are more likely to use potential impact on revenue to justify investments (36 percent versus an average of 27 percent). These arguments work. More financial services companies saw a double-digit increase in their 2006 security budgets than those in any other sector.

Regulation plays a part too. The financial industry must adhere to the most stringent information security laws, and therefore it leads other industries in following proven, strategic information security practices.


I'd like to provide a slightly different interpretation. Financial services companies are used to dealing with threats as well as protecting assets. Everyone has assets to protect, but not until recently has everyone been within the reach of threats. Your risk is zero if you face no threats, no matter how vulnerable you are or how important your assets.
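That point follows directly from the common multiplicative risk model. A toy sketch (the model and weights are illustrative, not from the survey):

```python
def risk(threat: float, vulnerability: float, asset_value: float) -> float:
    """Toy multiplicative risk model: risk requires all three factors."""
    return threat * vulnerability * asset_value

# A maximally vulnerable, high-value system still carries zero risk
# if no threat can reach it.
print(risk(threat=0.0, vulnerability=1.0, asset_value=1_000_000))  # → 0.0
```

Financial services firms have always had a nonzero threat factor, which is why they learned to manage the other factors first.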

Jumat, 30 Juni 2006

Signs of Desperation from Duronio Defense Team

It sounds to me like the Duronio defense team has nothing left in its tank, so it's attacking Keith Jones directly. The latest reporting, UBS Trial: Defense Suggests Witness Altered Evidence, shows how ridiculous the defense team sounds:

"So when you talked about putting pieces of the puzzle together, you were missing three-quarters of the pieces for the [central file server] alone?"" [defense attorney] Adams asked.

"The puzzle pieces I had to put together formed the picture I needed," Jones replied. "If the puzzle was of a boat, then I had enough pieces to form the picture of the boat."

Adams countered, "But you might not see all the other boats around it."

Jones replied, "But the second boat won't get rid of the first boat. It's simple mathematics that when you add data, you don't subtract data. There was nothing in that data set that could remove the data I already had."


It sounds like Keith has more testifying in store for next week. Stay tuned.

Senin, 26 Juni 2006

Cluelessness at Harvard Law Review

Articles like Immunizing the Internet, or: How I Learned To Stop Worrying and Love the Worm (.pdf) in the June 2006 (link will work shortly) Harvard Law Review make me embarrassed to be a Harvard graduate. This is the central argument:

[C]omputer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack -- one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.

Apparently Harvard lawyers do not take economics classes. If they did (or paid attention) they would know of Frédéric Bastiat's parable of the broken window. The story demonstrates that crime, warfare, and other destructive behavior does not benefit society, since it shifts resources from productive behavior towards repair, recovery, and other defensive activities.

The HLR article continues:

Cybercrime is also different from other crime because it is amenable to innovative law enforcement approaches that exploit its unique underlying psychology. The objective of a bank robbery is to obtain money. Terrorists usually wish to maximize damage. Cybercrime, however, often provides no financial gain; many cyberattacks seem to originate from a desire for fame and attention or fun and challenge. Hackers often cause little to no permanent damage to the systems they successfully penetrate. This is true even of many high-profile cyber-attacks, in which damage initially appears to be widespread.

Wow, was this article published in 1996 or 2006? "No financial gain?" "Little to no permanent damage?" Welcome to the modern world, HLR. What would you consider permanent damage -- loss of life? Everything else can be repaired, even blasts by 2,000 pound bombs. Money spent on incident response and recovery, future lost revenue from decreased customer trust, insurance payments, spending on infrastructure -- all of this could be avoided in a world without "beneficial cybercrime."

Am I being too harsh? I don't think so. This is Harvard we're talking about, not Bunker Hill Community College.

Update: HLR should read Meet the Hackers.

Jumat, 23 Juni 2006

A Real Logic Bomb

Logic bomb is a term often used in the media, despite the fact that almost all reporters (there are notable exceptions) have no clue what it means. Well, now we can look at a real one, thanks to forensics work by Keith Jones. He found a real logic bomb while doing forensics on the United States v. Duronio case. I worked the very beginning of this case while Keith and I were both at Foundstone. My small part involved trying to figure out how to restore images of AIX machines from tape. I even bought an AIX box on eBay for experimentation.

You can read about Keith's testimony in this Information Week article. This is the "logic bomb" Keith recovered:



One of the neat aspects of this case is its age: over four years. The media and others are abuzz with stories of "insider threats," but this has been a problem for a very long time. Congratulations to Keith for testifying on such an important case. If the jury has a clue, the defendant doesn't have a chance.

Update: This story specifically examines the code in question.
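For readers who have only seen the term in headlines: a logic bomb is simply code that lies dormant until a trigger condition is met. A generic sketch of the date-triggered pattern follows. This is NOT the code from the Duronio case, and the destructive payload is replaced with a harmless print:

```python
# Generic illustration of a date-triggered "logic bomb" pattern.
import datetime

TRIGGER = datetime.date(2002, 3, 4)  # hypothetical trigger date

def should_fire(today: datetime.date) -> bool:
    """The bomb stays dormant until the trigger date arrives."""
    return today >= TRIGGER

if should_fire(datetime.date.today()):
    # A real logic bomb would run destructive commands here, e.g. mass
    # file deletion across servers. This sketch only prints.
    print("trigger condition met")
else:
    print("dormant")
```

The forensic challenge in such cases is recovering the dormant code and its trigger from disk images, which is exactly the work Keith testified about.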

Senin, 20 Februari 2006

Monitoring the Wrong Places

I am obviously a proponent of network security monitoring, but I am also a strong believer in privacy. The sort of attitude demonstrated in this article disturbs me greatly:

Houston's police chief on Wednesday proposed placing surveillance cameras in apartment complexes, downtown streets, shopping malls and even private homes to fight crime during a shortage of police officers.

"I know a lot of people are concerned about Big Brother, but my response to that is, if you are not doing anything wrong, why should you worry about it?" Chief Harold Hurtt told reporters Wednesday at a regular briefing.


Sure Chief, why don't you lead by example and install cameras in your home. You're not doing anything wrong, are you?

Building permits should require malls and large apartment complexes to install surveillance cameras, Hurtt said. And if a homeowner requires repeated police response, it is reasonable to require camera surveillance of the property, he said...

So, the power of the state should be used to meet the police's wishes?

Andy Teas with the Houston Apartment Association said that although some would consider cameras an invasion of privacy, "I think a lot of people would appreciate the thought of extra eyes looking out for them."

What planet are these people from?

If you don't want your network traffic inspected, you can encrypt it. Unfortunately, there is no encryption in the analog world.

Rabu, 21 Desember 2005

Two Great Wiretapping Articles

Given the recent coverage of wiretapping in the mainstream media, I thought I would point out two excellent articles in the latest issue of IEEE Security & Privacy Magazine. Thankfully, both are available online:

Both concentrate on technical issues of wiretapping. The first concentrates on how to tap a physical line or switch, and ways to defeat those taps. The second describes why incorporating wiretap features into VoIP is a bad idea. Each article discusses relevant laws.

Selasa, 08 November 2005

Congratulations to Feds

I'd like to congratulate the United States Attorney's Office, Central District of California for indicting a bot net controller. According to the press release and the indictment (.pdf), up to 400,000 victims were compromised. You can track the progress of this case through the Post Indictment Arraignment Calendar.

This is exactly the sort of work that needs to be done. Security professionals cannot win against intruders if only the "vulnerability" variable of the risk equation is addressed. We need law enforcement to reduce the "threat" variable as well. The suspect in this case is a 20-year-old living in California. This is the sort of perpetrator who can be deterred, unlike a foreign intelligence agent or member of organized crime. The more bot net operators who are put in jail, the fewer lower-end threats we will need to stop.

Kamis, 10 Februari 2005

Mark Rasch on Caballes Case

Last month I wrote on the Caballes drug case. On Tuesday the former head of the US DoJ's computer crimes squad wrote Of Dog Sniffs and Packet Sniffs. In his article Mark Rasch says:


"[T]he search by the dog into, effectively, the entire contents of a closed container inside a locked trunk, without probable cause, was 'reasonable' even though the driver and society would consider the closed container 'private' because the search only revealed criminal conduct.

The same reasoning could easily apply to an expanded use of packet sniffers for law enforcement."

Since Rasch is a Senior Vice President and the Chief Security Counsel (i.e., a lawyer) at Solutionary Inc., he may be on to something. The comments on Mark's article by those not trained as lawyers are in some cases amusing. He responds to several of them.

Selasa, 25 Januari 2005

US Supreme Court Rules on Real False Positives

Last year when US Senator Ted Kennedy was detained for being on a no-fly list, I discussed his plight in relation to intrusion detection system "false positives." If an IDS is operating correctly, every alert it sees is the result of an action it was programmed to take. In other words, when a functioning IDS sees "cmd.exe", it reports seeing "cmd.exe".

It doesn't matter if the appearance of "cmd.exe" on the wire is not part of an actual intrusion; a rule to alert on "cmd.exe" does not cause "false positives" if the IDS reports seeing "cmd.exe". A real false positive involves the IDS reporting "cmd.exe" when no such content passed on the wire. Therefore, there are no such things as false positives. Blame the signature writer or IDS developer, not the IDS.
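The argument can be sketched in code: a signature rule fires exactly when its content appears on the wire, so a true false positive would mean alerting when the content never passed at all. A toy sketch (the rule and payloads are hypothetical, not real IDS syntax):

```python
# Toy signature-based detection illustrating the argument above.
SIGNATURE = b"cmd.exe"

def inspect(packet_payload: bytes) -> bool:
    """Alert whenever the signature appears in the payload."""
    return SIGNATURE in packet_payload

# The IDS behaves correctly in both cases: it reports exactly what it saw,
# whether or not the traffic is part of an actual intrusion.
print(inspect(b"GET /scripts/..%255c../winnt/system32/cmd.exe?/c+dir"))  # → True
print(inspect(b"GET /index.html HTTP/1.0"))                              # → False
```

An alert on benign traffic containing "cmd.exe" is a signature-quality problem, not an IDS malfunction.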

Let's move from the realm of IDS false positives to the land of canine false positives. Yesterday the US Supreme Court issued its opinion in the case of ILLINOIS, PETITIONER v. ROY I. CABALLES. This is a case where false positives involve a dog's ability to sniff for illegal drugs. Justice Ginsburg's dissenting opinion summarizes the facts of the case:

"Illinois State Police Trooper Daniel Gillette stopped Roy Caballes for driving 71 miles per hour in a zone with a posted speed limit of 65 miles per hour. Trooper Craig Graham of the Drug Interdiction Team heard on the radio that Trooper Gillette was making a traffic stop. Although Gillette requested no aid, Graham decided to come to the scene to conduct a dog sniff.

Gillette informed Caballes that he was speeding and asked for the usual documents–driver’s license, car registration, and proof of insurance. Caballes promptly provided the requested documents but refused to consent to a search of his vehicle. After calling his dispatcher to check on the validity of Caballes’ license and for outstanding warrants, Gillette returned to his vehicle to write Caballes a warning ticket. Interrupted by a radio call on an unrelated matter, Gillette was still writing the ticket when Trooper Graham arrived with his drug-detection dog.

Graham walked the dog around the car, the dog alerted at Caballes’ trunk, and, after opening the trunk, the troopers found marijuana."

Justice Stevens' majority opinion held that "the dog sniff was performed on the exterior of respondent's car while he was lawfully seized for a traffic violation. Any intrusion on respondent's privacy expectations does not rise to the level of a constitutionally cognizable infringement... A dog sniff conducted during a concededly lawful traffic stop that reveals no information other than the location of a substance that no individual has any right to possess does not violate the Fourth Amendment." In other words, it's ok for police to use dogs to inspect cars for drugs during traffic violation stops (or at other times), even if there is no suspicion of drugs involved.

I do not agree with this opinion, for several reasons. The first reason involves false positives, and was correctly diagnosed in Justice David Souter's dissenting opinion:

"I would hold that using the dog for the purposes of determining the presence of marijuana in the car’s trunk was a search unauthorized as an incident of the speeding stop and unjustified on any other ground...

The infallible dog, however, is a creature of legal fiction... [T]heir supposed infallibility is belied by judicial opinions describing well-trained animals sniffing and alerting with less than perfect accuracy, whether owing to errors by their handlers, the limitations of the dogs themselves, or even the pervasive contamination of currency by cocaine...

In practical terms, the evidence is clear that the dog that alerts hundreds of times will be wrong dozens of times.

Once the dog’s fallibility is recognized, however... the sniff alert does not necessarily signal hidden contraband, and opening the container or enclosed space whose emanations the dog has sensed will not necessarily reveal contraband or any other evidence of crime."

Justice Ginsburg expresses the second reason for my disagreement. Returning to her dissent, we see that beyond a Fourth Amendment violation, there are other problems with allowing canine searches prone to false positives:

"A drug-detection dog is an intimidating animal... Injecting such an animal into a routine traffic stop changes the character of the encounter between the police and the motorist. The stop becomes broader, more adversarial, and (in at least some cases) longer. Caballes –- who, as far as Troopers Gillette and Graham knew, was guilty solely of driving six miles per hour over the speed limit -– was exposed to the embarrassment and intimidation of being investigated, on a public thoroughfare, for drugs...

Under today’s decision, every traffic stop could become an occasion to call in the dogs, to the distress and embarrassment of the law-abiding population...

Today’s decision... clears the way for suspicionless, dog-accompanied drug sweeps of parked cars along sidewalks and in parking lots... Nor would motorists have constitutional grounds for complaint should police with dogs, stationed at long traffic lights, circle cars waiting for the red signal to turn green."

My third and final reason for disagreeing with the Court's opinion is based on Justice Stevens' majority opinion. He writes for the Court:

"We have held that any interest in possessing contraband cannot be deemed 'legitimate,' and thus, governmental conduct that only reveals the possession of contraband 'compromises no legitimate privacy interest.'"

Now, what if the definition of contraband is extended beyond illegal drugs? How about music or movies in digital form, or pirated software? Is the Court opening the door to knock down privacy rights, since means to discover contraband do not infringe Fourth Amendment rights? The Court continues:

"The legitimate expectation that information about perfectly lawful activity will remain private is categorically distinguishable from respondent’s hopes or expectations concerning the nondetection of contraband in the trunk of his car."

The Court also brushes aside the false positive concerns:

"Although respondent argues that the error rates, particularly the existence of false positives, call into question the premise that drug-detection dogs alert only to contraband, the record contains no evidence or findings that support his argument."

I find this ruling very disturbing. I expect to see canine units used in increasing numbers in the coming months, where false positives will continue to plague innocent people. For example, yesterday National Public Radio reported that a man carrying cash to close on his house purchase was arrested when a dog alerted to supposed traces of illegal drugs on the money. Apparently traces of drugs on US currency are not an urban legend!

Selasa, 23 November 2004

Prof Kerr on KeyKatcher Case

I always enjoy reading Professor Orin Kerr's Computer Crime Case Updates. This week he comments on the dismissed wiretapping case mentioned by SecurityFocus.com and Slashdot. Although some commentary from the likes of Slashdot is helpful, I prefer reading the opinions of a Harvard Law graduate and former Supreme Court clerk.

The case is simple: does use of a keystroke logger constitute a wiretap? The judge in the case said no. I agree with Prof Kerr's assessment that the opinion is wrong. If someone listens in on a phone between the handset and the base unit, it's still a wiretap. It's no different if someone collects keystrokes using a device between a keyboard and CPU.

However, I disagree with Prof Kerr's reasoning concerning interstate commerce. Plenty of judges disagree with me, but I don't think connecting to the Internet makes a system automatically engaged in "interstate commerce." I think the use of the so-called Interstate Commerce Clause allows Congress to pass laws that far exceed their true Constitutional mandate.

If you've never read Prof Kerr's opinions, I recommend you browse his mailing list archives. They're incredibly enlightening.

Senin, 14 April 2003

Holding Owners of Compromised Computers Responsible

I've heard several people refer to legal activity in Texas, where victims of intrusions were being sued when the original victim's systems attacked third parties. This happened in 2001, when systems at Exodus were allegedly compromised and used to attack Web-hosting company C.I. Host. Marc Zwillinger mentioned this in this webcast, saying the suit was moved to Federal court and then settled out of court. His slides included this scan of the indictment. From this article:


JUST BEFORE 8 A.M. ON FEB. 1, 2001, C.I. Host, a Web-hosting company with 90,000 customers, was hit with a crippling denial-of-service attack. By the end of the day, after outage complaints from what CEO Christopher Faulkner described as "countless" customers, the Fort Worth, Texas-based company got its lawyers involved... In an injunction filed in a Texas district court and later moved to a U.S. district court, C.I. Host alleged that the defendants committed or allowed a third party to commit a denial-of-service attack on C.I. Host's systems. The defendants insisted that they were victims of a hacker themselves, not the perpetrators of a crime. The case never made it to trial, but C.I. Host's lawyers did convince a Texas judge to issue a temporary restraining order shutting down three of the Web servers involved in the attack until the companies could prove the vulnerabilities had been fixed.


The other popular case is well-documented in the 2001 CSI/FBI Study:


The U.S. Navy's Criminal Investigative Service (NCIS) is in the throes of an investigation into how and why an as yet unidentified hacker stole the source code to OS/Comet from a computer at the U.S. Navy's naval research lab in Washington, D.C. in an attack conducted on Christmas Eve, 2000. OS/Comet was developed by Exigent International (Melbourne, FL), a U.S. government contractor. The software has been deployed by the U.S. Air Force on the NAVSTAR Global Positioning System (GPS) from its Colorado Springs Monitor Station, which is part of the U.S. Space Command. A copy of the OS/Comet source code was found during a police swoop in Sweden on a computer company whose identity has not been revealed. The intrusion appears to have emanated from a computer at the University of Kaiserslautern in Germany, which was used to download the software's source code via the Web and the service provider Freebox.com, which is owned by the Swedish firm Carbonide. The hacker known only as "Leeif" was able to hide his or her true identity by breaking into the account of a legitimate Freebox.com user and then using that person's account to distribute the source code to others. Exigent has filed suit against both Carbonide and the University of Kaiserslautern in Germany. The NCIS's inquiry is being headed by the NCIS headquarters for European affairs in Naples and by its London bureau, which deals specifically with Scandinavia.