One of my favorite sections in Information Security Magazine is the "face-off" between Bruce Schneier and Marcus Ranum. Often they agree but offer different perspectives on the same issue. In the latest story, Face-Off: Is vulnerability research ethical?, they are clearly on opposite sides of the question.
Bruce sees value in vulnerability research, because he believes that the ability to break a system is a precondition for designing a more secure system:
[W]hen someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good.
This is a classic cryptographic mindset. To a certain degree I can agree with it. From my own NSM perspective, one problem I might encounter is the discovery of covert channels. If I don't understand how to evade my own monitoring mechanisms, how am I going to discover an intruder taking that action? However, I don't think being a ninja "breaker" makes one a ninja "builder." My "fourth Wise Man," Dr. Gene Spafford, agrees in his post What Did You Really Expect?:
[S]omeone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior...
Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids. First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit. Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled. (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way — that behavior does not establish the bona fides for real expertise. If anything, it establishes a disregard for the community it endangers.)
More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on... (emphasis added)
So, in some ways I agree with Bruce, but I think Gene's argument carries more weight. Read his whole post for more.
Marcus' take is different, and I find one of his arguments particularly compelling:
Bruce argues that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not--the last 20 years of software development (don't call it "engineering," please!) absolutely refutes this position...
The biggest mistake people make about the vulnerability game is falling for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing to Web 2.0 as an example.
Has what we've learned about writing software the last 20 years been expressed in the design of Web 2.0? Of course not! It can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, Web 2.0 would not be happening...
If Bruce's argument is that vulnerability "research" helps teach us how to make better software, it would carry some weight if software were getting better rather than more expensive and complex. In fact, the latter is happening--and it scares me. (emphasis added)
I agree with 95% of this argument. The 5% I would change is this: identifying vulnerabilities does address problems in code that has already shipped. I think history has demonstrated that products ship with vulnerabilities and always will, and that the vast majority of developers lack the will, skill, resources, business environment, and/or incentives to learn from the past.
Marcus unintentionally demonstrates that analog security is threat-centric (i.e., the real world focuses on threats), not vulnerability-centric, because vulnerability-centric security perpetually fails.
Friday, 23 May 2008
Response to Is Vulnerability Research Ethical?