Vulnerabilities in Perspective
Friday, 18 July 2008

It's been nine days since Dan Kaminsky publicized his DNS discovery. Since then, we've seen a Blackberry vulnerability which can be exploited by a malicious .pdf, a Linux kernel flaw which can be remotely exploited to gain root access, Kris Kaspersky promising to present Remote Code Execution Through Intel CPU Bugs this fall, and David Litchfield reporting "a flaw that, when exploited, allows an unauthenticated attacker on the Internet to gain full control of a backend Oracle database server via the front end web server." That sounds like a pretty bad week!
It's bad if you think of R only in terms of V and forget about T and A. What do I mean? Remember the simplistic risk equation, which says Risk = Vulnerability X Threat X Asset value. Those vulnerabilities are all fairly big V's, some bigger than others depending on the intruder's goal. However, R depends on the values of T and A. If there's no T, then R is zero.
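As a toy illustration of why the multiplicative form matters (the function and numbers below are mine, not from any real assessment), a zero threat term zeroes out the entire product, no matter how severe the vulnerability:

# A minimal sketch of the simplistic risk equation above. All values
# are illustrative assumptions, not measurements.
def risk(vulnerability: float, threat: float, asset_value: float) -> float:
    """Risk = Vulnerability x Threat x Asset value."""
    return vulnerability * threat * asset_value

# A severe vulnerability (high V) on a valuable system (high A) still
# yields zero risk if no one is actually attacking it (T = 0).
print(risk(vulnerability=0.9, threat=0.0, asset_value=100_000))  # 0.0

# The same vulnerability with active exploitation observed in the wild:
print(risk(vulnerability=0.9, threat=0.7, asset_value=100_000))  # 63000.0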
Verizon Business understood this in their post DNS Vulnerability Is Important, but There’s No Reason to Panic:
Cache poisoning attacks are almost as old as the DNS system itself. Enterprises already protect and monitor their DNS systems to prevent and detect cache-poisoning attacks. There has been no increase in reports of cache poisoning attacks and no reports of attacks on this specific vulnerability...
The Internet is not at risk. Even if we started seeing attacks immediately, the reader, Verizon Business, and security and network professionals the world over exist to make systems work and beat the outlaws. We’re problem-solvers. If, or when, this becomes a practical versus theoretical problem, we’ll put our heads together and solve it. We shouldn’t lose our heads now.
However, this doesn’t mean we discount the potential severity of this vulnerability. We just believe it deserves a place on our To-Do lists. We do not, at this point, need to work nights and weekends, skip meals or break dates any more than we already do. And while important, this isn’t enough of an excuse to escape next Monday’s budget meeting.
It also doesn’t mean we believe anyone who has already patched, or who is very concerned about this issue, is being silly. Every enterprise must make its own risk management decisions. This is our recommendation to our customers. In February of 2002, we advised customers to fix their SNMP instances due to the BER issue discovered by Oulu University, but there have been no widespread attacks on those vulnerabilities for nearly six years now. We were overly cautious. We also said the Debian RNG issue was unlikely to be the target of near-term attacks and recommended updating within routine maintenance windows, or within 90 days. So far, it appears we are right on target.
There has been no increase in reports of cache poisoning attempts, and none of the reported attempts try to exploit this vulnerability. As such, the threat and the risk are unchanged.
I think the mention of the 2002 SNMP fiasco is spot on. A lot of us had to deal with people running around thinking the end of the world had arrived because everything runs SNMP, and everything is vulnerable. It turns out hardly anything happened at all, and we were watching for it.
Halvar Flake was also right when he said:
I personally think we've seen much worse problems than this in living memory. I'd argue that the Debian Debacle was an order of magnitude (or two) worse, and I'd argue that OpenSSH bugs a few years back were worse.
Looking ahead, I thought this comment on the Kaspersky CPU attacks, from CPU Bug Attacks: Are they really necessary?, was interesting:
But every year, at every security conference, there are really interesting presentations and a lot of experienced people talking about theoretically serious threats. But this doesn't necessarily mean that a published PoC will become a serious threat in the wild. Many of these PoCs require high levels of skill (which most malware authors do not have) to actually make them work in other contexts.
And, I'm sorry to say this, but being in the security industry my thoughts are: do malware writers really need to develop highly complex stuff to get millions of PCs infected? The answer is most likely not.
I think that insight applies to the current DNS problems. Are those seeking to exploit vulnerable machines so desperate that they need to leverage this new DNS technique (whatever it is)? Probably not.
At the end of the day, those of us working in production networks have to make choices about how we prioritize our actions. Evidence-based decision-making is superior to reacting to the latest sensationalist news story. If our monitoring efforts demonstrate the prevalence of one attack vector over another, and our systems are vulnerable, and those systems are very valuable, then we can make decisions about what gets patched or mitigated first, as in the sketch below.
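To make that concrete, here is a hypothetical sketch of that prioritization (the systems, severities, and observed attack rates are invented for illustration): score each exposed system with the same R = V x T x A product, using monitoring data for T, and fix the highest-scoring items first.

# Hypothetical evidence-based patch prioritization. Every name and
# number here is an invented assumption, not real data.
systems = [
    # (name, vulnerability severity, observed attack rate, asset value)
    ("dns-resolver", 0.8, 0.1, 50_000),   # big V, but few observed attempts
    ("web-frontend", 0.5, 0.9, 80_000),   # moderate V, heavily probed
    ("dev-sandbox",  0.9, 0.9, 1_000),    # very exposed, but low value
]

# Rank by R = V x T x A, highest risk first.
ranked = sorted(systems, key=lambda s: s[1] * s[2] * s[3], reverse=True)

for name, v, t, a in ranked:
    print(f"{name}: R = {v * t * a:,.0f}")
# web-frontend: R = 36,000  <- patched first, despite the smaller V
# dns-resolver: R = 4,000
# dev-sandbox:  R = 810

None of this is sophisticated; the point is simply that T and A belong in the decision alongside V.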