One of the most significant numbers in 2010 is 6253, the number of potentially exploitable vulnerabilities (PEVs) discovered last year. A PEV is an exploitable hole in an operating system (OS), a piece of software or a piece of hardware; in other words, 6253 new opportunities for the bad guys to move in and disrupt the digital world. That's roughly 17 a day. And if you are a customer of Microsoft, Adobe or Cisco, or use any ubiquitous piece of software, hardware or OS, your odds of being exposed to one on any given day are actually very high.
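As a quick sanity check on that daily figure, here is a back-of-the-envelope sketch in Python, using only the numbers quoted above:

pevs_per_year = 6253              # PEVs discovered last year (figure from this post)
days_per_year = 365
print(round(pevs_per_year / days_per_year, 1))   # -> 17.1 new PEVs per day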
That number has consequences for both security and compliance. For security it represents potentially exploitable weaknesses that can be leveraged to disrupt operations or processes. Some of those exploits are written by malware writers who are "notoriety driven," and the anti-virus industry has pretty well reined those guys in. More problematically, malware is now used to drop malicious code into environments to capture and remove data for future profit. The propagation techniques behind these attacks are typically much better architected, and traditional security technologies have difficulty dealing with this type of threat. Hence, they are more successful and more costly.
In 2009 the average lag time from the initial exposure of a PEV to the release of beta code to exploit that PEV was 3.5 days. There are development teams dedicated to writing and releasing that code quickly, so the hole can be exploited before the targets are prepared to deal with it or are even aware that the issue exists. The average lag time from PEV exposure to the release of the appropriate patch to resolve the issue was 45 days. That leaves 41.5 days of exposure, which obviously represents a huge issue.
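For clarity, the exposure-window arithmetic is simply the difference between those two 2009 averages, sketched here:

exploit_lag_days = 3.5   # average days from PEV exposure to beta exploit code
patch_lag_days = 45      # average days from PEV exposure to vendor patch
print(patch_lag_days - exploit_lag_days)   # -> 41.5 days of exposure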
There is also the impact that 6253 PEVs bring to the compliance requirement table. Fact: none of the compliance documents (US or international) say (I will paraphrase) "thou shalt fix the hole!" There is simply no requirement to patch or resolve a PEV. However, all of them say, and again I paraphrase, "You had better know that the PEV is in your environment, document exactly how many instances of it you have and where they are located, and have some third-party documentation of the probability of that PEV being exploited. If that probability is low enough, you can use it as a rationale for not fixing the hole."
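To make that paraphrase concrete, here is a minimal sketch of the kind of record such documentation implies; the field names and the acceptance threshold are hypothetical illustrations, not taken from any particular compliance standard:

# Hypothetical vulnerability-register entry (all field names illustrative).
pev_record = {
    "pev_id": "PEV-XXXX",                               # placeholder identifier
    "instances": 42,                                    # how many systems carry the hole
    "locations": ["datacenter-1", "branch-office-3"],   # where those systems live
    "exploit_probability": 0.02,                        # third-party likelihood estimate
    "assessment_source": "independent assessor report"  # who documented that probability
}

# Assumed policy: the risk may be accepted (and documented) instead of patched
# only when the independently assessed probability falls below an agreed threshold.
ACCEPTANCE_THRESHOLD = 0.05
risk_accepted = pev_record["exploit_probability"] < ACCEPTANCE_THRESHOLD
print("Documented risk acceptance in lieu of patching:", risk_accepted)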
Those 41.5 days of exposure drive me back to "the elephant" I talked about in the previous blog. It's all about the knowledge component. Security technologies are only as good as the real-time, pre-emptive knowledge that sits behind them and the alignment (or misalignment) of the external risk landscape with the internal working environment. Today, it's not how fast you can respond; it's what you already know about tomorrow.
Neils Johnson
Security@acgresearch.net
www.acgresearch.net