In a study completed and published by Avira (http://www.avira.com/en/company_news/recognition_performance_virus_protection.html), 34 percent of respondents (3,207) said a long-established, trustworthy brand was the key factor in choosing antivirus software. Almost as many, 33 percent (3,077 respondents), based their decision on the virus detection rates achieved in independent tests.
Detection rates (let's call this the effectiveness of the control, since it is the key metric used to measure effectiveness) are skewed: the large majority of evaluations (ICSA Labs, VB100, etc.) use the "in-the-wild" (ITW) list of viruses to perform their tests. None of them evaluate a product's ability to respond to, or even detect, newly released viruses and malware.
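To see why an ITW-only metric is skewed, here is a minimal sketch (the sample counts are made up for illustration): a product can post a perfect headline number on a fixed set of known samples while detecting none of the new threats, simply because the new threats are not part of the test.

```python
# Hypothetical numbers: the published metric only counts known ITW samples,
# so zero-day misses never show up in it.

def detection_rate(detected, total):
    """Detection rate as a percentage of samples flagged."""
    return 100.0 * detected / total

itw_rate = detection_rate(detected=500, total=500)    # known-signature samples
zero_day_rate = detection_rate(detected=0, total=40)  # newly released malware

print(f"ITW rate: {itw_rate:.0f}%")            # → ITW rate: 100%
print(f"Zero-day rate: {zero_day_rate:.0f}%")  # → Zero-day rate: 0%
```

Both products would be reported as "100% detection" by an ITW-only evaluation.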
In all honesty, what we are really dealing with here is preventative vulnerability management, not virus detection and correction. In my opinion there are four types of preventative protection the average consumer needs (some currently exist; others do not):
1. Consumers buying products based on their security. This does not exist in any meaningful way for the general community. Let's get someone to independently evaluate software makers on security and publish the results so consumers can make choices based on vendor performance.
2. A service to update software code quickly. There should also be an independent evaluation of a product's susceptibility to vulnerabilities and the speed with which the vendor patches them. This should apply to all software, not just operating systems and browsers. Again, independent evaluators could rate companies' policies, practices, and past performance in this area.
3. A perfect ITW detection engine: 100%. There is no reason a product should score less than this for KNOWN viral code. Really, this should be combined with #4.
4. A product to detect and respond to new threats, ones without signatures, which are a significantly larger threat because they are generally developed with more financial motivation. Apple's and Microsoft's warnings on unsigned code are a good first step, but this should be done at the CPU level: detect suspicious behavior by software and apply a policy to it. Do consumers actually read a warning about unsigned code, or do they just click "continue"? AMD, Intel, other chip makers: is this possible at a low level, and how do we trust those companies themselves?
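The behavior-based idea in point 4 can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the behavior names, weights, and threshold are all invented for the example. Instead of matching signatures, score a program's observed actions and apply a policy when the score crosses a threshold.

```python
# All behavior names, weights, and the threshold below are hypothetical.
SUSPICIOUS_WEIGHTS = {
    "writes_to_system_dir": 3,
    "modifies_autostart": 3,
    "opens_raw_socket": 2,
    "self_modifying_code": 4,
}
BLOCK_THRESHOLD = 5

def assess(observed_behaviors):
    """Score a set of observed behaviors and return (score, policy action)."""
    score = sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in observed_behaviors)
    action = "block_and_quarantine" if score >= BLOCK_THRESHOLD else "allow"
    return score, action

# A downloader that installs itself to autostart and writes system files:
print(assess({"modifies_autostart", "opens_raw_socket", "writes_to_system_dir"}))
# → (8, 'block_and_quarantine')

# An ordinary networked program:
print(assess({"opens_raw_socket"}))
# → (2, 'allow')
```

The hard part, as the post notes, is doing this kind of monitoring at a trusted layer (ideally the CPU) rather than in software that malware can tamper with, and choosing thresholds that don't drown users in false positives.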
Anyone else have thoughts on other ways of preventing the impacts of vulnerabilities?