Anti-virus analysts claim that the methodology used in the exploit detection test released by Secunia was flawed

Oct 16, 2008 13:13 GMT

A few days ago, the well-known vulnerability tracking company Secunia released the results of a test comparing the vulnerability exploit detection rates of Internet Security Suite-type products from several anti-virus vendors. The results were surprisingly disappointing, with only a single product scoring a detection rate higher than 3%. This prompted several AV analysts to fight back and strongly criticize the methodology used in the test.

Secunia performed its test on twelve Internet Security Suite products from McAfee, Symantec, Microsoft, ZoneAlarm, AVG, CA, F-Secure, TrendMicro, BitDefender, Panda, Kaspersky and Norman (listed in no particular order). A total of 300 exploits were scanned using the on-demand scanning component of each product. The exploits were split into two groups, one of which contained only exploits for browser and ActiveX vulnerabilities.

Out of the 300 exploits, 126 were considered critically important because they affected highly popular software products, were zero-day exploits (released before the vulnerabilities were patched), or were custom-developed by Secunia. A separate detection rate was therefore determined for these 126 exploits alone, in addition to the overall detection rate for all test cases.

Symantec's Norton Internet Security scored an overall detection rate of 21.33%, almost ten times higher than the 2.33% registered by BitDefender's and TrendMicro's Internet Security Suites, which tied for second place. The remaining products registered detection rates of 2% or below, with the list ending at Norman Security Suite, which was unable to detect ANY of the 300 exploits.

“These results clearly show that the major security vendors do not focus on vulnerabilities,” conclude the Secunia analysts in the report, and Thomas Kristensen, Chief Technology Officer and co-founder of Secunia, wrote that “while we did suspect that the popular security vendors would score quite poorly in detecting exploits, the extremely low detection rate took us by surprise and this really begs the question: Do the customers get their money's worth?”

Analysts from several anti-virus companies wasted no time in answering Kristensen's question with a yes. They claim that their products are far more effective at detecting and protecting against vulnerability exploits than the results suggest, because the testing methodology was flawed to begin with. While the majority agreed that this would have been an interesting and useful test if done properly, some went as far as claiming that Secunia had attempted a publicity stunt to promote its own commercial vulnerability scanning product.

One of the more acid responses came from Pedro Bustamante, malware analyst at Panda Security. He wrote on the Panda Research Blog that “Secunia's ‘test methodology’ only takes into consideration manually scanning 144 different inactive exploit files. This is very much like saying that you're going to test a car’s ABS brakes by throwing it down a 200 meter cliff. Absurd, sensationalist and misleading at best”. He also fired a rhetorical question back at Secunia: “Well duh, if you only test traditional signatures and neglect the other technologies included in the product which ARE designed to block exploits, what do you expect?”.

The disapproval over testing only the signature-based on-demand scanning feature of the products seems to be shared by many other professionals working in the industry, regardless of whether they are directly affected by this test. Such is the case of Andreas Marx, Manager of AV-Test GmbH, who is well versed in testing standards and procedures. He believes that "a better test setup would… have the vulnerable applications installed on the test PC, together with the security suite… Then the tester would need to trigger the exploit, and see whether the machine was exploited successfully or not… really focusing on the entire suites’ features and not only on the ‘traditional’ scanner part of an AV product".

Interestingly enough, Secunia did not include ESET's Internet Security Suite product, Smart Security, in its test. The product is based on the rather popular ESET NOD32 Antivirus. Even so, this didn't stop David Harley of ESET's Malware Intelligence Team from writing about the Secunia test on the company's Threat Blog. “It seems very archaic to assume that a modern anti-malware program (let alone a suite) is wholly dependent upon signature scanning,” notes Harley, who also points to the testing recommendations issued by the Anti-Malware Testing Standards Organization (AMTSO).

Microsoft's Live OneCare Team also wrote about the Secunia test on their blog, expressing the same concerns. They note that “when testing is done on a specific feature within a security suite, without consideration of the role that feature plays in the broader solutions environment, the results can be misleading and confusing to average consumers who rely on the information to remain protected and secure against threats”.

Graham Cluley, Senior Technology Consultant for Sophos, shares the same view. According to The Register, he commented that "it sounds like only one aspect of the suites was tested, rather than all of the ways in which they might have been able to protect the users". Mr. Cluley concludes that "there's no such thing as a perfect security suite, but security software reduces threats and people shouldn't come away from these tests with the conclusion that these products are ineffective".

Alex Eckelberry, CEO of Sunbelt Software, wrote that, in his opinion, the Secunia test is “a silly and useless PR stunt”. He also thinks that “they were just trying to get some news for their business of patch scanning or something, and decided to kick the AV players around for fun”. This is also suggested by Panda's Pedro Bustamante, who ironically noted, “Oh, wait, I just saw on their website that Secunia actually sells a vulnerability scanner! Hmmm, I wonder if that has something to do with the flawed conclusions of this test...”.

The Register also reported that Thomas Kristensen, Secunia CTO, responded to the accusations by pointing out that "it is obviously much better to be able to detect malicious content while it is passive instead of relying on (hopefully) being able to catch it once executed". However, he added that "we find the criticism from Panda useful and if we do conduct another test of the file-based test cases, then we will categorise their performance into: Unzipping, manual scan, and opening of test case with vulnerable application".