And downplays the positive results that Microsoft's antivirus got in the latest independent tests

Jun 7, 2007 07:06 GMT  ·  By

Alongside the consumer launch of Windows Vista at the end of January 2007, Microsoft also made available Windows Live OneCare 1.5, a product synonymous with the company's baby steps into the security industry. Microsoft has not had an easy ride going up against household names such as Symantec, Kaspersky, McAfee and Sophos, and that status quo is likely to persist, as the prevailing perception is that the Redmond company and security are not synonymous concepts. Reinforcing this view, Windows Live OneCare underperformed in a series of tests, most notably Andreas Clementi's AV-Comparatives.

In an initial series of bulk-detection tests in February 2007, Windows Live OneCare came in dead last. The evaluation involved on-demand scanning of a collection of malicious items and was, in effect, an assessment of the virus signature definitions. But while OneCare failed that initial test, Microsoft's antivirus did perform a tad better in a subsequent retrospective/proactive test at the end of May, also courtesy of AV-Comparatives.

Joe Telafici, director of operations at McAfee's Avert Labs, was amused that the latest AV-Comparatives evaluation was interpreted as a positive sign for Windows Live OneCare, arguing that the two separate analyses cannot be compared. "Unfortunately, these are two completely different kinds of tests, so this is kind of like comparing apples and hammers," Telafici stated.

While the first tests were designed to measure the accuracy of virus signature definitions, the latter focused on "the ability of the products to detect new malware. In other words, these are samples they could not possibly have written signatures for, because they did not exist at the time the signatures were written. What this means is that you can write signatures that detect everything that exists today, and nothing that comes into being tomorrow. Or vice versa. So our quote is like saying that my car, which failed its safety crash test last week, has improved because it completed the quarter mile in less time than someone else. Although it doesn't mean that both areas haven't improved, it certainly doesn't tell you that they have," Telafici commented.

According to the director of operations at McAfee's Avert Labs, a suite of details impacts the score that any vendor will get from either bulk-detection or proactive tests. Telafici explained that the results are not always correlated with the quality of a vendor's heuristics, but "might also be related to the size, distribution, and false-tolerance of their user base." Additionally, the balance that each security developer strikes between aggressive heuristics and limiting the volume of false positives is also a relevant issue. Telafici further noted that none of the malware items are actually executed, which takes the behavioral technologies implemented in the security products completely out of the equation.

"That being said, from years of experience I can say that higher numbers on tests do not always correlate to improved performance. Optimizing for one kind of result is likely to cause worse performance in other areas, be they the size of definitions, system performance, false-positive rates, removal effectiveness, or supportability," Telafici added.