Many users may not know what online astroturfing is, but most of those who use the Internet on a regular basis have surely felt its effects.
To learn more about this topic, we interviewed Shuman Ghosemajumder, former click-fraud czar at Google and current VP of Marketing at Shape Security. He was kind enough to detail astroturfing and its impact on today’s online environment.
Softpedia: Please introduce yourself for our readers who may not already know who you are.

Shuman: I'm Shuman, and I am the Vice President of Marketing at Shape Security, a web security startup in Silicon Valley. I previously worked at Google for six years, where I led product management for protecting Google’s advertising products against click fraud and other threats.
I joined Google in 2003 as an early product manager for AdSense and was also part of the team that launched Gmail.

Softpedia: Can you explain the concept of online astroturfing?

Shuman: Online astroturfing is something we are seeing a lot more of these days; in particular, it is being used to discredit people or distract them from a core opinion or political issue. In light of the election, it is extremely timely to examine the evolution of the botnets responsible for creating this type of manipulation.
We're seeing just how powerful it can be to create fake people, and anyone who isn't taking measures to protect against these kinds of attacks will be strongly affected.

Softpedia: How does astroturfing influence the threat landscape?

Shuman: Online astroturfing is a higher-level threat in the sense that it isn’t directly attacking system infrastructure, but is instead manipulating it to achieve a purpose. That purpose is the creation of content, making it a kind of spam. The most common type of fake content creation we see online is spam for commercial products.
But online astroturfing is different in that it attempts to create the perception of an opinion held by a large number of real people. Examples of this include manipulation of online polls, orchestration of letter-writing campaigns, posting in political forums, and fake tweets and blog posts.
The methods used to create this type of spam include large-scale automated account creation, automated content generation, and automated attacks on the APIs of the systems being manipulated.

Softpedia: Can you provide specific situations in which astroturfing was used to influence decisions?

Shuman: Al Gore recently highlighted an example of a political letter-writing campaign which contained hundreds of fake names. Fake product reviews are common throughout the web and are beginning to attract the attention of regulators. There are also examples of corporations using persona management software to influence public opinion.
A less serious but high-profile example was when the Time 100 online poll was hacked by members of 4chan to put 4chan’s founder at the top of the list.

Softpedia: What can be done against this phenomenon?

Shuman: The associated attack vectors -- large-scale automated account creation, automated content generation, and automated attacks on APIs -- are the primary areas organizations need to defend against.
Most systems in most companies today are relatively unsophisticated, and online astroturfing isn’t something that is usually proactively detected or protected against.
Where there are protections, they are focused on detecting either automated content or automated accounts, but not all of the possible attack vectors.
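As a rough illustration of one of those narrow protections, detecting automated content can be as simple as flagging near-duplicate submissions, since astroturfing campaigns often post lightly reworded copies of the same message. The sketch below is purely hypothetical (it is not anything Shape Security has described) and compares posts by word-shingle overlap:

```python
# Hypothetical sketch: flag near-duplicate posts via word-shingle overlap.
# Real systems combine many richer signals; this only illustrates the idea.

def shingles(text, k=3):
    """Return the set of k-word shingles in a post."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def looks_like_duplicate(new_post, seen_posts, threshold=0.8):
    """True if the new post heavily overlaps any previously seen post."""
    new = shingles(new_post)
    return any(jaccard(new, shingles(p)) >= threshold for p in seen_posts)

seen = ["I fully support this measure and urge the council to pass it now"]
spun = "I fully support this measure and urge the council to pass it today"
fresh = "This proposal has serious problems that deserve more debate"
# spun is a near-copy of the seen post; fresh is genuinely new content.
```

A check like this catches only the crudest copy-and-tweak campaigns, which is exactly the point Shuman makes: each individual protection covers one vector.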
It’s important for organizations to be proactive in deflecting this activity in all of the different forms in which it can enter their systems.

Softpedia: Can you provide examples of solutions that can be used to protect against astroturfing?

Shuman: Right now, the best defense is to do everything one can to guard the attack vectors listed above. For example, to protect against large-scale automated account creation, services often use CAPTCHAs to verify that accounts are being created by humans.
But CAPTCHAs aren't a perfect solution, as they can be beaten by machines, and can also be difficult for many people to solve.
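One complementary layer that services commonly add alongside CAPTCHAs is rate limiting on account creation, since bot-driven signups tend to arrive in bursts from the same source. The following minimal sketch is a hypothetical illustration of that general technique, not Shape Security's technology:

```python
import time
from collections import defaultdict, deque

# Hypothetical illustration: a sliding-window rate limiter that blocks
# bursts of signups from a single source (e.g. one IP address).
# Real defenses combine many such signals; this is only a sketch.

class SignupRateLimiter:
    def __init__(self, max_signups, window_seconds):
        self.max_signups = max_signups
        self.window = window_seconds
        self.events = defaultdict(deque)  # source -> recent timestamps

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        q = self.events[source]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_signups:
            return False  # likely automated: too many signups too fast
        q.append(now)
        return True

limiter = SignupRateLimiter(max_signups=3, window_seconds=60)
results = [limiter.allow("203.0.113.7", now=t) for t in (0, 5, 10, 15)]
# First three attempts pass; the fourth within the window is blocked.
```

On its own this is as beatable as a CAPTCHA (attackers rotate IPs), which underscores the point that no single vector-specific protection is sufficient.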
Similarly, protections on content submission forms have pros and cons. Our company has developed technology that we believe represents a better solution, which we're building into products to be available next year.