Earlier this month, search engine optimization gurus revealed a black hat SEO technique that was being used to increase a website's PageRank by receiving "link juice" from Twitter. Most of the links pointing out of Twitter carry the rel="nofollow" attribute, which tells search engine robots not to follow them.
However, one type of Twitter link lacked this attribute: the links pointing to the OAuth client used to post a particular tweet, such as Tweetdeck, Seesmic, Tweetie and others. These links appear underneath the actual message of each status update posted from outside of Twitter and read something like: "[x] minutes ago from [Application Name] (linked)."
As it turns out, Twitter allows users to configure the name, description and URL for these third-party applications used to post tweets. SEO black hats figured out that the accuracy of this information is not verified in any way, meaning someone could easily obtain a dofollow link to their own website, one that gets repeated with each tweet.
Twitter blocked this SEO hack by adding rel="nofollow" to all OAuth client application links. While attempting to bypass this fix, a blogger named James Slater noticed that the "Application Website" field in the form used to configure a Twitter client application allows for malformed input. He was surprised when http://www.example.com/" rel="external" was accepted as input and produced a valid rel="external" link overriding the nofollow attribute (see image below).
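The flaw Slater found is a classic case of splicing user input into an HTML attribute without escaping it. The sketch below is a hypothetical reconstruction, not Twitter's actual code: the function names and template are assumptions, but they show how a URL containing a double quote can terminate the href attribute early and smuggle in its own rel attribute ahead of the rel="nofollow" the template appends, and how quoting the input with html.escape prevents the breakout.

```python
import html

# Hypothetical sketch of the vulnerable pattern: naive string
# interpolation lets user input break out of the href attribute.
def render_app_link_unsafe(url, name):
    return f'<a href="{url}" rel="nofollow">{name}</a>'

# Escaping with quote=True encodes double quotes as &quot;,
# so the payload stays confined inside the href value.
def render_app_link_safe(url, name):
    return (f'<a href="{html.escape(url, quote=True)}" rel="nofollow">'
            f'{html.escape(name)}</a>')

payload = 'http://www.example.com/" rel="external'

print(render_app_link_unsafe(payload, 'MyApp'))
# The injected rel="external" lands before the template's rel="nofollow";
# browsers typically honor the first of two duplicate attributes.

print(render_app_link_safe(payload, 'MyApp'))
# The quote is encoded, so no second rel attribute is created.
```

The key detail is that the injected attribute appears first in the tag, so parsers that keep the first duplicate attribute see rel="external" and discard the trailing rel="nofollow" the template was supposed to enforce.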
Twitter was alerted to this vulnerability, and John Adams of Twitter Operations confirmed yesterday evening that "We have patched this issue as of a few hours ago." Nevertheless, the level of security on Twitter is once again under the microscope of the security community. Many professionals warned back in April, after the Mikeyy worms were blocked, that it would not be the last time critical cross-site scripting flaws were discovered on the micro-blogging platform.