
Feb 19, 2010 09:41 GMT
Thanks to a heavy optimization process, Facebook is twice as fast as it was just six months ago

It looks like Google isn't the only web company with a need for speed. While Google's efforts border on obsession, another Silicon Valley startup turned internet giant, one quickly becoming Google's main adversary, is emerging as a contender for the title. Facebook has caught the speed bug and seems serious about it. A couple of weeks ago, the social network's engineers showed off HipHop, a technology that cut server CPU usage by 50 percent in some cases, and now Facebook is boasting that heavy front-end optimization has made the site twice as fast in just six months.

"From early 2008 to mid 2009, we spent a lot of time following the best practices laid out by pioneers in the web performance field to try and improve TTI [Time to Interact]," Facebook software engineer Jason Sobel writes.

"By June of 2009 we had made significant improvements, cutting median render time in half for users in the United States... We decided to measure TTI at the 75th percentile for all users as a better way to represent how fast the site felt. After looking at the data, we set an ambitious goal to cut this measurement in half by 2010; we had about six months to make Facebook twice as fast," he explains.

"I'm pleased to say that on December 22nd, as a result of these and other efforts, we declared victory on our goal to make the site twice as fast. We even had 9 whole days to spare!" Sobel announces.

Facebook was mostly pleased with the performance of its infrastructure but felt that page loading times could be shorter. Significantly shorter, actually. The team focused on several areas that could yield faster loading times for most users. One obvious approach, though easier said than done, was to cut down on the amount of data the browser has to download for the site to run.

The engineers reduced the average size of the cookies the site sets by 42 percent, before any compression is even applied. Another improvement was in the way the various pages on Facebook share common HTML and CSS code. Previously, each page loaded its own custom code even though many elements were the same across pages. Now Facebook serves shared code for multiple pages, so once users have downloaded it, it is reused by any other Facebook page they load. This allowed Facebook to cut the CSS loaded for each page by 19 percent and the HTML by 44 percent.
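As a rough illustration of the shared-resource idea (not Facebook's actual stack; the bundle name and server below are hypothetical), a site can publish one common stylesheet with a long-lived cache header so the browser fetches it once and reuses it on every subsequent page:

// Minimal sketch: serve one shared CSS bundle with a long-lived cache header
// so the browser downloads it once and reuses it on every page, instead of
// fetching page-specific stylesheets each time.
import { createServer } from "node:http";
import { readFileSync } from "node:fs";

const sharedCss = readFileSync("shared-bundle.css"); // hypothetical bundle file

createServer((req, res) => {
  if (req.url === "/css/shared-bundle.css") {
    res.writeHead(200, {
      "Content-Type": "text/css",
      // Far-future caching: later page views reuse the cached copy.
      "Cache-Control": "public, max-age=31536000, immutable",
    });
    res.end(sharedCss);
  } else {
    // Every page references the same bundle rather than its own custom CSS.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end('<link rel="stylesheet" href="/css/shared-bundle.css"><h1>Page</h1>');
  }
}).listen(8080);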

One final area of the code that needed optimizing was JavaScript. As the site grew and added features, its code base had to grow as well. But after analyzing the JavaScript used across the site, Facebook's engineers realized that many features could rely on common code for much of what they needed. Facebook built a new base library, called Primer, that is reused by various features across the site, resulting in a 40 percent decrease in the JavaScript downloaded per page.
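The article doesn't show Primer's API, but the general pattern is a single shared script using event delegation so that individual features ship little or no JavaScript of their own. A hedged TypeScript sketch along those lines (all names and attributes are hypothetical, not Primer's real interface):

// One shared script handles clicks for any element tagged with a data
// attribute, fetches the declared endpoint, and injects the returned HTML.
document.addEventListener("click", (event) => {
  const el = (event.target as HTMLElement).closest<HTMLElement>("[data-async-url]");
  if (!el) return;               // not an element managed by the base library
  event.preventDefault();
  fetch(el.dataset.asyncUrl!)
    .then((r) => r.text())
    .then((html) => {
      const target = document.getElementById(el.dataset.target ?? "");
      if (target) target.innerHTML = html;
    });
});

A feature would then only need markup such as <a data-async-url="/comments/123" data-target="comments-box">Show comments</a>, with no script of its own.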

Finally, Facebook decided to split up the elements on a page so that they load separately. "We call the whole system BigPipe and it allows us to break our web pages up into logical blocks of content, called Pagelets, and pipeline the generation and render of these Pagelets. Looking at the home page, for example, think of the newsfeed as one Pagelet, the Suggestions box another, and the advertisement yet another," Sobel reveals. The idea is to give the browser something to render while the remaining components are still being generated on the server. Even if the total load time were the same, the page feels faster to the user than it would if everything arrived in one big push.
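BigPipe lives in Facebook's own server stack, but the pipelining idea can be sketched in a few lines of TypeScript on Node (purely illustrative; the pagelets and latencies below are made up): flush the page shell immediately, then stream each Pagelet's markup as soon as it is ready rather than waiting for the slowest one.

// Rough sketch of BigPipe-style pipelining. The shell with empty placeholders
// is flushed right away; each "pagelet" is streamed as soon as its data is
// ready and a tiny inline script drops it into its placeholder.
import { createServer } from "node:http";

// Hypothetical pagelet generators with different backend latencies.
const pagelets: Record<string, () => Promise<string>> = {
  newsfeed:    () => new Promise((ok) => setTimeout(() => ok("<ul>...feed...</ul>"), 300)),
  suggestions: () => new Promise((ok) => setTimeout(() => ok("<div>people you may know</div>"), 120)),
  ads:         () => new Promise((ok) => setTimeout(() => ok("<div>sponsored</div>"), 60)),
};

createServer(async (_req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  // 1. Flush the page shell immediately so the browser can start rendering.
  res.write(`<html><body>
    <div id="newsfeed"></div><div id="suggestions"></div><div id="ads"></div>`);
  // 2. Stream each pagelet as it completes, in whatever order that happens.
  await Promise.all(Object.entries(pagelets).map(async ([id, render]) => {
    const html = await render();
    res.write(`<script>document.getElementById(${JSON.stringify(id)}).innerHTML =
      ${JSON.stringify(html)};</script>`);
  }));
  res.end("</body></html>");
}).listen(8080);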