Why would you want to do that?

Nov 14, 2006 15:19 GMT  ·  By

While most web developers try to improve the quality of their content and climb Google's rankings, some people, for various reasons, are trying hard to remove their sites from Google's index entirely.

The question is "Why?" Google says that "we stop indexing pages of a site only at the request of the webmaster who's responsible for those pages, when it's spamming our index, or as required by law. This policy is necessary to ensure that pages aren't inappropriately removed from our index."

If you no longer want to be responsible for the content published on a website, it may indeed be better to remove it from Google's index. How can you do this? The company answers: "If you wish to exclude your entire website from Google's index, you can place a file at the root of your server called robots.txt. This is the standard protocol that most web crawlers observe for excluding a web server or directory from an index."
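As a concrete illustration, a minimal robots.txt that excludes an entire site from all compliant crawlers looks like this (the file must be served from the root, e.g. http://www.example.com/robots.txt):

```
User-agent: *
Disallow: /
```

`User-agent: *` addresses every crawler; to target only Google's crawler, you could use `User-agent: Googlebot` instead.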

"If you believe your request is urgent and cannot wait until the next time Google crawls your site, use our automatic URL removal system. In order for this automated process to work, the webmaster must first create and place a robots.txt file on the site in question," they added.

When Google crawls a website and finds a robots.txt file at the server root blocking its pages, it removes those pages from its index. Note, however, that robots.txt only takes effect when placed at the root of the server; to exclude just a single directory or an image from Google's index, keep the file at the root and add a Disallow rule pointing at that path. For more information about how robots.txt works and other useful details, you can visit this page before you start the removal process.
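For example, to exclude only one directory and one image from Google while leaving the rest of the site indexable, the robots.txt at the server root could contain rules like these (the paths here are hypothetical):

```
User-agent: Googlebot
Disallow: /private/
Disallow: /images/photo.jpg
```

Because the group names `Googlebot` specifically, other crawlers are unaffected; everything not listed under `Disallow` remains crawlable.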