Improving your blog’s SEO through Robots.txt and Canonical Headers

Search engines like Google gather information for their search results by sending crawlers, also known as bots or spiders, across the web. These crawlers scan your site or blog, take every piece of information they can, and index it so it can be served up whenever someone searches for related content. The bots can even reach your site's confidential information if you don't know how to maintain robots.txt, a file that acts something like a security guard, telling spiders which directories they may index. Without it, your site is open to possible online leakage, with sensitive information popping up in search results.

In simple terms, without a robots.txt file, which lives in the root directory of your site, crawlers assume they can grab all the information on your blog or site.
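As a concrete sketch, a minimal robots.txt might look like the following; the /private/ and /tmp/ paths and the example.com sitemap URL are placeholders, not values from the original post:

    User-agent: *
    Disallow: /private/
    Disallow: /tmp/

    Sitemap: https://example.com/sitemap.xml

The User-agent: * line addresses all crawlers, and each Disallow line names a path they are asked to skip. Keep in mind that robots.txt is advisory: well-behaved crawlers honor it, but it is not access control, so truly confidential files still need real protection on the server.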

For WordPress users, this file is already configured at installation, so all you need to do is customize it to control how you want crawlers to index your site. To learn more about improving your SEO by configuring robots.txt, and to see how canonical headers fit in, check out the post on the NetDNA blog through our source link.
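For reference, WordPress's auto-generated robots.txt is typically along these lines, though the exact contents vary by version and installed plugins:

    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php

A canonical header, meanwhile, tells search engines which copy of a duplicated page is the original one to index. It can be declared either as a tag in the HTML head or as an HTTP response header; a sketch using a placeholder example.com URL:

    <link rel="canonical" href="https://example.com/original-post/" />

    Link: <https://example.com/original-post/>; rel="canonical"

Either form points crawlers at the preferred URL, so duplicate copies (print views, tracking-parameter variants, CDN mirrors) don't split your ranking signals.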

We are sharing this post as we think it will greatly help the blogging community.

Source: NetDNA Blog
