robots.txt

Overview

WordPress.com uses WordPress core's robots.txt handling and adds a number of default entries, such as the sitemap, to optimize your site. To modify the output, hook into the do_robotstxt action, or filter the final contents by hooking into the robots_txt filter.


Example: Disallow a directory

function my_disallow_directory() {
	// Fires while robots.txt is being generated; echoed rules are
	// printed ahead of the default entries.
	echo "User-agent: *" . PHP_EOL;
	echo "Disallow: /path/to/your/directory/" . PHP_EOL;
}
add_action( 'do_robotstxt', 'my_disallow_directory' );
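
If you'd rather adjust the generated output than echo ahead of it, the robots_txt filter receives the assembled contents along with the site's public flag. A minimal sketch of that approach (the function name and the /private/ path are placeholders):

function my_filter_robots_txt( $output, $public ) {
	// Append a rule to the generated robots.txt contents.
	$output .= "Disallow: /private/" . PHP_EOL;
	return $output;
}
add_filter( 'robots_txt', 'my_filter_robots_txt', 10, 2 );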


Caching

Note that we cache the robots.txt output for long periods of time. After making any changes, force the cache to clear by going to Settings > Reading in your Dashboard and toggling the privacy settings.


On convenience domains

On any convenience domain (a subdomain of go-vip.net or go-vip.co), the robots.txt output will be hard-coded to return a “Disallow for all user agents” result. This is to prevent search engines from indexing content hosted on development/staging sites.
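
For reference, the standard robots.txt rules for disallowing all user agents, as served on these domains, look like this:

User-agent: *
Disallow: /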
