Robots.txt Generator Tool

Generate custom robots.txt files to control search engine crawling on your website.

Configure Your Robots.txt

Enter your website's base URL; it will be used for the Sitemap directive.

User Agents Configuration

  • Googlebot: Google's main crawler
  • All User Agents (*): all search engine crawlers
  • Bingbot: Microsoft Bing's crawler

Disallow Specific Directories

Common directories to exclude from crawling:

  • /admin/, /wp-admin/, /administrator/
  • /cgi-bin/, /scripts/
  • /tmp/, /temp/, /cache/
  • /logs/, /error_log/
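Selecting all of the directories above for all user agents would produce a block like the following (the paths are the examples listed; adjust them to match your site's structure):

```
User-agent: *
Disallow: /admin/
Disallow: /wp-admin/
Disallow: /administrator/
Disallow: /cgi-bin/
Disallow: /scripts/
Disallow: /tmp/
Disallow: /temp/
Disallow: /cache/
Disallow: /logs/
Disallow: /error_log/
```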

Sitemap Configuration

Optionally include your sitemap's location in robots.txt so crawlers can discover it directly.

About Robots.txt

The robots.txt file tells search engine crawlers which pages or files they can or cannot request from your site.

  • Controls which URLs search engine crawlers may request
  • Keeps crawlers out of private or low-value areas (note: it does not guarantee a page stays out of the index; use a noindex directive for that)
  • Improves crawl efficiency by focusing crawlers on your important pages
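At its core, a generator like this one just joins the chosen options into directive lines. A minimal sketch in Python (the function name, parameters, and defaults are illustrative, not this tool's actual code):

```python
def build_robots_txt(user_agent="*", disallow=(), sitemap=None):
    """Assemble a robots.txt body from simple options (hypothetical helper)."""
    lines = [f"User-agent: {user_agent}"]
    # One Disallow line per excluded path; no Disallow lines means "allow all".
    for path in disallow:
        lines.append(f"Disallow: {path}")
    if sitemap:
        lines.append("")  # blank line before the Sitemap directive
        lines.append(f"Sitemap: {sitemap}")
    return "\n".join(lines) + "\n"


print(build_robots_txt(disallow=["/admin/", "/tmp/"],
                       sitemap="https://example.com/sitemap.xml"))
```

Save the returned text as robots.txt at the root of your domain (e.g. https://example.com/robots.txt); crawlers only look for it there.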

Best Practices

What to Block

Admin areas, login pages, temporary files, and duplicate-content URLs.

What Not to Block

Public content, and CSS/JS files; crawlers need those assets to render your pages correctly.

Testing

Always test your robots.txt before deploying it, for example with Google Search Console's robots.txt report.
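You can also check rules locally before deploying. Python's standard library ships a robots.txt parser, `urllib.robotparser`, which evaluates whether a given user agent may fetch a URL:

```python
from urllib.robotparser import RobotFileParser

# The rules under test; in practice you could also load a live file
# with rp.set_url(...) followed by rp.read().
rules = """\
User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/admin/login"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))    # True
```

This makes it easy to write a quick regression check so that edits to your robots.txt never accidentally block pages you want crawled.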

Common Rules Examples

Block the admin area for all crawlers:

  User-agent: *
  Disallow: /admin/

Allow Google to crawl the entire site:

  User-agent: Googlebot
  Allow: /

Specify the sitemap location:

  Sitemap: https://example.com/sitemap.xml