Free Robots.txt Generator

Create a perfect robots.txt file for your website. Visual editor with bot presets, path rules, and live preview.

Quick Templates

User-Agent Rules

Quick Add Bots

Click to add a block for this bot with default Disallow rules.

Global Settings

📄 Live Preview

User-agent: *
Allow: /

Privacy Policy Playbook

Need a privacy policy for your website? Get our step-by-step playbook to create a legally compliant privacy policy that protects your business.

Get the Playbook - $9

Frequently Asked Questions

What is a robots.txt file?

A robots.txt file tells search engine crawlers which pages or sections of your website they can or cannot access. It is placed in the root directory of your website (e.g., https://example.com/robots.txt) and is one of the first files crawlers check before indexing your site.
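As a sketch, a minimal robots.txt (with illustrative paths) that lets every crawler access everything except an admin area, and points crawlers to a sitemap, looks like this:

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```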

How do I use the generated robots.txt file?

Copy the generated content and save it as a file named "robots.txt" in the root directory of your website. Make sure it is accessible at yourdomain.com/robots.txt. Most web hosting platforms and content management systems have a dedicated place to upload or edit this file.

What does "User-agent: *" mean?

The asterisk (*) is a wildcard that matches all web crawlers. Rules under "User-agent: *" apply to every bot that visits your site. You can also create rules for specific bots like Googlebot or Bingbot by using their exact names.
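For example, a bot-specific block sits alongside the wildcard block; under the Robots Exclusion Protocol, a crawler follows only the most specific group that matches its name, so Googlebot would obey its own block and ignore the wildcard rules (paths here are illustrative):

```
# Applies to all crawlers without a more specific block
User-agent: *
Disallow: /private/

# Googlebot matches this block instead and follows only these rules
User-agent: Googlebot
Disallow: /drafts/
```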

Should I block AI crawlers like GPTBot?

It depends on your preference. If you do not want your content used to train AI models, you can block bots like GPTBot (OpenAI), CCBot (Common Crawl), Google-Extended, and Claude-Web. Use our "Block AI Crawlers" template to set this up quickly.
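The "Block AI Crawlers" template produces blocks roughly along these lines (the exact output may differ):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Claude-Web
Disallow: /
```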

Does robots.txt affect my SEO?

Yes. A misconfigured robots.txt can accidentally block search engines from crawling important pages, which keeps their content out of search results. Always make sure you are not disallowing pages you want to appear in search results. Use the "Allow" directive or leave paths unblocked for pages you want indexed.
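One common pattern, shown here with illustrative paths, is disallowing a directory while explicitly allowing a single page inside it. Major crawlers resolve the conflict in favor of the most specific (longest) matching rule, so the Allow wins for that page:

```
User-agent: *
Disallow: /downloads/
Allow: /downloads/catalog.html
```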

What is the Crawl-delay directive?

Crawl-delay tells bots how many seconds to wait between requests. This can reduce server load from aggressive crawlers. Note that Google does not honor Crawl-delay (use Google Search Console instead), but Bing, Yandex, and others do respect it.
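For example, to ask Bingbot to wait 10 seconds between requests (the value is illustrative; tune it to your server's capacity):

```
User-agent: Bingbot
Crawl-delay: 10
```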

Related Tools: Meta Tag Generator | Schema Markup Generator | Favicon Generator