Create a perfect robots.txt file for your website. Visual editor with bot presets, path rules, and live preview.
A robots.txt file tells search engine crawlers which pages or sections of your website they can or cannot access. It is placed in the root directory of your website (e.g., https://example.com/robots.txt) and is one of the first files crawlers check before indexing your site.
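A minimal example (the folder and sitemap URL are placeholders) that lets every bot crawl the whole site except one directory:

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```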
Copy the generated content and save it as a file named "robots.txt" in the root directory of your website. Make sure it is accessible at yourdomain.com/robots.txt. Most web hosts and content management systems have a dedicated place to upload or edit this file.
The asterisk (*) is a wildcard that matches all web crawlers. Rules under "User-agent: *" apply to every bot that visits your site. You can also create rules for specific bots like Googlebot or Bingbot by using their exact names.
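As a quick sketch, Python's standard `urllib.robotparser` can show which block applies to a given bot. The file contents and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a catch-all block plus a Googlebot-specific block.
robots = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /drafts/
"""

rp = RobotFileParser()
rp.parse(robots.splitlines())

# Googlebot matches its own block, so only /drafts/ applies to it.
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/drafts/post"))   # False

# Any other bot falls back to the "User-agent: *" block.
print(rp.can_fetch("SomeOtherBot", "https://example.com/private/page"))  # False
```

Note that a bot with its own block ignores the `*` block entirely, so rules you want applied to everyone must be repeated in each named block.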
It depends on your preference. If you do not want your content used to train AI models, you can block bots like GPTBot (OpenAI), CCBot (Common Crawl), Google-Extended, and Claude-Web (Anthropic). Use our "Block AI Crawlers" template to set this up quickly.
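A rule set that blocks the AI crawlers named above from the entire site would look like this:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Claude-Web
Disallow: /
```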
Yes. A misconfigured robots.txt can accidentally block search engines from indexing important pages. Always make sure you are not disallowing pages you want to appear in search results. Use the "Allow" directive or leave paths unblocked for pages you want indexed.
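One safeguard is to test a draft file against a list of must-index URLs before publishing it. This sketch uses Python's `urllib.robotparser` with placeholder paths; note that this parser applies rules in file order, which is why the specific `Allow` line is listed before the broader `Disallow`:

```python
from urllib.robotparser import RobotFileParser

# Draft rules: hide /downloads/ and /admin/, but keep one PDF fetchable.
robots = """\
User-agent: *
Allow: /downloads/catalog.pdf
Disallow: /downloads/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots.splitlines())

# Placeholder URLs you want to appear in search results.
must_stay_indexable = [
    "https://example.com/",
    "https://example.com/products/widget",
    "https://example.com/downloads/catalog.pdf",
]
blocked = [u for u in must_stay_indexable if not rp.can_fetch("Googlebot", u)]
print(blocked)  # [] -- an empty list means nothing important is blocked

# The broader Disallow still works for everything else under /downloads/.
print(rp.can_fetch("Googlebot", "https://example.com/downloads/internal.zip"))  # False
```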
Crawl-delay tells bots how many seconds to wait between requests. This can reduce server load from aggressive crawlers. Note that Google does not honor Crawl-delay (use Google Search Console instead), but Bing, Yandex, and others do respect it.
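For reference, Python's `urllib.robotparser` (3.6+) can read the delay back out of a file, which is a convenient way to sanity-check the value. The file below is a hypothetical example:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical file asking all bots to wait 10 seconds between requests.
robots = """\
User-agent: *
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(robots.splitlines())

# Bingbot has no dedicated block, so it inherits the catch-all delay.
print(rp.crawl_delay("Bingbot"))  # 10
print(rp.can_fetch("Bingbot", "https://example.com/"))  # True
```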