Easy to Use:
It's never been easier to test the accuracy of your robots.txt file. Just paste your complete URL, with /robots.txt appended, press enter, and your report will be ready in seconds.
Not only will our robots.txt checker find mistakes caused by typos, syntax, and "logic" errors, but it will also give you helpful optimization tips.
Taking into account both the Robots Exclusion Standard and spider-specific extensions, our robots.txt checker will generate an easy-to-read report that will help you correct any errors in your robots.txt file.
Frequently Asked Questions
This tool is simple to use and gives you a report in seconds – just type in your full website URL, followed by /robots.txt (e.g. yourwebsite.com/robots.txt) and click on the ‘check’ button. Our robots.txt checker will find any mistakes (such as typos, syntax and ‘logic’ errors) and give you tips for optimizing your robots.txt file.
Checking the file before your website is crawled means you can avoid issues such as all of your website content being crawled and indexed rather than just the pages you want indexed. For example, if you have a page that you only want visitors to access after filling in a subscription form, or a members-only login page, but don't exclude it in your robots.txt file, it could end up being indexed.
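As an illustration, a rule that asks compliant crawlers to stay away from a members-only area might look like the sketch below (the /members/ path is just a placeholder for whichever pages you want to exclude):

    # Ask all crawlers not to fetch anything under /members/
    User-agent: *
    Disallow: /members/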
Errors you may see include:
Invalid URL – You’ll see this error if your robots.txt file is missing completely.
Potential wildcard error – Although technically a warning rather than an error, if you see this message it’s usually because your robots.txt file contains a wildcard (*) in the Disallow field (e.g. Disallow: /*.rss). Google accepts wildcards in the Disallow field, but relying on them is not recommended best practice.
Generic and specific user-agents in the same block of code – This is a syntax error in your robots.txt file and should be corrected to avoid problems with crawling your website; see the example after this list.
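For example, the first block below mixes a generic and a specific user-agent in the same block and would be flagged, while the second version separates them into their own blocks (the /private/ path is just a placeholder):

    # Flagged: generic and specific user-agents in the same block
    User-agent: *
    User-agent: Googlebot
    Disallow: /private/

    # Corrected: one block per user-agent
    User-agent: *
    Disallow: /private/

    User-agent: Googlebot
    Disallow: /private/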
Warnings you may see include:
Allow: / – Using the Allow directive isn’t going to damage your ranking or affect your website, but it’s not standard practice. Major robots including Google and Bing will accept this directive, but not all crawlers do – and generally speaking, it’s best to make your robots.txt file compatible with all crawlers, not just the big ones.
Field name capitalization – While field names are not necessarily case sensitive, some crawlers may require capitalization, so it’s a good idea to capitalize field names for specific user-agents (see the sketch after this list).
Sitemap support – Many robots.txt files include the location of the website’s sitemap. This is not considered best practice, although Google and Bing both support the feature.
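To illustrate those last two points, here is a sketch with conventionally capitalized field names and a sitemap reference (the paths and sitemap URL are just placeholders):

    # Capitalized field names for a specific user-agent
    User-agent: Googlebot
    Disallow: /search/

    # Sitemap location, supported by Google and Bing
    Sitemap: https://www.example.com/sitemap.xml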
Some website builders like Wix don’t allow you to edit your robots.txt file directly but do allow you to add noindex tags for specific pages.
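If you take that route, the tag in question is the standard robots meta tag, placed in the <head> of the page you want to keep out of search results; a minimal example looks like this:

    <meta name="robots" content="noindex">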