Websites are often visited by programs that extract information from them. These programs are called web crawlers or spiders. One example is Google's "Googlebot": Google needs to visit your website to learn about your content so it can add your pages to its index.
Bots can cause a lot of traffic and strain on your server. In Google's case that is usually worth it, because you get search traffic in return; other spiders simply cost you resources.
Webmasters have a way to tell these spiders not to visit their website. By adding a robots.txt file to the site, they can either block specific parts of it, or disallow the whole site for specific spiders.
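For example, a robots.txt that blocks one crawler entirely while only restricting a single directory for everyone else could look like this (the crawler name and path are made up for illustration):

```
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
```

The "Disallow: /" rule under a specific user agent is what disallows that spider from the whole site.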
These statistics are collected from the robots.txt files of the websites in our database. We count the crawlers that have been disallowed all access to a website, i.e. those with a "Disallow: /" rule for their user agent.
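A check like this can be sketched with Python's standard urllib.robotparser module. The robots.txt content and the "BadBot" name below are hypothetical; a crawler counts as fully disallowed when it may not fetch the site root:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration.
robots_txt = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# BadBot is disallowed everywhere, including the site root.
print(parser.can_fetch("BadBot", "/"))      # False
# Googlebot is only blocked from /private/, so it is not fully disallowed.
print(parser.can_fetch("Googlebot", "/"))   # True
```

In a real measurement, the robots.txt would be fetched per site and this root-path check run once per known crawler user agent.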
Websites that do not have a robots.txt file are excluded.