Bad web crawlers

Websites are regularly visited by programs that extract information from them. These programs are called web crawlers or spiders. One well-known example is Googlebot: Google needs to visit your website to learn about your content so it can add your pages to its index.
Bots can cause a lot of traffic and strain on your server. In Google's case that is a fair trade, because you get visitors in return. Other spiders just cost you resources.

There is a way for webmasters to tell these spiders not to visit their website. By adding a robots.txt file to their site, they can block specific parts of it, or disallow the whole site for specific spiders.
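For example, a robots.txt that bans one crawler from the entire site while only keeping a single directory off-limits for everyone else could look like this (the bot name and path are purely illustrative):

    User-agent: MJ12bot
    Disallow: /

    User-agent: *
    Disallow: /private/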
These statistics are collected from the robots.txt files of the websites in our database. We count a crawler as banned when its user-agent is disallowed all access to a website (Disallow: /). Websites that do not have a robots.txt file are excluded.
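As a rough sketch of how such a check could work, the snippet below uses Python's standard urllib.robotparser to test which user-agents may not fetch the site root. The crawler list is taken from the table below; fetching the robots.txt files and the database side are left out, and this is an illustration rather than our exact measurement code:

    from urllib.robotparser import RobotFileParser

    # User-agent tokens to test; names taken from the table below.
    CRAWLERS = ["MJ12bot", "Nutch", "Baiduspider", "AhrefsBot",
                "Yandex", "ia_archiver", "BLEXBot", "SemrushBot",
                "Bingbot", "WebZIP"]

    def banned_crawlers(robots_txt: str) -> list[str]:
        """Return the crawlers that are disallowed all access ('Disallow: /')."""
        parser = RobotFileParser()
        parser.parse(robots_txt.splitlines())
        # A crawler counts as banned when it may not fetch the site root.
        return [bot for bot in CRAWLERS if not parser.can_fetch(bot, "/")]

    # Example: this robots.txt bans MJ12bot entirely and no one else.
    example = "User-agent: MJ12bot\nDisallow: /\n"
    print(banned_crawlers(example))  # ['MJ12bot']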
Most banned web crawlers
 #   Spider/Crawler                                 Percentage
 1   MJ12bot (Majestic-12 search engine)            19.01%
 2   Nutch (Apache web crawler)                     17.74%
 3   Baiduspider (Chinese search engine Baidu)      17.08%
 4   AhrefsBot (SEO tool Ahrefs)                    15.80%
 5   Yandex (Russian search engine)                 13.05%
 6   ia_archiver (Wayback Machine, Archive.org)     11.03%
 7   BLEXBot                                         6.84%
 8   SemrushBot                                      6.82%
 9   Bingbot                                         6.01%
10   WebZIP (offline browser)                        5.28%