Dear researchers,
There are some Web resources that are accessed only by Web robots and that human users are typically unaware of. The 'robots.txt' file is a good example of such a resource; in practice, a request for it is used as an indicator for detecting Web robots.
I would appreciate it if you could let me know what other Web resources (e.g., files) are accessed only by Web robots and can be used as indicators for detecting these crawlers.
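For context, here is a minimal sketch of how a robots.txt hit can serve as a robot indicator, assuming access logs in Common Log Format and a hypothetical log file name 'access.log'; the heuristic simply flags any client IP that ever requests /robots.txt as a likely robot.

```python
import re

# Matches the client IP, HTTP method, and request path from a
# Common Log Format line (an assumption about the log layout).
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*"')

def likely_robot_ips(log_path="access.log"):
    """Return the set of client IPs that requested /robots.txt."""
    robots = set()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = LOG_LINE.match(line)
            if not m:
                continue
            ip, method, path = m.groups()
            # Strip any query string before comparing the path.
            if path.split("?")[0].lower() == "/robots.txt":
                robots.add(ip)
    return robots

if __name__ == "__main__":
    for ip in sorted(likely_robot_ips()):
        print(ip)
```

I am looking for other resources that could be used in the same way as the /robots.txt check above.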