txt file is then parsed and instructs the robot as to which pages are not to be crawled. Because a search engine crawler may keep a cached copy of this file, it may occasionally crawl pages a webmaster does not wish to be crawled. Pages typically prevented from being crawled include login-specific pages
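
As a rough sketch of this parsing step (assuming a Python-based crawler, a hypothetical "ExampleBot" user agent, and placeholder example.com URLs), the standard library's urllib.robotparser can read the rules and decide whether a given page may be fetched:

from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks a login-specific page for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /login
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The crawler checks each candidate URL against the parsed rules before fetching it.
for url in ("https://example.com/", "https://example.com/login"):
    allowed = parser.can_fetch("ExampleBot", url)
    print(url, "->", "crawl" if allowed else "skip (disallowed)")

In practice the crawler fetches and caches the live robots.txt (for example with RobotFileParser.set_url and read) rather than parsing an inline string, which is exactly why a stale cached copy can lead it to crawl pages the webmaster has since disallowed.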