
Gisle Aas


WWW::RobotRules - Parse robots.txt files


 use WWW::RobotRules;

 $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

 $robotsrules->parse($url, $content);
 if ($robotsrules->allowed($url)) {
     # ... fetch $url
 }


This module parses "/robots.txt" files as specified in "A Standard for Robot Exclusion", described at http://web.nexor.co.uk/users/mak/doc/robots/norobots.html.

Webmasters can use this file to disallow conforming robots access to parts of their WWW server.

The parsed file is kept as a Perl object that supports methods to check if a given URL is prohibited.

Note that the same RobotRules object can parse multiple files.
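As a sketch of that multi-file use, the following parses rules for two sites into one object and queries both. The hostnames and robots.txt bodies here are hypothetical:

```perl
use WWW::RobotRules;

my $rules = WWW::RobotRules->new('MOMspider/1.0');

# Hypothetical robots.txt bodies for two different hosts
my $robots_txt_a = <<'EOT';
User-agent: *
Disallow: /private/
EOT

my $robots_txt_b = <<'EOT';
User-agent: MOMspider
Disallow: /
EOT

# The same object can hold rules for several sites at once;
# rules are kept per host, keyed by the URL used to fetch them
$rules->parse('http://a.example.com/robots.txt', $robots_txt_a);
$rules->parse('http://b.example.com/robots.txt', $robots_txt_b);

print $rules->allowed('http://a.example.com/index.html')  ? "yes\n" : "no\n";
print $rules->allowed('http://a.example.com/private/x')   ? "yes\n" : "no\n";
print $rules->allowed('http://b.example.com/anything')    ? "yes\n" : "no\n";
```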

new WWW::RobotRules 'MOMspider/1.0'

The argument given to new() is the name of the robot, which is matched against the "User-agent" lines of the parsed files.

parse($url, $content)

The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.

allowed($url)

Returns TRUE if this robot is allowed to retrieve this URL.
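A minimal sketch of the full cycle, using a hypothetical host and a hand-written robots.txt body in place of a fetched file:

```perl
use WWW::RobotRules;

my $rules = WWW::RobotRules->new('MOMspider/1.0');

# Normally $content would come from fetching the robots.txt URL itself
$rules->parse('http://www.example.com/robots.txt',
              "User-agent: *\nDisallow: /cgi-bin/\n");

# allowed() returns true unless a matching Disallow rule applies
print "ok\n"      if  $rules->allowed('http://www.example.com/welcome.html');
print "blocked\n" unless $rules->allowed('http://www.example.com/cgi-bin/search');
```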