NAME

WWW::RobotRules - Parse robots.txt files

SYNOPSIS

 use WWW::RobotRules;

 my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

 $robotsrules->parse($url, $content);

 if ($robotsrules->allowed($url)) {
     ...
 }

DESCRIPTION

This module parses "/robots.txt" files as specified in "A Standard for Robot Exclusion", described at http://web.nexor.co.uk/users/mak/doc/robots/norobots.html.

Webmasters can use this file to forbid conforming robots from accessing parts of their WWW server.

The parsed file is kept as a Perl object that supports methods to check if a given URL is prohibited.

Note that the same RobotRules object can parse multiple files.
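
For example (the host name www.example.com is used purely for illustration), a small robots.txt body can be parsed and then queried like this:

 use WWW::RobotRules;

 my $rules = WWW::RobotRules->new('MOMspider/1.0');

 # A minimal robots.txt body that excludes every robot from /private/
 my $robots_txt = "User-agent: *\nDisallow: /private/\n";

 $rules->parse("http://www.example.com/robots.txt", $robots_txt);

 print "allowed\n"  if $rules->allowed("http://www.example.com/index.html");
 print "excluded\n" if !$rules->allowed("http://www.example.com/private/secret.html");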

new WWW::RobotRules 'MOMspider/1.0'

The argument given to new() is the name of the robot.

parse($url, $content)

The parse() method takes the URL that was used to retrieve the "/robots.txt" file, and the contents of the file.
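
For example, the contents might be fetched with LWP::Simple before being handed to parse() (the URL shown is only an illustration):

 use LWP::Simple qw(get);

 # Any site's /robots.txt works the same way; this URL is just an example
 my $url = "http://www.example.com/robots.txt";
 my $robots_txt = get($url);

 # parse() needs both the URL and the raw contents of the file
 $robotsrules->parse($url, $robots_txt) if defined $robots_txt;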

allowed($url)

Returns TRUE if this robot is allowed to retrieve this URL.
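
For example, a robot might use allowed() to filter its queue of candidate URLs (the URLs below are hypothetical):

 my @urls_to_visit = (
     "http://www.example.com/index.html",
     "http://www.example.com/private/secret.html",
 );

 # Only fetch URLs that the parsed rules allow for this robot
 foreach my $url (@urls_to_visit) {
     next unless $robotsrules->allowed($url);
     # ... retrieve $url here ...
 }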