
NAME

lwp-rget - Retrieve WWW documents recursively

SYNOPSIS

 lwp-rget [--verbose] [--depth=N] [--limit=N] [--prefix=URL] <URL>
 lwp-rget --version

DESCRIPTION

This program will retrieve a document and store it in a local file. It will follow any links found in the document and store these documents as well, patching links so that they refer to the local copies. This process continues until there are no more unvisited links, or until the process is stopped by one or more of the limits that can be controlled by the command line arguments.
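The depth- and count-limited traversal described above can be sketched as follows. This is a minimal illustration, not the program's actual implementation: the link map is a hypothetical in-memory stand-in for pages fetched over HTTP, and the parameter names mirror the --depth and --limit options.

```python
from collections import deque

# Hypothetical in-memory "web": URL -> list of linked URLs.
# The real program would fetch each page over HTTP and parse its links.
LINKS = {
    "http://example.com/a": ["http://example.com/b", "http://example.com/c"],
    "http://example.com/b": ["http://example.com/a"],
    "http://example.com/c": [],
}

def crawl(start, depth=5, limit=50):
    """Return the URLs that would be retrieved, honouring --depth and --limit."""
    seen = {start}
    fetched = []
    queue = deque([(start, 0)])
    while queue and len(fetched) < limit:
        url, d = queue.popleft()
        fetched.append(url)            # "retrieve" and store the document
        if d < depth:                  # recurse only while within --depth
            for link in LINKS.get(url, []):
                if link not in seen:   # each document is fetched at most once
                    seen.add(link)
                    queue.append((link, d + 1))
    return fetched
```

With depth=0 only the start document itself is retrieved, which matches the behaviour of --depth=0 described under the options below.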

This program is useful if you want to make a local copy of a collection of documents or want to do web reading off-line.

All documents are stored as plain files in the current directory. The file names chosen are derived from the last component of URL paths.
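A rough sketch of that naming rule: take the last component of the URL path. The fallback name for directory-style URLs is an assumption for illustration; the real program also has to resolve name clashes.

```python
from urllib.parse import urlparse
import posixpath

def local_name(url):
    """Derive a local file name from the last component of the URL path.
    Simplified sketch; "index.html" is an assumed fallback for URLs
    whose path ends in a slash."""
    path = urlparse(url).path
    name = posixpath.basename(path)
    return name or "index.html"
```

For example, local_name("http://www.sn.no/sn/foo.html") yields "foo.html".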

The options are:

--depth=n

Limit the recursive level. Embedded images are always loaded, even if they fall outside the --depth. This means that one can use --depth=0 in order to fetch a single document together with all inline graphics.

The default depth is 5.

--limit=n

Limit the number of documents to get. The default limit is 50.

--prefix=url_prefix

Limit the links to follow. Only URLs that start with the prefix string are followed.

The default prefix is set to the "directory" of the initial URL. For instance, if the initial URL is 'http://www.sn.no/sn/foo.html', the default prefix is 'http://www.sn.no/sn/'.
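The prefix test and the default-prefix rule can be sketched like this. The truncate-after-last-slash rule is an assumption based on the description above, not the program's exact code.

```python
def within_prefix(url, prefix):
    """Follow a link only if its URL starts with the --prefix string."""
    return url.startswith(prefix)

def default_prefix(start_url):
    """Assumed default: the start URL truncated just after its last '/',
    i.e. the "directory" part of the initial URL."""
    return start_url[: start_url.rfind("/") + 1]
```

So with the default prefix for 'http://www.sn.no/sn/foo.html', a link to 'http://www.sn.no/sn/bar.html' is followed but a link to 'http://www.other.no/' is not.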

--sleep=n

Sleep n seconds before retrieving each document. This option allows you to proceed slowly, so that you do not load the server you are visiting too much.

--verbose

Make more noise while running.

--version

Print program version number and quit.

--help

Print the usage message and quit.

SEE ALSO

lwp-request, LWP

AUTHOR

Gisle Aas <aas@sn.no>