
NAME

fetch_story - fetch a story from the internet

VERSION

version 0.1704

SYNOPSIS

fetch_story [ --help | --manpage | --list ]

fetch_story [ --basename string ] [ --firefox_cookies filename | --wget_cookies filename ] [ --toc | --epub ] [ --use_wget ] [ --verbose ] [ --yaml ] url

DESCRIPTION

This fetches a story from the net (including multi-part stories), tidies up the HTML, and saves it. If a story is deduced to be a multi-chapter story, all chapters of the story are downloaded and saved to separate files.
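
For example, a minimal invocation fetches the story at the given URL and saves it to file(s) named after the story title (the URL here is purely illustrative):

    fetch_story http://www.example.com/story/12345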

Fetcher Plugins

In order to tidy the HTML and parse the pages for data about the story, site-specific "Fetcher" plugins have been written for sites such as fanfiction.net and LiveJournal, among others. These plugins can scrape meta-information about the story from the given page, including the URLs of all the chapters of a multi-chapter story.
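
To see which fetcher plugins are installed, use the --list option:

    fetch_story --list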

OPTIONS

--basename string

This option uses the given string as the "base" name of the story file(s) instead of constructing the name from the title of the story.
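
For example, the following saves the story to file(s) whose names begin with "mystory" rather than a name derived from the story title (the URL is illustrative):

    fetch_story --basename mystory http://www.example.com/story/12345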

--epub

Convert the story file(s) into EPUB format.
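
For example, to download a story and convert it to EPUB (illustrative URL):

    fetch_story --epub http://www.example.com/story/12345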

--firefox_cookies filename

The name of a Firefox 4+ cookie file to use for logging in to restricted sites. This option is ignored when wget is being used (see --wget_cookies instead).
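
For example, assuming a typical Firefox installation, where the cookie file is usually cookies.sqlite inside the profile directory (the profile name varies per installation; the URL is illustrative):

    fetch_story --firefox_cookies ~/.mozilla/firefox/<profile>/cookies.sqlite http://www.example.com/story/12345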

--help

Print help and exit.

--list

List the available fetcher plugins.

--manpage

Print manual page and exit. Requires "perldoc" to be installed.

--toc

Build a table-of-contents file for the story.
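
For example, to download a multi-chapter story and build a table-of-contents file for it (illustrative URL):

    fetch_story --toc http://www.example.com/story/12345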

--use_wget

Use wget to download files rather than the default Perl module, LWP. This may be necessary for some sites (such as LiveJournal) which require cookies to access restricted content, but which also don't play nice with the way LWP handles cookies.
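
For example, combining wget with a wget-format cookie file (the path and URL are illustrative; see --wget_cookies below):

    fetch_story --use_wget --wget_cookies ~/cookies.txt http://exampleuser.livejournal.com/12345.html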

--verbose

Be verbose.

--wget_cookies filename

The name of a cookie file in the format used by wget (also known as the Netscape cookie format), for logging in to restricted sites. This option works whether or not wget is being used.
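
For example, to use a Netscape-format cookie file with the default LWP downloader (the path and URL are illustrative):

    fetch_story --wget_cookies ~/cookies.txt http://www.example.com/story/12345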

--yaml

Put the meta-data into a YAML file.
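
For example, to also write the story's meta-data to a YAML file (illustrative URL):

    fetch_story --yaml http://www.example.com/story/12345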