
NAME

File::Scan - file parser intended for big files that don't fit into main memory.

VERSION

version 0.001

DESCRIPTION

In most cases you don't want this module; use File::Slurp instead.

This class can slurp a single line from a file without loading the whole file into memory. When you have to deal with files of millions of lines in a memory-limited environment, brute force isn't an option.

An index of all the lines in the file is built so that any line can be accessed almost instantly.

The memory used is therefore limited to the size of the index (a HashRef mapping line numbers to seek positions) plus the size of the single line being read.

It also provides a way to iterate nicely over all the lines of the file, using only the memory needed to hold one line at a time, not the whole file.
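
For instance, a full pass over a file could look like the sketch below. Note that the new() constructor, the starting line number, and the assumption that slurp_line returns undef past the last line are illustrative guesses, not documented API.

    use File::Scan;

    # Hypothetical sketch: new(), 1-based line numbers and undef-at-EOF
    # are assumptions, not documented behaviour.
    my $file = File::Scan->new( path => '/var/log/huge.log' );

    my $line_number = 1;
    while ( defined( my $line = $file->slurp_line($line_number) ) ) {
        print $line;    # only one line is held in memory at a time
        $line_number++;
    }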

ATTRIBUTES

path

Required, file path as a string.

line_separator

Optional, regular expression for the newline separator; the default is /(\015\012|\015|\012)/.

is_utf8

Optional, flag to tell if the file is UTF-8 encoded; the default is true.

If true, the line returned by slurp_line will be decoded.

index

Index that contains positions of all lines of the file, usage:

    $self->index->{ $line_number } = $seek_position;
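
For illustration, an instance configured with the attributes above might look like the following sketch; the new() constructor and the qr// form of line_separator are assumptions based on this documentation, not verified behaviour.

    use File::Scan;

    # Assumed Moose/Moo-style constructor; only "path" is required.
    my $file = File::Scan->new(
        path           => '/data/huge_export.tsv',
        line_separator => qr/(\015\012|\015|\012)/,   # default, shown for clarity
        is_utf8        => 1,                          # decode each line as UTF-8
    );

    # The index maps line numbers to seek positions in the file:
    my $seek_position = $file->index->{ 42 };   # line numbering scheme assumed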

METHODS

slurp_line

Returns the content of the line at the given line number.

    my $line_content = $self->slurp_line( $line_number );

ACKNOWLEDGMENT

This module was written at Weborama while dealing with huge raw files, where huge means "oh no, it really won't fit anymore in this compute slot!" (compute slots being limited in main memory).

AUTHORS

This module has been written at Weborama by Alexis Sukrieh and Bin Shu.

AUTHOR

Alexis Sukrieh <sukria@sukria.net>

COPYRIGHT AND LICENSE

This software is copyright (c) 2014 by Weborama.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.