NAME

Text::xSV - read character separated files

SYNOPSIS

  use Text::xSV;
  my $csv = Text::xSV->new();
  $csv->open_file("foo.csv");
  $csv->bind_header();
  # Make the headers case insensitive
  foreach my $field ($csv->get_fields) {
    if (lc($field) ne $field) {
      $csv->alias($field, lc($field));
    }
  }
  
  $csv->add_compute("message", sub {
    my $csv = shift;
    my ($name, $age) = $csv->extract(qw(name age));
    return "$name is $age years old\n";
  });

  while ($csv->get_row()) {
    my ($name, $age) = $csv->extract(qw(name age));
    print "$name is $age years old\n";
    # Same as
    #   print $csv->extract("message");
  }

DESCRIPTION

This module is for reading character separated data. The most common example is comma-separated, but that is far from the only possibility: the same basic format is exported by Microsoft products using tabs, colons, or other characters as the separator.

The format is a series of rows separated by returns. Within each row you have a series of fields separated by your character separator. A field may be unquoted, in which case it may not contain a double-quote, separator, or return; or it may be quoted, in which case it may contain anything, with embedded double-quotes encoded by doubling them. In Microsoft products, quoted fields are strings and unquoted fields can be interpreted as being of various datatypes based on a set of heuristics. By and large this fact is irrelevant in Perl because Perl is largely untyped. The one exception, which this module handles, is that empty unquoted fields are treated as nulls, which are represented in Perl as undefined values. If you want a zero-length string, quote it.
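To illustrate the quoting rule, here is a sketch in plain Perl of how a single quoted field decodes. This is for illustration only, not how the module parses internally:

```perl
# A quoted field may contain anything; embedded double-quotes are
# encoded by doubling them.
my $raw = '"He said ""hello"" to me"';

(my $field = $raw) =~ s/\A"(.*)"\z/$1/s;  # strip the surrounding quotes
$field =~ s/""/"/g;                       # collapse doubled quotes

print "$field\n";  # He said "hello" to me
```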

People usually solve this naively with split. The next step up is to read a line and parse it. Unfortunately, that choice of interface (which Text::CSV on CPAN makes) makes it impossible to handle returns embedded in a field, because a single logical row can then span several physical lines. Therefore the parser may need access to the whole file.

This module solves the problem by giving the CSV object access to the filehandle: if, while parsing, it notices that another line is needed, it can read at will.

USAGE

First you set up and initialize an object, then you read the CSV file through it. The constructor can also perform the initializations for you. Here are the available methods:

new

This is the constructor. It takes a hash of optional arguments: the filename of the CSV file you are reading, the fh through which you read, an optional filter, the error_handler that is called on errors, and the one-character sep that you are using. If filename is passed and fh is not, then it will open a filehandle on that file and set fh accordingly. The separator defaults to a comma.

The filter is an anonymous function which is expected to accept a line of input and return a filtered line of output. The default filter removes \r so that Windows files can be read under Unix. A filter could also be used to, e.g., strip out Microsoft smart quotes.

The error handler is an anonymous function which is expected to take an error message and do something useful with it. The default error handler just calls Carp::confess. Error handlers that do not throw exceptions (e.g. with die) are less tested and may not work perfectly in all circumstances.
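As a sketch, here is a filter that strips \r (like the default) and also replaces Microsoft smart quotes with plain ASCII quotes. The byte values assume Windows-1252 input; adjust for your encoding:

```perl
# A filter takes one line of input and returns the filtered line.
my $filter = sub {
    my $line = shift;
    $line =~ s/\r//g;            # what the default filter does
    $line =~ s/[\x91\x92]/'/g;   # Windows-1252 curly single quotes
    $line =~ s/[\x93\x94]/"/g;   # Windows-1252 curly double quotes
    return $line;
};
```

Pass it to the constructor as filter => $filter, or via set_filter.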
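Putting these options together, a sketch of a constructor call (the file name and data are invented for illustration; the tab separator and croak-based handler are illustrative choices, not defaults):

```perl
use Text::xSV;
use Carp;

# Create a small tab-separated sample file to read back.
open my $out, '>', 'sample.tsv' or die "Cannot write sample.tsv: $!";
print $out "name\tage\nAlice\t30\n";
close $out;

my $csv = Text::xSV->new(
    filename      => 'sample.tsv',          # opened for you, since no fh is passed
    sep           => "\t",                  # must be a single character
    error_handler => sub { croak(shift) },  # default is Carp::confess
);
$csv->bind_header();
my ($name, $age) = $csv->get_row();
```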

set_error_handler
set_filename
set_fh
set_filter
set_sep

Set methods corresponding to the optional arguments to new.

open_file

Takes the name of a file, opens it, then sets the filename and fh.

bind_fields

Takes an array of field names and memorizes the field positions for later use. bind_header is preferred.

bind_header

Reads a row from the file as a header line and memorizes the positions of the fields for later use. File formats that carry field information tend to be far more robust than ones which do not, so this is the preferred function.

get_row

Reads a row from the file. Returns an array or reference to an array depending on context. Will also store the row in the row property for later access.

extract

Extracts a list of fields out of the last row read. In list context returns the list, in scalar context returns an anonymous array.

extract_hash

Extracts all fields that it knows about into a hash. In list context returns the hash. In scalar context returns a reference to the hash.

fetchrow_hash

Combines get_row and extract_hash to fetch the next row and return a hash or hashref depending on context.
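A minimal end-to-end sketch using fetchrow_hash (the file name and fields are invented for illustration):

```perl
use Text::xSV;

# Create a small sample file to read back.
open my $out, '>', 'people.csv' or die "Cannot write people.csv: $!";
print $out "name,age\nAlice,30\nBob,25\n";
close $out;

my $csv = Text::xSV->new(filename => 'people.csv');
$csv->bind_header();

my @rows;
while (my %row = $csv->fetchrow_hash) {
    push @rows, {%row};
}
```

Since a hash assignment in scalar context yields the number of elements assigned, the loop ends once fetchrow_hash returns nothing at end of file.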

alias

Makes an existing field available under a new name.

  $csv->alias($old_name, $new_name);

get_fields

Returns a list of all known fields in no particular order.

add_compute

Adds a computed field. A compute is an arbitrary anonymous function. When the computed field is extracted, Text::xSV will call the compute in scalar context with the Text::xSV object as its only argument.

Text::xSV caches results in case computes call other computes. It will also catch infinite recursion with a hopefully useful message.

TODO

Allow blank fields at the end to be optionally undef? (Requested by Chad Simmons.)

Think through a writing interface. (Suggested by dragonchild on perlmonks.)

Add utility interfaces. (Suggested by Ken Clark.)

Offer an option for working around the broken tab-delimited output that some versions of Excel present for cut-and-paste.

BUGS

When I say single character separator, I mean it.

Performance could be better, largely because the API was chosen for the simplicity of a "proof of concept" rather than for speed. One idea for speeding it up would be an API where you bind the requested fields once and then fetch many times, rather than rebinding the request for every row.

Also note that if you ever play around with the special variables $`, $&, or $', you will find that this module gets much, much slower. The cause is that Perl only calculates those variables if it has seen one of them anywhere in the program, and this module does many, many matches, so calculating them is slow.

I need to find out what conversions are done by Microsoft products that Perl won't do on the fly upon trying to use the values.

ACKNOWLEDGEMENTS

My thanks to people who have given me feedback on how they would like to use this module, and particularly to Klaus Weidner for his patch fixing a nasty segmentation fault from a stack overflow in the regular expression engine on large fields.

AUTHOR AND COPYRIGHT

Ben Tilly (ben_tilly@operamail.com). Originally posted at http://www.perlmonks.org/node_id=65094.

Copyright 2001-2003. This may be modified and distributed on the same terms as Perl.