
NAME

Hadoop::Streaming::Reducer - Simplify writing Hadoop Streaming jobs. Write a reduce() function and let this role handle the streaming interface. This Reducer role provides an iterator over the multiple values for a given key.

VERSION

version 0.143060

SYNOPSIS

    #!/usr/bin/env perl

    package WordCount::Reducer;
    use Moo;
    with qw/Hadoop::Streaming::Reducer/;

    sub reduce {
        my ($self, $key, $values) = @_;

        my $count = 0;
        while ( $values->has_next ) {
            $count++;
            $values->next;
        }

        $self->emit( $key => $count );
    }

    package main;
    WordCount::Reducer->run;

Your mapper class must implement map($key, $value) and your reducer class must implement reduce($key, $values), where $values is an iterator over the values seen for that key (as in the synopsis above and the sketch below). The emit() and run() methods are added to your classes by the roles.
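For completeness, here is a minimal mapper to pair with the reducer from the synopsis. This is a sketch only: it assumes the companion Hadoop::Streaming::Mapper role from the same distribution, follows the map($key, $value) signature described above, and simply splits each input value on whitespace. Check the Mapper role's own documentation for the exact arguments your version passes.

    #!/usr/bin/env perl

    package WordCount::Mapper;
    use Moo;
    with qw/Hadoop::Streaming::Mapper/;

    sub map {
        my ($self, $key, $value) = @_;

        # emit each whitespace-separated word with a count of 1;
        # Hadoop Streaming groups these by key before the reducer runs
        $self->emit( $_ => 1 ) for split /\s+/, $value;
    }

    package main;
    WordCount::Mapper->run;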

METHODS

run

    Package->run();

This method starts the Hadoop::Streaming::Reducer instance.

After creating a new object instance, it reads from STDIN and calls $object->reduce(), passing in each key along with an iterator over the values for that key.

Subclasses need only implement reduce() to produce a complete Hadoop Streaming compatible reducer.
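As an illustration, the WordCount::Reducer from the synopsis turns sorted, tab-delimited key/value lines on STDIN into one count per key on STDOUT. The sample data below is illustrative only:

    # STDIN: key/value pairs, tab-separated and sorted by key,
    # as delivered to the reducer by Hadoop Streaming
    apple   1
    apple   1
    banana  1

    # STDOUT: reduce() runs once per distinct key and emit() writes "key<TAB>count"
    apple   2
    banana  1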

AUTHORS

  • andrew grangaard <spazm@cpan.org>

  • Naoya Ito <naoya@hatena.ne.jp>

COPYRIGHT AND LICENSE

This software is copyright (c) 2014 by Naoya Ito <naoya@hatena.ne.jp>.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.