NAME

Net::Amazon::DynamoDB - Simple interface for Amazon DynamoDB

VERSION

version 0.002001

SYNOPSIS

my $ddb = Net::Amazon::DynamoDB->new(
    access_key => $my_access_key,
    secret_key => $my_secret_key,
    tables     => {

        # table with only hash key
        sometable => {
            hash_key   => 'id',
            attributes => {
                id          => 'N',
                name        => 'S',
                binary_data => 'B'
            }
        },

        # table with hash and range key
        othertable => {
            hash_key   => 'id',
            range_key  => 'range_id',
            attributes => {
                id       => 'N',
                range_id => 'N',
                attrib1  => 'S',
                attrib2  => 'S'
            }
        }
    }
);

# create both tables with 10 read and 5 write units
$ddb->exists_table( $_ ) || $ddb->create_table( $_, 10, 5 )
    for qw/ sometable othertable /;

# insert something into tables
$ddb->put_item( sometable => {
    id         => 5,
    name       => 'bla',
    binary_data => $some_data
} ) or die $ddb->error;
$ddb->put_item( othertable => {
    id        => 5,
    range_id  => 7,
    attrib1   => 'It is now '. localtime(),
    attrib2   => 'Or in unix timestamp '. time(),
} ) or die $ddb->error;

DESCRIPTION

Simple to use interface for Amazon DynamoDB

If you want an ORM-like interface with real objects to work with, this implementation is not for you. If you just want to access DynamoDB in a simple, quick manner, you are welcome.

See https://github.com/ukautz/Net-Amazon-DynamoDB for the latest release.

CLASS ATTRIBUTES

tables

The table definitions

use_keep_alive

Whether to use keep-alive connections to AWS (uses the experimental LWP::ConnCache mechanism). 0 disables keep-alive; a positive number is passed through to the LWP::UserAgent 'keep_alive' attribute.

Default: 0

lwp

Contains LWP::UserAgent instance.

json

Contains JSON instance for decoding/encoding json.

JSON object needs to support: canonical, allow_nonref and utf8

host

DynamoDB API hostname. Your table lives in this region only; table names do not have to be unique across regions. Set this attribute to address a different region (see the example below). See Amazon's documentation for the other available endpoints.

Default: dynamodb.us-east-1.amazonaws.com
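
For example, to work with tables in another region, point the client at that region's endpoint (a sketch; the eu-west-1 endpoint is shown for illustration):

my $ddb = Net::Amazon::DynamoDB->new(
    access_key => $my_access_key,
    secret_key => $my_secret_key,
    host       => 'dynamodb.eu-west-1.amazonaws.com',  # non-default region endpoint
    tables     => \%table_definitions                  # table definitions as in the SYNOPSIS
);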

access_key

AWS API access key

Required!

secret_key

AWS API secret key

Required!

api_version

AWS API Version. Use format "YYYYMMDD"

Default: 20111205

read_consistent

Whether reads (get_item, batch_get_item) are consistent by default or not. This does not affect scan_items or query_items, which are always eventually consistent.

Default: 0 (eventually consistent)

namespace

Table prefix, prepended to the table name on every request

Default: ''

raise_error

Whether database errors (e.g. a 4xx response from DynamoDB) raise exceptions or not.

Default: 0

max_retries

Number of times a query will be retried when ProvisionedThroughputExceededException is raised before a final error is returned.

Default: 0 (do only once, no retries)

derive_table

Whether results are parsed using the table definition (faster) or without a known definition (a table definition is still required for indexes).

Default: 0

retry_timeout

Wait period in seconds between retries. Floats are allowed.

Default: 0.1 (100ms)

cache

Cache object implementing the Cache interface, e.g. Cache::File or Cache::Memcached (see the example below).

If set, caching is used for get_item, put_item, update_item and batch_get_item.

Default: -
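
For example, a sketch wiring in Cache::Memcached (the server address is illustrative):

use Cache::Memcached;

my $ddb = Net::Amazon::DynamoDB->new(
    access_key => $my_access_key,
    secret_key => $my_secret_key,
    tables     => \%table_definitions,
    cache      => Cache::Memcached->new( { servers => [ '127.0.0.1:11211' ] } )
);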

cache_disabled

If cache is set, you can still disable it by default and enable it per operation with the "use_cache" option (see method documentation). This way you have a default no-cache policy, but can still use the cache in chosen operations.

Default: 0

cache_key_method

Which method to use for building cache keys: either sha1_hex, sha256_hex, sha384_hex or a coderef.

Default: sha1_hex

request_id

The x-amzn-RequestId header returned by the service. This is needed by Amazon tech support for debugging service issues.

METHODS

create_table $table_name, $read_amount, $write_amount

Create a new table. Returns a description of the table.

my $desc_ref = $ddb->create_table( 'table_name', 10, 5 );
$desc_ref = {
    count           => 123,         # amount of "rows"
    status          => 'CREATING',  # or 'ACTIVE' or 'UPDATING' or some error state?
    created         => 1328893776,  # timestamp
    read_amount     => 10,          # amount of read units
    write_amount    => 5,           # amount of write units
    hash_key        => 'id',        # name of the hash key attribute
    hash_key_type   => 'S',         # or 'N',
    #range_key      => 'id',        # name of the range key attribute (optional)
    #range_key_type => 'S',         # or 'N' (optional)
}

delete_table $table

Delete an existing (and defined) table.

Returns a bool indicating whether the table is now in the deleting state (i.e. the delete was successfully performed).
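
For example (using the sometable definition from the SYNOPSIS):

$ddb->delete_table( 'sometable' )
    or die $ddb->error;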

describe_table $table

Returns table information

my $desc_ref = $ddb->describe_table( 'my_table' );
$desc_ref = {
    existing        => 1,
    size            => 123213,      # data size in bytes
    count           => 123,         # amount of "rows"
    status          => 'ACTIVE',    # or 'DELETING' or 'CREATING' or 'UPDATING' or some error state
    created         => 1328893776,  # timestamp
    read_amount     => 10,          # amount of read units
    write_amount    => 5,           # amount of write units
    hash_key        => 'id',        # name of the hash key attribute
    hash_key_type   => 'S',         # or 'N',
    #range_key      => 'id',        # name of the range key attribute (optional)
    #range_key_type => 'S',         # or 'N' (optional)
}

If no such table exists, the return value is

{
    existing => 0
}

update_table $table, $read_amount, $write_amount

Update read and write amount for a table
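
A minimal sketch, assuming the method returns a false value on failure like the other write methods:

# raise the table to 20 read and 10 write units
$ddb->update_table( 'sometable', 20, 10 )
    or die $ddb->error;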

exists_table $table

Returns bool whether table exists or not

list_tables

Returns table names as an arrayref (or an array in list context)
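
For example:

my $tables_ref = $ddb->list_tables;
print "Found table: $_\n" for @$tables_ref;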

put_item $table, $item_ref, [$where_ref], [$args_ref]

Write a single item to a table. All primary key attributes are required in the new item.

# just write
$ddb->put_item( my_table => {
    id => 123,
    some_attrib => 'bla',
    other_attrib => 'dunno'
} );

# write conditionally
$ddb->put_item( my_table => {
    id => 123,
    some_attrib => 'bla',
    other_attrib => 'dunno'
}, {
    some_attrib => { # only update, if some_attrib has the value 'blub'
        value => 'blub'
    },
    other_attrib => { # only update, if a value for other_attrib exists
        exists => 1
    }
} );
  • $table

    Name of the table

  • $item_ref

    Hashref containing the values to be inserted

  • $where_ref [optional]

    Filter containing expected values of the (existing) item to be updated

  • $args_ref [optional]

    HashRef with options

    • return_old

      If true, returns old value

    • no_cache

      Force not using the cache, even if enabled by default

    • use_cache

      Force using the cache, if disabled by default but set up

batch_write_item $tables_ref, [$args_ref]

Batch put / delete items into one or more tables.

Caution: Each batch put / delete cannot process more operations than you have write capacity for the table.

Example:

my ( $ok, $unprocessed_count, $next_query_ref ) = $ddb->batch_write_item( {
    table_name => {
        put => [
            {
                attrib1 => "Value 1",
                attrib2 => "Value 2",
            },
            # { .. } ..
        ],
        delete => [
            {
                hash_key => "Hash Key Value",
                range_key => "Range Key Value",
            },
            # { .. } ..
        ]
    },
    # table2_name => ..
} );

if ( $ok ) {
    if ( $unprocessed_count ) {
        print "Ok, but $unprocessed_count still not processed\n";
        $ddb->batch_write_item( $next_query_ref );
    }
    else {
        print "All processed\n";
    }
}
$tables_ref

HashRef in the form

{ table_name => { put => [ { attribs }, .. ], delete => [ { primary keys } ] } }
$args_ref

HashRef

  • process_all

    Keep processing everything that is returned as unprocessed (which happens if you send more operations than your table has write capacity for, or if you exceed the maximum number of operations or the maximum request size; see the AWS API documentation). A sketch follows after this list.

    Caution: Error handling

    Default: 0
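
A short sketch using process_all, assuming the same return values as in the example above:

# keep re-submitting unprocessed operations until everything is written
my ( $ok ) = $ddb->batch_write_item( $tables_ref, { process_all => 1 } );
die $ddb->error unless $ok;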

update_item $table, $update_ref, $where_ref, [$args_ref]

Update an existing item in the database. All primary key attributes are required in the where clause.

# update existing
$ddb->update_item( my_table => {
    some_attrib => 'bla',
    other_attrib => 'dunno'
}, {
    id => 123,
} );

# write conditionally
$ddb->update_item( my_table => {
    some_attrib => 'bla',
    other_attrib => 'dunno'
}, {
    id => 123,
    some_attrib => { # only update, if some_attrib has the value 'blub'
        value => 'blub'
    },
    other_attrib => { # only update, if a value for other_attrib exists
        exists => 1
    }
} );
  • $table

    Name of the table

  • $update_ref

    Hashref containing the updates.

    • delete a single value

      { attribname => undef }
    • replace values

      {
          attribname1 => 'somevalue',
          attribname2 => [ 1, 2, 3 ]
      }
    • add values (arrays only)

      { attribname => \[ 4, 5, 6 ] }
  • $where_ref

    Filter HashRef. Must contain all primary keys of the item to update; additional conditional attribute filters may be added (as shown above).

  • $args_ref [optional]

    HashRef of options

    • return_mode

      Can be set to one of "ALL_OLD", "UPDATED_OLD", "ALL_NEW", "UPDATED_NEW"

    • no_cache

      Force not using the cache, even if enabled by default

    • use_cache

      Force using the cache, if disabled by default but set up

get_item $table, $pk_ref, [$args_ref]

Read a single item by hash (and range) key.

# only with hash key
my $item1 = $ddb->get_item( my_table => { id => 123 } );
print "Got $item1->{ some_key }\n";

# with hash and range key, also consistent read and only certain attributes in return
my $item2 = $ddb->get_item( my_other_table => {
    id    => $hash_value, # the hash value
    title => $range_value # the range value
}, {
    consistent => 1,
    attributes => [ qw/ attrib1 attrib2 / ]
} );
print "Got $item2->{ attrib1 }\n";
  • $table

    Name of the table

  • $pk_ref

    HashRef containing all primary keys

    # only hash key
    {
        $hash_key => $hash_value
    }
    
    # hash and range key
    {
        $hash_key => $hash_value,
        $range_key => $range_value
    }
  • $args_ref [optional]

    HashRef of options

    • consistent

      Whether read shall be consistent. If set to 0 and read_consistent is globally enabled, this read will not be consistent

    • attributes

      ArrayRef of attributes to read. If not set, all attributes are returned.

    • no_cache

      Force not using the cache, even if enabled by default

    • use_cache

      Force using the cache, if disabled by default but set up

batch_get_item $tables_ref, [$args_ref]

Read multiple items (possibly across multiple tables) identified by their hash and range key (if required).

my $res = $ddb->batch_get_item( {
    table_name => [
        { $hash_key => $value1 },
        { $hash_key => $value2 },
        { $hash_key => $value3 },
    ],
    other_table_name => {
        keys => [
            { $hash_key => $value1, $range_key => $rvalue1 },
            { $hash_key => $value2, $range_key => $rvalue2 },
            { $hash_key => $value3, $range_key => $rvalue3 },
        ],
        attributes => [ qw/ attrib1 attrib2 / ]
    }
} );

foreach my $table( keys %$res ) {
    foreach my $item( @{ $res->{ $table } } ) {
        print "$item->{ some_attrib }\n";
    }
}
$tables_ref

HashRef of table name => ArrayRef of primary keys, or HashRef with keys and attributes (as shown above)

$args_ref

HashRef

  • process_all

    A batch request might not fetch all requested items at once. This switch forces additional batch requests until all unprocessed items have been fetched (see the sketch below).

    Default: 0
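
A short sketch using process_all (key names and values are illustrative):

my $res = $ddb->batch_get_item( {
    table_name => [
        { $hash_key => $value1 },
        { $hash_key => $value2 }
    ]
}, { process_all => 1 } );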

delete_item $table, $where_ref, [$args_ref]

Deletes a single item by primary key (hash or hash+range key).

# only with hash key
$ddb->delete_item( my_table => { id => 123 } ) or die $ddb->error;
  • $table

    Name of the table

  • $where_ref

    HashRef containing at least the primary key. Can also contain additional attribute filters

  • $args_ref [optional]

    HashRef containing options

    • return_old

      Bool whether to return the old (just deleted) item or not

      Default: 0

    • no_cache

      Force not using the cache, even if enabled by default

    • use_cache

      Force using the cache, if disabled by default but set up

query_items $table, $where, $args

Search in a table with hash AND range key.

my ( $count, $items_ref, $next_start_keys_ref )
    = $ddb->query_items( some_table => { id => 123, my_range_id => { GT => 5 } } );
print "Found $count items, where last id is ". $items_ref->[-1]->{ id }. "\n";

# iterate through all "pages"
my $next_start_keys_ref;
do {
    ( my $count, my $items_ref, $next_start_keys_ref )
        = $ddb->query_items( some_table => { id => 123, my_range_id => { GT => 5 } }, {
            start_key => $next_start_keys_ref
        } );
} while( $next_start_keys_ref );
  • $table

    Name of the table

  • $where

    Search condition. Has to contain a value for the hash key and a search value for the range key.

    The search value for the range key can be formatted in two ways: either as a plain scalar (an exact match) or as a HashRef with a comparison operator, e.g. { GT => 5 } as in the example above.

  • $args

    {
        limit => 5,
        consistent => 0,
        backward => 0,
        #start_key =>  { .. }
        attributes => [ qw/ attrib1 attrib2 / ],
        #count => 1
    }

    HASHREF containing:

    • limit

      Maximum number of items to return

      Default: unlimited

    • consistent

      If set to 1, consistent read is performed

      Default: 0

    • backward

      Whether to traverse the index backward or forward.

      Default: 0 (=forward)

    • start_key

      Contains the start key, as returned in LastEvaluatedKey from a previous query. Allows iterating over a table in pages.

      { $hash_key => 5, $range_key => "something" }
    • attributes

      Return only those attributes

      [ qw/ attrib attrib2 / ]
    • count

      Instead of returning the actual result, return the count.

      Default: 0 (=return result)

    • all

      Iterate through all pages and return them all (see the AWS API documentation on paging).

      This can take some time. Also, max_retries might need to be set, as scans/queries consume a lot of read units, and immediately reading the next "pages" can lead to an exception due to too many reads. A sketch follows after this list.

      Default: 0 (=first "page" of items)
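
A sketch combining the count and all options, assuming the count is still returned as the first value:

my ( $total_count ) = $ddb->query_items( some_table => {
    id          => 123,
    my_range_id => { GT => 5 }
}, {
    count => 1,
    all   => 1
} );
print "Matched $total_count items in total\n";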

scan_items $table, $filter, $args

Performs a scan on a table. The result is eventually consistent. Attributes other than the hash or range key are allowed in the filter.

See query_items for argument description.

Main difference from query_items: a whole table scan is performed, which is much slower. Also the amount of data scanned is limited in size; see http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Scan.html
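
A minimal sketch, assuming the filter uses the same format as the query_items where clause and that name is a non-key attribute of the table:

my ( $count, $items_ref ) = $ddb->scan_items( some_table => {
    name => 'bla'          # filter on a non-key attribute
}, {
    limit => 100
} );
print "Scan returned $count items\n";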

request

Arbitrary request to DynamoDB API

error [$str]

Get/set last error

AUTHOR

Ulrich Kautz <uk@fortrabbit.de>

COPYRIGHT

Copyright (c) 2012 the "AUTHOR" as listed above

LICENSE

Same license as Perl itself.

AUTHORS

  • Arthur Axel "fREW" Schmidt <frioux+cpan@gmail.com>

  • Ulrich Kautz <uk@fortrabbit.de>

COPYRIGHT AND LICENSE

This software is copyright (c) 2017 by Ulrich Kautz.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.