KiokuDB::Backend::BDB - BerkeleyDB backend for KiokuDB.


SYNOPSIS

    KiokuDB->connect( "bdb:dir=/path/to/storage", create => 1 );


DESCRIPTION

This is a BerkeleyDB based backend for KiokuDB.

It is the best performing backend for most tasks, and one of the most feature complete as well.

The KiokuDB::Backend::BDB::GIN subclass provides searching support using Search::GIN.



ATTRIBUTES

manager

The BerkeleyDB::Manager instance that opens up the BerkeleyDB databases.

This attribute can also be coerced from a hash reference, so you can do something like:

        manager => {
            home         => "/path/to/storage",
            create       => 1,
            transactions => 0,
        },
to control the various parameters.

When using connect, all of the parameters are passed through to the manager as well:

        KiokuDB->connect(
            "bdb:dir=/path/to/storage",
            create       => 1,
            transactions => 0,
        );


LOG FILES

Berkeley DB has extensive support for backup, archival and recovery.

Unfortunately, the default settings also mean that log files accumulate unless they are cleaned up.

If you are interested in creating backups, look into the db_hotbackup or db_archive utilities.

Using BerkeleyDB's backup/recovery facilities

Read the Berkeley DB documentation on recovery procedures.

Depending on what type of recovery scenarios you wish to protect yourself from, set up some sort of cron script to routinely back up the data.


Checkpointing

In order to properly back up the directory, the log files need to be checkpointed. Otherwise log files remain in use as long as the environment is open, and cannot be backed up.

BerkeleyDB::Manager sets auto_checkpoint by default, causing a checkpoint to be made after every top level txn_commit, provided enough data has been written.

You can disable that flag and run the db_checkpoint utility from cron, or let it run in the background.
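If you do disable that flag, a crontab entry along these lines is one way to checkpoint the environment periodically (a sketch; the schedule and the storage path are illustrative):

    # checkpoint the environment once every five minutes, then exit
    */5 * * * * db_checkpoint -1 -h /path/to/storage

The -1 flag makes db_checkpoint write a single checkpoint and exit, which is what you want from cron; without it, the utility runs as a daemon.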

More information about checkpointing, and about the db_checkpoint utility, can be found in the Berkeley DB documentation.


Archiving

db_archive can be used to list unused log files. You can copy these log files to backup media and then remove them.

Using db_archive and cleaning files yourself is recommended for catastrophic recovery purposes.
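A minimal archival pass along those lines might look like this (a sketch only; the storage and backup paths are illustrative, and the db_archive utility must be available on the PATH):

    # list the log files no longer in use and copy them to backup media
    for log in $(db_archive -h /path/to/storage); do
        cp "/path/to/storage/$log" /path/to/backup/
    done

    # then remove the now-archived log files from the environment
    db_archive -d -h /path/to/storage

With no options, db_archive prints the log files that are no longer involved in any active transaction; -d deletes them instead.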


Hot backups

If catastrophic recovery protection is not necessary, you can create hot backups instead of full ones.

Running the following command from cron is an easy way to maintain a backup directory and clean up your log files:

    db_hotbackup -h /path/to/storage -b /path/to/backup -u -c

This command will checkpoint the logs, and then copy or move all the files to the backup directory, overwriting previous copies of the logs in that directory. Then it runs db_recover in catastrophic recovery mode in the backup directory, bringing the data up to date.

This is essentially db_checkpoint, db_archive and log file cleanup all rolled into one command. You can write your own hot backup utility using db_archive and db_recover if you want catastrophic recovery ability.
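Scheduled from cron, that might look like this (the time and paths are illustrative):

    # refresh the hot backup at 3am every night
    0 3 * * * db_hotbackup -h /path/to/storage -b /path/to/backup -u -c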

Automatically cleaning up log files

If you don't need recovery support at all, you can specify log_auto_remove to BerkeleyDB::Manager:

    KiokuDB->connect( "bdb:dir=foo", log_auto_remove => 1 );

This instructs Berkeley DB to clean up any log files that are no longer in use by an active transaction. Backup snapshots can still be made, but catastrophic recovery is impossible.



AUTHOR

Yuval Kogman <>


COPYRIGHT

    Copyright (c) 2008, 2009 Yuval Kogman, Infinity Interactive. All
    rights reserved. This program is free software; you can redistribute
    it and/or modify it under the same terms as Perl itself.
