

DBIx::Class::Journal - Auditing for tables managed by DBIx::Class


Load the module into your DBIx::Class Schema Class:

 package My::Schema;
 use base 'DBIx::Class::Schema';

 __PACKAGE__->load_components(qw/Schema::Journal/);

Optionally set where the journal is stored:

 __PACKAGE__->journal_connection(['dbi:SQLite:journal.db']);

And then call $schema->bootstrap_journal (once only) to create all the tables necessary for the journal in your database.

Later on, in your application, wrap operations in transactions, and optionally associate a user with the changeset:

 $schema->changeset_user($user->id);

 my $new_artist = $schema->txn_do( sub {
    return $schema->resultset('Artist')->create({ name => 'Fred' });
 });

The purpose of this DBIx::Class component module is to create an audit trail for all changes made to the data in your database (via a DBIx::Class schema). It groups operations into changesets and assigns each create/update/delete operation an ID. The creation and deletion date of each row is stored, as well as the historical contents of any row that gets changed.

All changes which need auditing must be performed via "txn_do" in DBIx::Class::Schema, which is used to create a changeset for each transaction.

To track who did which changes, the user_id (an integer) of the current user can be set, and a session_id can also be set; both are optional. To access the auditing schema to look at the audit data or revert a change, use $schema->_journal_schema.
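As a sketch, the changesets recorded for a given user can be inspected through the journal schema. The 'ChangeSet' resultset name follows the journal schema's own sources; $user_id is the integer previously passed to changeset_user:

```perl
# Look up all changesets recorded for one user via the journal schema.
my @changesets = $schema->_journal_schema
                        ->resultset('ChangeSet')
                        ->search({ user_id => $user_id })
                        ->all;
```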


Currently the module expects to be deployed alongside a new database schema, and to track all changes from the first entry. To do that you need to create some tables in which to store the journal, and you can optionally configure which data sources (tables) have their operations journalled by the module.

Connect to your schema and deploy the journal tables as below. The module automatically scans your schema and sets up storage for journal entries.

 # optional - defaults to all sources
 My::Schema->journal_sources([qw/ table1 table2 /]);

 $schema = My::Schema->connect(...);

Note that if you are retrofitting journalling to an existing database, then as well as creating the journal you will need to populate it with a history so that when rows are deleted they can be mapped back to a (fake) creation.

If you ever update your original schema, remember that you must then also update the journal's schema to match, so that the AuditHistory has the corresponding new columns in which to save data.


The journal schema contains a number of tables. These track row creation, update and deletion, and also are aware of multiple operations taking place within one transaction.


ChangeSet

Each changeset row has an auto-incremented ID, optional user_id and session_id, and a set_date which defaults to the current datetime. This is the authoritative log of one discrete change to your database, which may consist of a number of ChangeLog operations within a single transaction.


ChangeLog

Each operation done within the transaction is recorded as a row in the ChangeLog table. It contains an auto-incrementing ID, the changeset_id, and an order column to establish the order in which changes took place.


AuditLog

For every table in the original database that is to be audited, an AuditLog table is created. When a row appears in the original database, a corresponding row is added here with a ChangeLog ID in the create_id column; when that original row is deleted, the AuditLog row is updated to add another ChangeLog ID, this time in the delete_id column. A third column, id, contains the primary key of the original row, so you can find it in the AuditHistory.

Note that currently only integer-based single column primary keys are supported in your original database tables.


AuditHistory

For every table in the original database to be audited, an AuditHistory table is created. This is where the actual field data from your original table rows are stored on creation and on each update.

Each row in the AuditHistory has a change_id field containing the ID of the ChangeLog row. The other fields correspond to all the fields from the original table (with any constraints removed). Each time a column value in the original table is changed, the entire row contents after the change are added as a new row in this table.
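For example, a sketch of reading the history of one row. The 'ArtistAuditHistory' source name and 'artistid' column are assumptions based on this naming convention; check your generated journal schema for the actual names:

```perl
# Fetch every recorded version of one original row, in change order.
my @versions = $schema->_journal_schema
                      ->resultset('ArtistAuditHistory')
                      ->search(
                          { artistid => $artist_id },
                          { order_by => 'change_id' },
                      )->all;
```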


Call these on your Schema class, for example in the My::Schema package file, as in the SYNOPSIS above.

journal_connection \@connect_info

Set the connection information for the database to save your audit information to.

Leaving this blank assumes you want to store the audit data into your current database. The storage object will be shared by the regular schema and the journalling schema.

journal_components @components

If you want to add components to your journal (DBIx::Class::Schema::Versioned for example) pass them here.

journal_sources \@source_names

Set a list of source names you would like to audit. If unset, all sources are used.

NOTE: Currently only sources with a single-column integer primary key are supported, so use this method to exclude any sources which do not meet that restriction.

journal_storage_type $type

Set the storage type of your journal schema, if it needs to differ from the default. See DBIx::Class::Storage::DBI for more information on storage types.

journal_user \@rel

The user_id column in the "ChangeSet" will be linked to your user id with a belongs_to relation, if this is set with the appropriate arguments. For example:

 __PACKAGE__->journal_user(['My::Schema::User', {'foreign.userid' => 'self.user_id'}]);


Once you have a connection to your database, call these methods to manage the journalling.


bootstrap_journal

This calls journal_schema_deploy followed by prepopulate_journal to create your journal tables and, if necessary, populate them with a snapshot of your current original schema data.

Do not run this method more than once on your database, as redeploying the journal schema is not supported.
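A minimal sketch of a one-off bootstrap script (DSN and credentials are placeholders):

```perl
use My::Schema;

my $schema = My::Schema->connect($dsn, $user, $pass);

# Run once only: creates the journal tables and snapshots existing data.
$schema->bootstrap_journal;
```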


Will use "deploy" in DBIx::Class::Schema to set up the tables for journalling in your schema. Use this method to set up your journal.

Note that if you are retrofitting journalling to an existing database, then as well as creating the journal you will need to populate it with a history so that when rows are deleted they can be mapped back to a (fake) creation.

Do not run this method more than once on your database, as redeploying the journal schema is not supported.


prepopulate_journal

Will load the current state of your original source tables into the audit history as fake inserts, in a single initial changeset. The advantage of this is that later deletions of those rows will be consistent in the journal with an initial state.

Note that this can be an intensive and time-consuming task, depending on how much data you have in your original sources; all of it will be copied to the journal history. However, this step is essential if you are retrofitting journalling to a schema with existing data: otherwise, when you delete a row, the journal will die because it cannot relate the deletion to an initial row insert.

changeset_user $user_id

Set the user_id for the following changeset(s). This must be an integer.

changeset_session $session_id

Set the session_id for the following changeset(s). This must be an integer.
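Used together, a sketch (variable names are illustrative):

```perl
# Tag subsequent changesets with the current user and session.
$schema->changeset_user($user_id);       # integer
$schema->changeset_session($session_id); # integer

$schema->txn_do(sub {
    $schema->resultset('Artist')->create({ name => 'Fred' });
});
```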


Overloaded "deploy" in DBIx::Class::Schema which will deploy your original database schema and following that will deploy the journal schema.

txn_do $code_ref, @args

Overloaded "txn_do" in DBIx::Class::Schema, this must be used to start a new ChangeSet to cover a group of changes. Each subsequent change to an audited table will use the changeset_id created in the most recent txn_do call.

Currently nested txn_do calls cause a single ChangeSet object to be created.

Deprecated Methods

journal_deploy_on_connect $bool

If set to a true value will cause journal_schema_deploy to be called on connect. Not recommended (because re-deploy of a schema is not supported), but present for backwards compatibility.


For PostgreSQL databases you must enable identifier quoting in SQL generation by passing { quote_char => q{"}, name_sep => q{.} } when connecting to the database.
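A sketch of such a connection (DSN and credentials are placeholders):

```perl
# Enable identifier quoting for PostgreSQL when connecting.
my $schema = My::Schema->connect(
    'dbi:Pg:dbname=mydb', $user, $pass,
    { quote_char => q{"}, name_sep => q{.} },
);
```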



  • Only tables with a single-column integer primary key are supported for auditing.

  • Rows changed as a result of CASCADE settings on your database will not be detected by the module and hence not journalled.

  • Updates made via "update" in DBIx::Class::ResultSet are not yet supported.

  • No API for viewing or restoring changes yet.

Patches for the above are welcome ;-)


Jess Robinson <>

Matt S. Trout <> (ideas and prodding)


You may distribute this code under the same terms as Perl itself.