
Laurent Dami


DBIx::DataModel::Doc::Cookbook - Helpful recipes


This chapter is part of the DBIx::DataModel manual.


This chapter provides some recipes for common ORM tasks.


Automatically generate a schema

A schema skeleton can be produced automatically from the following external sources: a DBI connection, a SQL::Translator parser, or a DBIx::Class schema. See DBIx::DataModel::Schema::Generator.
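For instance, starting from a DBI connection, a skeleton can be generated from the command line (adapted from the DBIx::DataModel::Schema::Generator documentation; the connection string and schema name are placeholders to adapt):

  perl -MDBIx::DataModel::Schema::Generator      \
       -e "fromDBI('dbi:connection:string')" --  \
       -schema My::New::Schema > My/New/Schema.pm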

Object inflation/deflation

Here is an example of inflating/deflating a scalar value from the database into a Perl object:

  # declare column type
  use Date::Simple;
  MySchema->ColumnType(Date_simple =>
    fromDB => sub {$_[0] = Date::Simple->new($_[0]) if $_[0] },
    toDB   => sub {$_[0] = $_[0]->as_str            if $_[0] },
  );

  # apply column type to columns
  My::Table1->ColumnType(Date_simple => qw/d_start d_end/);
  My::Table2->ColumnType(Date_simple => qw/d_birth/);

Caveat: the fromDB / toDB functions do not apply automatically within -where conditions, so the following would not work:

  use Date::Simple qw/today/;
  my $rows = My::Table->select(-where => {d_end => {'<' => today()}});

because today() returns a Date::Simple object that will not be understood by SQL::Abstract when generating the SQL query. DBIx::DataModel is not clever enough to inspect the -where conditions and decide which column types to apply, so you have to do it yourself:

  my $today = today()->as_str;
  my $rows = My::Table->select(-where => {d_end => {'<' => $today}});

Schema versioning

Currently DBIx::DataModel has no specific support for schema versioning; if you need that feature, consider DBIx::Class instead.


Aggregator functions

Use normal SQL syntax for aggregators, and give them column aliases (with a vertical bar |) in order to retrieve the results.

  my $row = $source->select(-columns  => [qw/MAX(col1)|max_col1
                                             AVG(col2)|avg_col2/],
                            -where    => ...,
                            -resultAs => 'firstrow');
  print "max is $row->{max_col1}, average is $row->{avg_col2}";

Or you can dispense with column aliases, and retrieve the results directly into an arrayref, using -resultAs => 'flat_arrayref':

  my $array_ref = $source->select(-columns  => [qw/MAX(col1)
                                                   AVG(col2)
                                                   COUNT(col3)/],
                                  -where    => ...,
                                  -resultAs => 'flat_arrayref');
  my ($max_col1, $avg_col2, $count_col3) = @$array_ref;

Caveat: currently, fromDB handlers do not apply to aggregator functions (this might be improved in a future version).

Database functions or stored procedures

Like above: normal SQL syntax and column aliases.

  my $rows = $source->select(-columns => [qw/FOOBAR(col1,col2)|foobar
                                             (col3+99)|big_col3/],
                             -where   => ...);
  print "$_->{foobar} and $_->{big_col3}" foreach @$rows;

Nested queries

Nested queries require a new version of SQL::Abstract (coming soon), which will accept a reference to a string as literal SQL, possibly associated with bind values. So we will be able to write something like:

  my $subquery = $source1->select(..., -resultAs => 'subquery');
  my $rows     = $source2->select(
      -columns => ...,
      -where   => {foo => 123, bar => {-not_in => $subquery}},
    );

Hashref inflation

There is no need for a hashref inflator: rows returned by a select() can be used directly as hashrefs. For example, here is a loop that prints a hash slice from each row:

  my $rows       = My::Table->select(...);
  my @print_cols = qw/col3 col6 col7/;
  foreach my $row (@$rows) {
    print @{$row}{@print_cols};
  }

In fact, each row is a blessed hashref. This can be a problem with some external modules like JSON, which croak when encoding a blessed reference. In that case you can use an unbless function (for example from Data::Structure::Util):

  use Data::Structure::Util qw/unbless/;
  foreach my $row (@$rows) {
    print to_json(unbless $row);
  }

Custom SQL

Create a DBIx::DataModel::View to encapsulate your SQL.
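As a sketch, a view declaration takes the view name, the column list, a literal FROM clause, an optional WHERE condition, and the list of parent tables (adapted from the DBIx::DataModel reference documentation; table and column names are illustrative):

  MySchema->View(MyView =>

    # SELECT clause
    "DISTINCT column1 AS c1, t2.column2 AS c2",

    # FROM clause, as literal SQL
    "Table1 AS t1 LEFT OUTER JOIN Table2 AS t2 ON t1.fk = t2.pk",

    # initial WHERE condition
    {c1 => 'foo', c2 => {-like => 'bar%'}},

    # parent tables
    qw/Table1 Table2/);

  # the view can then be queried like a regular table
  my $rows = MySchema::MyView->select(-where => {c2 => 22});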



Transaction

  # anonymous sub containing the work to do
  my $to_do = sub {
    My::Table1->insert(...);
    My::Table2->delete(...);
  };
  # so far nothing has happened in the database

  # now do the transaction
  MySchema->doTransaction($to_do);

Nested transaction

  MySchema->doTransaction(sub {
    do_something();
    MySchema->doTransaction(sub { some_nested_code();       });
    MySchema->doTransaction(sub { some_other_nested_code(); });
  });

Generating random keys

Override the _singleInsert method:

  package MySchema::SomeTable;

  use Carp;

  # $MAX_ATTEMPTS, $MAX_RANDOM and $DUPLICATE_ERROR are configuration
  # values to be defined elsewhere in this package
  sub _singleInsert {
    my ($self) = @_;
    my $class = ref $self;

    my ($key_column) = $class->primKey;

    for (1 .. $MAX_ATTEMPTS) {
      my $random_key = int(rand($MAX_RANDOM));

      $self->{$key_column} = $random_key;
      eval {$self->_rawInsert; 1}
        and return $random_key;   # SUCCESS

      # if duplication error, try again; otherwise die
      last unless $DBI::errstr =~ $DUPLICATE_ERROR;
    }
    croak "cannot generate a random key for $class: $@";
  }

Cascaded insert

First insert an arrayref of subrecords within the main record hashref; then call insert on that main record. See example in insert() reference. The precondition for this to work is that the relationship between the two classes should be a Composition.
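As a sketch, assuming an Author class with a Composition relationship to its distributions (class, role and column names are illustrative):

  my $author = {
    firstname     => 'Felix',
    lastname      => 'Leiter',
    distributions => [
      {name => 'Dist-One'},
      {name => 'Dist-Two'},
    ],
  };
  # inserts the author record and both distribution subrecords
  Author->insert($author);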

Cascaded delete

  # first gather information tree from the database
  my $author = Author->fetch($author_id);
  my $distribs = $author->expand('distributions');
  $_->expand('modules') foreach @$distribs;
  # then delete the whole tree from the database
  $author->delete;

This only works if the relationship between classes is a Composition. The expand operations retrieve related records and add them into a tree in memory. Then delete removes from the database all records found in the tree; therefore this is not a "true" cascaded delete, because the client code is responsible for fetching the related records.

True cascaded delete is best implemented directly in the database, rather than at the ORM layer.

Timestamp validation

Make sure that the record was not modified between the time it was presented to the user (in a display form) and the time the user asks to update or delete that record.
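DBIx::DataModel has no built-in support for this, but an optimistic check can be sketched from documented methods (column and variable names are illustrative; $timestamp_from_form is the value originally sent to the user):

  MySchema->doTransaction(sub {
    my $current = My::Table->fetch($id);
    $current->{time_modified} eq $timestamp_from_form
      or die "record was modified by another user; please reload";
    My::Table->update({my_id => $id, %new_values});
  });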