
Laurent Dami


DBIx::DataModel::Doc::Reference - General reference for DBIx::DataModel


This chapter is part of the DBIx::DataModel manual.


WARNING : Version 2 of DBIx::DataModel is a major refactoring from versions 1.*, with a number of incompatible changes in the API (classes renamed, arguments renamed or reorganized, etc. -- see DBIx::DataModel::Doc::Delta_2.0). The documentation below has not been fully updated yet to reflect these changes.

This chapter is the detailed description of DBIx::DataModel's methods for creating and using schemas.

Automatic schema generation from external sources is not covered here; see DBIx::DataModel::Schema::Generator.

Detailed use of statements is not covered here either, because explicit programming of the various statement phases (sqlize, prepare, bind, execute, etc.) is seldom needed. If you want to finely tune these operations, read the DESIGN section of the manual (purpose, lifecycle, etc.), and the DBIx::DataModel::Statement page.


Methods listed below for declaring schema, tables, associations, etc. come in two flavours :

  • a "front-end" method, starting with an uppercase letter, that uses positional parameters. This version is preferred for conciseness and for backwards compatibility.

  • a "back-end" method, called define_schema, define_table, etc., that uses named parameters. This version is preferred for completeness.

The invocant for front-end methods can be either the Schema class name, or the meta-schema instance. The invocant for back-end methods must be the meta-schema instance.
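For instance, using the fake HR datamodel from the rest of this documentation (table and column names are illustrative, and the metadm accessor is assumed to return the meta-schema instance), the two flavours look like this :

  use DBIx::DataModel;

  DBIx::DataModel->Schema('HR');      # creates the HR schema class

  # front-end : positional parameters, invoked on the schema class
  HR->Table(qw/Employee T_Employee emp_id/);

  # back-end : named parameters, invoked on the meta-schema instance
  HR->metadm->define_table(
    class       => 'Department',
    db_name     => 'T_Department',
    primary_key => 'dpt_id',
  );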


Schema() / define_schema()

  # front-end method
  my $schema_class = DBIx::DataModel->Schema($schema_name, %options);
  # or (back-end method)
  my $meta_schema 
    = DBIx::DataModel->define_schema(class => $schema_name, %options);

Creates a new Perl class of name $schema_name that represents a database schema. That class inherits from DBIx::DataModel::Schema. At the same time, an instance of DBIx::DataModel::Meta::Schema is also created, for holding meta-information about that schema (for example lists of classes, associations, types, etc. that will be declared within the schema).

The front-end and back-end methods illustrated above are both just façades; they merely call "new" in DBIx::DataModel::Meta::Schema, where the real work is performed. Schema() returns the created class, while define_schema() returns the meta-schema instance.

Possible %options are :


class => $string

Name of the Perl schema class to be created.


isa => $class | \@classes

Parent class(es) for that schema class (scalar or arrayref). The default parent is DBIx::DataModel::Schema.


auto_insert_columns

A hashref specifying columns to be automatically inserted in every table. See "define_auto_insert_columns()".


auto_update_columns

A hashref specifying columns to be automatically updated in every table. See "define_auto_update_columns()".


no_update_columns

A hashref specifying columns to be automatically ignored in insert/update operations, for every table. See "define_no_update_columns()".


sql_no_inner_after_left_join

An optional boolean; if true, a LEFT OUTER JOIN in a multi-step join specification will force all subsequent joins to also be LEFT. For example, in the fake datamodel used in the rest of this documentation, a join like

  HR->join(qw/Employee activities department/)->...

would be interpreted as

  T_Employee LEFT OUTER JOIN T_Activity ON ...
             LEFT OUTER JOIN T_Department ON ...

even if the association between Activity and Department is many-to-one (so theoretically this would result in an INNER JOIN by default).

table_parent, join_parent

Optional application-specific classes, to be used instead of the builtin classes, as parents for tables and joins declared in this schema.

table_metaclass, join_metaclass, association_metaclass, path_metaclass, type_metaclass

Optional application-specific classes, to be used instead of the builtin metaclasses, for instantiating meta-objects declared in this schema.


statement_class

Optional application-specific class to be used instead of the builtin class DBIx::DataModel::Statement for instantiating statements.


This chapter deals with all methods that will populate the structure of a meta-schema : declaring tables, associations, types, navigation methods, etc. Such objects are defined statically and will not change during program execution.

By contrast, the "SCHEMA PARAMETERS" chapter deals with properties specific to each schema instance, such as database connection, debugging properties, etc.

Table() / define_table()

  # front-end method
  $schema_class->Table($class_name, $db_name, @primary_key, \%options);

  # or (back-end method)
  my $meta_table 
    = $meta_schema->define_table(class       => $class_name,
                                 db_name     => $db_name,
                                 primary_key => \@primary_key,
                                 %options);

Creates a new Perl class of name $class_name that represents a database table. The new class inherits from the table_parent declared in the schema (by default : DBIx::DataModel::Source::Table). At the same time, an instance of the table_metaclass is also created (by default : DBIx::DataModel::Meta::Source::Table), for holding meta-information about that table (database name, primary key, paths to other tables, etc.).

Both methods illustrated above are just façades over "new" in DBIx::DataModel::Meta::Source::Table, where the real work is performed. They return the $schema or $meta_schema on which they were invoked.

Possible %options are :

class => $string

Name of the class to be created. If $class_name contains no ::, then the schema name is prepended to it (so the new table class is created in the namespace of the schema, which is a recommended best practice).

db_name => $string

SQL name of the database table or view that will be accessed through this class. Actually, it might be any SQL clause, like for example Table1 t1 INNER JOIN Table2 t2 ON t1.fk=t2.pk; see the "View()" method below.

primary_key => $string | \@array

Name of the column (or list of columns) that hold the primary key for that table.

default_columns => $string

Whatever will be injected into SELECT statements, when no -columns parameter is explicitly supplied. The default is '*'.

where => \%hash | \@array

An optional reference to a WHERE condition, in SQL::Abstract::More format. That condition will be automatically injected into every SELECT statement on this table. When working with regular tables, this parameter is always empty; it is only useful for declaring DBIx::DataModel views (see the "View()" method below).

parents => [$parent1, ...]

List of meta-tables from which this table will inherit. This can be useful if your database supports table inheritance (like for example PostgreSQL), and you want to reflect the inheritance structure within the Perl table classes.







column_handlers

  {column1 => \%handlers1, ...}

where %handlers1 contains {handler_name_1 => $coderef1, ...}

Associates some handlers to some columns in the current table class. Then, when you call $obj->apply_column_handler($handler_name), each column having a handler of the corresponding name will execute the associated code. This can be useful for all sorts of data manipulation :

  • converting dates between internal database format and user presentation format

  • converting empty strings into null values

  • inflating scalar values into objects

  • column data validation
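As an illustration (the handler name uc_last and its body are invented for this example), a column_handlers declaration within a table might look like :

  HR->metadm->define_table(
    class           => 'Employee',
    db_name         => 'T_Employee',
    primary_key     => 'emp_id',
    column_handlers => {
      lastname => {
        # executed through $obj->apply_column_handler('uc_last')
        uc_last => sub { $_[0] = uc $_[0] },
      },
    },
  );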









View()

  $meta_schema->View($class_name, $default_columns, $db_tables, 
                     \%where, @parent_tables);

This is another front-end method for define_table(), for creating a new Perl class of name $class_name that represents a SQL SELECT request of shape

  SELECT $default_columns FROM $db_tables [ WHERE %where ]

The optional list of @parent_tables contains names of Perl table classes from which the view will also inherit. If the SQL code in $db_tables is a join between several tables, then it is a good idea to mention these tables in @parent_tables, so that their path methods become available to instances of this view. Be careful about table names : the SQL code in $db_tables should contain database table names, whereas the members of @parent_tables should be Perl table classes.

Perl views as defined here have nothing to do with views declared in the database itself. Perl views are totally unknown to the database, they are just abstractions of SQL statements. If you need to access database views, just use the Table() declaration, like for a regular table.
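For instance (table, column and class names are invented for illustration) :

  HR->View(MyView =>
     'DISTINCT column1 AS c1, t2.column2 AS c2',
     'Table1 AS t1 LEFT OUTER JOIN Table2 AS t2 ON t1.fk = t2.pk',
     {c1 => 'foo', c2 => {-like => 'bar%'}},
     qw/Table1 Table2/);

Instances of MyView can then be retrieved through MyView->select(...), and will inherit path methods from both Table1 and Table2.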

Association() / Composition() / define_association()

  # front-end method
  $schema->Association([$class1, $role1, $multiplicity1, @columns1],
                       [$class2, $role2, $multiplicity2, @columns2]);

  # or (special kind of association)                       
  $schema->Composition([$class1, $role1, $multiplicity1, @columns1], 
                       [$class2, $role2, $multiplicity2, @columns2]);

  # or (back-end method)
  $meta_schema->define_association(
    name   => $association_name,          # optional
    kind   => $kind,                      # 'Association' or 'Composition'
    A      => {
      table        => $meta_table_instance,
      role         => $role_name,         # optional
      multiplicity => $multiplicity_spec, # ex. "1..*"
      join_cols    => [$col1, ...],       # optional
    },
    B      => { ... },                    # same structure as 'A'
  );

Declares an association between two tables (or even two instances of the same table). The front-end methods shown above are equivalent to

  $meta_schema->define_association(
    A => {table        => $class1->metadm, role       => $role1, 
          multiplicity => $multiplicity1,  join_cols  => \@columns1},
    B => {table        => $class2->metadm, role       => $role2, 
          multiplicity => $multiplicity2,  join_cols  => \@columns2},
    kind => 'Association', # or 'Composition'
  );

The arguments are :


A

A description of the first association end, which is composed of :


table

An instance of DBIx::DataModel::Meta::Source::Table.


role

The role name of that source within the association. A Perl method of the same name will be defined in the remote source (the other end of the association). Besides, the role name is also used when building joins through

  $schema->join(qw/FirstTable role1 role2 .../)

One of the role names in the association can be anonymous (undef), but not both. If anonymous, there will be no Perl method and no possibility to join in that direction, so it defines a unidirectional association.

If several associations attempt to define the same role name in the same table, an exception is generated.


multiplicity

The multiplicity specification, i.e. the minimum and maximum number of occurrences of that association end, for any given instance of the other end (if not clear, see UML textbooks).

The multiplicity can be expressed either as an arrayref [$min, $max], or as a string "$min..$max". The $max can be '*' or 'n', which is interpreted as the maximum integer value. If expressed as a string, a mere '*' is interpreted as '0..*', and a mere '1' is interpreted as '1..1'.

Numbers different from 0, 1 or * may be given as multiplicity bounds, but they are merely documentary : technically, all that matters is

  • whether the lower bound is 0 or more (if 0, generated joins will be left joins, otherwise inner joins)

  • whether the upper bound is 1 or more (if 1, the associated method returns a single object, otherwise it returns an arrayref)


join_cols

An arrayref of columns that participate in the database join, for this side of the association. The full database join will be built by creating a LEFT|INNER JOIN ... ON ... clause in which the left-hand and right-hand sides of the ON subclause come from the join_cols of both association ends.

This argument is optional: if absent, it will be filled by default by taking the primary key of the table with minimum multiplicity 1, for both sides of the association. This default behaviour is convenient for data models where primary keys and foreign keys are identical (for example dpt_id both as primary key in Department and as foreign key in Activity). Some data models have a different policy, where primary keys are always named the same (for example id), and foreign keys are named after the related table name : in such models, the default does not work, and you have to specify the join columns explicitly.
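For instance, in a hypothetical 'id'-style datamodel (column names invented for illustration), the join columns must be given explicitly :

  # primary key is always 'id'; the foreign key in Activity is named
  # 'department_id', so the default join columns would be wrong
  HR->Association(
    [qw/Department department  1 id/],
    [qw/Activity   activities  * department_id/],
  );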

If the association is many-to-many (i.e. if the maximum multiplicity is greater than 1 on both sides), then join_cols takes a special meaning : it no longer represents database columns, but rather represents two role names (in the sense just defined above) to follow for reaching the remote end of the association. Therefore join_cols must contain exactly 2 items in that case : the path to the intermediate table, and the path from the intermediate table to the remote end. Here is again the example from "SYNOPSIS" in DBIx::DataModel :

  $meta_schema->define_association(
    kind => 'Association',
    A    => {
      table        => My::Schema::Department->metadm,
      role         => 'departments',
      multiplicity => '*',
      join_cols    => [qw/activities department/],
    },
    B    => {
      table        => My::Schema::Employee->metadm,
      role         => 'employees',
      multiplicity => '*',
      join_cols    => [qw/activities employee/],
    },
  );
B

A description of the second association end, following exactly the same principles as for the 'A' end.


name

Optional name for the association (otherwise an implicit name will be built by default from the concatenation of the role names).


kind

A string describing the association kind, i.e. one of : Association, Aggregation or Composition.

Compositions are associations with some additional semantics. In UML class diagrams, compositions are pictured with a black diamond on one side : this side will be called the composite class, while the other side will be called the component class. In DBIx::DataModel, the diamond (the composite class) corresponds to the A association end, and the component class corresponds to the B end, so the order is important (while for plain associations the order makes no difference).

The UML intended meaning of a composition is that objects of the component classes cannot exist outside of their composite class. Within DBIx::DataModel, the additional semantics for compositions is to support cascaded insertions and deletions, and auto-expansion; so the special semantics attached to kind Composition are :

  • the multiplicity must be 1-to-n

  • the 'B' end of the association (the "component" part) must not be a component of another association (it can only be the component of one single composite table).

  • this association can be used for auto-expanding the composite object (i.e. automatically fetching all component parts from the database) -- see "expand" in DBIx::DataModel::Source and "auto_expand"

  • this association can be used for cascaded inserts like

        $composite_class->insert({
          column1          => $val1,
          ...,
          $component_name1 => [\%sub_record1, ...],
        });

    see "insert"

    The main record will be inserted in the composite class, and within the same transaction, subrecords will be inserted into the component classes, with foreign keys automatically filled with appropriate values.

  • this association can be used for cascaded deletes : the argument to a delete may contain lists of component records to be deleted together with the main record of the composite class.

The association also creates instances of DBIx::DataModel::Meta::Path for representing the directional paths between those sources.


Type() / define_type()

  # front-end method
  $schema->Type($type_name => 
     $handler_name_1 => sub { ... },
     $handler_name_2 => sub { ... },
     ...);

or in verbose form :

  $meta_schema->define_type(
     name     => $type_name,
     handlers => {
       $handler_name_1 => sub { ... },
       $handler_name_2 => sub { ... },
     },
  );

This declares a type, which is just a hashref of handler names and handler bodies (coderefs). The type can then be applied to some columns in some tables; this is usually done in the Table declaration (column_types argument), or can be applied later through the "define_column_type" method.

Handlers receive the column value as usual through $_[0]. If the value is to be modified (for example for scalar conversions or while inflating values into objects), the result should be put back into $_[0]. In addition to the column value, other info is passed to the handler :

  $handler_body = sub {
    my ($column_value, $obj, $column_name, $handler_name) = @_;
    my $new_val = $obj->compute_new_val($column_value, ...);
    $column_value = $new_val; # WRONG : will be a no-op
    $_[0] = $new_val;         # OK    : value is converted
  };

The second argument $obj is the object from where $column_value was taken -- most probably an instance of a Table or Join class. Use this if you need to read some contextual information, but avoid modifying $obj : you would most probably get unexpected results, since the collection of available columns may vary from one call to the other.

Other arguments $column_name and $handler_name are obvious.

Handler names from_DB and to_DB have a special meaning : they are called automatically just after reading data from the database, or just before writing into the database. Handler name validate is used by the method "has_invalid_columns()".

The "SYNOPSIS" in DBIx::DataModel shows some examples for types "Date", "Percent" and "Multivalue".

A Type does not create a Perl class.
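For instance, a "Date" type in the spirit of the SYNOPSIS, converting between the ISO format stored in the database and a dd.mm.yyyy presentation format (the exact regexes are illustrative) :

  HR->Type(Date =>
     from_DB  => sub {$_[0] =~ s/(\d{4})-(\d{2})-(\d{2})/$3.$2.$1/   if $_[0]},
     to_DB    => sub {$_[0] =~ s/(\d{2})\.(\d{2})\.(\d{4})/$3-$2-$1/ if $_[0]},
     validate => sub {$_[0] =~ m/^\d{2}\.\d{2}\.\d{4}$/},
  );

The type can then be attached to date columns, for example through the column_types argument of a Table declaration, or through "define_column_type()".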


define_join()

This method builds or retrieves a pre-existing subclass of DBIx::DataModel::Source::Join, and returns the associated meta-object (an instance of DBIx::DataModel::Meta::Source::Join).

  my $meta_join = $meta_schema->define_join($table, $path1, $path2, ..);

The join starts from a given table class and follows one or several associations through their path names; the resulting SQL request contains parameters automatically inferred from the associations. So for example

  $meta_schema->define_join(qw/Department activities employee/);

is more or less equivalent to

  my $sql = <<_EOSQL_;
  Department
    LEFT OUTER JOIN Activity ON Department.dpt_id = Activity.dpt_id
    LEFT OUTER JOIN Employee ON Activity.emp_id   = Employee.emp_id
  _EOSQL_
  $schema->View("Department=>activities=>employee", '*', $sql, 
                qw/Department Activity Employee/);

For each pair of tables, the kind of join is chosen according to the multiplicity declared with that role in the association : if the minimum multiplicity is 0, the join will be LEFT OUTER JOIN; otherwise it will be a usual inner join (exception if $meta_schema->sql_no_inner_after_left_join is true : after a first left join, all remaining tables are also connected through additional left joins). The default kind of join chosen by this rule may be overridden by inserting intermediate connectors in the list, namely '<=>' for inner joins and '=>' for left joins. So for example

  $meta_schema->define_join(qw/Department <=> activities <=> employee/);

becomes equivalent to

  my $sql = <<_EOSQL_;
  Department
    INNER JOIN Activity ON Department.dpt_id = Activity.dpt_id
    INNER JOIN Employee ON Activity.emp_id   = Employee.emp_id
  _EOSQL_

Table names in the SQL statement correspond to what was defined in the $meta_schema->Table(...) declarations. However, tables may be aliased using | as a separator :

  $meta_schema->define_join(qw/Department|dpt activities|act employee|emp/)
              ->select(-columns => qw/dpt.name act.d_begin emp.lastname/,
                       -where   => {"dpt.dpt_name" => {-like => "A%"}});

which generates

  SELECT dpt.name, act.d_begin, emp.lastname
  FROM Department AS dpt
    LEFT OUTER JOIN Activity AS act ON dpt.dpt_id = act.dpt_id
    LEFT OUTER JOIN Employee AS emp ON act.emp_id = emp.emp_id
  WHERE dpt.dpt_name LIKE 'A%'

All tables participating in a join are stacked, and further roles are found by walking up the stack. So in

  ..->define_join(qw/FirstTable path1 path2 path3/)

we must find a path1 in FirstTable, from which we know what Table2 will be. Then, we must find a path2 in Table2, or otherwise in FirstTable, in order to know Table3. In turn, path3 must be found either in Table3, or in Table2, or in FirstTable, etc. To resolve ambiguities, prefix the path by the name or alias of the targeted source, such as :

  ..->define_join(qw/FirstTable path1|p1 

The name of the resulting join class will be composed by concatenating table, connectors and path names, including optional aliases. If the same sequence of table and paths was already encountered before, the Perl class already exists, and its corresponding meta-object is returned; otherwise, a new Perl class is created together with its meta-object.

The main purpose of define_join is to gain efficiency in interacting with the database. If we write

  foreach my $dpt (@{$schema->table('Department')->select}) {
    foreach my $act (@{$dpt->activities}) {
      my $emp = $act->employee;
      printf "%s works in %s since %s\n", 
         $emp->{lastname}, $dpt->{dpt_name}, $act->{d_begin};
    }
  }

many database calls are generated behind the scenes, in the loops that call the activities and employee methods. Instead we could write

  my $join = $meta_schema->define_join(qw/Department activities employee/);
  foreach my $row (@{$join->select}) {
    printf "%s works in %s since %s\n", 
      $row->{lastname}, $row->{dpt_name}, $row->{d_begin};
  }

which generates one single call to the database.


define_navigation_method()

  $table->define_navigation_method($meth_name => qw/role1 role2 .../, 
                                   \%optional_select_args);

Inserts into the $table class a new method named $meth_name, that will automatically call "define_join()" and then select(), passing %optional_select_args to the select call. This is useful for joining several tables at once, so for example with

    employees => qw/activities employee/

we can then write

  my $empl_ref = $some_dept->employees(-where   => {gender => 'F'},
                                       -columns => [qw/firstname lastname/]);

This method is used internally to implement many-to-many associations; so if you have only two roles to follow, you would probably be better off by defining the association, which is a more abstract notion. Direct calls to define_navigation_method are still useful if you want to follow three or more roles at once.

The last argument to define_navigation_method is an optional hashref; if present, the hash will be passed as initial argument to the select call.


define_column_type()

  $meta_table->define_column_type($type_name, @column_names);

Registers type $type_name to be applied to columns with name in @column_names, within the target $meta_table.


define_column_handlers()

  $meta_table->define_column_handlers($column_name,
    $handler_name_1 => sub { ... },
    ...);

Registers handlers to be applied to $column_name, within the target $meta_table. The main purpose of this method is for implementing the higher-level "define_column_type()" method; however it can also be called directly, without the need for defining a type.



define_auto_expand()

  $meta_table->define_auto_expand(@roles);

Generates an "auto_expand()" method for the class, that will auto-expand on the roles listed (i.e. will call the appropriate method and store the result in a local slot within the object). In other words, the object knows how to expand itself, fetching information from associated tables, in order to build a data tree in memory. Only component names declared through Composition() may be auto-expanded.

Be careful about performance issues : when an object uses auto-expansion through a call to "auto_expand()", every auto-expanded role will generate an additional call to the database. This might be slow, especially if there are recursive auto-expansions; so in some cases it will be more appropriate to flatten the tree and use database joins, typically through the method join().


Meta-schema methods


  my @meta_tables  = $meta_schema->tables;

Returns all of the DBIx::DataModel::Meta::Source::Table instances declared in this $meta_schema.


  my $meta_table   = $meta_schema->table($table_name);

Returns the single instance of DBIx::DataModel::Meta::Source::Table with name $table_name, or undef.


  my @associations = $meta_schema->associations;

Returns all of the DBIx::DataModel::Meta::Association instances declared in this $meta_schema.


  my $association  = $meta_schema->association($association_name);

Returns the single instance of DBIx::DataModel::Meta::Association with name $association_name, or undef.


  my @types        = $meta_schema->types;

Returns all of the DBIx::DataModel::Meta::Type instances declared in this $meta_schema.


  my $type         = $meta_schema->type($type_name);

Returns the single instance of DBIx::DataModel::Meta::Type with name $type_name, or undef.


  my @joins        = $meta_schema->joins;

Returns all of the DBIx::DataModel::Meta::Source::Join instances declared in this $meta_schema.


  my $join         = $meta_schema->join($join_name);

Returns the single instance of DBIx::DataModel::Meta::Source::Join with name $join_name, or undef.

other accessors

Accessor methods are defined for the following members of the $meta_schema :


Meta-source methods

Accessor methods are defined for the following members of a $meta_source (instance of either DBIx::DataModel::Meta::Source::Table or DBIx::DataModel::Meta::Source::Join) :

aliased tables

In addition, other methods are defined :


ancestors

Returns a flattened list of recursive calls to the "parents()" method.


auto_insert_columns

Returns a flattened hash, built from auto_insert_columns declared in this source, in its ancestors, or in the $meta_schema.


auto_update_columns

Returns a flattened hash, built from auto_update_columns declared in this source, in its ancestors, or in the $meta_schema.


no_update_columns

Returns a flattened hash, built from no_update_columns declared in this source, in its ancestors, or in the $meta_schema. Keys are column names, values are insignificant.




db_from

Returns what will be injected as the -from argument into the "select" call in SQL::Abstract::More.


where

Returns the optional "where" condition associated with this source (in the case of "View()").

Meta-table methods

In addition to the $meta_source methods above, the following methods are defined for an instance of DBIx::DataModel::Meta::Source::Table :


components

Returns the list of other meta-sources that have been declared as components of this source, through the "Composition()" declaration.

Association methods


schema

The DBIx::DataModel::Meta::Schema instance in which this association is declared.


A

Returns the hashref describing the "A" end of the association. Keys in the hashref are :


table

The DBIx::DataModel::Meta::Source::Table instance.


role

The optional string defining the role of this source in the association.


multiplicity

An arrayref [$min, $max] of UML minimum and maximum multiplicity.


join_cols

An arrayref of column names participating in the join on this side of the association.


B

Returns the hashref describing the "B" end of the association.


name

Returns the association name.


kind

Returns the association kind (Association or Composition).


path_AB

Returns the DBIx::DataModel::Meta::Path object describing the path from A to B.


path_BA

Returns the DBIx::DataModel::Meta::Path object describing the path from B to A.

Path methods


name

The name of this path.


from

Reference to the DBIx::DataModel::Meta::Source where this path starts.


to

Reference to the DBIx::DataModel::Meta::Source where this path ends.


on

Hashref for generating the join condition (keys are columns for the left-hand side, values are columns for the right-hand side).


multiplicity

See "Association methods".


association

Reference to the DBIx::DataModel::Meta::Association that created this path.


direction

Either "AB" or "BA".

Type methods


schema

The DBIx::DataModel::Meta::Schema instance in which this type is declared.


name

Name of this type.


handlers

Hashref of handlers declared in this type (keys are handler names, values are handler bodies, i.e. coderefs).


SCHEMA PARAMETERS

A $schema is an instance of a subclass of DBIx::DataModel::Schema. The subclass holds a reference to a $meta_schema where all information about tables, associations, etc. is kept. Each instance of that subclass holds information about the database connection, the debugging status, etc.

A schema subclass can have several instances; in that case, the application runs in multi-schema mode. However, multi-schema mode is only useful in some special situations, like for example transferring data between several databases; in most common cases, a single-schema mode is enough.

Single-schema mode is activated by default, which means that all method calls can be performed directly on the subclass; the subclass will manage a singleton instance, and will delegate calls to that singleton. This is the recommended way to work with DBIx::DataModel, because it uses less memory, and simplifies the application code (there is no need to pass a $schema reference around between all modules).

If you need it, multi-schema mode is activated by calling the new() method, as many times as you need. Once this mode is activated, it is not possible to go back to single-schema mode. Furthermore, multi-schema mode should be activated before the singleton has been created, i.e. before any call to the class methods below.

Because of this duality, all methods below, described as $schema->some_method(...), can be called either as class methods (single-schema mode), or as instance methods (multi-schema mode).


new()

  my $schema = $schema_subclass->new(%options);

As just explained, this activates multi-schema mode, and returns a new schema instance. %options may contain some initial settings for dbh, debug, etc.; this is the same as creating the schema first, and then calling the setter methods below $schema->dbh(...), $schema->debug(...), etc.


dbh()

  my $dbh = DBI->connect(...);
  $schema->dbh($dbh, %options);        # set
  $schema->dbh([$dbh, %options]);      # set, alternative syntax

  my $dbh             = $schema->dbh;  # get back just the dbh
  my ($dbh, %options) = $schema->dbh;  # get back all

Returns or sets the handle to a DBI database handle (see DBI). This handle is schema-specific. DBIx::DataModel expects the handle to be opened with RaiseError => 1 (see "Transactions and error handling").

In %options you may pass any key-value pairs, and retrieve them later by calling dbh in a list context, which may be useful for holding driver-specific information. DBIx::DataModel will only inspect those options for knowing how to retrieve database-generated keys (see the "insert" method below).

Changing the database handle through the dbh method is forbidden while a transaction is in progress. However, a nested transaction may temporarily change the database handle by supplying it as argument to the "do_transaction" method.

To unset the database handle, call $schema->dbh(undef).
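For instance ($dsn, $user and $password are placeholders for real connection parameters, and HR is the schema class used as an example throughout this chapter) :

  use DBI;

  # open the handle as recommended, with RaiseError => 1
  my $dbh = DBI->connect($dsn, $user, $password,
                         {RaiseError => 1, AutoCommit => 1});
  HR->dbh($dbh);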


debug()

  $schema->debug(1);             # will warn for each SQL statement
  $schema->debug($debug_object); # will call $debug_object->debug($sql)
  $schema->debug(undef);         # turn off debugging

Debug mode is useful for seeing SQL statements generated by DBIx::DataModel. Enabling debugging with a $debug_object will typically be useful in conjunction with something like Log::Log4perl or Log::Dispatch. Otherwise, enabling debugging with just any true scalar will print SQL statements on STDERR.
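As a sketch, a minimal $debug_object bridging to Log::Log4perl could look like this (the My::SqlLogger class is invented for the example; all it needs is a debug method) :

  package My::SqlLogger;
  use Log::Log4perl;

  sub new   {bless {}, shift}

  # will be called by DBIx::DataModel with the generated SQL
  sub debug {Log::Log4perl->get_logger('SQL')->debug($_[1])}

  package main;
  HR->debug(My::SqlLogger->new);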

There is also another way to see the SQL code for one particular statement :

  my $spy_sql = sub {my ($sql, @bind) = @_;
                     print STDERR join "\n", $sql, @bind;
                     return ($sql, @bind);};
  my $result = $source->select(-columns   => \@columns,
                               -where     => \%criteria,
                               -post_SQL  => $spy_sql);


sql_abstract()

  $schema->sql_abstract($an_SQL_Abstract_More_instance); # set
  my $sqlam = $schema->sql_abstract;                     # get

Sets or retrieves the instance of SQL::Abstract::More used by this $schema. If the client code does not set it explicitly, an instance will be implicitly created.


dbi_prepare_method()

  $schema->dbi_prepare_method($method);      # set
  my $method = $schema->dbi_prepare_method;  # get

Sets or retrieves the method sent to DBI for preparing SQL statements. The default is "prepare"; it can be set to "prepare_cached" instead.


placeholder_prefix()

  $schema->placeholder_prefix($prefix);      # set
  my $prefix = $schema->placeholder_prefix;  # get

Sets or retrieves the prefix string to recognize "named placeholders" within a statement. That prefix should never match any regular data encountered in your application; the default is '?:'.

select_implicitly_for()

  $schema->select_implicitly_for($string);      # set
  my $string = $schema->select_implicitly_for;  # get

Sets or retrieves a default value for the -for argument to select().

This default value is not set for statements with -result_as => 'subquery' (because the FOR clause in an SQL statement only makes sense at the top level, not in a subquery).

localize_state()

  {
    my $scope_guard = $schema->localize_state(@schema_members_to_localize);

    ... # do some work, possibly change state
  } # $scope_guard out of scope : previous state of $schema is restored

Applies dynamic scoping to a $schema, like Perl's local construct (see "Temporary Values via local" in perlsub). Here however, since local is not perfectly suited for private class data, we provide a solution which is similar in spirit, but slightly different in implementation. The localize_state method internally takes a copy of the current state, and returns a handle to it. The handle should be stored in a my variable; when that variable goes out of scope (at block exit), then the previous state is restored.
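
The guard mechanism can be sketched in a few lines of plain Perl (a simplified illustration, not DBIx::DataModel's actual implementation) : an object takes a snapshot of the chosen members, and its DESTROY method restores them when the object goes out of scope.

```perl
use strict;
use warnings;

package ScopeGuard;
# take a snapshot of selected members of a state hash; restore
# them automatically when the guard object is destroyed
sub new {
    my ($class, $state, @members) = @_;
    my %snapshot = map { $_ => $state->{$_} } @members;
    return bless { state => $state, snapshot => \%snapshot }, $class;
}
sub DESTROY {
    my $self = shift;
    $self->{state}{$_} = $self->{snapshot}{$_}
        for keys %{ $self->{snapshot} };
}

package main;
my %schema_state = (debug => undef, dbi_prepare_method => 'prepare');
{
    my $guard = ScopeGuard->new(\%schema_state, qw/debug dbi_prepare_method/);
    $schema_state{debug}              = 1;
    $schema_state{dbi_prepare_method} = 'prepare_cached';
}   # $guard destroyed here : previous state restored
```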

The optional argument @schema_members_to_localize specifies precisely which schema members should be localized. If it is empty, the default list is : dbh, dbh_options, debug, select_implicitly_for, dbi_prepare_method.


do_transaction()

  my $coderef = sub {$table1->insert(...); $table2->update(...); ...};
  $schema->do_transaction($coderef);

Evaluates the code within a transaction. In case of failure, the transaction is rolled back, and an exception is raised, with a message containing the initial error and the status of the rollback (because the rollback itself may also fail). If you need finer details, you can treat the exception as an object with two methods initial_error and rollback_errors:

    eval {$schema->do_transaction($coderef); 1}
    or do {my $err = $@;
           explain($err->initial_error, $err->rollback_errors)};

Usually the coderef passed as argument will be a closure that may refer to variables local to the environment where the closure was created.

Nested calls to do_transaction are supported : only the top-level call will actually initiate and then commit the transaction, while an exception at any level will abort and rollback the whole thing.
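
The nesting behaviour can be illustrated with a small self-contained sketch (hypothetical bookkeeping, not the module's internals) : a depth counter ensures that only the outermost call emits BEGIN/COMMIT.

```perl
use strict;
use warnings;

# simplified bookkeeping sketch (not the real implementation) :
# a depth counter ensures only the outermost call begins/commits
my $depth = 0;
my @log;

sub do_transaction {
    my ($coderef) = @_;
    push @log, 'BEGIN' if $depth == 0;
    $depth++;
    my $ok = eval { $coderef->(); 1 };
    $depth--;
    if (!$ok) {
        push @log, 'ROLLBACK' if $depth == 0;   # rollback only at top level
        die $@;
    }
    push @log, 'COMMIT' if $depth == 0;
}

do_transaction(sub {
    push @log, 'work1';
    do_transaction(sub { push @log, 'work2' });  # nested : no extra BEGIN/COMMIT
});
# @log is now : BEGIN, work1, work2, COMMIT
```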

If the nested transaction needs to temporarily work on a different database handle, it may do so by supplying the dbh and its options as additional arguments :

  $schema->do_transaction($coderef, $new_dbh, %new_dbh_options);

When called in this form, do_transaction will temporarily set the dbh to the supplied value, and then return to the previous value when the nested transaction is finished. Commits on all involved database handles are delayed until the top-level transaction is finished.

There is no support (yet!) for nested transactions with intermediate savepoints.


fetch()

  my $record = $table->fetch(@key_values, \%options);

Fetches a single record from a table, from its primary key value (on one or several columns). %options may specify things like -for, -pre_exec, -post_exec, etc.

This method is just syntactic sugar for

  my $record = $table->select(-fetch => \@key_values, %options);

(see below).

fetch_cached()

  my $record = $table->fetch_cached(@key_values, \%options);

Like fetch, except that the result is stored in a cache, and further calls to the same method with the same parameters will return the cached record instead of going back to the database. The cache does not know about any updates to the database, so this is mainly useful for readonly data.

The cache is stored internally in $table->metadm->{fetch_cached}{$dbh_addr}{$freeze_args} (where $dbh_addr is Scalar::Util::refaddr(MyTable->dbh) and $freeze_args is Storable::freeze(@key_values, \%options)). Client code may use this information to clear the cache or tie it to a more sophisticated caching module.
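
As an illustration (using Storable and Scalar::Util, both core modules; this is not the module's actual code), such a two-level cache key can be built and cleared like this :

```perl
use strict;
use warnings;
use Scalar::Util qw/refaddr/;
use Storable     qw/freeze/;

$Storable::canonical = 1;      # stable serialization of hash keys
my $fake_dbh = {};             # stand-in for a real $dbh
my %cache;

# hypothetical helper : two-level key as described above
sub cache_key {
    my ($dbh, @args) = @_;
    return (refaddr($dbh), freeze(\@args));
}

my ($addr, $frozen) = cache_key($fake_dbh, 123, {-for => 'read only'});
$cache{$addr}{$frozen} = { emp_id => 123, lastname => 'Bach' };  # store
my ($addr2, $frozen2) = cache_key($fake_dbh, 123, {-for => 'read only'});
my $hit = $cache{$addr2}{$frozen2};         # same arguments => cache hit
%cache = ();                                # clear the whole cache
```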

select()

Given a $target, which could be either a table, a view or a statement :

  $records = $target->select(
     -columns       => \@columns, 
       # OR : -columns => [-distinct => @columns],
     -where         => \%where,
       # OR : -fetch => $key,
       # OR : -fetch => \@key,
     -group_by      => \@groupings,
     -having        => \%criteria,
     -order_by      => \@order,
     -for           => $purpose,
     -post_SQL      => sub {...},
     -pre_exec      => sub {...},
     -post_exec     => sub {...},
     -post_bless    => sub {...},
     -prepare_attrs => \%attrs,
     -limit         => $limit,
     -offset        => $offset,
     -page_size     => $page_size,
     -page_index    => $page_index,
     -column_types  => \%column_types,
     -result_as     => 'rows'      || 'firstrow' 
                    || 'hashref'   || [hashref => @cols]
                    || 'sth'       || 'sql' 
                    || 'subquery'  || 'flat_arrayref' 
                    || 'statement' || 'fast_statement'

Applies a SQL SELECT to the $target, and returns a result as specified by the -result_as argument (see below).

Arguments are all optional and are passed by name.

Named arguments to select()

-columns => \@columns

\@columns is a reference to an array of SQL column specifications (i.e. column names, * or table.*, functions, etc.).

Initial words in @columns that start with a hyphen are treated as verbatim SQL : in particular, -columns => [-DISTINCT => qw/col1 col2 .../] will yield

  SELECT DISTINCT col1, col2, ... FROM ...

A '|' in a column is translated into an 'AS' clause, according to the current SQL dialect in SQL::Abstract::More : this is convenient when using Perl's qw/.../ operator for columns, as in

  -columns => [ qw/table1.longColumn|t1lc table2.longColumn|t2lc/ ]

Column aliasing should be avoided on key columns (either primary or foreign keys), because role methods will no longer be able to navigate through the joins (currently DBIx::DataModel is not clever enough to rename its internal join constraints according to column aliases). Aliasing on non-key columns is OK, and column handlers will operate properly on aliased columns.

The argument to -columns can also be a string instead of an arrayref, like for example "c1 AS foobar, MAX(c2) AS m_c2, COUNT(c3) AS n_c3"; however this is mainly for backwards compatibility. The recommended way is to use the arrayref notation as explained above :

  -columns => [ qw/  c1|foobar   MAX(c2)|m_c2   COUNT(c3)|n_c3  / ]

If omitted, \@columns takes the default, which is usually '*', unless modified through the default_columns argument to the table declaration.

No verification is done on the list of retrieved \@columns, so the list does not have to include the primary or foreign keys --- but then later attempts to perform joins or updates will obviously fail.
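
The '|' convention above can be illustrated with a tiny sketch (illustrative only; the real translation is performed by SQL::Abstract::More and depends on the SQL dialect) :

```perl
use strict;
use warnings;

# split 'column|alias' into 'column AS alias'; leave plain columns untouched
sub expand_alias {
    my ($spec) = @_;
    my ($col, $alias) = split /\|/, $spec, 2;
    return defined $alias ? "$col AS $alias" : $col;
}

my @cols = map { expand_alias($_) }
           qw/table1.longColumn|t1lc  MAX(c2)|m_c2  plain_col/;
# yields : 'table1.longColumn AS t1lc', 'MAX(c2) AS m_c2', 'plain_col'
```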

-distinct => \@columns

behaves like the -columns argument, except that the keyword DISTINCT will be included in the generated SQL.

-where => \%where

\%where is a reference to a hash or array of criteria that will be translated into SQL clauses. In most cases, this will just be something like {col1 => 'val1', col2 => 'val2'}; see SQL::Abstract::select for detailed description of the structure of that hash or array. It can also be a plain SQL string like "col1 IN (3, 5, 7, 11) OR col2 IS NOT NULL".

-fetch => \@columns

equivalent to

  ...->select(-where => {<primary_key_column0> => $columns[0],
                         <primary_key_column1> => $columns[1], ...},
              -result_as => "firstrow")

If the primary key ranges on one single column (which is the most frequent case), then the argument to -fetch can also be a single scalar value :

  ...->select(-fetch => $key)

When calling a table directly, the select(-fetch => ..) syntax is awkward; you will most certainly prefer the syntactic sugar offered by the fetch() method :

  my $record = $table->fetch(@key_values);

However, the -fetch notation is useful when walking through association roles :

  $employee->activities(-fetch => $act_id)

This example will generate the following SQL

  SELECT * FROM activity WHERE act_id=$act_id AND emp_id=$employee->{emp_id}

Notice how this is different from

  HR::Activity->fetch($act_id)

which would generate

  SELECT * FROM activity WHERE act_id=$act_id

Both examples would end up with the same record, but in the first case there is an additional check that this record really belongs to the given employee.

In presence of -fetch, arguments -where and -result_as are not allowed.

-group_by => "string" or -group_by => \@array

adds a GROUP BY clause in the SQL statement. Grouping columns are specified either by a plain string or by an array of strings.

-having => "string" or -having => \%criteria

adds a HAVING clause in the SQL statement (only makes sense together with a GROUP BY clause). This is like a -where clause, except that the criteria are applied after grouping has occurred.

-order_by => \@order

\@order is a reference to a list of columns for sorting. It can also be a plain SQL string like "col1 DESC, col3, col2 DESC". Columns can also be prefixed by '+' or '-' for indicating sorting directions, so for example -order_by => [qw/-col1 +col2 -col3/] will generate the SQL clause ORDER BY col1 DESC, col2 ASC, col3 DESC.
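
The direction prefixes can be sketched as follows (an illustration of the convention; SQL::Abstract::More does the real work) :

```perl
use strict;
use warnings;

# translate '+'/'-' prefixes into ASC/DESC directions
sub order_by_sql {
    my @sql;
    for my $spec (@_) {
        my $col = $spec;             # copy : arguments may be read-only literals
        if    ($col =~ s/^-//)  { push @sql, "$col DESC" }
        elsif ($col =~ s/^\+//) { push @sql, "$col ASC"  }
        else                    { push @sql, $col        }
    }
    return "ORDER BY " . join ", ", @sql;
}

my $clause = order_by_sql(qw/-col1 +col2 -col3/);
# 'ORDER BY col1 DESC, col2 ASC, col3 DESC'
```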

-for => $clause

specifies an additional clause to be added at the end of the SQL statement, like -for => 'read only' or -for => 'update'.

-post_SQL => sub{...}

hook for specifying a callback function to be called on SQL code and bind values, before preparing the statement. It will be called as follows:

  ($sql, @bind) = $args->{-post_SQL}->($sql, @bind) if $args->{-post_SQL};

-pre_exec => sub{...}, -post_exec => sub{...}

hooks for specifying callback functions to be called on the DBI statement handle, just before or just after invoking execute(). So the sequence will be more or less like this:

  $sth = $dbh->prepare($sql_statement);
  $pre_exec_callback->($sth)  if $pre_exec_callback;
  $sth->execute(@bind);
  $post_exec_callback->($sth) if $post_exec_callback;

This is mostly useful if you need to call driver-specific functions at those stages.

-post_bless => sub{...}

hook for specifying a callback function to be called on data rows. The callback will be called after bless_from_DB, i.e. the row is already an object of the proper class and column handlers have been applied.

-prepare_attrs => \%attrs

Optional attributes that will be transmitted to "prepare" in DBI.

-page_size => $page_size

specifies how many rows will be retrieved per "page" of data. Default is unlimited (or more precisely the maximum value of a short integer on your system). When specified, automatically implies -limit.

-page_index => $page_index

specifies the page number (starting at 1). Default is 1. When specified, automatically implies -offset.

-limit => $limit

limit to the number of rows that will be retrieved. Automatically implied by -page_size.

-offset => $offset

Automatically implied by -page_index.
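
The relationship between the paging arguments and the implied -limit / -offset values boils down to simple arithmetic (hypothetical helper, not part of the API) :

```perl
use strict;
use warnings;

# page N of size S covers rows (N-1)*S+1 .. N*S
sub page_to_limit_offset {
    my ($page_size, $page_index) = @_;
    $page_index ||= 1;                      # default is page 1
    return (-limit  => $page_size,
            -offset => ($page_index - 1) * $page_size);
}

my %args = page_to_limit_offset(50, 3);
# -limit => 50, -offset => 100 : rows 101..150
```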

-column_types => \%column_types

dynamically specifies some column types at the statement level. Usually column types are defined within tables at compile time, but it may be necessary to define additional types within the statement, for example when using database functions and/or aliases. The argument is a hashref in which keys are names of column types defined within the schema, and values are arrayrefs of column names :

   select(-columns      => [qw/ MAX(date_col)|max_date 
                                MIN(date_col)|min_date ... /],
          -column_types => { Date => [qw/max_date min_date/] },
          ...);

-result_as => $result_kind

specifies what kind of result will be produced. The result is always a scalar and does not depend on the calling context. Possible result kinds are :

rows

The result will be a ref to an array of rows, blessed into objects of the class. This is the default result kind. If there are no data rows, a ref to an empty list is returned.

firstrow

The result will be just the first data row, blessed into an object of the class. If there is no data, undef is returned.

hashref || [hashref => @cols]

The result will be a hashref. Keys in the hash correspond to distinct values of the specified columns, and values are data row objects. If the argument is given as [hashref => @cols], the column(s) are specified by the caller; otherwise if the argument is given as a simple string, @cols will default to $source->primary_key. If there is more than one column, the result will be a tree of nested hashes. This -result_as is normally used only where the key fields values for each row are unique. If multiple rows are returned with the same values for the key fields then later rows overwrite earlier ones.
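
The nested-hash structure for multiple key columns can be sketched in plain Perl (an illustration of the result shape, not the module's code) :

```perl
use strict;
use warnings;

# index rows into a tree of nested hashes, one level per key column;
# rows with duplicate key values overwrite earlier ones
sub index_rows {
    my ($rows, @cols) = @_;
    my %tree;
    for my $row (@$rows) {
        my $node = \%tree;
        $node = ($node->{ $row->{$_} } //= {}) for @cols[0 .. $#cols - 1];
        $node->{ $row->{ $cols[-1] } } = $row;
    }
    return \%tree;
}

my $rows = [ {emp_id => 1, act_id => 9, dpt => 'CPT'},
             {emp_id => 1, act_id => 7, dpt => 'XYZ'} ];
my $tree = index_rows($rows, qw/emp_id act_id/);
# $tree->{1}{9}{dpt} is 'CPT', $tree->{1}{7}{dpt} is 'XYZ'
```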

flat_arrayref

The result will be a ref to an array that concatenates results from each row. Usually this is combined with a -columns argument with one single column, to get a vertical slice from a resultset, like in

  my $all_names = $schema->table('People')->select(
    -columns   => [-DISTINCT => qw/firstname/],
    -result_as => 'flat_arrayref',
  );
  print sort @$all_names;

However, it may also be used for example to fill a hash from pairs retrieved from each row, like in

  my $pairs = $schema->table('People')->select(
    -columns   => [qw/pers_id firstname/],
    -result_as => 'flat_arrayref',
  );
  my %hash = @$pairs;

Finally, it can be convenient for avoiding column aliases, when using aggregator functions :

  my $array_ref = $source->select(-columns   => [qw/MAX(col1)
                                                    AVG(col2)
                                                    COUNT(col3)/],
                                  -where     => ...,
                                  -result_as => 'flat_arrayref');
  my ($max_col1, $avg_col2, $count_col3) = @$array_ref;

statement

The result will be an instance of a DBIx::DataModel::Statement. That object has a next method that fetches the next datarow and blesses it into the appropriate object, or returns undef when there is no more data to fetch. So a typical usage pattern is :

  my $statement = $class->select(-where     => \%criteria, 
                                 -result_as => 'statement');
  while (my $row = $statement->next) {
    ...
  }

If called with an argument, the next method returns an array ref of several rows, as in

 my $rows = $statement->next(10);

Finally, the statement also has an all method that returns an arrayref of all remaining rows.

When creating a new Schema, a different statement class can be specified via the statement_class argument. For example the DBIx::DataModel::Statement::JDBC subclass in this distribution is intended for DBD::JDBC data sources, and gives access to some JDBC methods over the statement (for example positioning instructions).

fast_statement

The result is like a normal statement, except that rows are successively fetched into the same memory location, using DBI's fetch and bind_columns methods. This is the fastest way to get data; however, pay attention to this warning from DBI's documentation : Note that the same array reference is returned for each fetch, so don't store the reference and then use it after a later fetch. Also, the elements of the array are also reused for each row, so take care if you want to take a reference to an element. Furthermore, any field added into the row will remain present for the next rows.

Of course it is not possible to call all or next(10) on a fast statement.

sth

The result will be an executed DBI statement handle. Then it is up to the caller to retrieve data rows using the DBI API. If needed, these rows can be later blessed into appropriate objects through bless_from_DB().

sql

In scalar context, the result will just be the generated SQL statement. In list context, it will be ($sql, @bind), i.e. the SQL statement together with the bind values.

subquery

Returns a ref to an arrayref containing \["($sql)", @bind]. This is meant to be passed to a second query through SQL::Abstract, as in :

  my $subquery = $source1->select(..., -result_as => 'subquery');
  my $rows     = $source2->select(
      -columns => ...,
      -where   => {foo => 123, bar => {-not_in => $subquery}},
  );

unbless()

  $schema->unbless($obj1, $obj2, ...);

Recursively applies "damn" in Acme::Damn to remove all class information from its arguments : these become plain Perl hashrefs, arrayrefs or scalars.

This may be useful if the datatree returned by a select(..) / expand(..) needs to be exported through an external module that only considers plain datastructures; this is the case for example with "Dump" in YAML::Syck.


insert()

  my @ids = $table->insert({col1 => $val1, col2 => $val2, ...}, 
                           {...});
  # or
  my @ids = $table->insert([qw/ col1   col2   .../],
                           [    $val1, $val2, ... ],
                           [    ...              ]);

Inserts a collection of rows into the database, given either as a list of hashrefs, or as a first arrayref containing the column names, followed by a list of arrayrefs containing values for each row to be inserted.

In either form, the method applies the to_DB handlers, removes the no_update columns, and then inserts the new records into the database. Because of the handlers, this operation may modify the argument data, so it is not safe to access $val1, $val2, etc. after the call.

Primary key column(s) should of course be present in the supplied hashrefs, unless the key is auto-generated by the database (see below).

Each hashref will be blessed into the $table class, and will be inserted through the internal _singleInsert() method. The default implementation of this method should be good enough for most common uses, but you may want to refine it in your table classes if you need some fancy handling on primary keys (like for example computing a random key and checking whether that key is free). The default implementation uses the following algorithm to retrieve keys auto-generated by the database :

  • if a dbh option called last_insert_id is found (see options passed to the "dbh" method), this is taken as a callback function, which gets called as

      $dbh_options{last_insert_id}->($dbh, $table_name, $column_name)
  • if options called catalog and/or schema are found, DBIx::DataModel will call

      $dbh->last_insert_id($dbh_options{catalog}, $dbh_options{schema},
                           $table_name, $column_name)
  • otherwise, DBIx::DataModel will call

      $dbh->last_insert_id(undef, undef, undef, undef)
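
The three-step fallback can be sketched with a mock database handle (hypothetical names such as generated_key; the real logic lives in _singleInsert()) :

```perl
use strict;
use warnings;

# mock database handle, just to exercise the dispatch below;
# it returns the number of defined arguments it received
package MockDBH;
sub last_insert_id { my $self = shift; return scalar grep { defined } @_ }

package main;

sub generated_key {
    my ($dbh, $dbh_options, $table_name, $column_name) = @_;
    if (my $cb = $dbh_options->{last_insert_id}) {          # step 1 : callback
        return $cb->($dbh, $table_name, $column_name);
    }
    elsif (exists $dbh_options->{catalog} or exists $dbh_options->{schema}) {
        return $dbh->last_insert_id($dbh_options->{catalog}, # step 2
                                    $dbh_options->{schema},
                                    $table_name, $column_name);
    }
    else {                                                   # step 3 : all undef
        return $dbh->last_insert_id(undef, undef, undef, undef);
    }
}

my $dbh = bless {}, 'MockDBH';
my $k1 = generated_key($dbh, {last_insert_id => sub {42}}, 'employee', 'emp_id');
my $k2 = generated_key($dbh, {schema => 'HR'},             'employee', 'emp_id');
my $k3 = generated_key($dbh, {},                           'employee', 'emp_id');
# $k1 is 42 (callback); $k2 and $k3 go through $dbh->last_insert_id
```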

If the table is a composite class (see Composition() above), then the component parts may be supplied within the hashref, with keys equal to the role names, and values given as arrayrefs of sub-hashrefs; then these will be inserted into the database, at the same time as the main record, with join values automatically filled in (cascaded insert). For example :

   HR::Employee->insert({firstname  => "Johann Sebastian",  
                         lastname   => "Bach",
                         activities => [{d_begin  => '01.01.1695',
                                         d_end    => '18.07.1750',
                                         dpt_code => 'CPT'}]});

The insert() call may take a list of %options specified at the end of the argument list (notice they are not given as a hashref, but as a mere hash, or list of pairs). Actually the only option currently supported is -returning, with an argument that may be either a scalar, an arrayref or an empty hashref :

  • if it is a scalar or an arrayref, that value is passed to "insert" in SQL::Abstract and finally to the SQL level (INSERT ... RETURNING ...); whatever is returned from the database for each single record gets flattened into a single list transmitted back to the caller.

      my @result = $table->insert({...}, ..., -returning => $scalar_or_arrayref);
  • if it is an empty hashref, the return value is also a list of hashrefs (one for each inserted record), containing the column name(s) and value(s) of the primary key for that record, and possibly containing subhashes or subarrays for other records created through cascaded inserts. For example:

      my @result = HR::Employee->insert({..., activities => [{...}, ...]},
                                        -returning => {});
      my $prim_key_first_emp = $result[0]{emp_id};
      my $prim_key_first_act = $result[0]{activities}[0]{act_id};
  • if the -returning option is absent, values returned by calls to _singleInsert() are collected into a flattened array, and then returned by insert(); usually, these are the primary keys of the inserted records. If this array contains several values and insert() was called from a scalar context, a warning is issued.

update()

  $table->update({col1 => $val1, ...});
  $table->update(@primary_key, {col1 => $val1, ...});
  $table->update(-set   => {col1 => $val1, ...},
                 -where => \%condition);

This is both a class and an instance method, with several syntaxes. It applies the to_DB handlers, removes the no_update columns, and then updates the database for the given record.

When called as a class method, the columns and values to update are supplied as a hashref. The second syntax with @primary_key is an alternate way to supply the values for the primary key (it may be more convenient because you don't need to repeat the name of primary key columns). So if emp_id is the primary key of table Employee, then the following are equivalent :

  HR::Employee->update({emp_id  => $eid, 
                        address => $newAddr, 
                        phone   => $newPhone});
  HR::Employee->update($eid => {address => $newAddr, 
                                phone   => $newPhone});

The third syntax with -set and -where keywords is used for simultaneously updating several records (bulk update); it will send one single SQL request of shape

  UPDATE table SET col1='val1', ... WHERE ...

but in that case, since the data did not transit through Perl objects, no column handlers will be applied.

When called as an instance method, i.e.

  $obj->update;

the columns and values to update are taken from the object in memory (ignoring all non-scalar values).

The update method only updates the columns received as arguments : it knows nothing about other columns that may sit in the database table. Therefore if you have two concurrent clients doing

  (client1)  $table->update($id, {c1 => $v1, c2 => $v2});
  (client2)  $table->update($id, {c3 => $v3, c4 => $v4, c5 => $v5});

the final state of record $id in the database will reflect changes from both clients.

delete()

  $table->delete({column1 => value1, ...});
  $table->delete(-where => \%condition);

This is both a class and an instance method. It deletes a record from the database.

When called as a class method, the primary key of the record to delete is supplied either as a hashref, or directly as a list of values. Note that $table->delete(11, 22) does not mean "delete records with keys 11 and 22", but rather "delete record having primary key (11, 22)"; in other words, you only delete one record at a time.

The syntax with the -where keyword is used for simultaneously deleting several records (bulk delete); it will send one single SQL request of shape

  DELETE FROM table WHERE ...

When called as an instance method, the primary key is taken from object columns in memory. If the table is a composite class (see Composition() above), and if the object contains references to lists of component parts, then those will be recursively deleted together with the main object (cascaded delete). However, if there are other component parts in the database, not referenced in the hashref, then those will not be automatically deleted : in other words, the delete method does not go by itself to the database to find all component parts (this is the job of the client code, or sometimes of the database itself).


applyColumnHandler()

  $class->applyColumnHandler($handlerName, \@objects);
  $obj  ->applyColumnHandler($handlerName);

Inspects the target object or list of objects; for every column that exists in the object, checks whether a handler named $handlerName was declared for that column (see method "ColumnHandlers"), and if so, calls the handler. By this definition, if a column is absent in an object, then the handler for that column is not called, even though it was declared in the class.

The results of handler calls are collected into a hashref, with an entry for each column name. The value of each entry depends on how applyColumnHandlers was called : if it was called as an instance method, then the result is something of shape

  {columnName1 => resultValue1, columnName2 => resultValue2, ... }

if it was called as a class method (i.e. if \@objects is defined), then the result is something of shape

  {columnName1 => [resultValue1forObject1, resultValue1forObject2, ...], 
   columnName2 => [resultValue2forObject1, resultValue2forObject2, ...], 
   ... }

If columnName is not present in the target object(s), then the corresponding result value is undef.

hasInvalidColumns()

  my $invalid_columns = $obj->hasInvalidColumns;
  if ($invalid_columns) {
    print "wrong data in columns ", join(", ", @$invalid_columns);
  }
  else {
    print "all columns OK";
  }

Applies the 'validate' handler to all existent columns. Returns a ref to the list of invalid columns, or undef if there are none.

Note that this is validation at the column level, not at the record level. As a result, your validation handlers can check if an existent column is empty, but cannot check if a column is missing (because in that case the handler would not be called).

Your 'validate' handlers, defined through "ColumnHandlers", should return 0 or an empty string whenever the column value is invalid. Never return undef, because we would no longer be able to distinguish between an invalid existent column and a missing column.
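
Here is a self-contained sketch of this convention (hypothetical handlers and helper, not part of the module) :

```perl
use strict;
use warnings;

# hypothetical 'validate' handlers : true for valid data,
# 0 or '' (but never undef) for invalid data
my %validate = (
    d_begin => sub { $_[0] =~ /^\d\d\.\d\d\.\d{4}$/ ? 1 : 0 },
    emp_id  => sub { $_[0] =~ /^\d+$/               ? 1 : 0 },
);

# sketch of hasInvalidColumns : only existent columns are checked
sub invalid_columns {
    my ($record) = @_;
    my @bad = sort grep { exists $record->{$_}
                          && !$validate{$_}->($record->{$_}) }
                   keys %validate;
    return @bad ? \@bad : undef;    # undef when all columns are OK
}

my $bad = invalid_columns({ d_begin => '01.01.1695', emp_id => 'abc' });
# $bad is ['emp_id'] : emp_id is invalid, d_begin is valid,
# and absent columns are simply not checked
```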

expand()

  $obj->expand($role [, @args] )

Executes the method $role to follow an Association, stores the result in the object itself under $obj->{$role}, and returns that result. This is typically used to expand an object into a tree datastructure. Optional @args are passed to $obj->$role(@args), for example for specifying -where, -columns or -order_by options.

After the expansion, further calls to $obj->$role (without any arguments) will reuse that same expanded result instead of calling again the database. This caching improves efficiency, but also introduces the risk of side-effects across your code : after

  $obj->expand(someRole => (-columns => [qw/just some columns/],
                            -where   => {someField => 'restriction'}))

further calls to $obj->someRole() will just return a dataset restricted according to the above criteria, instead of a full join. To prevent that effect, you would need to delete $obj->{someRole}, or to call the role with arguments : $obj->someRole('*').

autoExpand()

  $record->autoExpand( $recurse );

Asks the object to expand itself with some objects in foreign tables. Does nothing by default. Should be redefined in subclasses, most probably through the "AutoExpand" method (with capital 'A'). If the optional argument $recurse is true, then autoExpand is recursively called on the expanded objects.


bless_from_DB()

  $class->bless_from_DB($record);

Blesses $record into an object of the table or view class, and applies the fromDB column handlers.

primKey()

  my @primKeyColumns = $source->primKey;
  my @primKeyValues  = $obj->primKey;

If called as a class method, returns the list of columns registered as primary key for that table (via Schema->Table(..)), or computed as primary key for that view (concatenation of primary keys of joined tables that are in a 1-to-many association).

If called as an instance method, returns the list of values in those columns.

When called in scalar context and the primary key has only one column, returns that column (so you can write my $k = $obj->primKey).

DEPRECATED METHODS

Here are some deprecated methods from previous versions

  • ViewFromRoles is now called "join" (class method in Schema)

  • selectFromRoles is now called "join" (instance method in Source)

  • MethodFromRoles is now called "MethodFromJoin" (class method in Source)

  • selectSth is now written select(..., -result_as => "sth")

  • SqlDialect is now expressed as argument to Schema()

  • applyColumnHandlers is now called applyColumnHandler

  • keepLasth and lasth are no longer needed; if you want to interact with the database statement handle, use the DBIx::DataModel::Statement API.

  • preselectWhere is no longer needed (superseded by the Statement::refine method).

  • -postFetch in select(..., -postFetch => sub {... } ... ) is now called -postBless


join()

join as class method within a Table or View

  my $statement = $table->join(qw/role1 role2 .../)->prepare;

Starting from a given table, returns a reference to a statement that selects a collection of data rows from associated tables, performing the appropriate joins. Internally this is implemented through the define_join() method, with an additional join criterion to constrain on the primary key(s) of $table. That statement cannot be executed yet, because the values of the primary key are not known until we have an instance of $table; but the statement can already be prepared. Later on, we can execute the statement by binding it to an instance of $table :

   my $obj = $table->fetch(...);
   my $rows = $statement->execute($obj)->all;

join as instance method within a Table or View

Creates a statement as just explained above, and immediately binds it to the current object. So for example if $emp->{emp_id} == 987, then

  $emp->join(qw/activities department/)
      ->select(-where => {d_end => undef})

will generate

  SELECT * FROM Activity INNER JOIN Department 
                         ON Activity.dpt_id = Department.dpt_id
           WHERE  emp_id = 987 AND d_end IS NULL

AutoInsertColumns()

  $schema->AutoInsertColumns( columnName1 => sub{...}, ... );
  $table ->AutoInsertColumns( columnName1 => sub{...}, ... );

Declares handler code that will automatically fill column names columnName1, etc. at each insert, either for a single table, or (if declared at the Schema level), for every table. For example, each record could remember who created it and when with something like

  $schema->AutoInsertColumns( created_by => 
    sub{$ENV{REMOTE_USER} . ", " . localtime} );
The handler code will be called as

  $handler->(\%record, $table)

so that it can know something about its calling context. In most cases, however, the handler will not need these parameters, because it just returns global information such as current user or current date/time.

AutoUpdateColumns()

  $schema->AutoUpdateColumns( columnName1 => sub{...}, ... );
  $table ->AutoUpdateColumns( columnName1 => sub{...}, ... );

Just like AutoInsertColumns, but will be called automatically at each update and each insert. This is typically used to remember the author and/or date and time of the last modification of a record. If you use both AutoInsertColumns and AutoUpdateColumns, make sure that the column names are not the same.

When doing an update (i.e. not an insert), the handler code will be called as

  $handler->(\%record, $table, \%where)

where %record contains the columns to be updated and %where contains the primary key (column name(s) and value(s)).

NoUpdateColumns()

  $schema->NoUpdateColumns(@columns);
  $table ->NoUpdateColumns(@columns);

Defines an array of column names that will be excluded from INSERT/UPDATE statements. This is useful for example when some columns are set up automatically by the database (like automatic time stamps or user identification). It can also be useful if you want to temporarily add information to memory objects, without passing it back to the database.

NoUpdate columns can be set for a whole Schema, or for a specific Table class.
