
Laurent Dami and 1 contributor


DBIx::DataModel::Doc::Reference - General reference for DBIx::DataModel


This chapter is part of the DBIx::DataModel manual.


Content of this document

This chapter is the detailed reference description for DBIx::DataModel version 3.0. It covers most public methods of the whole framework, except for a couple of peripheral functionalities which are documented in their own classes.

Other documentation sources

Other documentation chapters include a glossary, a quickstart document, a cookbook and a design description.

Changes from earlier generations of DBIx::DataModel are described in DBIx::DataModel::Doc::Delta_v3, DBIx::DataModel::Doc::Delta_v2 and DBIx::DataModel::Doc::Delta_v1.

Automatic schema generation from external sources is described directly in the generator module. Subclasses of Statement for specific architectures are in DBIx::DataModel::Statement::JDBC and DBIx::DataModel::Statement::Oracle.

The syntax for producing SQL requests is documented in SQL::Abstract::More and its parent class SQL::Abstract.

Structure of the document

This reference starts with methods for defining the static structure of schemas, tables and associations. Then it goes into the runtime features of schema instances and into the details of the Statement class, which is at the core of database queries. Next come the methods for using data sources, i.e. searching, navigating, inserting and updating data. Finally the meta layer is described, which provides support for schema introspection.


Method families

Most methods for defining schema, tables, associations, etc. come in two flavours :

  • a "front-end" method, starting with an uppercase letter, that uses positional parameters. This version is concise and backwards-compatible.

  • a "back-end" method, starting with lowercase define_*, that uses named parameters. This version is more verbose but supports more options.

The invocant for front-end methods can be either the Schema class name, or the meta-schema instance. The invocant for back-end methods must be the meta-schema instance.



Schema()

  my $schema_class = DBIx::DataModel->Schema($schema_name, \%options);

Front-end method for "define_schema()". The call above is equivalent to

  my $meta_schema  = DBIx::DataModel->define_schema(
    class => $schema_name,
    %$options,
  );
  my $schema_class = $meta_schema->class;


define_schema()

  my $meta_schema = DBIx::DataModel->define_schema(
     class => $schema_name,
     %options,
  );

Creates a new Perl class of name $schema_name that represents a database schema. That class inherits from DBIx::DataModel::Schema. At the same time, an instance of DBIx::DataModel::Meta::Schema is also created, for holding meta-information about that schema (for example lists of classes, associations, types, etc. that will be declared within the schema). Possible %options are :


class => $string

Name of the Perl schema class to be created (mandatory).


isa => $parent_class | \@parent_classes

Parent class(es) for that schema class (scalar or arrayref). The default parent is DBIx::DataModel::Schema.


auto_insert_columns => \%handlers

A hashref specifying columns to be automatically inserted in every table. Keys of the hashref are column names, values are handlers (coderefs). This can also be done separately for each table (see the auto_insert_columns parameter to "define_table()").

For example, each record could have a column to remember who created it and when, with something like

  sub who_and_when { return $ENV{REMOTE_USER} . " at " . localtime };

        auto_insert_columns => {created_by => \&who_and_when},

The handler code will be called as

  $record->{$column_name} = $handler->(\%record, $table_class);

Parameters passed to the handler may be useful to gather some information about the calling context; however in most cases the handler will not need them, because usually it just returns global information such as the current user or the current date/time.


auto_update_columns => \%handlers

A hashref specifying columns to be automatically updated in every table. Keys of the hashref are column names, values are handlers (coderefs). This specification can also be done separately for each table (see the auto_update_columns parameter to "define_table()"). Such handlers are called at each update and each insert, receiving the record reference and the table class as arguments, just like the auto_insert handlers described above. A column cannot have both an auto_insert and an auto_update handler.

For example, elaborating on the previous section, each record could also remember the last modification with something like

         auto_insert_columns => {created_by => \&who_and_when},
         auto_update_columns => {updated_by => \&who_and_when},

no_update_columns => \%columns

A hashref specifying columns to be automatically ignored in insert/update operations, for every table. This can also be done separately for each table (see the no_update_columns parameter to "define_table()"). Keys of the hashref are column names; values are indifferent (can be any true value).

This is useful for example when some columns are set up automatically when reading from the database (like computed fields or automatic time stamps), but would be rejected when writing into the database.


sql_no_inner_after_left_join => $bool

An optional boolean; if true, a LEFT OUTER JOIN in a multi-step join specification will force all subsequent joins to also be LEFT. For example in the fake datamodel used in the rest of this documentation, a join like

  HR->join(qw/Employee activities department/)->...

would be interpreted as

  T_Employee LEFT OUTER JOIN T_Activity ON ...
             LEFT OUTER JOIN T_Department ON ...

even if the association between Activity and Department is many-to-one (which theoretically would result in an INNER JOIN by default).


An optional boolean; if true, SQL joins will be expressed wherever possible with a USING clause instead of an ON clause, i.e.

  SELECT * FROM Table1 INNER JOIN Table2 USING(column)

This option can be overridden at the statement level. The advantages of joins with USING are discussed in "joins with USING clause instead of ON" in SQL::Abstract::More.

table_parent, join_parent

Optional application-specific classes, to be used instead of the builtin classes, as parents for tables and joins declared in this schema.

table_metaclass, join_metaclass, association_metaclass, path_metaclass, type_metaclass

Optional application-specific classes, to be used instead of the builtin metaclasses, for instantiating the meta-objects declared in this schema.


Optional application-specific class to be used instead of the builtin class DBIx::DataModel::Statement for instantiating statements. For example you may want to set this option to DBIx::DataModel::Statement::Oracle if you work with an Oracle database.


This chapter deals with all methods for populating the structure of a schema : defining tables, associations, types, navigation methods, etc.

Reflection methods for navigating in the schema structure and getting back information about tables, associations, etc. are described in chapter "META-SCHEMA NAVIGATION", at the end of this document; such methods are seldom needed for regular applications, but they can be useful for building tools around DBIx::DataModel, such as CRUD frameworks, editors, translators, etc.

Front-end methods for data definitions

This subsection quickly introduces the front-end methods for data definitions. The detailed behaviour is described under the back-end methods in the next subsection.


Table()

  $schema_class->Table($class_name, $db_name, @primary_key, \%options);

See "define_table()". The call above is equivalent to

  my $meta_schema = $schema_class->metadm;
  $meta_schema->define_table(class       => $class_name,
                             db_name     => $db_name,
                             primary_key => \@primary_key,
                             %$options);


View()

  $schema->View($class_name, $default_columns, $db_tables, 
                     \%where, @parent_tables, \%options);

This is another front-end method for "define_table()", for creating a new Perl class of name $class_name that represents a SQL SELECT request of shape

  SELECT $default_columns FROM $db_tables [ WHERE %where ]

The call above is equivalent to

  my @parents = map {$meta_schema->table($_)} @parent_tables;
  $schema->metadm->define_table(class           => $class_name,
                                db_name         => $db_tables,
                                where           => \%where,
                                default_columns => $default_columns,
                                parents         => \@parents,
                                %$options);

The $db_tables can be a single table name, or a join between several tables, expressed directly in SQL, like 'Foo INNER JOIN Bar ON Foo.fk=Bar.pk'. The @parent_tables list contains names of Perl table classes from which the view will also inherit; it is a good idea to include as parents all tables mentioned in the join, so that their path methods become available to instances of this view. Be careful about table names : the SQL code in $db_tables should contain database table names, whereas the members of @parent_tables should be Perl table classes. Additional arguments to the define_table() method can be passed in the optional hashref \%options. See also "Views within the ORM" in DBIx::DataModel::Doc::Cookbook.
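
As a hedged sketch (the class name 'ActiveEmployees', the T_* table names and the d_end column are hypothetical, chosen to fit the fake HR datamodel), a view selecting only current activities could be declared as :

  HR->View('ActiveEmployees',                     # Perl class name
           '*',                                   # default columns
           'T_Employee INNER JOIN T_Activity'     # database tables (SQL)
             . ' ON T_Employee.emp_id = T_Activity.emp_id',
           {d_end => undef},                      # implicit WHERE condition
           qw/Employee Activity/);                # parent Perl classes

Instances of that view then inherit the path methods of both Employee and Activity, because those classes were listed as parents.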

Perl views as defined here are totally unknown to the database; they are just abstractions of SQL statements. If you need to access database views, just use the Table() declaration, like for a regular table.


Association()

  $schema->Association([$class1, $role1, $multiplicity1, @columns1],
                       [$class2, $role2, $multiplicity2, @columns2]);

See "define_association()". The call above is equivalent to

  $schema->metadm->define_association(
    A => {table        => $class1->metadm, role       => $role1, 
          multiplicity => $multiplicity1,  join_cols  => \@columns1},
    B => {table        => $class2->metadm, role       => $role2, 
          multiplicity => $multiplicity2,  join_cols  => \@columns2},
    kind => 'Association',
  );


Composition()

  $schema->Composition([$class1, $role1, $multiplicity1, @columns1],
                       [$class2, $role2, $multiplicity2, @columns2]);

See "define_association()". The call above is equivalent to

  $schema->metadm->define_association(
    A => {table        => $class1->metadm, role       => $role1, 
          multiplicity => $multiplicity1,  join_cols  => \@columns1},
    B => {table        => $class2->metadm, role       => $role2, 
          multiplicity => $multiplicity2,  join_cols  => \@columns2},
    kind => 'Composition',
  );

See also "Special semantics for compositions" below.


Type()

  $schema->Type($type_name => %handlers);

See "define_type()". The call above is equivalent to

  $schema->metadm->define_type(name => $type_name, handlers => \%handlers);

Back-end methods for data definitions



define_table()

  $meta_schema->define_table(%options);

Creates a new Perl class of name $class_name that represents a database table or database view. The new class inherits from the table_parent declared in the schema (by default : DBIx::DataModel::Source::Table). At the same time, an instance of the table_metaclass is also created (by default : DBIx::DataModel::Meta::Source::Table), for holding meta-information about that table (database name, primary key, paths to other tables, etc.).

Returns $meta_schema. Entries in %options are :

class => $string

Name of the class to be created (mandatory). If $class_name contains no ::, then the schema name is prepended to it (so the new table class is created in the namespace of the schema, which is a recommended best practice).

db_name => $string

SQL name of the database table or view that will be accessed through this class (mandatory). Actually, it might be any SQL clause, like for example Table1 t1 INNER JOIN Table2 t2 ON t1.fk=t2.pk; see the "View()" method above.

primary_key => $string | \@array

Name of the column (or list of columns) that hold the primary key for that table (mandatory).

default_columns => $string

Whatever will be injected into SELECT statements, when no -columns parameter is explicitly supplied. The default is '*'.

where => \%hash | \@array

An optional reference to a WHERE condition, in SQL::Abstract::More format. That condition will be automatically injected into every SELECT statement on this table. When working with regular tables, this parameter is always empty; it is only useful for declaring DBIx::DataModel views (see the "View()" method above).

parents => [$parent1, ...]

List of table classes from which this table will inherit. This can be useful if your database supports table inheritance (like for example PostgreSQL), and you want to reflect the inheritance structure within the Perl table classes. Another use case is when the $db_name refers to a database view or to an ORM view : in that case it is probably a good idea to declare as parents all tables that participate in the join, so that the methods of those parents are available to instances of the view.


column_types => { $type_name => \@column_names, ... }

An optional hashref, where keys are type names and values are arrayrefs of columns on which this type should be applied; see "define_column_type()".


column_handlers => { $column_name => \%handlers, ... }

An optional hashref, where keys are column names and values are hashrefs of { $handler_name => $handler_body }; see "define_column_handlers()".


auto_insert_columns => \%handlers

A hashref of column names and handlers to be called at each insert operation, like in "define_schema()", but here only for one specific table.


auto_update_columns => \%handlers

A hashref of column names and handlers to be called at each update operation, like in "define_schema()", but here only for one specific table.


no_update_columns => \%columns

A hashref of column names, like in "define_schema()", but only for one specific table.


define_association()

Features common to all kinds of associations

  $meta_schema->define_association(
    name   => $association_name,          # optional
    kind   => $kind,                      # 'Association' or 'Composition'
    A      => {
      table        => $meta_table_instance,
      role         => $role_name,         # optional
      multiplicity => $multiplicity_spec, # ex. "1..*"
      join_cols    => [$col1, ...],       # optional
    },
    B      => { ... },                    # same structure as 'A'
  );

Declares an association between two tables (or even two instances of the same table). The arguments are :


A

A description of the first association end, which is composed of


table

An instance of DBIx::DataModel::Meta::Source::Table (if you only know the table class, that instance can be obtained from $table_class->metadm).


role

The role name of that source within the association. A Perl method of the same name will be defined in the remote source (the other end of the association). Besides, the role name is also used when building joins through

  $schema->join(qw/FirstTable role1 role2 .../)

One of the role names in the association can be anonymous, which is expressed by undef, by an empty string, or by one of the strings '0', 'none' or '---'. If the role is anonymous, there will be no Perl method and no possibility to join in that direction, so this defines a unidirectional association. Anonymous roles in both directions are forbidden (because in that case the association would be useless).

If several associations attempt to define the same role name in the same table, an exception is generated.


multiplicity

The multiplicity specification, i.e. the minimum and maximum number of occurrences of that association end, for any given instance of the other end.

The multiplicity can be expressed either as an arrayref [$min, $max], or as a string "$min..$max". The $max can be '*' or 'n', which is interpreted as the maximum integer value. If expressed as a string, a mere '*' is interpreted as '0..*', and a mere '1' is interpreted as '1..1'.

Numbers different from 0, 1 or * may be given as multiplicity bounds, but these are merely documentary : technically, all that matters is

  • whether the lower bound is 0 or more (if 0, generated joins will be left joins, otherwise inner joins)

  • whether the upper bound is 1 or more (if 1, the associated method returns a single object, otherwise it returns an arrayref)
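
To illustrate with the front-end Association() method and the HR datamodel used in this documentation, the string and arrayref notations may be mixed freely within one declaration; a sketch :

  # '[1, 1]' as an arrayref on one end, '0..*' as a string on the other :
  # both notations declare exactly the same multiplicities
  HR->Association([ 'Department', 'department', [1, 1] ],
                  [ 'Activity',   'activities', '0..*' ]);
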

When the multiplicity is '*', a method named insert_into_... is installed in the class of the other association end. This method creates new objects of the associated class, taking care of the linking automatically. For example :

  $an_employee->insert_into_activities({d_begin => $today,
                                        dpt_id  => $dpt});

This is equivalent to

  $schema->table('Activity')->insert({d_begin => $today,
                                      dpt_id  => $dpt,
                                      emp_id  => $an_employee->{emp_id}});

More detailed explanations of multiplicities can be found in UML textbooks.


join_cols

An arrayref of columns that participate in the database join, for this side of the association. The full database join will be built by creating a LEFT|INNER JOIN ... ON ... clause in which the left-hand and right-hand sides of the ON subclause come from the join_cols of both association ends.

This argument is optional: if absent, it will be filled by default by taking the primary key of the table with minimum multiplicity 1, for both sides of the association. This default behaviour is convenient for data models where primary keys and foreign keys are identical (for example dpt_id both as primary key in Department and as foreign key in Activity). Some data models have a different policy, where primary keys are always named the same (for example id), and foreign keys are named after the related table name : in such models, the default behaviour does not work, and you have to specify the join columns explicitly.
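
As a sketch (the column names id and department_id are hypothetical, chosen to illustrate that naming policy), such a model would declare the join columns explicitly :

  # primary key 'id' on the Department side,
  # foreign key 'department_id' on the Activity side
  HR->Association([qw/Department department 1    id/],
                  [qw/Activity   activities 0..* department_id/]);
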

If the association is many-to-many (i.e. if the maximum multiplicity is greater than 1 on both sides), then join_cols takes a special meaning : it no longer represents database columns, but rather represents two role names (in the sense just defined above) to follow for reaching the remote end of the association. Therefore join_cols must contain exactly 2 items in that case : the path to the intermediate table, and the path from the intermediate table to the remote end. Here is again the example from "SYNOPSIS" in DBIx::DataModel :

  $meta_schema->define_association(
    kind => 'Association',
    A    => {
      table        => My::Schema::Department->metadm,
      role         => 'departments',
      multiplicity => '*',
      join_cols    => [qw/activities department/],
    },
    B    => {
      table        => My::Schema::Employee->metadm,
      role         => 'employees',
      multiplicity => '*',
      join_cols    => [qw/activities employee/],
    },
  );

The design document has a section with more details about such associations.


B

A description of the second association end, following exactly the same principles as for the 'A' end.


name

Optional name for the association (otherwise an implicit name will be built by default from the concatenation of the role names).


kind

A string describing the association kind, i.e. one of : Association, Aggregation or Composition. See "Composition()" below for the additional semantics associated with compositions.

The association also creates instances of DBIx::DataModel::Meta::Path for representing the directional paths between those sources.

Only binary associations can be declared; however, it is possible to define methods joining three or more tables : see "define_navigation_method()".

Special semantics for compositions

Compositions are associations with some additional semantics. In UML class diagrams, compositions are pictured with a black diamond on one side : this side will be called the composite class, while the other side will be called the component class. In DBIx::DataModel, the diamond (the composite class) corresponds to the A association end, and the component class corresponds to the B end, so the order is important (while for plain associations the order makes no difference).

In UML, the intended meaning of a composition is that objects of the component classes cannot exist outside of their composite class. Within DBIx::DataModel, the special semantics attached to associations of kind Composition are :

  • the multiplicity must be 1-to-n

  • the 'B' end of the association (the "component" part) must not be a component of another composition (it can only be a component of one single composite table).

  • this association can be used for auto-expanding the composite object (i.e. automatically fetching all component parts from the database) -- see "expand()" and "auto_expand()"

  • this association can be used for cascaded inserts like

        $composite_class->insert({
          column1          => $val1,
          ...,
          $component_name1 => [{...}, ...],   # arrayref of component subrecords
        });

    The main record will be inserted in the composite class, and within the same transaction, subrecords will be inserted into the component classes, with foreign keys automatically filled with appropriate values.

  • this association can be used for cascaded deletes : the argument to a delete may contain lists of component records to be deleted together with the main record of the composite class.


define_type()

  $meta_schema->define_type(
     name     => $type_name,
     handlers => {
       $handler_name_1 => sub { ... },
       $handler_name_2 => sub { ... },
     },
  );

This declares a type, which is just a hashref of handler names and handler bodies (coderefs). The type can then be applied to some columns in some tables; this is usually done in the Table declaration (column_types argument), or can be applied later through the "define_column_type" method. Unlike tables or associations, a type does not create a Perl class nor does it create meta-objects; it is really just a plain hashref.

Handlers receive the column value through $_[0]. If the value is to be modified (for example for scalar conversions or for inflating values into Perl objects), the result should be put back into $_[0]. In addition to the column value, other info is passed to the handler :

  $handler_body = sub {
    my ($column_value, $obj, $column_name, $handler_name) = @_;
    my $new_val = $obj->compute_new_val($column_value, ...);
    $column_value = $new_val; # WRONG : will be a no-op
    $_[0] = $new_val;         # OK    : value is converted
  };

The second argument $obj is the object from where $column_value was taken -- most probably an instance of a Table or Join class. Use this if you need to read some contextual information, but avoid modifying $obj : you would most probably get unexpected results or create undesired side-effects.

Other arguments $column_name and $handler_name are obvious.

Handler names from_DB and to_DB have a special meaning : they are called automatically just after reading data from the database, or just before writing into the database. Handler name validate is used by the method "has_invalid_columns()".

The "SYNOPSIS" in DBIx::DataModel shows some examples of types : "Date", "Percent", "Multivalue" or "XML".
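
For instance, a 'Date' type along the lines of the SYNOPSIS could translate between ISO dates stored in the database and a dd.mm.yyyy format in memory (the exact regexes below are illustrative) :

  HR->Type(Date =>
    from_DB  => sub {$_[0] =~ s/(\d\d\d\d)-(\d\d)-(\d\d)/$3.$2.$1/ if $_[0]},
    to_DB    => sub {$_[0] =~ s/(\d\d)\.(\d\d)\.(\d\d\d\d)/$3-$2-$1/ if $_[0]},
    validate => sub {$_[0] =~ m/\d\d\.\d\d\.\d\d\d\d/},
  );

Note how each handler modifies $_[0] in place, as explained above.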


define_join()

Participants in the join

This method builds or retrieves a pre-existing subclass of DBIx::DataModel::Source::Join, and returns the associated meta-object (an instance of DBIx::DataModel::Meta::Source::Join).

  my $meta_join = $meta_schema->define_join($table, $path1, $path2, ...);

The arguments start from a given table class and follow one or several associations through their path names; the resulting SQL request is automatically inferred from the associations. For example

  $meta_schema->define_join(qw/Department activities employee/);

is more or less equivalent to

  my $sql = <<~_EOSQL_;
    Department
      LEFT OUTER JOIN Activity ON Department.dpt_id = Activity.dpt_id
      LEFT OUTER JOIN Employee ON Activity.emp_id   = Employee.emp_id
  _EOSQL_

  $schema->View("Department=>activities=>employee", '*', $sql, 
                qw/Department Activity Employee/);

Join kinds (left or inner)

Kinds of joins are chosen according to the multiplicities declared in the associations : if the minimum multiplicity is 0, the join is a LEFT OUTER JOIN; otherwise it is an INNER JOIN (except if $meta_schema->sql_no_inner_after_left_join is true, in which case a first left join forces all following joins to be also 'left', without considering the multiplicities any more). The default kind of join chosen by this rule may be overridden by inserting intermediate connectors in the list, namely '<=>' for inner joins and '=>' for left joins : compare the previous example with

  $meta_schema->define_join(qw/Department <=> activities <=> employee/);

which is equivalent to

  my $sql = <<~_EOSQL_;
    Department
      INNER JOIN Activity ON Department.dpt_id = Activity.dpt_id
      INNER JOIN Employee ON Activity.emp_id   = Employee.emp_id
  _EOSQL_

Table aliases

Participants in the join may be aliased using | as a separator :

  $meta_schema->define_join(qw/Department|dpt activities|act employee|emp/)
              ->select(-columns => qw/dpt.name act.d_begin emp.lastname/,
                       -where   => {"dpt.dpt_name" => {-like => "A%"}});

which generates

  SELECT dpt.name, act.d_begin, emp.lastname
  FROM Department AS dpt
    LEFT OUTER JOIN Activity AS act ON dpt.dpt_id = act.dpt_id
    LEFT OUTER JOIN Employee AS emp ON act.emp_id = emp.emp_id
  WHERE dpt.dpt_name LIKE 'A%'

Path lookups are performed in "last in, first out" order : in

  ..->define_join(qw/FirstTable path1 path2 path3/)

the system must find a path1 in FirstTable, from which it can deduce the second table. Then it must find a path2 in Table2, or otherwise in FirstTable, in order to deduce the third table. In turn, path3 must be found either in Table3, or in Table2, or in FirstTable, etc. To resolve ambiguities, path names may be prefixed by the name or alias of the targeted source, such as :

  ..->define_join(qw/FirstTable path1|p1 p1.path2|p2 .../)

Name of the join subclass

The name of the resulting join class will be composed by concatenating table, connectors and path names, including optional aliases. If the same sequence of table and paths was already encountered before, the Perl class already exists, and its corresponding meta-object is returned; otherwise, a new Perl class is created together with its meta-object.

Joins versus loops

The main purpose of define_join is to gain efficiency in interacting with the database. If we write

  foreach my $dpt (@{$schema->table('Department')->select}) {
    foreach my $act (@{$dpt->activities}) {
      my $emp = $act->employee;
      printf "%s works in %s since %s\n", 
         $emp->{lastname}, $dpt->{dpt_name}, $act->{d_begin};
    }
  }

many database calls are generated behind the scenes, in the loops that call the activities and employee methods. Instead we could write

  my $join = $meta_schema->define_join(qw/Department activities employee/);
  foreach my $row (@{$join->select}) {
    printf "%s works in %s since %s\n", 
      $row->{lastname}, $row->{dpt_name}, $row->{d_begin};
  }

which generates one single call to the database.

Meta-table methods


define_navigation_method()

  $meta_source->define_navigation_method(
    $meth_name => qw/role1 role2 .../,
    \%optional_select_args,             # optional
  );

Inserts into $meta_source->class a new method named $meth_name, that will automatically call "define_join()" and then select(), passing %optional_select_args to the select call. This is useful for joining several tables at once, so for example with

    employees => qw/activities employee/

we can then write

  my $empl_ref = $some_dept->employees(-where   => {gender => 'F'},
                                       -columns => [qw/firstname lastname/]);

This method is used internally to implement many-to-many associations (see "many-to-many association" in DBIx::DataModel::Doc::Glossary), which are a recommended higher-level notion when only two tables are involved. Direct calls to define_navigation_method are still useful when three or more tables are involved.

The last argument to define_navigation_method is an optional hashref; if present, the hash will be passed as initial argument to the select call.


define_column_type()

  $meta_table->define_column_type($type_name, @column_names);

Registers type $type_name to be applied to columns with name in @column_names, within the target $meta_table.


define_column_handlers()

  $meta_table->define_column_handlers($column_name,
    $handler_name_1 => sub { ... },
    ...,
  );

Registers handlers to be applied to $column_name, within the target $meta_table. The main purpose of this method is for implementing the higher-level "define_column_type()" method; however it can also be called directly, without the need for defining a type.

If another handler is already installed within $column_name under name $handler_name_1, then both subs are automatically composed into a new sub performing both coderefs. The execution order will correspond to the handler declaration order, except for the special case from_DB, where the last declared handler is executed first.

Handlers are called through the "apply_column_handler()" method. Given a handler name, that method will automatically find all columns having a handler of that name, and will call the corresponding coderefs.



define_auto_expand()

Generates an "auto_expand()" method for the class, that will auto-expand the roles listed (i.e. will call the appropriate method and store the result in a local slot within the object). In other words, the object knows how to expand itself, fetching information from associated tables, in order to build a data tree in memory. Only component names declared through Composition() may be auto-expanded.

Be careful about performance issues: when an object uses auto-expansion through a call to "auto_expand()", every auto-expanded role will generate an additional call to the database. This might be slow, especially if there are recursive auto-expansions; so in some cases it will be more appropriate to flatten the tree and use database joins.


A schema is an instance of a subclass of DBIx::DataModel::Schema. The subclass is associated with a meta-schema where all information about tables, associations, etc. is kept. Each instance of that subclass has an internal state containing a database connection, a debugging status, etc.

A schema subclass can have several instances; in that case, the application runs in multi-schema mode. However, multi-schema mode is only useful in some special situations, like for example when transferring data between several databases; in most common cases, single-schema mode is enough.

Single-schema mode is activated by default, which means that all method calls can be performed directly on the subclass; the subclass will manage a singleton instance, and will delegate calls to that singleton. This is the recommended way to work with DBIx::DataModel, because it uses less memory, and simplifies the application code (there is no need to pass a $schema reference around between all modules).

Multi-schema mode is activated by calling the new() method, as many times as needed. Once this mode is activated, it is not possible to go back to single-schema mode. Furthermore, multi-schema mode should be activated before the singleton has been created, i.e. before any call to the class methods described below.

Because of this duality, all methods listed below as $schema->some_method(...) can be called either as class methods (single-schema mode) or as instance methods (multi-schema mode).

Creating schema instances


new()

  my $schema = $schema_subclass->new(%options);

As just explained above, invoking the constructor activates multi-schema mode, and returns a new schema instance. Options to be passed to the constructor are exactly the same as those described for the "define_schema()" method.


singleton()

  my $schema = $schema_subclass->singleton;

When in single-schema mode, this method returns the singleton instance associated with the $schema_subclass. When in multi-schema mode (i.e. after Schema::new() has been called at least once), it raises an error.

Client code rarely needs to call singleton() explicitly; when in single-schema mode, method calls are usually expressed as class methods, and the singleton is retrieved automatically.

Manipulating schema state


dbh()

  my $dbh = DBI->connect(...);
  $schema->dbh($dbh, %options);        # set
  $schema->dbh([$dbh, %options]);      # set, alternative syntax

  my $dbh             = $schema->dbh;  # get back just the dbh
  my ($dbh, %options) = $schema->dbh;  # get back dbh plus options

Returns or sets the DBI database handle used by this schema (see DBI). The $dbh handle must have its RaiseError property set to a true value.

In %options you may pass any key-value pairs, and retrieve them later by calling dbh in a list context. This may be useful for holding driver-specific information, or for holding values to be used by column handlers -- for example you may connect with

  $schema->dbh($dbh, USER_ID => 'arthur');

and have an "auto_insert_columns" handler of the form

  ...->auto_insert_columns(created_by => sub {
     my ($record, $table) = @_;
     my ($dbh, %options)  = $table->schema->dbh;
     my $user_id = $options{USER_ID} || 'anonymous';
     return $user_id;
  });

The only entry in %options which is used by DBIx::DataModel itself is returning_through, for driving how database-generated keys are retrieved : this is explained in the "insert()" method documentation.

Calls to the dbh method for changing the database handle are allowed at any time, except when a transaction is in progress. However, a nested transaction may temporarily change the database handle by supplying it as an argument to the "do_transaction()" method.

To unset the database handle, call $schema->dbh(undef).


  $schema->debug(1);             # will warn for each SQL statement
  $schema->debug($debug_object); # will call $debug_object->debug($sql)
  $schema->debug(undef);         # turn off debugging

Debug mode is useful for seeing SQL statements generated by DBIx::DataModel. The argument to the debug() method can be :

  • any true value : this will print all SQL statements to the console through calls to Perl's warn function.

  • A Perl object that has a debug method : that method will be called with the SQL content. Such debug objects will be typically generated from Log::Log4perl or Log::Dispatch.

  • a false value : this turns off debugging
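Such a debug object can be hand-rolled in a few lines; here is a minimal sketch (the class name My::SqlLogger is hypothetical, not part of the distribution) :

```perl
use strict;
use warnings;

# a minimal debug object : any object with a debug() method qualifies
package My::SqlLogger;
sub new   { my ($class) = @_; return bless {lines => []}, $class }
sub debug { my ($self, $sql) = @_; push @{$self->{lines}}, $sql }

package main;
my $logger = My::SqlLogger->new;
$logger->debug("SELECT * FROM T_Employee");
# $schema->debug($logger);   # every generated SQL statement would now reach debug()
```

In real applications the same role is usually played by a logger from Log::Log4perl or Log::Dispatch.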

There is also another way to see the SQL code for one particular statement :

  my $spy_sql = sub {my ($sql, @bind) = @_;
                     print STDERR join "\n", $sql, @bind;
                     return ($sql, @bind);};
  my $result = $source->select(-columns   => \@columns,
                               -where     => \%criteria,
                               -post_SQL  => $spy_sql);


  $schema->sql_abstract($an_SQL_Abstract_More_instance); # set
  my $sqlam = $schema->sql_abstract;                     # get

Sets or retrieves the instance of SQL::Abstract::More used by this $schema. If the client code does not set it explicitly, an instance is created implicitly. The reason for using something other than this default might be to control some specific parameters of SQL::Abstract::More, or to subclass it.


  $schema->dbi_prepare_method($method);      # set
  my $method = $schema->dbi_prepare_method;  # get

Sets or retrieves the method sent to DBI for preparing SQL statements. The default is prepare; it can be set to prepare_cached instead (see DBI).


  $schema->placeholder_prefix($prefix);      # set
  my $prefix = $schema->placeholder_prefix;  # get

Sets or retrieves the prefix string to recognize "named placeholders" within a statement. That prefix should never match any regular data encountered in your application; the default is '?:'. Examples of usage are given in the design document.


  $schema->autolimit_firstrow($boolean);                     # set
  my $has_autolimit_firstrow = $schema->autolimit_firstrow;  # get

If this flag is true, calls to select(..., -result_as => 'firstrow') will automatically add -limit => 1. The result should be the same, but the -limit clause may give a hint to the database for optimizing the query.


  $schema->select_implicitly_for($string);      # set
  my $string = $schema->select_implicitly_for;  # get

Sets or retrieves a default value for the -for argument to select() : for example it may add FOR READ ONLY or FOR UPDATE.

For statements with -result_as => 'subquery', this default value is ignored (because the FOR clause in an SQL statement only makes sense at the top level, not in a subquery).


  $schema->db_schema($string);      # set
  my $string = $schema->db_schema;  # get

Sets or retrieves a string to be used as database schema name. If non-empty, that string will be prepended to table names in the generated SQL statements. This may be useful in situations where the database connection has access to several schemata of similar structure. Suppose for example that there is a DEV schema similar to the production schema; then the following code

  my $rows = $schema->join(qw/Activity employee/)->select(...)

will generate

  SELECT ... FROM DEV.Activity INNER JOIN DEV.Employee ...

instead of

  SELECT ... FROM Activity INNER JOIN Employee ...

Technically, the alteration of table names is performed by the "db_from()" method.
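The qualification of table names can be pictured with a tiny standalone sketch (qualify_table is a hypothetical helper, not part of the API; the real work is done inside db_from()) :

```perl
use strict;
use warnings;

# prepend the database schema name, if any, to a table name
sub qualify_table {
  my ($db_schema, $table) = @_;
  return defined $db_schema && length $db_schema ? "$db_schema.$table" : $table;
}

my $qualified = qualify_table('DEV', 'Activity');   # 'DEV.Activity'
my $plain     = qualify_table(undef, 'Activity');   # 'Activity'
```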

Setting the db_schema remains in effect until the next call to that method with a different string, or with undef.

For a temporary change of database schema, see method with_db_schema() below.


  $rows = $schema->with_db_schema($string)->table($table_name)->select(...);
  $rows = $schema->with_db_schema($string)->join(qw/Table path1 ../)->...;

This method returns a copy of the current schema, with the db_schema attribute set to $string. The original $schema is left untouched. This is useful for a temporary switch of database schema, as opposed to the db_schema() method above that performs a permanent change.


  {
    my $scope_guard = $schema->localize_state(@schema_members_to_localize);

    ... # do some work, possibly change state
  } # $scope_guard out of scope : previous state of $schema is restored

Applies dynamic scoping to a $schema, like Perl's local construct (see "Temporary Values via local" in perlsub). Here however, since local is not perfectly suited for private class data, we provide a solution which is similar in spirit, but slightly different in implementation. The localize_state method internally takes a copy of the current state, and returns a handle to it. The handle should be stored in a my variable; when that variable goes out of scope (at block exit), then the previous state is restored.

The optional argument @schema_members_to_localize specifies precisely which schema members should be localized. When called with an empty list, the default is : dbh, debug, select_implicitly_for, dbi_prepare_method, db_schema.
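The scope-guard mechanism can be illustrated in plain Perl (My::ScopeGuard is a hypothetical class, not the real implementation) :

```perl
use strict;
use warnings;

# a scope guard runs a restore callback when it is destroyed
package My::ScopeGuard;
sub new     { my ($class, $restore) = @_; return bless {restore => $restore}, $class }
sub DESTROY { my ($self) = @_; $self->{restore}->() }

package main;
my %state = (debug => 0);
{
  my $saved = $state{debug};
  my $guard = My::ScopeGuard->new(sub { $state{debug} = $saved });
  $state{debug} = 1;     # temporary change inside the block
}                        # $guard goes out of scope => previous state restored
# here $state{debug} is 0 again
```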


  my $meta_schema = $schema->metadm;

Returns the meta-schema instance associated with the schema.

Accessing data sources


Data is stored in data sources that are accessed through a given schema; therefore, before issuing data manipulation statements, we need to establish a relationship between a data source and a schema : this is the job of the two methods below. Both create an instance of the source with one single field __schema pointing to the $schema. That object can then perform data selection, update or deletion.


  my $source = $schema->table($table_name);

This is the recommended way to reach a table from a schema. The $table_name is the Perl name that was declared in "define_table()", not the database name.

When in single-schema mode, the table() call can be replaced by a direct call to the table class: so for example, if we have a schema class HR containing a table Employee, the following calls are equivalent :

  $result = HR::Employee->select(...)
  $result = HR->table('Employee')->select(...)
  $result = HR->singleton->table('Employee')->select(...)

In the first syntax above, the HR::Employee subclass will implicitly reach for its schema singleton and for its meta-table.

When in multi-schema mode, the only way is through the table() method :

  my $schema1 = HR->new(dbh => dbh1);
  my $schema2 = HR->new(dbh => dbh2);
  $result1 = $schema1->table('Employee')->select(...);
  $result2 = $schema2->table('Employee')->select(...);

This Schema::table() method should not be confused with the meta-schema navigation method "Meta::Schema::table()", described at the end of this document. It is also different from the uppercase Table() method, used as a proxy to "define_table".


  my $source = $schema->db_table($db_table_name);

Works like the table() method above, except that the argument is the database table name instead of the Perl class name.


  my $source = $schema->join($table, $path1, $path2, ...);

This method calls "define_join()" to define or retrieve a join subclass; then it creates an instance of that subclass connected to the $schema. From that object, data can be retrieved through the "select()" method.

This Schema::join() method should not be confused with the meta-schema navigation method "Meta::Schema::join()", described at the end of this document.

Besides, a derivative of the present method is defined in "Source::join()".



  my $coderef = sub {$table1->insert(...); $table2->update(...); ...};
  $schema->do_transaction($coderef);

Evaluates the code within a transaction. In case of failure, the transaction is rolled back, and an exception is raised, with a message containing the initial error and the status of the rollback (because the rollback itself may also fail). If you need finer details, you can treat the exception as an object with two methods initial_error and rollback_errors:

    eval {$schema->do_transaction($coderef); 1}
    or do {my $err = $@;
           explain($err->initial_error, $err->rollback_errors)};

Usually the coderef passed as argument will be a closure that may refer to variables local to the environment where the closure was created.

Nested calls to do_transaction are supported : only the top-level call will actually initiate the transaction and commit it. An exception at any level of transaction will abort and rollback the whole thing.

If the nested transaction needs to temporarily work on a different database handle, it may do so by supplying the dbh and its options as additional arguments :

  $schema->do_transaction($coderef, $new_dbh, %new_dbh_options);

When called in this form, do_transaction will temporarily set the dbh to the supplied value, and then return to the previous dbh when the nested transaction is finished. However, the commit on the dbh of that nested transaction will be delayed until the top-level transaction is finished.

There is no support (yet!) for nested transactions with intermediate savepoints.

If the dbh is an instance of DBIx::RetryOverDisconnects, do_transaction() behaves like "txn_do" in DBIx::RetryOverDisconnects, i.e. it will inspect if the failure was due to a disconnection, and in that case will try again, as many times as required in the TxnRetries attribute. Of course one should be careful that the coderef does not create side-effects outside of the database, because those would be executed several times.
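The retry behaviour can be pictured with a simplified standalone sketch (with_retries is hypothetical and, unlike the real txn_do, does not check whether the error was a disconnection) :

```perl
use strict;
use warnings;

# keep calling $code until it succeeds or $max_retries failures occurred
sub with_retries {
  my ($code, $max_retries) = @_;
  my $failures = 0;
  while (1) {
    my $result = eval { $code->() };
    return $result unless $@;
    die $@ if ++$failures > $max_retries;   # exhausted : rethrow the error
  }
}

my $calls = 0;
my $value = with_retries(sub { die "transient\n" if ++$calls < 3; "ok" }, 5);
# succeeds on the third attempt
```

This also illustrates the warning above : the coderef runs several times, so any side-effects outside of the database would be repeated.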


  $schema->do_after_commit(sub {...});

This method can only be called from within a transaction. The coderef supplied as argument will be called automatically after the transaction is finished successfully; as the name implies, this is guaranteed to happen only after the commit, even if this is called inside a nested transaction where the final commit only happens later, in a different module.

To understand why this method is useful, consider a situation like this

  sub function1 {
    my @keys = $schema->do_transaction(sub {do_complex_inserts()});
    publish_to_outer_world(@keys);
  }

  sub function2 {
    $schema->do_transaction(sub {function1(); ...});
  }

Here the call to publish_to_outer_world(@keys) is likely to fail, because this happens inside a nested transaction and therefore the keys returned by the previous line are not committed yet; as a result, the "outer world" (especially if it is another process) cannot access those keys until the outer transaction is finished.

To avoid this problem, function1 should be rewritten as

  sub function1 {
    $schema->do_transaction(sub {
      my @keys = do_complex_inserts();
      $schema->do_after_commit(sub {publish_to_outer_world(@keys)});
    });
  }
so that publish_to_outer_world(@keys) only happens after the end of function2().

Multiple calls to do_after_commit() are allowed; the coderefs will be executed in the same order as they were registered. If the transaction is aborted, all coderefs are dismissed.

Other schema methods


  $schema->unbless($obj1, $obj2, ...);

Recursively applies "damn" in Acme::Damn to remove all class information from its arguments : these become plain Perl hashrefs, arrayrefs or scalars. Internal references to __schema are also removed.

This may be useful if the datatree returned by a select(..) / expand(..) needs to be exported through an external module that only considers plain datastructures; this is the case for example with "Dump" in YAML::Syck.

In list context, the unbless() method returns its list of arguments; in scalar context, it returns the first argument.
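The effect of unbless() can be approximated in plain Perl; this sketch builds plain copies instead of unblessing in place (plain_copy is a hypothetical helper; the real method uses Acme::Damn and modifies its arguments) :

```perl
use strict;
use warnings;
use Scalar::Util qw(reftype blessed);

# recursively build a plain copy, dropping class info and __schema entries
sub plain_copy {
  my ($thing) = @_;
  my $t = reftype($thing) // '';
  return { map  { $_ => plain_copy($thing->{$_}) }
           grep { $_ ne '__schema' } keys %$thing }   if $t eq 'HASH';
  return [ map  { plain_copy($_) } @$thing ]          if $t eq 'ARRAY';
  return $thing;                                      # plain scalar
}

my $row  = bless {name => 'Hector', __schema => {}}, 'HR::Employee';
my $data = plain_copy($row);    # plain hashref {name => 'Hector'}
```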


A statement object encapsulates a SELECT request to the database. Information for the request may be assembled in several steps : first knowing from which datasource to query, including its database connection, then deciding about columns to retrieve, about filtering criteria, etc. Once everything is ready, the database request is issued, and the results are collected. These steps may occur either all at once (which is the most frequent case), or they may proceed through several calls to intermediate methods; see for example the methods "refine()", "prepare()", "execute()" below.

By default statements are created as instances of DBIx::DataModel::Statement; but the schema may define another "statement_class()" for application-specific purposes : for instance the DBIx::DataModel::Statement::Oracle subclass in this distribution adds some methods specific to DBD::Oracle data sources, for implementing pagination through Oracle cursors.

Statement constructor, accessor and proxy methods


  my $statement 
    = DBIx::DataModel::Statement->new($datasource, %options);

This is the statement constructor. The first argument $datasource must be an instance of DBIx::DataModel::Source. If present, other %options are delegated to the refine() method.

Explicit calls to the statement constructor are exceptional; the usual way to create a statement is through "Source::select()".


Accessor for the statement's datasource.


Accessor for the statement's status. This is a dualvar with a string component (new, refined, sqlized, prepared, executed) and an integer component (1, 2, 3, 4, 5).
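The dualvar behaviour can be demonstrated with Scalar::Util (the number/string pair below is taken from the status list above) :

```perl
use strict;
use warnings;
use Scalar::Util qw(dualvar);

# a dualvar behaves as a number or a string depending on context
my $status = dualvar(5, "executed");
my $as_num = $status + 0;    # numeric context  : 5
my $as_str = "$status";      # string context   : "executed"
```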


Accessor for the lower-level DBI statement handle. If that handle is not yet created, $self->prepare is called automatically in order to produce the sth.


Proxy for $self->source->metadm.


Proxy for $self->source->schema.

Data selection


  $result = $statement->select(
     -columns         => \@columns, 
       # OR : -columns => [-DISTINCT => @columns],
     -where           => \%where_criteria,
       # OR : -fetch   => $key,
       # OR : -fetch   => \@key,
     -where_on        => \%where_on_criteria,
     -group_by        => \@groupings,
     -having          => \%having_criteria,
     -order_by        => \@order,
     -for             => $purpose,
     -post_SQL        => sub {...},
     -pre_exec        => sub {...},
     -post_exec       => sub {...},
     -post_bless      => sub {...},
     -prepare_attrs   => \%attrs,
     -limit           => $limit,
     -offset          => $offset,
     -page_size       => $page_size,
     -page_index      => $page_index,
     -column_types    => \%column_types,
     -join_with_USING => $boolean,
     -result_as       => 'rows' || 'firstrow' || [hashref => @cols] || etc.,
  );

Calls "refine()" to process arguments (if any), and returns a result as specified by the -result_as argument (see below). If necessary, intermediate steps are automatically performed, like calling "sqlize()", "prepare()" and "execute()".

Arguments are all optional and are passed by name :

-columns => \@columns

\@columns is a reference to an array of strings. Each string is either a column name or a star * to mean "all columns"; both may be preceded by a table name, like in usual SQL, if disambiguation is necessary. SQL functions are also supported. Column parsing is performed by the auxiliary module SQL::Abstract::More.

Initial words in @columns that start with a hyphen are treated as verbatim SQL : in particular, -columns => [-DISTINCT => qw/col1 col2 .../] will yield

  SELECT DISTINCT col1, col2, ... FROM ...

A '|' in a column is translated into an 'AS' clause, according to the current SQL dialect in SQL::Abstract::More : this is convenient when using perl qw/.../ operator for columns, as in

  -columns => [ qw/table1.longColumn|t1lc
                   table2.longColumn|t2lc/ ],

Column aliasing should be avoided on key columns (either primary or foreign keys), because path methods will no longer be able to navigate through the joins (currently DBIx::DataModel is not clever enough to rename its internal join constraints according to column aliases). Aliasing on non-key columns is OK, and column handlers will operate properly on aliased columns.

The argument to -columns can also be a string instead of an arrayref, like for example "c1 AS foobar, MAX(c2) AS m_c2, COUNT(c3) AS n_c3"; however this is mainly for backwards compatibility. The recommended way is to use the arrayref notation as explained above :

  -columns => [ qw/  c1|foobar   MAX(c2)|m_c2   COUNT(c3)|n_c3  / ]

If omitted, \@columns takes the default, which is usually '*', unless modified through the default_columns argument to "define_table()".

No verification is done on the list of retrieved \@columns; in particular, the list need not include primary or foreign keys --- but if they are absent, later attempts to perform joins or updates will obviously fail. References to columns absent from the datasource will raise an error at the database level, not at the DBIx::DataModel level.

-where => \%where_criteria

\%where_criteria is a reference to a hash or array of criteria that will be translated into SQL clauses. In most cases, this will just be something like {col1 => 'val1', col2 => 'val2'}. However, trees of hashrefs and arrayrefs may express much more complex combinations, involving nested boolean logic, subqueries, etc. --- see the "select" method in SQL::Abstract for a detailed description of the structure of the -where argument. Incidentally, the argument can also be a plain SQL string like "col1 IN (3, 5, 7, 11) OR col2 IS NOT NULL".

-fetch => \@columns

equivalent to

  ...->select(-where => {<primary_key_column0> => $columns[0],
                         <primary_key_column1> => $columns[1], ...},
              -result_as => "firstrow")

If the primary key ranges on one single column (which is the most frequent case), then the argument to -fetch can also be a single scalar value :

  ...->select(-fetch => $key)

When calling a table directly, the select(-fetch => ..) syntax is awkward; you will most certainly prefer the syntactic sugar offered by the fetch() method :

  $schema->table($table_name)->fetch(@key);

However, the -fetch notation is useful when walking through association roles :

  $employee->activities(-fetch => $act_id)

This example will generate the following SQL

  SELECT * FROM activity WHERE act_id=$act_id AND emp_id=$employee->{emp_id}

Notice how this is different from

  $schema->table('Activity')->select(-fetch => $act_id)

which would generate

  SELECT * FROM activity WHERE act_id=$act_id

Both examples would end up with the same record, but in the first case there is an additional check that this record really belongs to the given employee.

In presence of -fetch, arguments -where and -result_as are not allowed.

-where_on => \%where_on_criteria

  $schema->join(qw/Employee activities department/)->select(
    -where    => {firstname => 'Hector',
                  dpt_name  => 'Music'},
    -where_on => {
       T_Activity   => {d_end => {"<" => '01.01.2001'}},
       T_Department => {dpt_head => 999},
    },
  );

Adds some additional criteria into the JOIN ... ON ... SQL statement. These criteria are expressed as a hashref; keys of that hashref are names of database tables participating in the join. The criteria will be combined with regular foreign key constraints produced by the join; so in the (silly) example above, the generated SQL is :

     SELECT * FROM T_Employee 
       LEFT OUTER JOIN T_Activity
         ON T_Employee.emp_id = T_Activity.emp_id AND d_end < ?
       LEFT OUTER JOIN T_Department 
         ON T_Activity.dpt_id = T_Department.dpt_id AND dpt_head = ?
       WHERE dpt_name = ? AND firstname = ?

Depending on the database system, results from such additional criteria within the ON clauses may be different from a usual WHERE criteria at the end of the statement; if in doubt, read your vendor's manual and do experiments.

-group_by => "string" or -group_by => \@array

adds a GROUP BY clause in the SQL statement. Grouping columns are specified either by a plain string or by an array of strings, like for the -columns argument.

-having => "string" or -having => \%criteria

adds a HAVING clause in the SQL statement (which only makes sense together with a GROUP BY clause). This is like a -where clause, except that the criteria are applied after grouping has occurred.

-order_by => \@order

\@order is a reference to a list of columns for sorting. It can also be a plain SQL string like "col1 DESC, col3, col2 DESC". Columns can also be prefixed by '+' or '-' to indicate sorting direction, so for example -order_by => [qw/-col1 +col2 -col3/] will generate the SQL clause ORDER BY col1 DESC, col2 ASC, col3 DESC.
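The prefix convention can be pictured with a small standalone sketch (order_clause is a hypothetical helper; the real translation is performed by SQL::Abstract::More) :

```perl
use strict;
use warnings;

# translate '+'/'-' column prefixes into ASC/DESC directions
sub order_clause {
  my @parts;
  for my $spec (@_) {
    my $col = $spec;                    # copy : don't modify caller's data
    if ($col =~ s/^-//) { push @parts, "$col DESC" }
    else                { $col =~ s/^\+//; push @parts, "$col ASC" }
  }
  return "ORDER BY " . join ", ", @parts;
}

my $sql = order_clause(qw/-col1 +col2 -col3/);
# "ORDER BY col1 DESC, col2 ASC, col3 DESC"
```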

-for => $clause

specifies an additional clause to be added at the end of the SQL statement, like -for => 'READ ONLY' or -for => 'UPDATE'.

-post_SQL => sub{...}

hook for specifying a callback function to be applied on SQL code and bind values, before preparing the statement. It will be called as follows:

  ($sql, @bind) = $args->{-post_SQL}->($sql, @bind) if $args->{-post_SQL};

-pre_exec => sub{...}, -post_exec => sub{...}

hooks for specifying callback functions to be applied on the DBI statement handle, just before or just after invoking execute(). So the sequence will be more or less like this:

  $sth = $dbh->prepare($sql_statement);
  $pre_exec_callback->($sth)  if $pre_exec_callback;
  $sth->execute(@bind);
  $post_exec_callback->($sth) if $post_exec_callback;

This is mostly useful if you need to call driver-specific functions at those stages.

-post_bless => sub{...}

hook for specifying a callback function to be called on data rows. The callback will be called after bless_from_DB, i.e. the row is already an object of the proper class and column handlers have been applied.

-prepare_attrs => \%attrs

Optional attributes that will be transmitted to "prepare" in DBI.

-page_size => $page_size

specifies how many rows will be retrieved per "page" of data. Default is unlimited. When specified, this automatically implies -limit.

-page_index => $page_index

specifies the page number (starting at 1). Default is 1. When specified, this automatically implies -offset.

-limit => $limit

limit to the number of rows that will be retrieved. Automatically implied by -page_size.

-offset => $offset

Automatically implied by -page_index.

-column_types => \%column_types

dynamically specifies some column types at the statement level. Usually column types are defined at table definition time, but it may be necessary to define additional types within a particular statement, for example when using database functions and/or aliases. The argument is a hashref in which keys are names of column types defined within the schema, and values are arrayrefs of column names :

   select(-columns      => [qw/ MAX(date_col)|max_date
                                MIN(date_col)|min_date ... /],
          -column_types => { Date => [qw/max_date min_date/] },
          ...);

-dbi_prepare_method => $method_name

overrides the "dbi_prepare_method()" specified at the schema level.

-join_with_USING => $boolean

overrides the "join_with_USING()" parameter specified at the schema level.

-result_as => $result_kind

  my $result = $source->select(..., -result_as => $subclass_name);
  # or
  my $result = $source->select(..., -result_as => [$subclass_name, @args_for_constructor]);

specifies what kind of result will be produced. Some result kinds are builtin features of DBIx::DataModel; they are listed below. Other result kinds may be added by subclasses within client code. The system will put the first letter of $subclass_name in uppercase, and then try to load a class named DBIx::DataModel::Schema::ResultAs::$subclass_name or My::Schema::ResultAs::$subclass_name (assuming that My::Schema is the class name of the current schema). An instance of that class is then created, passing @args_for_constructor to the new() method. Then the result is obtained by calling get_result() on that instance, passing the current statement as argument.

Builtin result kinds are :


rows

The result will be a ref to an array of rows, each of them blessed into an object of the source class. This is the default result kind. If there are no data rows, a ref to an empty array is returned.


firstrow

The result will be just the first data row, blessed into an object of the source class. If there is no data, undef is returned.

hashref || [hashref => @cols]

The result will be a hashref. Keys in the hash correspond to distinct values of the specified columns, and values are data row objects. If the argument is given as [hashref => @cols], the column(s) are specified by the caller; otherwise if the argument is given as a simple string, @cols will default to $source->primary_key. If there is more than one column, the result will be a tree of nested hashes. In principle, columns in @cols should contain unique values; nevertheless if it happens that multiple rows have the same values for @cols, then later rows overwrite earlier ones.
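The collapsing of rows into (possibly nested) hashes can be sketched in plain Perl (rows_to_hash is a hypothetical helper, not the real ResultAs implementation) :

```perl
use strict;
use warnings;

# build a tree of nested hashes keyed by the values of @cols;
# the leaf level maps the last key column to the row itself
sub rows_to_hash {
  my ($rows, @cols) = @_;
  my %result;
  for my $row (@$rows) {
    my $node = \%result;
    $node = $node->{ $row->{$_} } //= {} for @cols[0 .. $#cols - 1];
    $node->{ $row->{$cols[-1]} } = $row;    # later rows overwrite earlier ones
  }
  return \%result;
}

my $rows = [ {dpt => 'IT', emp_id => 1, name => 'Ada'},
             {dpt => 'IT', emp_id => 2, name => 'Alan'} ];
my $tree = rows_to_hash($rows, qw/dpt emp_id/);
# $tree->{IT}{1} is Ada's row, $tree->{IT}{2} is Alan's row
```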

flat_arrayref (or just flat)

The result will be a ref to an array that gathers scalar values from each row. Usually this is combined with a -columns argument with one single column, to get a vertical slice from a resultset, like in

  my $all_names = $schema->table('People')->select(
    -columns   => [-DISTINCT => qw/firstname/],
    -result_as => 'flat_arrayref',
  );
  print sort @$all_names;

However, it may also be used for example to fill a hash from pairs retrieved from each row, like in

  my $pairs = $schema->table('People')->select(
    -columns   => [qw/pers_id firstname/],
    -result_as => 'flat_arrayref',
  );
  my %hash = @$pairs;

Finally, it can be convenient for avoiding column aliases, when using aggregator functions :

  my $array_ref = $source->select(-columns   => [qw/MAX(col1)
                                                    AVG(col2)
                                                    COUNT(col3)/],
                                  -where     => ...,
                                  -result_as => 'flat_arrayref');
  my ($max_col1, $avg_col2, $count_col3) = @$array_ref;

statement

Returns the statement itself; data rows will be retrieved later, through the "next()" or "all()" methods. A typical usage pattern is :

  my $statement = $schema->table($table_name)->select(
    -where     => \%criteria,
    -result_as => 'statement',
  );
  while (my $row = $statement->next) {
    ...
  }

fast_statement

The result is like a normal statement, except that successive rows will be fetched into the same memory location, using DBI's fetch and bind_columns methods. This is the fastest way to get data; however, pay attention to the following warning from DBI's documentation : Note that the same array reference is returned for each fetch, so don't store the reference and then use it after a later fetch. Also, the elements of the array are also reused for each row, so take care if you want to take a reference to an element.

Since each row must be retrieved separately, calls to all() or next(10) on a fast statement make no sense and therefore generate an exception.


sth

The result will be an executed DBI statement handle. Then it is up to the caller to retrieve data rows using the DBI API. If needed, these rows can be later blessed into appropriate objects through bless_from_DB().


sql

In scalar context, the result will just be the generated SQL statement. In list context, it will be ($sql, @bind), i.e. the SQL statement together with the bind values.


subquery

Returns a ref to an arrayref containing \["($sql)", @bind]. This is meant to be passed to a second query through the -in or -not_in operator of SQL::Abstract, as in :

  my $subquery = $source1->select(..., -result_as => 'subquery');
  my $rows     = $source2->select(
      -columns => ...,
      -where   => {foo => 123, bar => {-not_in => $subquery}},
  );

count

Refines the statement with -columns => 'COUNT(*)' and returns the count of rows.

  my $tree = $source->select(..., -result_as => [categorize => ($key1, $key2)]);

Builds a tree of rows through module List::Categorize, with $key1, $key2, etc. as categorization keys. This is quite similar to the hashref result kind, except that the categorization keys need not be unique : each leaf of the tree will contain lists of rows matching those categories, while the hashref result kind only keeps the last row matching the given keys.

  $source->select(..., -result_as => 'table');

Returns an arrayref where the first item contains an arrayref of headers, and the following items contain arrayrefs of data values. This structure could be useful for exporting the data, or for passing it to a bulk "insert()".

  $source->select(..., -result_as => [tsv => $filename]);

Writes all rows into a tab-separated file. Tab or newline characters within the data are converted to spaces.

  $source->select(..., -result_as => [file_tabular => ($filename, \%options)]);

Writes all rows into a flat file through the File::Tabular module.

  $source->select(..., -result_as => [xlsx => $filename]);

Writes all rows into an Excel file.


json

Converts all rows to JSON format, using JSON::MaybeXS.


yaml

Converts all rows to YAML format, using YAML::XS.

Other result kinds may be implemented by writing new subclasses of DBIx::DataModel::Schema::ResultAs.

Retrieving data rows


  while (my $row = $statement->next) {...}

  my $slice_arrayref = $statement->next(10);

If called without any argument, next() returns the next data row, or undef if there are no more data rows. If called with a numeric argument, next() attempts to retrieve that number of rows, and returns an arrayref; the size of the array may be smaller than required, if there were no more data rows in the database. The numeric argument is forbidden when select() was called with -result_as => 'fast_statement', because in that case rows must be retrieved one at a time.

Each row is blessed into an object of the proper class, and is passed to the -post_bless callback (if applicable).


  my $rows = $statement->all;

Similar to the next method, but returns an arrayref containing all remaining rows. This method is forbidden when select() was called with -result_as => 'fast_statement', because in that case rows must be retrieved one at a time.


Returns the number of rows corresponding to the current executed statement. Raises an exception if the statement is not in state "executed".

The default implementation for counting rows involves an additional call to the database (SELECT COUNT(*) FROM ...); but a Statement subclass may override this method if the database driver provides a better way (for example DBIx::DataModel::Statement::JDBC calls the Java method getMemberCount).


Returns the index number of the next row to be fetched (starting at $self->offset, or 0 by default).



Returns the page size (requested number of rows), as it was set through the -page_size argument to refine() or select().


Returns the current page index (starting at 1). Always returns 1 if no pagination is activated (no -page_size argument was provided).


Returns the current requested row offset (starting at 0). This offset changes when a request is made to go to another page; but it does not change when retrieving successive rows through the "next" method.


Calls "row_count()" to get the total number of rows for the current statement, and then computes the total number of pages.


  my ($first, $last) = $statement->page_boundaries;

Returns the indices of first and last rows on the current page. These numbers are given in "user coordinates", i.e. starting at 1, not 0 : so if -page_size is 10 and -page_index is 3, the boundaries are 21 / 30, while technically the current offset is 20. On the last page, the $last index corresponds to row_count (so $last - $first + 1 is not always equal to page_size).
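The boundary arithmetic can be sketched as a standalone function (boundaries is a hypothetical helper, not the real method) :

```perl
use strict;
use warnings;

# compute first/last row indices (1-based) for a given page
sub boundaries {
  my ($page_size, $page_index, $row_count) = @_;
  my $offset = $page_size * ($page_index - 1);
  my $first  = $offset + 1;
  my $last   = $offset + $page_size;
  $last      = $row_count if $last > $row_count;   # truncated last page
  return ($first, $last);
}

my ($first, $last) = boundaries(10, 3, 35);   # (21, 30)
my @last_page      = boundaries(10, 4, 35);   # (31, 35) : truncated last page
```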


Returns an arrayref of rows corresponding to the current page (maximum -page_size rows).

Dealing with the statement lifecycle



Sets up parameters on the statement that will be used later when calling "sqlize()", "prepare()" and "execute()". Accepted parameters are listed under the "select()" method.

For most parameters, when refine() is called several times on the same statement, the latest value takes precedence (like the latest update in a Perl hash) : this is the case for instance for parameters -columns, -order_by, etc. However, for the -where parameter, the statement accumulates all successive conditions, combining them with an implicit "AND". So for example

   $statement->refine(-where => {col1 => $val1, col2 => {">" => $min}})
             ->refine(-where => {col3 => $val3, col2 => {"<" => $max}});

is equivalent to

   $statement->refine(-where => {col1 => $val1, 
                                 col2 => {">" => $min,
                                          "<" => $max},
                                 col3 => $val3});

Refining a statement can only occur when the statement is still in status NEW or REFINED; after "sqlize()" has been called, parameters are frozen and no further refinement is allowed.



Generates SQL from all parameters accumulated so far in the statement. The statement switches from state REFINED to state SQLIZED, which forbids any further refinement of the statement (but does not forbid further bindings).

If present, arguments are passed to refine(); so this is just a shortcut for

  $statement->refine(%args)->sqlize();

Method sqlize is called automatically if necessary. Then the SQL is sent to the database, and the statement handle returned by DBI ($sth) is stored internally within the statement. The state switches to PREPARED.

Arguments are optional, and are just a shortcut for

  $statement->sqlize(@args)->prepare();

Calls the "bind()" method, calls "execute" in DBI on the internal $sth, and applies the -pre_exec and -post_exec callbacks if necessary. The state switches to EXECUTED.

Arguments are optional, and are just a shortcut for

  $statement->bind(@args)->execute();

An executed statement can be executed again, possibly with some different bindings. When this happens, the internal result set is reset, and fresh data rows can be retrieved again through the "next" or "all" methods.


Builds a reusable hash for data rows. This avoids the cost of building a fresh Perl hash for each row; but of course each new row overwrites the data of the previous row. See the doc for "fast_statement" above.


  $sql         = $statement->sql;
  ($sql, @bind) = $statement->sql;

In scalar context, returns the SQL code for this statement (or undef if the statement is not yet sqlized).

In list context, returns the SQL code followed by the bind values, suitable for a call to "execute" in DBI.

Obviously, this method is only available after the statement has been sqlized (through a direct call to the "sqlize" method, or indirect call via "prepare", "execute" or "select").


  my @headers = $statement->headers;

Implicitly calls "execute()" if the statement is not already in EXECUTED state. Returns an array of column names as returned from the DBD driver through $self->sth->{FetchHashKeyName}. See the DBI documentation for details.


  $statement->bind(foo => 123, bar => 456);
  $statement->bind({foo => 123, bar => 456}); # equivalent to above

  $statement->bind(0 => 123, 1 => 456);
  $statement->bind([123, 456]);               # equivalent to above

Takes a list of bindings (name-value pairs), and associates them to "named placeholders" within the statement. Named placeholders are defined at the DBIx::DataModel level, not within the database.

The bind() method can be called several times. Binding can occur very early in the statement's lifecycle, even before any named placeholder was inserted into the statement. If successive bindings occur on the same named placeholder, the last value silently overrides previous values. If a binding has no corresponding named placeholder, it is ignored. Names can be any string (including numbers), except reserved words limit and offset, which have a special use for pagination.

The list may alternatively be given as a hashref. This is convenient for example in situations like

  my $rows      = $source->select(...);
  my $statement = $source->some_method;
  foreach my $row (@$rows) {
    my $related_rows = $statement->bind($row)->select;
    # ... work with $related_rows
  }

The list may also be given as an arrayref; this is equivalent to a hashref in which keys are positions within the array.
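That equivalence can be sketched in plain Perl: positions within the array simply become the keys of the corresponding hash (variable names are hypothetical):

```perl
my $bind_array = [123, 456];

# build the equivalent hash, keyed by position
my %bind_hash = map { $_ => $bind_array->[$_] } 0 .. $#$bind_array;

# %bind_hash is (0 => 123, 1 => 456), i.e. the same pairs that
# $statement->bind(0 => 123, 1 => 456) would receive
```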

Finally, there is a ternary form of bind for passing DBI-specific arguments.

  use DBI qw/:sql_types/;
  $statement->bind(foo => $val, {TYPE => SQL_INTEGER});

See also "bind_param" in DBI.

Examples of this mechanism are given in the design document.



Resets the statement back into NEW state, erasing all information except references to the $schema and $meta_source.

%args are optional; if present, they are passed to the "refine()" method.


Calls $self->sth->finish

Utility methods


  my $obj = $statement->bless_from_DB($record);

Blesses $record into an object of $statement->meta_source->class, and applies the from_DB column handlers.

Although implemented in the Statement class, the same method can also be called directly from source classes, in the form :

  my $obj = $schema->table($tablename)->bless_from_DB($record);


Data rows coming from the database are blessed into source classes, which are either tables or joins; such classes implement instance methods for manipulating the row objects, or for navigating to related rows. Source classes do not contain information about the data structure (list of tables, associations, etc.); for getting such information, use the metadm method to access the metasource object.

Path methods specific to each source

When an "Association()" is defined between two tables, methods are automatically added into the corresponding classes; so for example with

  $schema->Association([qw/Department department 1 /],
                       [qw/Activity   activities * /]);

the Department class will have an activities() method, and the Activity class will have a department() method. Such methods are called path methods; their names correspond to the UML roles defined in the association. Path methods always take the same arguments as the "select()" method (actually, they are implemented through an internal call to select()).

UML role names should preferably be chosen to avoid collisions with the builtin methods listed below. However, should a conflict occur, the ambiguity can always be resolved with a fully qualified method name: for instance, if a table has a path method that collides with the "schema()" method presented below, we can write

  # calling the path method
  my $related_row_called_schema = $data_row->schema(); 

  # calling the builtin method
  my $dbidm_schema = $data_row->DBIx::DataModel::Source::schema();

A join between several tables (see the "join()" method) creates a new class that inherits all path methods from all tables participating in the join; in case of name conflicts, the latest table takes precedence. Again, it is possible to use fully qualified method names if necessary.



Returns the meta-source instance associated with the invocant.


Returns the instance of DBIx::DataModel::Schema from which the current data row was retrieved. When in single-schema mode, the schema comes from $self->metadm->schema->class->singleton; when in multi-schema mode, a reference to the schema is kept under __schema within each row object.

Instance methods


  $row->expand($path, %options);

Executes the method $path to follow an Association, stores the result in the object itself under $row->{$path}, and returns that result. This is typically used to expand an object into a tree datastructure. If present, %options are passed to $row->$path(%options), for example for specifying -where, -columns or -order_by options.

After the expansion, further calls to $row->$path (without any arguments) will reuse that same expanded result instead of calling the database again. This caching improves efficiency, but also introduces the risk of side-effects across your code : after

  $row->expand(some_path => (-columns => [qw/just some columns/],
                             -where   => {some_field => 'restriction'}))

further calls to $row->some_path() will just return a dataset restricted according to the above criteria, instead of a full join. To prevent that effect, you would need to delete $row->{some_path}, or to call the path method with arguments, like $row->some_path(-columns => '*').


  $record->auto_expand( $with_recursion );

Asks the object to expand itself with some objects in foreign tables. Does nothing by default. Should be redefined in subclasses, most probably through the "define_auto_expand" method. If the optional argument $with_recursion is true, then auto_expand is recursively called on the expanded objects.
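As a hedged sketch, assuming the Employee/Activity composition used elsewhere in this manual, such a redefinition could be declared through the meta-table:

```perl
# after this declaration, $employee->auto_expand will automatically
# follow the 'activities' path (assumed to be declared in a Composition)
HR->table('Employee')->metadm->define_auto_expand('activities');
```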


This is the internal method used by select(), insert(), update(), etc. for generating the datasource specification passed to the SQL::Abstract::More call (for example the -from argument in a call to select()).

The basic algorithm is in "Meta::Source::db_from()"; table names coming from that method may then be prepended with a database schema name, if $self->schema->db_schema is non-empty. This is how temporary or permanent switches of database schema are implemented.


  $class->apply_column_handler($handler_name, \@rows);
  $row  ->apply_column_handler($handler_name);

Inspects the target object or list of objects; for every column that exists in the object, checks whether a handler named $handler_name was declared for that column (see methods "define_type" and "define_column_type"), and if so, calls the handler. By this definition, if a column is absent in an object, then the handler for that column is not called, even though it was declared in the class.

The results of handler calls are collected into a hashref, with an entry for each column name. The value of each entry depends on how apply_column_handler was called : if it was called as an instance method, then the result is something of shape

  {column_name1 => result_value1, column_name2 => result_value2, ... }

if it was called as a class method (i.e. if \@rows is defined), then the result is something of shape

  {column_name1 => [result_value1_for_object1, result_value1_for_object2, ...], 
   column_name2 => [result_value2_for_object1, result_value2_for_object2, ...], 
   ... }

If column_name is not present in the target object(s), then the corresponding result value is undef.


  my $invalid_columns = $row->has_invalid_columns;
  if ($invalid_columns) {
    print "wrong data in columns ", join(", ", @$invalid_columns);
  }
  else {
    print "all columns OK";
  }

Applies the 'validate' handler to all existent columns. Returns a ref to the list of invalid columns, or undef if there are none.

Note that this is validation at the column level, not at the record level. As a result, your validation handlers can check if an existent column is empty, but cannot check if a column is missing (because in that case the handler would not be called).

Your 'validate' handlers, defined through "define_type", should return 0 or an empty string whenever the column value is invalid. Handlers should not return undef, because we would no longer be able to distinguish between an existent column that is invalid and a missing column.
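A minimal sketch of such a handler, checking a hypothetical dd.mm.yyyy date format (in real code the coderef would be registered as the 'validate' entry of a type declared through "define_type"):

```perl
# returns 1 for a value shaped like dd.mm.yyyy, 0 otherwise -- never undef,
# so that an invalid column remains distinguishable from a missing one
my $validate_date = sub {
  my $val = shift;
  return $val =~ /^\d\d?\.\d\d?\.\d{4}$/ ? 1 : 0;
};

my $ok  = $validate_date->('18.07.1750');   # 1
my $bad = $validate_date->('not a date');   # 0
```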


  use JSON;
  my $json_converter = JSON->new->convert_blessed(1);
  my $json_text      = $json_converter->encode($data_row);

The Source class implements a TO_JSON method, so that data rows from any table or any join can be easily converted into JSON strings (including nested rows that may have been added by the "expand" method). See "convert_blessed" in JSON for more explanations.

The TO_JSON method merely copies the object hash into a plain hash, and removes the __schema slot.
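The pattern can be sketched in plain Perl (a hypothetical My::Row class stands in for a real source class; the core JSON::PP module is used here instead of JSON):

```perl
use JSON::PP;

package My::Row;
sub TO_JSON {
  my $self = shift;
  my %copy = %$self;        # plain copy of the object hash
  delete $copy{__schema};   # internal slot, not part of the data
  return \%copy;
}

package main;
my $row  = bless {lastname => 'Bach', __schema => 'internal'}, 'My::Row';
my $json = JSON::PP->new->convert_blessed(1)->canonical->encode($row);
# $json is '{"lastname":"Bach"}'
```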

Class methods

Class methods encapsulate operations on a whole table in the database, as opposed to instance methods that encapsulate operations on a single record. Here the term "class methods" is broader than in the usual Perl sense :

  • when in single-schema mode, class methods may be invoked directly on the class name, using the ordinary Perl mechanism :

      my $result = HR::Employee->select(...);

    This will implicitly retrieve the "singleton()" schema and call the database on that schema.

  • when in multi-schema mode, the syntax above does not work, because the HR::Employee class cannot decide on which database schema the operation should be performed. Instead, the call should use this syntax :

      my $result = $schema->table('Employee')->select(...);

    The call to $schema->table('Employee') returns an instance of class HR::Employee with one single field __schema pointing to the schema. When invoking the select() method on it, technically this is a Perl instance call, but it is treated as a "class method call" in the DBIx::DataModel sense.


Delegated to Statement::bless_from_DB()


Delegated to Statement::select


  my $record = $source->fetch(@key_values, \%options);

Fetches a single record, from its primary key value (on one or several columns). %options may specify arguments to "select()", like -for, -pre_exec, -post_exec, etc.


  my $record = $source->fetch_cached(@key_values, \%options);

Like fetch, except that the result is stored in a cache, and further calls to the same methods with the same parameters will return the cached record instead of going back to the database. The cache does not know about any updates to the database, so this is mainly useful for readonly data.

The cache is stored internally in $source->metadm->{fetch_cached}{$dbh_addr}{$freeze_args} (where $dbh_addr is Scalar::Util::refaddr($source->schema->dbh) and $freeze_args is Storable::freeze(@key_values, \%options)). If needed, client code may use this information to clear the cache or tie it to a more sophisticated caching module.


  my @ids = $source->insert(
    {col1 => $val1, col2 => $val2, ...},
    ...,
  );
  # or
  my @ids = $source->insert(
    [qw/ col1   col2   .../],
    [    $val1, $val2, ... ],
    ...,
  );

Inserts a collection of rows into the database, given either as a list of hashrefs, or as a first arrayref containing the column names, followed by a list of arrayrefs containing values for each row to be inserted.

In either form, the method applies the to_DB handlers, removes the no_update columns, and then inserts the new records into the database.

Primary key column(s) should be present in the supplied hashrefs, unless the key is auto-generated by the database (see below).

Retrieving ids of inserted records

Each hashref will be blessed into the $source class, and will be inserted through the internal _singleInsert() method. The default implementation of this method should be good enough for most common uses, but you may want to refine it in your table classes if you need some fancy handling on primary keys (like for example computing a random key and checking whether that key is free; see DBIx::DataModel::Doc::Internals for an example). The default implementation uses the following algorithm to retrieve keys auto-generated by the database :

  • if a dbh option called returning_through is found (see options passed to the "dbh" method), the method automatically adds a -returning clause for retrieving value(s) from the primary key column(s). The way to retrieve such values depends on returning_through: when 'FETCH', it performs an additional call to $sth->fetchrow_array; when 'INOUT', it binds inout parameters into the statement. When setting the database handle through the "dbh" method, the returning_through option is automatically set to 'FETCH' if it is a Pg driver, or automatically set to 'INOUT' if it is an Oracle driver.

  • if a dbh option called last_insert_id is found, this is taken as a callback function, which gets called as

      $dbh_options{last_insert_id}->($dbh, $table_name, $column_name)
  • if dbh options called catalog and/or schema are found, DBIx::DataModel will call

      $dbh->last_insert_id($dbh_options{catalog}, $dbh_options{schema},
                           $table_name, $column_name)
  • otherwise, DBIx::DataModel will call

      $dbh->last_insert_id(undef, undef, undef, undef)
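For example, a callback suited to MySQL could be supplied when setting the database handle (a hedged sketch; connection details are hypothetical, and mysql_insertid is the MySQL-specific attribute of DBD::mysql):

```perl
my $dbh = DBI->connect($dsn, $user, $password, {RaiseError => 1});
$schema->dbh($dbh, last_insert_id => sub {
  my ($dbh, $table_name, $column_name) = @_;
  return $dbh->{mysql_insertid};
});
```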

Cascaded insert

If the table is a composite class (see Composition() above), then the component parts may be supplied within the hashref, in the form of arrayrefs of sub-hashrefs; then these will be inserted into the database, at the same time as the main record, with join values automatically filled in. For example :

   HR::Employee->insert({firstname  => "Johann Sebastian",  
                         lastname   => "Bach",
                         activities => [{d_begin  => '01.01.1695',
                                         d_end    => '18.07.1750',
                                         dpt_code => 'CPT'}]});

Insert options

The insert() call may take a list of %options specified at the end of the argument list (notice they are not given as a hashref, but as a mere hash, or list of pairs). Currently the only supported option is -returning :

  • if the -returning option is set to an empty hashref, the return value will be a list of hashrefs (one for each inserted record), containing the column name(s) and value(s) of the primary key for that record, and possibly containing subhashes or subarrays for other records created through cascaded inserts. For example:

      my @result = HR->table('Employee')
                     ->insert({..., activities => [{...}, ...]},
                              -returning => {});
      my $prim_key_first_emp = $result[0]{emp_id};
      my $prim_key_first_act = $result[0]{activities}[0]{act_id};
  • if the -returning option is set to any other value, that value is passed to "insert" in SQL::Abstract::More and finally to the SQL level (INSERT ... RETURNING ...); whatever is returned from the database for each single record gets flattened into a single list transmitted back to the caller.

      my @result = $statement->insert({...}, ..., 
                                      -returning => $scalar_or_arrayref);
  • if the -returning option is absent, values returned by calls to _singleInsert() are collected into a flattened array, and then returned by insert(); usually, these are the primary keys of the inserted records. If this array contains several values and insert() was called in scalar context, a warning is issued.

Reftype checking

If a record contains columns that are arrayrefs or hashrefs, and these are not known as "component parts" (see "Cascaded insert" above), then a warning is generated and these columns are automatically removed from the record.

An exception to this rule is when the SQL::Abstract instance associated with the schema has the option array_datatypes => 1 : in that case, columns with arrayrefs are passed as-is to the SQL::Abstract::More and DBI layers, under the assumption that the DBD driver will take appropriate action on those datatypes.

Another exception is when the value is of shape [$orig_value, \%datatype], which is interpreted as a value together with an SQL datatype; again this is passed to the SQL::Abstract::More layer. An example is shown in the cookbook.

Bimodal methods (methods that can be invoked both as class and as instance methods)


  my @primary_key_columns = $class->primary_key;
  my @primary_key_values  = $object->primary_key;

If called as a class method, returns the list of columns registered as primary key for that table or computed as primary key for that view (concatenation of primary keys of joined tables that are in a 1-to-many association).

If called as an instance method, returns the list of values in those columns.

When called in scalar context and the primary key has only one column, returns that column (so you can call my $k = $obj->primary_key instead of my ($k) = $obj->primary_key).


  my $join_stmt = $source->join(qw/path1 path2 .../);

As a class method, this returns a statement that will select a collection of data rows from tables associated with the current meta-source, performing the appropriate joins. Internally this is implemented through the define_join() method, with an additional -where criteria to constrain on the primary key(s) of the meta-source. That statement cannot be executed yet, because the values of the primary key are not known until we have a row of that source; but the statement can already be prepared. Later on, we can bind the statement to an instance of the $source, and then execute it :

   my $obj          = $source->fetch(...);
   $join_stmt->bind($obj);
   my $related_rows = $join_stmt->all;

Both operations can even be performed in one single line, because arguments to execute() are passed to the bind() method :

   my $obj          = $source->fetch(...);
   my $related_rows = $join_stmt->execute($obj)->all;

Such prepared statements are especially useful for loop efficiency :

  my $join_stmt = $source->join(qw/path1 path2 .../);
  my $list = $source->select(...);
  foreach my $row (@$list) {
    my $related_rows = $join_stmt->execute($row)->all;
    # ... work with $related_rows
  }

When used as an instance method, join() calls the class method to create a statement and immediately binds it to the current object. So for example if $emp->{emp_id} == 987, then

  $emp->join(qw/activities department/)
      ->select(-where => {d_end => undef})

will generate

  SELECT * FROM Activity INNER JOIN Department 
                         ON Activity.dpt_id = Department.dpt_id
           WHERE emp_id = 987 
             AND d_end IS NULL

Observe that the WHERE clause contains a combination of criteria, coming on one hand from the initial $emp object, and on the other hand from the regular -where clause within the select().


  # class method calls
  $source_class->update(-set   => {col1 => $val1, ...},
                        -where => \%condition);
  $source_class->update({pk_col1 => $pk_val1, ..., col1 => $val1, ...});
  $source_class->update(@primary_key, {col1 => $val1, ...});
  # or instance method calls
  $source_instance->update({field1 => $val1, ...});

Generates a request to the database to update one or several records.

As a class method call, the API for this method accepts three different syntaxes :

  • the syntax with -set and -where keywords closely reflects the SQL syntax of shape

      UPDATE table SET col1='val1', col2='val2', ... WHERE ...

    This is mostly used for updating several records simultaneously (bulk update). The invocant must be a source class, not a source instance.

  • the second syntax is for updating a single record, passed as a hashref; it does not matter if this hashref is blessed or not. A -where clause will be automatically generated by extracting the primary key column(s) from the record; then the remaining columns are treated as the columns to update. If values for primary key columns are missing, an error is generated.

  • the third syntax with @primary_key is an alternate way to supply the values for the primary key; it may be more convenient because you don't need to repeat the name of primary key columns. So if emp_id is the primary key of table Employee, then the following are equivalent :

      HR->table('Employee')->update({emp_id  => $eid, 
                                     address => $new_addr, 
                                     phone   => $new_phone});
      HR->table('Employee')->update($eid => {address => $new_addr, 
                                             phone   => $new_phone});

When used as an instance method, there are two different syntaxes :

  • with arguments, this is equivalent to

      $source_class->update($self->primary_key,
                            {field1 => $val1, ...});

    in other words, the -where part is taken from the primary key of the invocant, and the updated fields are passed as an argument hashref. Primary key columns may appear in this hashref, but this will result in changing the primary key for that record. Ex :

      $an_employee->update({emp_id => $new_eid, phone => $new_phone});
  • When used as an instance method without arguments, this is equivalent to

      $source_class->update({%$self});

    in other words, the updated fields are all column values stored in memory within the object.

Before calling the database, the to_DB handlers are applied, the no_update_columns are removed, and the auto_update_columns are inserted.

The update method only updates the columns received as arguments : it knows nothing about other columns that may sit in the database. Therefore if you have two concurrent clients doing

  (client1)  ...->update($id, {c1 => $v1, c2 => $v2});
  (client2)  ...->update($id, {c3 => $v3, c4 => $v4, c5 => $v5});

the final state of record $id in the database is guaranteed to reflect changes from both clients, because the sets of updated columns ('c1', 'c2') and ('c3', 'c4', 'c5') are disjoint.

Like for inserts, columns with arrayrefs or hashrefs are automatically removed from the update list (with a warning), except some special cases as described above for insert().

In all syntaxes described above, the return value from the update() method is the number of records updated in the database.


  # class method calls
  $source_class->delete(-where => \%condition);
  $source_class->delete({col1 => $val1, ...});
  $source_class->delete(@primary_key);

  # or instance method call
  $source_instance->delete;

Generates a request to the database to delete one or several records.

As a class method call, the API for this method accepts three different syntaxes :

  • the syntax with the -where keyword closely reflects the SQL syntax of shape

      DELETE FROM table WHERE ...

    This is mostly used for deleting several records simultaneously (bulk delete).

  • the second syntax is used for deleting a single record. A -where clause will be automatically generated by extracting the primary key column(s) from the record.

    If the source is a composite class (see Composition() above), and if the record in memory contains references to lists of component parts, then those will be recursively deleted together with the main object (cascaded delete). However, if there are other component parts in the database, not referenced in the object hashref, then those will not be automatically deleted : in other words, the delete method does not go by itself to the database to find all component parts (this is the job of the client code, or sometimes of the database itself).

  • the third syntax with @primary_key is an alternate way to supply the values for the primary key; it may be more convenient because you don't need to repeat the name of primary key columns. Note that $source_class->delete(11, 22) does not mean "delete records with keys 11 and 22", but rather "delete the record having primary key (11, 22)"; in other words, with this syntax you only delete one record at a time. With this syntax no cascaded delete is performed.

When used as an instance method, the only syntax is to call the delete() method without any arguments :

  $source_instance->delete;

In all syntaxes described above, the return value from the delete() method is the number of records deleted in the database.


Meta-schema methods


  my @meta_tables  = $meta_schema->tables;

Returns all DBIx::DataModel::Meta::Source::Table instances declared in this $meta_schema.


  my $meta_table   = $meta_schema->table($table_name);

Returns the single instance of DBIx::DataModel::Meta::Source::Table with name $table_name, or undef.


  my $meta_table   = $meta_schema->db_table($db_table_name);

Returns the single instance of DBIx::DataModel::Meta::Source::Table with database name $db_table_name, or undef.


  my @associations = $meta_schema->associations;

Returns all DBIx::DataModel::Meta::Association instances declared in this $meta_schema.


  my $association  = $meta_schema->association($association_name);


Returns the single instance of DBIx::DataModel::Meta::Association with name $association_name, or undef.


  my @types        = $meta_schema->types;

Returns all DBIx::DataModel::Meta::Type instances declared in this $meta_schema.


  my $type         = $meta_schema->type($type_name);

Returns the single instance of DBIx::DataModel::Meta::Type with name $type_name, or undef.


  my @joins        = $meta_schema->joins;

Returns all DBIx::DataModel::Meta::Source::Join instances declared in this $meta_schema.


  my $join         = $meta_schema->join($join_name);

Returns the single instance of DBIx::DataModel::Meta::Source::Join with name $join_name, or undef.

other accessors

Accessor methods are defined for the following members of the $meta_schema :


Meta-source methods

Accessor methods are defined for the following members of a $meta_source (instance of either DBIx::DataModel::Meta::Source::Table or DBIx::DataModel::Meta::Source::Join) :

aliased tables()

In addition, the following methods return dynamic lists :


Returns a flattened list of recursive calls to the "parents()" method.


Returns a flattened hash, built from auto_insert_columns declared in this source, in its ancestors, or in the $meta_schema.


Returns a flattened hash, built from auto_update_columns declared in this source, in its ancestors, or in the $meta_schema.


Returns a flattened hash, built from no_update_columns declared in this source, in its ancestors, or in the $meta_schema. Keys are column names, values are insignificant.


  my %all_paths     = $meta_source->path;
  my $specific_path = $meta_source->path($path_name);

Without any argument, returns a flattened hash of all paths accessible from this source : keys are path names, and values are instances of DBIx::DataModel::Meta::Path.

If a $path_name is supplied, returns the corresponding metapath object.


Returns what will be injected as -from argument into the "select" in SQL::Abstract::More call. For a Table, this is just the table name; for a Join, it is the collection of joined tables together with the join conditions on columns.

The db_from() method of sources (as opposed to the present meta-source method) will reuse this data, possibly with the addition of a database schema name in front of table names; see "db_from()".


Returns the optional "where" condition associated with this source (in the case of "View()").

Meta-table methods

In addition to the $meta_source methods above, the following methods are defined for an instance of DBIx::DataModel::Meta::Source::Table :


Returns the list of other meta-sources that have been declared as components of this source, through the "Composition()" declaration.

Association methods


The DBIx::DataModel::Meta::Schema instance in which this association is declared.


Returns the association name


Returns the association kind (Association or Composition).


Returns the DBIx::DataModel::Meta::Path object describing the path from A to B.


Returns the DBIx::DataModel::Meta::Path object describing the path from B to A.

Path methods


The name of this path.


Reference to the DBIx::DataModel::Meta::Source::Table where this path starts.


Reference to the DBIx::DataModel::Meta::Source::Table where this path ends.


  my %join_cond = $path->on();

Hash for generating the join condition (keys are columns for the left-hand side, values are columns for the right-hand side).


see "Association methods"


Reference to the DBIx::DataModel::Meta::Association that created this path.


Either "AB" or "BA".


Returns the path object representing the opposite direction.

Type methods


The DBIx::DataModel::Meta::Schema instance in which this type is declared.


Name of this type.


Hashref of handlers declared in this type (keys are handler names, values are handler bodies, i.e. coderefs).