DBIx::Class::ResultDDL - Sugar methods for declaring DBIx::Class::Result data definitions
  package MyApp::Schema::Result::Artist;
  use DBIx::Class::ResultDDL qw/ -V2 -inflate_datetime -inflate_json /;

  table 'artist';
  col id           => integer unsigned auto_inc;
  col name         => varchar(25), null;
  col formed       => date;
  col disbanded    => date, null;
  col general_info => json null;
  col last_update  => datetime('UTC');
  primary_key 'id';

  idx artist_by_name => [ 'name' ];

  has_many albums => { id => 'Album.artist_id' };
  rel_many impersonators => { name => 'Artist.name' };
  might_have newest_album => 'JOIN Album a ON a.id = (SELECT MAX(Album.id) FROM Album WHERE Album.artist_id = self.id)';
This is Yet Another Sugar Module for building DBIC result classes. It provides a domain-specific language that feels almost like writing DDL.
This module heavily pollutes your symbol table in the name of extreme convenience, so the -Vx option has the added feature of automatically removing those symbols at end-of-scope as if you had said use namespace::clean;.
This module has a versioned API, to help prevent name collisions. If you request the -Vx behavior, you can rely on that to remain the same across upgrades.
This module also has a mixin for Schema::Loader! Now you can have pretty column definitions even on a SQL-first project that you generate from the database.
This module is based on Exporter::Extensible, allowing all the import notations that module provides. Every export beginning with a dash indicates custom behavior, rather than just a simple export of symbols.
-swp
"Strict, Warnings, Parent".
Enable use strict and use warnings unless those flags have been changed from the default via other means. In other words, you can still use Moo or use common::sense without this option overriding your choice.
Then, use parent "DBIx::Class::Core" unless the class already has an add_column method. If add_column exists, it is presumably because you already declared a parent class. Note that this check happens at BEGIN time, so if you use Moo and extends 'SomeClass'; you need to wrap that in a BEGIN block before the use DBIx::Class::ResultDDL -V2 line.
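For example, a Moo-based Result class might be declared like this (class and table names here are purely illustrative; the BEGIN wrapper around extends is the important part):

```perl
# Hypothetical Moo-based Result class; names are illustrative.
package MyApp::Schema::Result::Example;
use Moo;

# extends must run at BEGIN time so add_column already exists
# when ResultDDL checks whether a parent class was declared.
BEGIN { extends 'DBIx::Class::Core' }

use DBIx::Class::ResultDDL -V2;

table 'example';
col id => integer auto_inc;
primary_key 'id';

1;
```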
-autoclean
Remove all added symbols at the end of current scope.
-V2
Implies -swp, :V2, and -autoclean.
-V1
Implies -swp, :V1, and -autoclean.
-V0
Implies -swp, :V0, and -autoclean.
-inflate_datetime
Inflate all date columns to DateTime objects, by adding the DBIC component DBIx::Class::InflateColumn::DateTime.
-inflate_json
Causes all columns declared with json or jsonb sugar methods to also declare "inflate_json". This requires DBIx::Class::InflateColumn::Serializer to be installed (which is not an official dependency of this module).
-retrieve_defaults
Causes all columns having a default_value to also set retrieve_on_insert => 1. This way after an insert for a row having a date column with default_value => \'NOW()', the row object will hold the value of NOW() that was generated by the database.
See "retrieve_on_insert" in DBIx::Class::ResultSource for details on the column flag.
This feature has no way of knowing about the existence of defaults in the database unless they were declared here in DBIx::Class metadata, nor does it know about triggers or other things that could cause the inserted row to be different from the insert request.
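A minimal sketch of the effect, using a hypothetical table (the default(...) notation is described under "col" below):

```perl
use DBIx::Class::ResultDDL -V2, -retrieve_defaults;

table 'event';
col id         => integer auto_inc;
col created_at => datetime, default(\'NOW()');  # also gets retrieve_on_insert => 1
primary_key 'id';

# After $rs->create({ ... }), $row->created_at holds the NOW() value
# that the database actually generated, rather than being undefined.
```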
:V2
This tag selects the following symbols:
table view col null default auto_inc fk integer unsigned tinyint smallint bigint decimal numeric money float float4 float8 double real char varchar nchar nvarchar MAX binary varbinary bit varbit blob tinyblob mediumblob longblob text tinytext mediumtext longtext ntext bytea date datetime timestamp enum bool boolean uuid json jsonb inflate_json array primary_key idx create_index unique sqlt_add_index sqlt_add_constraint rel_one rel_many has_one might_have has_many belongs_to many_to_many ddl_cascade dbic_cascade
:V1
See DBIx::Class::ResultDDL::V1. The primary difference from V2 is a bug in datetime($timezone) where the timezone generated the wrong DBIC arguments. Also, it didn't support -retrieve_defaults.
:V0
See DBIx::Class::ResultDDL::V0. The primary difference from V1 is lack of array column support, lack of index declaration support, and sugar methods that do not pass through leftover unknown arguments. Also, new Postgres column types were added in V1.
  table 'foo';
  # becomes...
  __PACKAGE__->table('foo');
  col $name, @options;
  # becomes...
  __PACKAGE__->add_column($name, { is_nullable => 0, @merged_options });
Define a column. This calls add_column after sensibly merging all your options. It defaults the column to not-null for you, but you can override that by saying null in your options. You will probably use many of the methods below to build the options for the column:
null
  is_nullable => 1

auto_inc
  is_auto_increment => 1, 'extra.auto_increment_type' => 'monotonic'

(The 'monotonic' bit is required to correctly deploy on SQLite. You can read the gory details, but the short version is that SQLite gives you "fake" autoincrement by default, and you only get real ANSI-style autoincrement if you ask for it. SQL::Translator doesn't ask for the extra work by default, but if you're declaring columns by hand expecting them to be platform-neutral, then you probably want this. SQLite also requires data_type "integer", and for it to be the primary key.)

fk
  is_foreign_key => 1
default
  # Call:            Becomes:
  default($value)    default_value => $value
  default(@value)    default_value => [ @value ]
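For example, several sugar options merge into a single add_column call (the column name and values here are hypothetical):

```perl
col status => varchar(10), null, default('new');
# is equivalent to:
__PACKAGE__->add_column(status => {
  data_type     => 'varchar',
  size          => 10,
  is_nullable   => 1,
  default_value => 'new',
});
```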
  integer           data_type => 'integer', size => 11
  integer($size)    data_type => 'integer', size => $size
  integer[]         data_type => 'integer[]', size => 11
  integer $size,[]  data_type => 'integer[]', size => $size

  # MySQL variants
  tinyint           data_type => 'tinyint', size => 4
  smallint          data_type => 'smallint', size => 6
  bigint            data_type => 'bigint', size => 22

  # MySQL-specific flag which can be combined with int types
  unsigned          extra => { unsigned => 1 }
  numeric           data_type => 'numeric'
  numeric($p)       data_type => 'numeric', size => [ $p ]
  numeric($p,$s)    data_type => 'numeric', size => [ $p, $s ]
  numeric[]         data_type => 'numeric[]'
  numeric $p,$s,[]  data_type => 'numeric[]', size => [ $p, $s ]

  # Same API for decimal
  decimal ...       data_type => 'decimal' ...
  money     data_type => 'money'
  money[]   data_type => 'money[]'
  real       data_type => 'real'
  real[]     data_type => 'real[]'
  float4     data_type => 'float4'
  float4[]   data_type => 'float4[]'
  double     data_type => 'double precision'
  double[]   data_type => 'double precision[]'
  float8     data_type => 'float8'
  float8[]   data_type => 'float8[]'
  # Call:          Becomes:
  float            data_type => 'float'
  float($bits)     data_type => 'float', size => $bits
  float[]          data_type => 'float[]'
  float $bits,[]   data_type => 'float[]', size => $bits
SQLServer and Postgres offer this, where $bits is the number of bits of precision of the mantissa. Array notation is supported for Postgres.
  # Call:          Becomes:
  char             data_type => 'char', size => 1
  char($size)      data_type => 'char', size => $size
  char[]           data_type => 'char[]', size => 1
  char $size,[]    data_type => 'char[]', size => $size

  # Same API for the others
  nchar ...        data_type => 'nchar' ...
  bit ...          data_type => 'bit' ...
nchar (SQL Server unicode char array) has an identical API but returns data_type => 'nchar'
Note that Postgres allows "bit" to have a size, like char($size) but SQL Server uses "bit" only to represent a single bit.
  varchar            data_type => 'varchar'
  varchar($size)     data_type => 'varchar', size => $size
  varchar(MAX)       data_type => 'varchar', size => "MAX"
  varchar[]          data_type => 'varchar[]'
  varchar $size,[]   data_type => 'varchar[]', size => $size

  # Same API for the others
  nvarchar ...       data_type => 'nvarchar' ...
  binary ...         data_type => 'binary' ...
  varbinary ...      data_type => 'varbinary' ...
  varbit ...         data_type => 'varbit' ...
Unlike the char/varchar relation, binary does not default its size to 1.
MAX
Constant for "MAX", used by SQL Server for varchar(MAX).
  blob          data_type => 'blob'
  blob($size)   data_type => 'blob', size => $size

  # MySQL-specific variants:
  tinyblob      data_type => 'tinyblob', size => 0xFF
  mediumblob    data_type => 'mediumblob', size => 0xFFFFFF
  longblob      data_type => 'longblob', size => 0xFFFFFFFF

  # Postgres blob type is 'bytea'
  bytea         data_type => 'bytea'
  bytea[]       data_type => 'bytea[]'
Note: for MySQL, you need to choose the blob type according to $size. A MySQL blob has a maximum length of 2^16 bytes, and probably none of your binary data would be that small. Consider mediumblob or longblob, or consider overriding My::Schema::sqlt_deploy_hook to perform this conversion automatically according to which DBMS you are connected to.
For SQL Server, newer versions deprecate blob in favor of VARCHAR(MAX). This is another detail you might take care of in sqlt_deploy_hook.
  text          data_type => 'text'
  text($size)   data_type => 'text', size => $size
  text[]        data_type => 'text[]'

  # MySQL-specific variants:
  tinytext      data_type => 'tinytext', size => 0xFF
  mediumtext    data_type => 'mediumtext', size => 0xFFFFFF
  longtext      data_type => 'longtext', size => 0xFFFFFFFF

  # SQL Server unicode variant:
  ntext         data_type => 'ntext', size => 0x3FFFFFFF
  ntext($size)  data_type => 'ntext', size => $size
See MySQL notes in blob. For SQL Server, you might want ntext or nvarchar(MAX) instead. Postgres does not use a size, and allows arrays of this type.
Newer versions of SQL Server prefer nvarchar(MAX) over ntext.
  enum(@values)   data_type => 'enum', extra => { list => [ @values ] }
This function cannot take pass-through arguments, since every argument is an enum value.
  bool        data_type => 'boolean'
  bool[]      data_type => 'boolean[]'
  boolean     data_type => 'boolean'
  boolean[]   data_type => 'boolean[]'
Note that SQL Server doesn't support 'boolean'; the closest type is 'bit', while in Postgres 'bit' is used for bit strings.
  date      data_type => 'date'
  date[]    data_type => 'date[]'
  datetime          data_type => 'datetime'
  datetime($tz)     data_type => 'datetime', timezone => $tz
  datetime[]        data_type => 'datetime[]'
  datetime $tz,[]   data_type => 'datetime[]', timezone => $tz

  # Same API
  timestamp ...     data_type => 'timestamp', ...
NOTE that datetime and timestamp had a bug before version 2 which set "time_zone" instead of "timezone", causing the time zone (applied to DateTime objects by the inflator) to not take effect, resulting in "floating" timezone DateTime objects.
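For example, a timezone-aware column (drawn from the synopsis; the inflated DateTime behavior assumes the -inflate_datetime option):

```perl
col last_update => datetime('UTC');
# is equivalent to:
__PACKAGE__->add_column(last_update => {
  data_type   => 'datetime',
  timezone    => 'UTC',
  is_nullable => 0,
});
# With -inflate_datetime, $row->last_update returns a DateTime
# object in the UTC time zone.
```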
  array($type)          data_type => $type.'[]'
  array(@dbic_attrs)    data_type => $type.'[]', @other_attrs

  # i.e.
  array numeric(10,3)   data_type => 'numeric[]', size => [10,3]
Declares a Postgres array type by appending "[]" to a type name. The type name can be given as a single string, or as any sugar function that returns a data_type => $type pair of elements.
  uuid      data_type => 'uuid'
  uuid[]    data_type => 'uuid[]'
  json      data_type => 'json'
  json[]    data_type => 'json[]'
  jsonb     data_type => 'jsonb'
  jsonb[]   data_type => 'jsonb[]'
If the -inflate_json use-line option was given, this will additionally imply "inflate_json".
  inflate_json    serializer_class => 'JSON'
This first loads the DBIC component DBIx::Class::InflateColumn::Serializer into the current package if it wasn't added already. Note that that module is not a dependency of this one and needs to be installed separately.
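A sketch of how this looks in practice (hypothetical column name; assumes DBIx::Class::InflateColumn::Serializer is installed):

```perl
col metadata => json null, inflate_json;
# With the Serializer component loaded, $row->metadata returns the
# decoded Perl data structure, and assigning a hashref or arrayref
# serializes it back to JSON on insert/update.
```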
primary_key(@cols)
Shortcut for __PACKAGE__->set_primary_key(@cols)
unique($name?, \@cols)
Shortcut for __PACKAGE__->add_unique_constraint($name?, \@cols)
In each of the native DBIC relationship-declaring functions, the parameters are
$package->$rel_type($name, $peer_class, $condition, \%attributes);
These sugar functions for each relationship type allow two additional notations:
  $rel_type $name => { $my_col => "$peer_class.$their_col" }, @attrs;
  $rel_type $name => "JOIN $peer_class $alias ON $sql_template", @attrs;
The first differs from a normal DBIC \%condition in that the local columns come first. In DBIC, it is { "foreign.$their_col" => "self.$my_col" } but these functions allow { $my_col => "$peer_class.$their_col" }. It saves a little bit of typing and this order seems more logical to me.
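For example, these two declarations are equivalent (Result class names drawn from the synopsis; the fully-qualified class name is illustrative):

```perl
# ResultDDL sugar: local column on the left
has_many albums => { id => 'Album.artist_id' };

# Plain DBIC equivalent:
__PACKAGE__->has_many(albums => 'MyApp::Schema::Result::Album',
  { 'foreign.artist_id' => 'self.id' });
```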
The second one is a shortcut for creating the coderef-based DBIC joins. Normal DBIC would use a condition of:
  sub {
    my $literal_sql = "... $_[0]{self_alias} ... $_[0]{foreign_alias} ...";
    return \$literal_sql;
  }
These methods allow the simpler notation of:
  "JOIN $peer_class ON ... self.$my_col ... $peer_class.$their_col ..."
  "JOIN $peer_class alias ON ... self.$my_col ... alias.$their_col ..."
  "(LEFT|INNER) JOIN $peer_class x ON ... self.$my_col ... x.$their_col ..."
To write the template, start with 'JOIN' or 'LEFT JOIN' or 'INNER JOIN' followed by the full or partial name of the related Result Class (not actual table name). Optionally include an alias after that, and then the word 'ON'. Everything else will be used as the template's SQL. All occurrences of the result class name (or alias, if you used one) will be replaced by a variable, and every occurrence of self. will be replaced by a variable. These variables get substituted with the aliases provided by DBIC during the construction of the resultset.
Examples:
  might_have example1 => 'JOIN Example e ON e.id = self.example_id';

  might_have example2 => <<'SQL';
    JOIN MyApp::Result::Example foreign ON foreign.id = self.example_id
  SQL

  might_have max_example => <<'SQL';
    LEFT JOIN Example ex ON ex.id = (
      SELECT MAX(id) FROM example_table
      WHERE category = self.category
    )
  SQL
When writing the SQL, you need to ensure that "self." and whatever alias you chose don't occur anywhere you didn't intend; this module does not actually parse the SQL, it just performs blind text substitution.
  belongs_to $name => $peer_class, $condition, @attr_list;
  belongs_to $name => { colname => "$ResultClass.$colname" }, @attr_list;
  belongs_to $name => 'JOIN $peer_class ON $sql', @attr_list;
  # becomes...
  __PACKAGE__->belongs_to($rel_name, $peer_class, $condition, { @attr_list });
  might_have $name => $peer_class, $condition, @attr_list;
  might_have $name => { colname => "$ResultClass.$colname" }, @attr_list;
  might_have $name => 'JOIN $peer_class ON $sql', @attr_list;
  # becomes...
  __PACKAGE__->might_have($rel_name, $peer_class, $condition, { @attr_list });
  has_one $name => $peer_class, $condition, @attr_list;
  has_one $name => { colname => "$ResultClass.$colname" }, @attr_list;
  has_one $name => 'JOIN $peer_class ON $sql', @attr_list;
  # becomes...
  __PACKAGE__->has_one($rel_name, $peer_class, $condition, { @attr_list });
  has_many $name => $peer_class, $condition, @attr_list;
  has_many $name => { colname => "$ResultClass.$colname" }, @attr_list;
  has_many $name => 'JOIN $peer_class ON $sql', @attr_list;
  # becomes...
  __PACKAGE__->has_many($rel_name, $peer_class, $condition, { @attr_list });
  many_to_many $name => $rel_to_linktable, $rel_from_linktable;
  # becomes...
  __PACKAGE__->many_to_many(@_);
Declares a single-record left-join relation without implying ownership. Note that the DBIC relations that do imply ownership like might_have cause an implied deletion of the related row if you delete a row from this table that references it, even if your schema did not have a cascading foreign key. This DBIC feature is controlled by the cascading_delete option, and using this sugar function to set up the relation defaults that feature to "off".
  rel_one $name => $peer_class, $condition, @attr_list;
  rel_one $name => { $mycol => "$ResultClass.$fcol", ... }, @attr_list;
  rel_one $name => 'JOIN $peer_class ON $sql', @attr_list;
  # becomes...
  __PACKAGE__->add_relationship(
    $rel_name, $peer_class,
    { "foreign.$fcol" => "self.$mycol" },
    {
      join_type     => 'LEFT',
      accessor      => 'single',
      cascade_copy  => 0,
      cascade_delete => 0,
      is_depends_on => $is_f_pk, # auto-detected, unless specified
      ($is_f_pk? fk_columns => { $mycol => 1 } : ()),
      @attr_list
    }
  );
  rel_many $name => { $my_col => "$class.$col", ... }, @attr_list;
  rel_many $name => 'JOIN $peer_class ON $sql', @attr_list;
Same as "rel_one", but generates a one-to-many relation with a multi-accessor.
  ddl_cascade;      # same as ddl_cascade("CASCADE");
  ddl_cascade(1);   # same as ddl_cascade("CASCADE");
  ddl_cascade(0);   # same as ddl_cascade("RESTRICT");
  ddl_cascade($mode);
Helper function to generate @options for the relationship functions above. It generates

  on_update => $mode, on_delete => $mode
This does not affect client-side cascade, and is only used by Schema::Loader to generate DDL for the foreign keys when the table is deployed.
  dbic_cascade;            # same as dbic_cascade(1)
  dbic_cascade($enabled);

It generates

  cascade_copy => $enabled, cascade_delete => $enabled
This re-enables the dbic-side cascading that was disabled by default in the rel_ functions.
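For example, the two helpers might be combined like this (Order/LineItem classes and columns are hypothetical):

```perl
# On the child table: a belongs_to whose deployed foreign key
# gets ON UPDATE CASCADE / ON DELETE CASCADE in the generated DDL.
belongs_to order => { order_id => 'Order.id' }, ddl_cascade;

# On the parent table: a rel_many with the DBIC-side
# cascade_copy/cascade_delete behavior re-enabled.
rel_many line_items => { id => 'LineItem.order_id' }, dbic_cascade;
```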
view $view_name, $view_sql, %options;
Makes the current result source into a view. This is used instead of 'table'. It accepts two options: 'is_virtual', to make this a virtual view, and 'depends', to list the tables this view depends on.
It is the equivalent of
  __PACKAGE__->table_class('DBIx::Class::ResultSource::View');
  __PACKAGE__->table($view_name);
  __PACKAGE__->result_source_instance->view_definition($view_sql);
  __PACKAGE__->result_source_instance->deploy_depends_on($options{depends});
  __PACKAGE__->result_source_instance->is_virtual($options{is_virtual});
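A hypothetical virtual view declared with this sugar (class, view, and column names are illustrative):

```perl
package MyApp::Schema::Result::ArtistAlbumCount;
use DBIx::Class::ResultDDL -V2;

view 'artist_album_count',
  'SELECT artist_id, COUNT(*) AS album_count FROM album GROUP BY artist_id',
  is_virtual => 1,
  depends    => [ 'album' ];

col artist_id   => integer;
col album_count => integer;

1;
```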
DBIx::Class doesn't actually track the indexes or constraints on a table. If you want these to be deployed automatically with your schema, you need an sqlt_deploy_hook function. This module can create one for you, but does not yet attempt to wrap one that you provide. (You can of course wrap the one generated by this module using a method modifier from Class::Method::Modifiers.)

The method sqlt_deploy_hook is created in the current package the first time one of these functions is called. If it already exists and wasn't created by DBIx::Class::ResultDDL, an exception is thrown. The generated method does call maybe::next::method for you.
sqlt_add_index
This is a direct passthrough to the function "add_index" in SQL::Translator::Schema::Table, without any magic.
See notes above about the generated sqlt_deploy_hook.
sqlt_add_constraint
This is a direct passthrough to the function "add_constraint" in SQL::Translator::Schema::Table, without any magic.
create_index $index_name => \@fields, %options;
This is sugar for sqlt_add_index. It translates to
  sqlt_add_index(
    name    => $index_name,
    fields  => \@fields,
    options => \%options,
    type    => $options{type}
  );
where %options are the "options" described in SQL::Translator::Schema::Index, except that if one of the keys is type, that key/value pair is pulled out and used as the "type" of SQL::Translator::Schema::Index.
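For example (hypothetical index name; note how the type key is pulled out of the options):

```perl
create_index artist_by_name => [ 'name' ], type => 'normal';
# translates to:
sqlt_add_index(
  name    => 'artist_by_name',
  fields  => [ 'name' ],
  options => {},
  type    => 'normal',
);
```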
idx
Alias for "create_index"; lines up nicely with 'col'.
These are not exported, but might be useful for integrating with this module:
my $opts= DBIx::Class::ResultDDL::expand_col_options(@_);
This is a utility function that performs most of the work of "col". Given the list of arguments returned by the sugar functions above, it returns a hashref of official options for "add_column" in DBIx::Class::ResultSource.
  my ($pkg, $rel_type, $rel_name, $related_pkg, \%colmap, \%options)
    = DBIx::Class::ResultDDL::expand_relationship_params(caller, @_);
This is a utility function that parses the relationship notations accepted by the functions above, and returns the normal positional parameters for the DBIC relationship functions.
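A sketch of how an integration might call expand_col_options (the column spec here is hypothetical; the sugar functions simply return lists, so they can be composed this way):

```perl
use DBIx::Class::ResultDDL -V2;   # imports the sugar used below

my $opts = DBIx::Class::ResultDDL::expand_col_options(varchar(16), null);
# $opts is now a hashref suitable for add_column, along the lines of:
# { data_type => 'varchar', size => 16, is_nullable => 1 }
```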
The methods above in most cases allow you to insert plain-old-DBIC notation where appropriate, instead of relying purely on sugar methods. If you are missing your favorite column flag or something, feel free to contribute a patch.
Thanks to Clippard Instrument Laboratory Inc. and Ellis, Partners in Management Solutions for supporting open source, including portions of this module.
Michael Conrad <mconrad@intellitree.com>
Veesh Goldman <rabbiveesh@gmail.com>
version 2.04
This software is copyright (c) 2023 by Michael Conrad, IntelliTree Solutions llc.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.