NAME

DBIx::Class::Fixtures - Dump data and repopulate a database using rules

SYNOPSIS

 use DBIx::Class::Fixtures;

 ...

 my $fixtures = DBIx::Class::Fixtures->new({
     config_dir => '/home/me/app/fixture_configs'
 });

 $fixtures->dump({
   config => 'set_config.json',
   schema => $source_dbic_schema,
   directory => '/home/me/app/fixtures'
 });

 $fixtures->populate({
   directory => '/home/me/app/fixtures',
   ddl => '/home/me/app/sql/ddl.sql',
   connection_details => ['dbi:mysql:dbname=app_dev', 'me', 'password'],
   post_ddl => '/home/me/app/sql/post_ddl.sql',
 });

DESCRIPTION

Dump fixtures from a source database to the filesystem, then import them into another database (with the same schema) at any time. Use them as a constant dataset for running tests against, or for populating development databases when it is impractical to use production clones. Fixture sets are described using relations and conditions based on your DBIx::Class schema.

DEFINE YOUR FIXTURE SET

Fixture sets are currently defined in .json files which must reside in your config_dir (e.g. /home/me/app/fixture_configs/a_fixture_set.json). They describe which data to pull and dump from the source database.

For example:

 {
   "sets": [
     {
       "class": "Artist",
       "ids": ["1", "3"]
     },
     {
       "class": "Producer",
       "ids": ["5"],
       "fetch": [
         {
           "rel": "artists",
           "quantity": "2"
         }
       ]
     }
   ]
 }

This will fetch artists with primary keys 1 and 3, the producer with primary key 5, and two of producer 5's artists, where 'artists' is a has_many DBIx::Class rel from Producer to Artist.
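
In resultset terms, that set is roughly equivalent to the following (a sketch for illustration only, assuming $schema is connected to the source database):

 my $artist_rs = $schema->resultset('Artist');
 my @artists   = map { $artist_rs->find($_) } 1, 3;

 my $producer  = $schema->resultset('Producer')->find(5);
 my @fetched   = $producer->artists->search({}, { rows => 2 })->all;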

The top level attributes are as follows:

sets

Sets must be an array of hashes, as in the example given above. Each set defines a set of objects to be included in the fixtures. For details on valid set attributes see "SET ATTRIBUTES" below.

rules

Rules place general conditions on classes. If, for example, you wanted all of an artist's cds to be dumped whenever that artist was dumped, you could use a rule to specify this:

 {
   "sets": [
     {
       "class": "Artist",
       "ids": ["1", "3"]
     },
     {
       "class": "Producer",
       "ids": ["5"],
       "fetch": [
         {
           "rel": "artists",
           "quantity": "2"
         }
       ]
     }
   ],
   "rules": {
     "Artist": {
       "fetch": [ {
         "rel": "cds",
         "quantity": "all"
       } ]
     }
   }
 }

In this case the cds of artists 1 and 3, and of all of producer 5's artists, will be dumped as well. Note that 'cds' is a has_many DBIx::Class relation from Artist to CD. This is equivalent to:

 {
   "sets": [
    {
       "class": "Artist",
       "ids": ["1", "3"],
       "fetch": [ {
         "rel": "cds",
         "quantity": "all"
       } ]
     },
     {
       "class": "Producer",
       "ids": ["5"],
       "fetch": [ {
         "rel": "artists",
         "quantity": "2",
         "fetch": [ {
           "rel": "cds",
           "quantity": "all"
         } ]
       } ]
     }
   ]
 }

rules must be a hash keyed by class name. For details on valid rule attributes see "RULE ATTRIBUTES" below.

includes

To prevent repetition between configs you can include other configs. For example:

 {
   "sets": [ {
     "class": "Producer",
     "ids": ["5"]
   } ],
   "includes": [
     { "file": "base.json" }
   ]
 }

Includes must be an arrayref of hashrefs, where each hashref has a 'file' key naming another config file in the same directory. The original config is merged with its includes using Hash::Merge.
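
For instance, if base.json (a hypothetical file in the same directory) contained:

 {
   "sets": [ {
     "class": "Artist",
     "ids": ["1", "3"]
   } ]
 }

then the merged config should behave as though both the Producer set and the Artist set had been declared in a single file.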

datetime_relative

Only available for MySQL and PostgreSQL at the moment; the value must be one that DateTime::Format::* can parse. For example:

 {
   "sets": [ {
     "class": "RecentItems",
     "ids": ["9"]
   } ],
   "datetime_relative": "2007-10-30 00:00:00"
 }

This will work when dumping from a MySQL database and will cause any datetime fields (where datatype => 'datetime' in the column def of the schema class) to be dumped as a DateTime::Duration object relative to the date specified in the datetime_relative value. For example, if the RecentItem object had a date field set to 2007-10-25, then when the fixture is imported that field will be set to 5 days in the past relative to the current time.
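
To illustrate the arithmetic, here is a sketch of the idea (not the module's internals; DateTime::Format::MySQL is one of the DateTime::Format::* parsers mentioned above):

 use DateTime;
 use DateTime::Format::MySQL;

 # The anchor is the datetime_relative value from the config.
 my $anchor = DateTime::Format::MySQL->parse_datetime('2007-10-30 00:00:00');
 my $field  = DateTime::Format::MySQL->parse_datetime('2007-10-25 00:00:00');

 # The dump records the offset between field and anchor (here, minus 5 days)...
 my $offset = $field->subtract_datetime($anchor);

 # ...and on import the field becomes that offset applied to the current time.
 my $restored = DateTime->now->add_duration($offset);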

might_have

Specifies whether to automatically dump might_have relationships. Should be a hash with a single attribute, fetch, set to 1 or 0. For example:

 {
   "might_have": { "fetch": 1 },
   "sets": [
     {
       "class": "Artist",
       "ids": ["1", "3"]
     },
     {
       "class": "Producer",
       "ids": ["5"]
     }
   ]
 }

Note: belongs_to rels are automatically dumped whether you like it or not; this is to avoid FKs pointing to nowhere when importing. General rules on has_many rels are not accepted at this top level, but you can turn them on for individual sets - see "SET ATTRIBUTES" below.

SET ATTRIBUTES

class

Required attribute. Specifies the DBIx::Class object class you wish to dump.

ids

Array of primary key ids to fetch, basically causing an $rs->find($_) for each. If an id is not in the source db then it simply won't get dumped; no warnings, no death.

quantity

Must be either an integer or the string 'all'. Specifying an integer effectively sets the 'rows' attribute on the resultset clause; specifying 'all' leaves the rows attribute off so that all matching rows are dumped. There's no randomising here, it's just the first x rows.
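
For example, a minimal set using quantity, which dumps the first 10 artists:

 {
   "sets": [ {
     "class": "Artist",
     "quantity": "10"
   } ]
 }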

cond

A hash specifying the conditions dumped objects must match. Essentially this is a JSON representation of a DBIx::Class search clause. For example:

 {
   "sets": [{
     "class": "Artist",
     "quantiy": "all",
     "cond": { "name": "Dave" }
   }]
 }

This will dump all artists whose name is 'Dave'; essentially $artist_rs->search({ name => 'Dave' })->all.

Sometimes in a search clause it's useful to use scalar refs to do things like:

 $artist_rs->search({ no1_singles => \'> no1_albums' })

This could be specified in the cond hash like so:

 {
   "sets": [ {
     "class": "Artist",
     "quantiy": "all",
     "cond": { "no1_singles": "\> no1_albums" }
   } ]
 }

So if the value starts with a backslash (escaped as a double backslash in the JSON file), the value is made into a scalar ref before being passed to search.

join

An array of relationships to be used in the cond clause.

 {
   "sets": [ {
     "class": "Artist",
     "quantiy": "all",
     "cond": { "cds.position": { ">": 4 } },
     "join": ["cds"]
   } ]
 }

This fetches all artists who have cds with position greater than 4.
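
In resultset terms this is roughly:

 $artist_rs->search(
   { 'cds.position' => { '>' => 4 } },
   { join => ['cds'] }
 );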

fetch

Must be an array of hashes. Specifies which rels to also dump. For example:

 {
   "sets": [ {
     "class": "Artist",
     "ids": ["1", "3"],
     "fetch": [ {
       "rel": "cds",
       "quantity": "3",
       "cond": { "position": "2" }
     } ]
   } ]
 }

Will cause the cds of artists 1 and 3 to be dumped where the cd position is 2.

Valid attributes are: 'rel', 'quantity', 'cond', 'has_many', 'might_have' and 'join'. rel is the name of the DBIx::Class rel to follow; the rest are the same as in the set attributes. quantity is necessary for has_many relationships, but not for belongs_to or might_have relationships.

has_many

Specifies whether to fetch has_many rels for this set. Must be a hash containing keys fetch and quantity.

Set fetch to 1 if you want to fetch them, and quantity to either 'all' or an integer.

Be careful here, dumping has_many rels can lead to a lot of data being dumped.
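
For example, a sketch of a set that pulls in every has_many rel of the first 10 artists:

 {
   "sets": [ {
     "class": "Artist",
     "quantity": "10",
     "has_many": { "fetch": 1, "quantity": "all" }
   } ]
 }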

might_have

As with has_many but for might_have relationships. Quantity doesn't do anything in this case.

This value will be inherited by all fetches in this set. This is not true for the has_many attribute.

external

In some cases the values in your database might be keys to data held in some sort of external storage. The classic example is using DBIx::Class::InflateColumn::FS to store blob information on the filesystem. In this case you may wish to back up your external storage in the same way as your database data. The "external" attribute lets you specify a handler for this type of issue. For example:

    {
        "sets": [{
            "class": "Photo",
            "quantity": "all",
            "external": {
                "file": {
                    "class": "File",
                    "args": {"path":"__ATTR(photo_dir)__"}
                }
            }
        }]
    }

This would use DBIx::Class::Fixtures::External::File to read from a directory where the path to a file is specified by the file field of the Photo source. We use the uninflated value of the field, so you need to handle backup and restore completely yourself. For the common case we provide DBIx::Class::Fixtures::External::File, and you can create your own custom handlers by prefixing the class name with a '+':

    "class": "+MyApp::Schema::SomeExternalStorage",

Although, if possible, I'd love to get patches to add some of the other common types (I imagine storage in MogileFS, Redis, etc. or even Amazon might be popular).

See DBIx::Class::Fixtures::External::File for the external handler interface.

RULE ATTRIBUTES

cond

Same as with "SET ATTRIBUTES"

fetch

Same as with "SET ATTRIBUTES"

join

Same as with "SET ATTRIBUTES"

has_many

Same as with "SET ATTRIBUTES"

might_have

Same as with "SET ATTRIBUTES"

RULE SUBSTITUTIONS

You can provide the following substitution patterns for your rule values. An example of this might be:

    {
        "sets": [{
            "class": "Photo",
            "quantity": "__ENV(NUMBER_PHOTOS_DUMPED)__",
        }]
    }

ENV

Provide a value from %ENV

ATTR

Provide a value from "config_attrs"

catfile

Create the path to a file from a list

catdir

Create the path to a directory from a list
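
As a hypothetical illustration, assuming catdir takes a comma-separated list in the same __NAME(...)__ style as ENV and ATTR above, an external handler path could be built like this:

    {
        "sets": [{
            "class": "Photo",
            "quantity": "all",
            "external": {
                "file": {
                    "class": "File",
                    "args": { "path": "__catdir(var,files,photos)__" }
                }
            }
        }]
    }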

METHODS

new

Arguments: \%attrs
Return Value: $fixture_object

Returns a new DBIx::Class::Fixtures object. \%attrs can have the following parameters:

config_dir:

Required. Must contain a valid path to the directory in which your .json configs reside.

debug:

Determines whether to be verbose.

ignore_sql_errors:

Ignore errors on import of DDL, etc.
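
For example, a constructor call using these options might look like:

 my $fixtures = DBIx::Class::Fixtures->new({
     config_dir        => '/home/me/app/fixture_configs',
     debug             => 1,
     ignore_sql_errors => 1,
 });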

config_attrs

A hash of information you can use to do replacements inside your configuration sets. For example, if your set looks like:

   {
     "sets": [ {
       "class": "Artist",
       "ids": ["1", "3"],
       "fetch": [ {
         "rel": "cds",
         "quantity": "__ATTR(quantity)__",
       } ]
     } ]
   }

    my $fixtures = DBIx::Class::Fixtures->new( {
      config_dir => '/home/me/app/fixture_configs',
      config_attrs => {
        quantity => 100,
      },
    });

You may wish to do this if you want to give whoever runs the dumps a bit more control. A minimal constructor call needs only config_dir:

 my $fixtures = DBIx::Class::Fixtures->new( {
   config_dir => '/home/me/app/fixture_configs'
 } );

available_config_sets

Returns a list of all the config sets found in "config_dir". These are the JSON files containing dump rules.
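
For example, a minimal sketch:

 my @sets = $fixtures->available_config_sets;
 print "Available config set: $_\n" for @sets;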

dump

Arguments: \%attrs
Return Value: 1

 $fixtures->dump({
   config => 'set_config.json', # config file to use. must be in the config
                                # directory specified in the constructor
   schema => $source_dbic_schema,
   directory => '/home/me/app/fixtures' # output directory
 });

or

 $fixtures->dump({
   all => 1, # just dump everything that's in the schema
   schema => $source_dbic_schema,
   directory => '/home/me/app/fixtures', # output directory
   #excludes => [ qw/Foo MyView/ ], # optionally exclude certain sources
 });

In this case objects will be dumped to subdirectories in the specified directory. For example:

 /home/me/app/fixtures/artist/1.fix
 /home/me/app/fixtures/artist/3.fix
 /home/me/app/fixtures/producer/5.fix

schema and directory are required attributes. Also, one of config or all must be specified.

The optional parameter excludes takes an array ref of source names and can be used to exclude those sources when dumping the whole schema. This is useful if you have views in there, since views do not need fixtures and will currently result in an error when they are created and then used with populate.

Lastly, the config parameter can be a Perl hashref instead of a file name. If this form is used, your hashref should conform to the structure rules defined for the JSON representations.
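
For example, a sketch of the hashref form, mirroring the JSON structure described earlier:

 $fixtures->dump({
   config => {
     sets => [ { class => 'Artist', ids => [ '1', '3' ] } ],
   },
   schema    => $source_dbic_schema,
   directory => '/home/me/app/fixtures',
 });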

dump_config_sets

Works just like "dump" but instead of specifying a single json config set located in "config_dir" we dump each set named in the configs parameter.

The parameters are the same as for "dump" except that instead of a directory parameter there is a directory_template, which is a coderef expected to return a scalar that is the root directory where the actual dumping happens. This coderef gets three arguments: $self, $params and $set_name. For example:

    $fixture->dump_config_sets({
      schema => $schema,
      configs => [qw/one.json other.json/],
      directory_template => sub {
        my ($fixture, $params, $set) = @_;
        return io->catdir('var', 'fixtures', $params->{schema}->version, $set);
      },
    });

dump_all_config_sets

Works just like "dump" but instead of specifying a single json config set located in "config_dir" we dump each set in turn to the specified directory.

The parameters are the same as for "dump" except that instead of a directory parameter there is a directory_template, which is a coderef expected to return a scalar that is the root directory where the actual dumping happens. This coderef gets three arguments: $self, $params and $set_name. For example:

    $fixture->dump_all_config_sets({
      schema => $schema,
      directory_template => sub {
        my ($fixture, $params, $set) = @_;
        return io->catdir('var', 'fixtures', $params->{schema}->version, $set);
      },
    });

populate

Arguments: \%attrs
Return Value: 1

 $fixtures->populate( {
   # directory to look for fixtures in, as specified to dump
   directory => '/home/me/app/fixtures',

   # DDL to deploy
   ddl => '/home/me/app/sql/ddl.sql',

   # database to clear, deploy and then populate
   connection_details => ['dbi:mysql:dbname=app_dev', 'me', 'password'],

   # DDL to deploy after populating records, i.e. FK constraints
   post_ddl => '/home/me/app/sql/post_ddl.sql',

   # use CASCADE option when dropping tables
   cascade => 1,

   # optional, set to 1 to run ddl but not populate
   no_populate => 0,

   # optional, set to 1 to run each fixture through ->create rather than have
   # each $rs populated using $rs->populate. Useful if you have overridden new() logic
   # that affects the value of column(s).
   use_create => 0,

   # optional, same as use_create except with find_or_create.
   # Useful if you are populating a persistent data store.
   use_find_or_create => 0,

   # Don't try to clean the database, just populate over what's there. Requires
   # the schema option. Use this if you want to handle removing old data yourself.
   # no_deploy => 1
   # schema => $schema
 } );

In this case the database app_dev will be cleared of all tables, then the specified DDL deployed to it, and finally all fixtures found in /home/me/app/fixtures will be added to it. populate generates its own DBIx::Class schema from the DDL rather than being passed one to use. This is better, as custom insert methods are avoided, which can get in the way. In some cases you might not have a DDL, and so this method will eventually allow a $schema object to be passed instead.

If needed, you can specify a post_ddl attribute, which is a DDL file to be applied after all the fixtures have been added to the database. A good use of this option is adding foreign key constraints, since databases like PostgreSQL cannot disable foreign key checks.

If your tables have foreign key constraints you may want to use the cascade attribute, which makes the drop-table functionality cascade, i.e. 'DROP TABLE $table CASCADE'.

directory is a required attribute.

If you wish for DBIx::Class::Fixtures to clear the database for you, pass in ddl (path to a DDL sql file) and connection_details (array ref of DSN, user and pass).

If you wish to deal with cleaning the schema yourself, then pass in a schema attribute containing the connected schema you wish to operate on and set the no_deploy attribute.
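
For example:

 $fixtures->populate({
   directory => '/home/me/app/fixtures',
   no_deploy => 1,
   schema    => $schema,
 });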

AUTHOR

  Luke Saunders <luke@shadowcatsystems.co.uk>

  Initial development sponsored by and (c) Takkle, Inc. 2007

CONTRIBUTORS

  Ash Berlin <ash@shadowcatsystems.co.uk>

  Matt S. Trout <mst@shadowcatsystems.co.uk>

  John Napiorkowski <jjnapiork@cpan.org>

  Drew Taylor <taylor.andrew.j@gmail.com>

  Frank Switalski <fswitalski@gmail.com>

  Chris Akins <chris.hexx@gmail.com>

  Tom Bloor <t.bloor@shadowcat.co.uk>

  Samuel Kaufman <skaufman@cpan.org>

LICENSE

  This library is free software under the same license as Perl itself.