- $lsfSub = Phenyx::Utils::LSF::Submission->new([\%h])
- listNodes([model=>str] [,type=str])
- $lsfSub->directory($cat [, $val]);
- SEE ALSO
Handles everything related to LSF system submission (Linux only). Configuration data is read from properties (which can be deduced from Phenyx::Config::GlobalParam).
Conceptually, this module is able to
- deal with a local or remote LSF master (meaning that the LSF system is not required to run on your machine)
- pass arbitrary arguments to the bsub command
- pre/post-synchronize sub-directories
- synchronize files at regular intervals during execution between node0 and the LSF master (and back to your machine if the LSF master is remote)
- extract properties via a function pointer (see t/Phenyx/Submit/LSF.t)
- deal with more than one LSF submission in the same working directory
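The capabilities above can be pictured as a typical submission flow. The following is an illustrative sketch only: new() and directory() appear in the synopsis, but the property hash contents are taken from the example below, and no submit/wait call is shown because those method names are not confirmed here.

```perl
use Phenyx::Utils::LSF::Submission;

# construct a submission from a hash of properties
# (see the example property list below)
my $lsfSub = Phenyx::Utils::LSF::Submission->new({
  'lsf.active'     => 1,
  'lsf.queue.name' => 'priority',
});

# register a directory by category, as in the synopsis:
# $lsfSub->directory($cat [, $val])
$lsfSub->directory('working', '/tmp/myjob/working');
```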
#LSF activity
lsf.active=1
#LSF master host from outside
lsf.master.hostname=firstname.lastname@example.org
#LSF master host name from within the cluster
lsf.master.localHostname=frt
#lsf.master.shell=/bin/bash
lsf.queue.name=priority
lsf.mpich.nbnodes=3,4
lsf.mpich.wrapper=$LSF_BINDIR/pam -g 1 mpichp4_wrapper
lsf.extrasubcommand=-P phenyx
lsf.extrasubcommand+=-R "select[model=XeonEM64T34]"
lsf.sync.pre.node0.directories=working,tmp
lsf.sync.post.node0.directories=working
lsf.sync.continuous.node0.files=tmp:my-stderr.txt
lsf.refreshDelay=10
Returns whether LSF is to be launched (lsf.active property)
Returns true if the LSF system is usable on the LSF master (the bsub binary is available)
Returns whether mpich features are activated (mpich.active property)
Set a function to extract properties
Read a property (such as lsf.master.hostname)
Returns the LSF job id of the submitted job (undef means the job has not yet been submitted, or no id has been returned)
Returns a hash of available nodes [for a given processor type and/or model] within the served LSF system. Data is based on the LSF lshosts command.
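Given the synopsis signature listNodes([model=>str] [,type=str]), a call might look like the following sketch; the model value simply mirrors the one used in the example properties above, and iterating the returned hash by host name is an assumption about its layout:

```perl
# list nodes of a given model within the served LSF system
# (the module wraps the lshosts command for this)
my %nodes = $lsfSub->listNodes(model => 'XeonEM64T34');
foreach my $host (sort keys %nodes) {
  print "$host\n";
}
```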
Returns a list of available queues
Returns [or sets] a directory (e.g. $cat=working, tmp...)
If more than one LSF job is submitted from within a directory, we want to label them (to build exec and pre-exec scripts, stderr and stdout files, for example). Such a tag is read by parsing lsf.*.scripts files in the current working directory.
Returns the current scriptTag
Synchronize files (if any) at start time
Synchronize files (if any) continuously during execution (frequency based on phenyx.lsf.sync.continuous.delay property)
Synchronize files (if any) at finish time.
Builds the script file to feed to the LSF bsub command. Returns the script file name
Builds the script to be executed prior to launch (-E option) if any. Returns the script file name or undef if no command is to be executed.
Returns if any file is to be synchronized
Builds, locally, the script that will be executed to synchronize the files (lsf.sync.continuous... property) every refreshDelay seconds. This script has to be standalone and must quit whenever the job has finished.
Returns without doing anything if the script already exists or if no file is to be synchronized.
Builds a .sh file to call the file from buildSynchroScript with the correct arguments.
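Conceptually, the generated synchronization script follows a poll/sync/quit-on-finish loop. The Perl fragment below only illustrates that logic; job_is_finished() and sync_files() are hypothetical placeholders, not routines provided by this module:

```perl
# illustrative polling loop: copy the registered files back
# every $refreshDelay seconds until the job has finished
my $refreshDelay = 10;      # default of the refreshDelay property
until (job_is_finished()) { # placeholder: e.g. test the job's status
  sync_files();             # placeholder: e.g. copy tmp:my-stderr.txt back
  sleep $refreshDelay;
}
sync_files();               # one final sync once the job has ended
```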
Submits the currently set up LSF script to the system
Returns the .profile file (with aliases and shell functions) from the lib files
Waits until the job has finished. Returns 0 if OK
If a phenyx.lsf.master.hostname property exists, builds a command that runs all commands through ssh. If not, commands are just executed locally.
Prepend an ssh string if the lsf master is remote
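The remote/local dispatch described above can be sketched as follows; this is a minimal illustration, and getProperty() is a hypothetical accessor standing in for however the module actually reads the phenyx.lsf.master.hostname property:

```perl
# prepend "ssh <master>" when the LSF master is remote,
# otherwise return the command unchanged for local execution
sub wrap_command {
  my ($self, $cmd) = @_;
  my $master = $self->getProperty('phenyx.lsf.master.hostname'); # hypothetical
  return defined $master ? "ssh $master $cmd" : $cmd;
}
```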
#lsf active (no effective lsf submission if not active)
lsf.active=0|1
#lsf master hostname, to be contacted from outside the cluster
#nothing means lsf master is the localhost
lsf.master.hostname=host
#lsf master hostname, contacted from within the cluster
lsf.master.localHostname=host
#queue to submit to (bsub -q argument)
lsf.queue.name=name
#/bin/bash by default (bsub -L argument)
lsf.master.shell=shell path
#nb nodes for mpich submission (bsub -n arg)
lsf.mpich.nbnodes=n[,m]
#mpich wrapper command
lsf.mpich.wrapper=string
#extra param to be passed to bsub
lsf.extrasubcommand=multiline string
#registered directories to be synchronized between master and node0 before the job starts
lsf.sync.pre.node0.directories=dir1[,dir2]
#after the job has finished
lsf.sync.post.node0.directories=dir1[,dir2[...]]
#files to be kept updated back on the master every refreshDelay seconds
lsf.sync.continuous.node0.files=dir1:file1[,dir2:file2[...]]
lsf.sync.refreshDelay=int [10 default]
Copyright (C) 2004-2005 Geneva Bioinformatics www.genebio.com
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Alexandre Masselot, www.genebio.com