Data::Table::Text - Write data in tabular text format.
use Data::Table::Text;
# Print a table:
my $d = [[qq(a), qq(b\nbb), qq(c\ncc\nccc\n)],
         [qq(1), qq(1\n22), qq(1\n22\n333\n)]];

my $t = formatTable($d, [qw(A BB CCC)]);

ok $t eq <<END;
   A  BB  CCC
1  a  b   c
      bb  cc
          ccc
2  1   1    1
      22   22
          333
END
# Print a table containing tables and make it into a report:
my $D = [[qq(See the\ntable\nopposite), $t],
         [qq(Or\nthis\none),            $t]];

my $T = formatTable($D, [qw(Description Table)], head=><<END);
Table of Tables.

Table has NNNN rows each of which contains a table.
END

ok $T eq <<END;
Table of Tables.

Table has 2 rows each of which contains a table.

   Description  Table
1  See the         A  BB  CCC
   table        1  a  b   c
   opposite           bb  cc
                          ccc
                2  1   1    1
                      22   22
                          333
2  Or              A  BB  CCC
   this         1  a  b   c
   one                bb  cc
                          ccc
                2  1   1    1
                      22   22
                          333
END
# Print an array of arrays:
my $aa = formatTable
 ([[qw(A   B   C  )],
   [qw(AA  BB  CC )],
   [qw(AAA BBB CCC)],
   [qw(1   22  333)]],
  [qw(aa bb cc)]);

ok $aa eq <<END;
   aa   bb   cc
1  A    B    C
2  AA   BB   CC
3  AAA  BBB  CCC
4    1   22  333
END
# Print an array of hashes:
my $ah = formatTable
 ([{aa=>"A",   bb=>"B",   cc=>"C"  },
   {aa=>"AA",  bb=>"BB",  cc=>"CC" },
   {aa=>"AAA", bb=>"BBB", cc=>"CCC"},
   {aa=>1,     bb=>22,    cc=>333  }]);

ok $ah eq <<END;
   aa   bb   cc
1  A    B    C
2  AA   BB   CC
3  AAA  BBB  CCC
4    1   22  333
END
# Print a hash of arrays:
my $ha = formatTable
 ({""     => ["aa",  "bb",  "cc" ],
   "1"    => ["A",   "B",   "C"  ],
   "22"   => ["AA",  "BB",  "CC" ],
   "333"  => ["AAA", "BBB", "CCC"],
   "4444" => [1,     22,    333  ]},
  [qw(Key A B C)]);

ok $ha eq <<END;
Key   A    B    C
      aa   bb   cc
   1  A    B    C
  22  AA   BB   CC
 333  AAA  BBB  CCC
4444    1   22  333
END
# Print a hash of hashes:
my $hh = formatTable
 ({a    => {aa=>"A",   bb=>"B",   cc=>"C"  },
   aa   => {aa=>"AA",  bb=>"BB",  cc=>"CC" },
   aaa  => {aa=>"AAA", bb=>"BBB", cc=>"CCC"},
   aaaa => {aa=>1,     bb=>22,    cc=>333  }});

ok $hh eq <<END;
      aa   bb   cc
a     A    B    C
aa    AA   BB   CC
aaa   AAA  BBB  CCC
aaaa    1   22  333
END
# Print an array of scalars:
my $a = formatTable(["a", "bb", "ccc", 4], [q(#), q(Col)]);

ok $a eq <<END;
#  Col
0  a
1  bb
2  ccc
3    4
END
# Print a hash of scalars:
my $h = formatTable({aa=>"AAAA", bb=>"BBBB", cc=>"333"}, [qw(Key Title)]);

ok $h eq <<END;
Key  Title
aa   AAAA
bb   BBBB
cc     333
END
Write data in tabular text format.
Version 20200120.
The following sections describe the methods in each functional area of this module. For an alphabetic listing of all methods by name see Index.
These methods are the ones most likely to be of immediate use to anyone using this module for the first time:
absFromAbsPlusRel($a, $r)
Absolute file from an absolute file $a plus a relative file $r. In the event that the relative file $r is, in fact, an absolute file then it is returned as the result.
awsParallelProcessFiles($userData, $parallel, $results, $files, %options)
Process files in parallel across multiple Amazon Web Services instances if available or in series if not. The data located by $userData is transferred from the primary instance, as determined by awsParallelPrimaryInstanceId, to all the secondary instances. $parallel contains a reference to a sub, parameterized by array @_ = (a copy of the user data, the name of the file to process), which will be executed upon each session instance including the primary instance to update $userData. $results contains a reference to a sub, parameterized by array @_ = (the user data, an array of results returned by each execution of $parallel), that will be called on the primary instance to process the results folders from each instance once their results folders have been copied back and merged into the results folder of the primary instance. $results should update its copy of $userData with the information received from each instance. $files is a reference to an array of the files to be processed: each file will be copied from the primary instance to each of the secondary instances before parallel processing starts. %options contains any parameters needed to interact with EC2 via the Amazon Web Services Command Line Interface. The returned result is that returned by sub $results.
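The calling pattern can be sketched as follows. This is an illustrative sketch only: the file names and the counting subs are assumptions made for the example, not part of the module, and the AWS %options are omitted. The sub signatures follow the description above.

```perl
use Data::Table::Text qw(:all);

my $userData = {count => 0};                 # Data copied to each instance

my $r = awsParallelProcessFiles
 ($userData,
  sub                                        # Run on each instance for each file
   {my ($data, $file) = @_;
    $data->{count}++;                        # Update the local copy of the user data
   },
  sub                                        # Run on the primary instance at the end
   {my ($data, @results) = @_;
    $data->{count} += $_->{count} for @results;
    $data                                    # Returned to the caller
   },
  [qw(a.txt b.txt c.txt)]);                  # Files to process - illustrative names
```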
clearFolder($folder, $limitCount, $noMsg)
Remove all the files and folders under and including the specified $folder as long as the number of files to be removed is less than the specified $limitCount. Sometimes the folder can be emptied but not removed - perhaps because it is a link; in this case a message is produced unless suppressed by the optional $noMsg parameter.
dateTimeStamp
Year-monthNumber-day at hours:minutes:seconds.
execPerlOnRemote($code, $ip)
Execute some Perl $code on the server whose ip address is specified by $ip or returned by awsIp.
filePathExt(@File)
Create a file name from a list of names, the last of which is assumed to be the extension of the file name. Identical to fpe.
fn($file)
Remove the path and extension from a file name.
formatTable($data, $columnTitles, @options)
Format various $data structures as a table with titles as specified by $columnTitles: either a reference to an array of column titles or a string each line of which contains the column title as the first word with the rest of the line describing that column.
Optionally create a report from the table using the report %options described in formatTableCheckKeys
genHash($bless, %attributes)
Return a $blessed hash with the specified $attributes accessible via lvalue method calls. updateDocumentation will generate documentation at "Hash Definitions" for the hash defined by the call to genHash if the call is laid out as in the example below.
readFile($file)
Return the content of a file residing on the local machine interpreting the content of the file as utf8.
readFileFromRemote($file, $ip)
Copy and read a $file from the remote machine whose ip address is specified by $ip or returned by awsIp and return the content of $file interpreted as utf8.
relFromAbsAgainstAbs($a, $b)
Relative file from one absolute file $a against another $b.
runInParallel($maximumNumberOfProcesses, $parallel, $results, @array)
Process the elements of an array in parallel using a maximum of $maximumNumberOfProcesses processes. sub &$parallel is forked to process each array element in parallel. The results returned by the forked copies of &$parallel are presented as a single array to sub &$results which is run in series. @array contains the elements to be processed. Returns the result returned by &$results.
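As a minimal sketch of the calling pattern described above (the squaring and summing subs are illustrative, not part of the module):

```perl
use Data::Table::Text qw(:all);
use List::Util qw(sum);

# Square each element in up to 4 forked child processes,
# then sum the squares in series in the parent.
my $total = runInParallel
 (4,
  sub {my ($n) = @_; $n * $n},               # Forked once per array element
  sub {sum(@_) // 0},                        # Run in series over all results
  1..10);

# $total should be 385, the sum of the first ten squares,
# assuming runInParallel behaves as documented.
```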
searchDirectoryTreesForMatchingFiles(@FoldersandExtensions)
Search the specified directory trees for the files (not folders) that match the specified extensions. The argument list should include at least one path name to be useful. If no file extensions are supplied then all the files below the specified paths are returned. Arguments wrapped in [] will be unwrapped.
writeFile($file, $string)
Write to a new $file, after creating a path to the $file with makePath if necessary, a $string of Unicode content encoded as utf8. Return the name of the $file on success else confess if the file already exists or any other error occurs.
writeFileToRemote($file, $string, $ip)
Write to a new $file, after creating a path to the file with makePath if necessary, a $string of Unicode content encoded as utf8 then copy the $file to the remote server whose ip address is specified by $ip or returned by awsIp. Return the name of the $file on success else confess if the file already exists or any other error occurs.
xxxr($cmd, $ip)
Execute a command $cmd via bash on the server whose ip address is specified by $ip or returned by awsIp. The command will be run using the userid listed in .ssh/config.
Date and timestamps as used in logs of long running commands.
Example:
ok 𝗱𝗮𝘁𝗲𝗧𝗶𝗺𝗲𝗦𝘁𝗮𝗺𝗽 =~ m(\A\d{4}-\d\d-\d\d at \d\d:\d\d:\d\d\Z), q(dts);
Date time stamp without white space.
ok 𝗱𝗮𝘁𝗲𝗧𝗶𝗺𝗲𝗦𝘁𝗮𝗺𝗽𝗡𝗮𝗺𝗲 =~ m(\A_on_\d{4}_\d\d_\d\d_at_\d\d_\d\d_\d\d\Z);
Year-monthName-day
ok 𝗱𝗮𝘁𝗲𝗦𝘁𝗮𝗺𝗽 =~ m(\A\d{4}-\w{3}-\d\d\Z);
YYYYmmdd-HHMMSS
ok 𝘃𝗲𝗿𝘀𝗶𝗼𝗻𝗖𝗼𝗱𝗲 =~ m(\A\d{8}-\d{6}\Z);
YYYY-mm-dd-HH:MM:SS
ok 𝘃𝗲𝗿𝘀𝗶𝗼𝗻𝗖𝗼𝗱𝗲𝗗𝗮𝘀𝗵𝗲𝗱 =~ m(\A\d{4}-\d\d-\d\d-\d\d:\d\d:\d\d\Z);
hours:minutes:seconds
ok 𝘁𝗶𝗺𝗲𝗦𝘁𝗮𝗺𝗽 =~ m(\A\d\d:\d\d:\d\d\Z);
Micro seconds since unix epoch.
ok 𝗺𝗶𝗰𝗿𝗼𝗦𝗲𝗰𝗼𝗻𝗱𝘀𝗦𝗶𝗻𝗰𝗲𝗘𝗽𝗼𝗰𝗵 > 47*365*24*60*60*1e6;
Various ways of processing commands and writing results.
Log debug messages with a time stamp and originating file and line number.
Parameter Description 1 @messages Messages
𝗱𝗱𝗱 "Hello";
Confess a message with a line position and a file that Geany will jump to if clicked on.
   Parameter  Description
1  $line      Line
2  $file      File
3  @m         Messages
𝗳𝗳𝗳 __LINE__, __FILE__, "Hello world";
Log messages with a time stamp and originating file and line number.
𝗹𝗹𝗹 "Hello world";
Log messages with a differential time in milliseconds and originating file and line number.
𝗺𝗺𝗺 "Hello world";
Execute a shell command, optionally checking its response. The command to execute is specified as one or more strings which are joined together after removing any new lines. Optionally the last string can be a regular expression that is used to test any non blank output generated by the execution of the command: if the regular expression fails to match, the command and its output are printed, otherwise both are suppressed as uninteresting. If no such regular expression is supplied then the command and its non blank output lines are always printed.
Parameter Description 1 @cmd Command to execute followed by an optional regular expression to test the results
ok 𝘅𝘅𝘅("echo aaa") =~ /aaa/;
Parameter Description 1 $cmd Command string 2 $ip Optional ip address
if (0) {ok 𝘅𝘅𝘅𝗿 q(pwd); }
Execute a block of shell commands line by line after removing comments - stop if there is a non zero return code from any command.
Parameter Description 1 $cmd Commands to execute separated by new lines
ok !𝘆𝘆𝘆 <<END;
echo aaa
echo bbb
END
Execute lines of commands after replacing new lines with && then check that the pipeline execution results in a return code of zero and that the execution results match the optional regular expression if one has been supplied; confess if either check fails. To execute remotely, add "ssh ... 'echo start" as the first line and "echo end'" as the last line with the commands to be executed on the lines in between.
   Parameter    Description
1  $cmd         Commands to execute - one per line with no trailing &&
2  $success     Optional regular expression to check for acceptable results
3  $returnCode  Optional regular expression to check the acceptable return codes
4  $message     Message of explanation if any of the checks fail
ok 𝘇𝘇𝘇(<<END, qr(aaa\s*bbb)s);
echo aaa
echo bbb
END
Parameter Description 1 $code Code to execute 2 $ip Optional ip address
ok 𝗲𝘅𝗲𝗰𝗣𝗲𝗿𝗹𝗢𝗻𝗥𝗲𝗺𝗼𝘁𝗲(<<'END') =~ m(Hello from: t2.micro)i;
#!/usr/bin/perl -I/home/phil/perl/cpan/DataTableText/lib/
use Data::Table::Text qw(:all);

say STDERR "Hello from: ", awsCurrentInstanceType;
END
Call the specified $sub after classifying the specified array of words in $args into positional and keyword parameters. $sub([$positional], {keyword=>value}) will be called with a reference to an array of positional parameters followed by a reference to a hash of keywords and their values. The value returned by $sub will be returned to the caller. The keyword names will be validated if $valid is either a reference to an array of valid keyword names or a hash of {valid keyword name => textual description}. Confess with a table of valid keyword definitions if $valid is specified and an invalid keyword argument is presented.
   Parameter  Description
1  $sub       Sub to call
2  $args      List of arguments to parse
3  $valid     Optional list of valid parameters else all parameters will be accepted
my $r = 𝗽𝗮𝗿𝘀𝗲𝗖𝗼𝗺𝗺𝗮𝗻𝗱𝗟𝗶𝗻𝗲𝗔𝗿𝗴𝘂𝗺𝗲𝗻𝘁𝘀 {[@_]}
 [qw(aaa bbb -c --dd --eee=EEEE -f=F), q(--gg=g g), q(--hh=h h)];

is_deeply $r,
 [["aaa", "bbb"],
  {c=>undef, dd=>undef, eee=>"EEEE", f=>"F", gg=>"g g", hh=>"h h"},
 ];
Call the specified $sub in a separate child process, wait for it to complete, then copy back the named @our variables from the child process to the calling parent process effectively freeing any memory used during the call.
Parameter Description 1 $sub Sub to call 2 @our Names of our variable names with preceding sigils to copy back
our $a = q(1);
our @a = qw(1);
our %a = (a=>1);
our $b = q(1);

for(2..4)
 {𝗰𝗮𝗹𝗹 {$a = $_ x 1e3; $a[0] = $_ x 1e2; $a{a} = $_ x 1e1; $b = 2;} qw($a @a %a);
  ok $a    == $_ x 1e3;
  ok $a[0] == $_ x 1e2;
  ok $a{a} == $_ x 1e1;
  ok $b    == 1;
 }
Operations on files and paths.
Information about each file.
Get the size of a $file in bytes.
Parameter Description 1 $file File name
my $f = writeFile("zzz.data", "aaa");

ok 𝗳𝗶𝗹𝗲𝗦𝗶𝘇𝗲($f) == 3;
Return the largest $file.
Parameter Description 1 @files File names
my $d = temporaryFolder;
my @f = map {owf(fpe($d, $_, q(txt)), 'X' x ($_ ** 2 % 11))} 1..9;

my $f = 𝗳𝗶𝗹𝗲𝗟𝗮𝗿𝗴𝗲𝘀𝘁𝗦𝗶𝘇𝗲(@f);
ok fn($f) eq '3', 'aaa';

my $b = folderSize($d);
ok $b > 0, 'bbb';

my $c = processFilesInParallel(
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  (@f) x 12);
ok 108 == $c, 'cc11';

my $C = processSizesInParallel
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  map {[fileSize($_), $_]} (@f) x 12;
ok 108 == $C, 'cc2';

my $J = processJavaFilesInParallel
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  (@f) x 12;
ok 108 == $J, 'cc3';

clearFolder($d, 12);
Get the size of a $folder in bytes.
Parameter Description 1 $folder Folder name
my $d = temporaryFolder;
my @f = map {owf(fpe($d, $_, q(txt)), 'X' x ($_ ** 2 % 11))} 1..9;

my $f = fileLargestSize(@f);
ok fn($f) eq '3', 'aaa';

my $b = 𝗳𝗼𝗹𝗱𝗲𝗿𝗦𝗶𝘇𝗲($d);
ok $b > 0, 'bbb';

my $c = processFilesInParallel(
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  (@f) x 12);
ok 108 == $c, 'cc11';

my $C = processSizesInParallel
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  map {[fileSize($_), $_]} (@f) x 12;
ok 108 == $C, 'cc2';

my $J = processJavaFilesInParallel
  sub {my ($file) = @_; [&fileSize($file), $file]},
  sub {scalar @_},
  (@f) x 12;
ok 108 == $J, 'cc3';

clearFolder($d, 12);
Get the Md5 sum of the content of a $file.
Parameter Description 1 $file File or string
𝗳𝗶𝗹𝗲𝗠𝗱𝟱𝗦𝘂𝗺(q(/etc/hosts));

my $s = join '', 1..100;
my $m = q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum($s) eq $m;

my $f = writeFile(undef, $s);
ok 𝗳𝗶𝗹𝗲𝗠𝗱𝟱𝗦𝘂𝗺($f) eq $m;
unlink $f;

ok guidFromString(join '', 1..100)
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok guidFromMd5(stringMd5Sum(join('', 1..100)))
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok md5FromGuid(q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de))
   eq q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum(q(𝝰 𝝱 𝝲)) eq q(3c2b7c31b1011998bd7e1f66fb7c024d);
Create a guid from an md5 hash.
Parameter Description 1 $m Md5 hash
my $s = join '', 1..100;
my $m = q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum($s) eq $m;

my $f = writeFile(undef, $s);
ok fileMd5Sum($f) eq $m;
unlink $f;

ok guidFromString(join '', 1..100)
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok 𝗴𝘂𝗶𝗱𝗙𝗿𝗼𝗺𝗠𝗱𝟱(stringMd5Sum(join('', 1..100)))
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok md5FromGuid(q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de))
   eq q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum(q(𝝰 𝝱 𝝲)) eq q(3c2b7c31b1011998bd7e1f66fb7c024d);
Recover an md5 sum from a guid.
Parameter Description 1 $G Guid
my $s = join '', 1..100;
my $m = q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum($s) eq $m;

my $f = writeFile(undef, $s);
ok fileMd5Sum($f) eq $m;
unlink $f;

ok guidFromString(join '', 1..100)
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok guidFromMd5(stringMd5Sum(join('', 1..100)))
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok 𝗺𝗱𝟱𝗙𝗿𝗼𝗺𝗚𝘂𝗶𝗱(q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de))
   eq q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum(q(𝝰 𝝱 𝝲)) eq q(3c2b7c31b1011998bd7e1f66fb7c024d);
Create a guid representation of the md5 sum of the content of a string.
Parameter Description 1 $string String
my $s = join '', 1..100;
my $m = q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum($s) eq $m;

my $f = writeFile(undef, $s);
ok fileMd5Sum($f) eq $m;
unlink $f;

ok 𝗴𝘂𝗶𝗱𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝗶𝗻𝗴(join '', 1..100)
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok guidFromMd5(stringMd5Sum(join('', 1..100)))
   eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de);

ok md5FromGuid(q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de))
   eq q(ef69caaaeea9c17120821a9eb6c7f1de);

ok stringMd5Sum(q(𝝰 𝝱 𝝲)) eq q(3c2b7c31b1011998bd7e1f66fb7c024d);
Get the modified time of a $file as seconds since the epoch.
ok 𝗳𝗶𝗹𝗲𝗠𝗼𝗱𝗧𝗶𝗺𝗲($0) =~ m(\A\d+\Z)s;
Calls the specified sub $make for each source file that is missing and then again against the $target file if any of the @source files were missing or the $target file is older than any of the @source files or if the target does not exist. The file name is passed to the sub each time in $_. Returns the files to be remade in the order they should be made.
   Parameter  Description
1  $make      Make with this sub
2  $target    Target file
3  @source    Source files
my @Files = qw(a b c);
my @files = (@Files, qw(d));

writeFile($_, $_), sleep 1 for @Files;

my $a = '';
my @a = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {$a .= $_} q(a), @files;
ok $a eq 'da';
is_deeply [@a], [qw(d a)];

my $b = '';
my @b = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {$b .= $_} q(b), @files;
ok $b eq 'db';
is_deeply [@b], [qw(d b)];

my $c = '';
my @c = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {$c .= $_} q(c), @files;
ok $c eq 'dc';
is_deeply [@c], [qw(d c)];

my $d = '';
my @d = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {$d .= $_} q(d), @files;
ok $d eq 'd';
is_deeply [@d], [qw(d)];

my @A = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {} q(a), @Files;
my @B = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {} q(b), @Files;
my @C = 𝗳𝗶𝗹𝗲𝗢𝘂𝘁𝗢𝗳𝗗𝗮𝘁𝗲 {} q(c), @Files;
is_deeply [@A], [qw(a)];
is_deeply [@B], [qw(b)];
is_deeply [@C], [];

unlink for @Files;
Returns the name of the first file from @files that exists or undef if none of the named @files exist.
Parameter Description 1 @files Files to check
my $d = temporaryFolder;
ok $d eq 𝗳𝗶𝗿𝘀𝘁𝗙𝗶𝗹𝗲𝗧𝗵𝗮𝘁𝗘𝘅𝗶𝘀𝘁𝘀("$d/$d", $d);
Convert a Unix $file name to Windows format.
Parameter Description 1 $file File
ok 𝗳𝗶𝗹𝗲𝗜𝗻𝗪𝗶𝗻𝗱𝗼𝘄𝘀𝗙𝗼𝗿𝗺𝗮𝘁(fpd(qw(/a b c d))) eq q(\a\b\c\d\\);
File names and components.
Create file names from file name components.
Create a file name from a list of names. Identical to fpf.
Parameter Description 1 @file File name components
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵  (qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok filePathDir(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok filePathDir('', qw(aaa))              eq "aaa/";
ok filePathDir('')                       eq "";
ok filePathExt(qw(aaa xxx))              eq "aaa.xxx";
ok filePathExt(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";

ok fpd(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok fpf(qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok fpe(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";
fpf is a synonym for filePath.
Create a folder name from a list of names. Identical to fpd.
Parameter Description 1 @file Directory name components
ok filePath   (qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝗗𝗶𝗿(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝗗𝗶𝗿('', qw(aaa))              eq "aaa/";
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝗗𝗶𝗿('')                       eq "";
ok filePathExt(qw(aaa xxx))              eq "aaa.xxx";
ok filePathExt(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";

ok fpd(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok fpf(qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok fpe(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";
fpd is a synonym for filePathDir.
Parameter Description 1 @File File name components and extension
ok filePath   (qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok filePathDir(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok filePathDir('', qw(aaa))              eq "aaa/";
ok filePathDir('')                       eq "";
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝗘𝘅𝘁(qw(aaa xxx))              eq "aaa.xxx";
ok 𝗳𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝗘𝘅𝘁(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";

ok fpd(qw(/aaa bbb ccc ddd))     eq "/aaa/bbb/ccc/ddd/";
ok fpf(qw(/aaa bbb ccc ddd.eee)) eq "/aaa/bbb/ccc/ddd.eee";
ok fpe(qw(aaa bbb xxx))          eq "aaa/bbb.xxx";
fpe is a synonym for filePathExt.
Get file name components from a file name.
Get the path from a file name.
ok 𝗳𝗽 (q(a/b/c.d.e)) eq q(a/b/);
Remove the extension from a file name.
ok 𝗳𝗽𝗻(q(a/b/c.d.e)) eq q(a/b/c.d);
ok 𝗳𝗻 (q(a/b/c.d.e)) eq q(c.d);
Remove the path from a file name.
ok 𝗳𝗻𝗲(q(a/b/c.d.e)) eq q(c.d.e);
Get the extension of a file name.
ok 𝗳𝗲 (q(a/b/c.d.e)) eq q(e);
Return the name of the specified file if it exists, else confess reporting the maximum extent of the path that does exist.
Parameter Description 1 $file File to check
my $d = filePath   (my @d = qw(a b c d));
my $f = filePathExt(qw(a b c d e x));
my $F = filePathExt(qw(a b c e d));

createEmptyFile($f);

ok eval{𝗰𝗵𝗲𝗰𝗸𝗙𝗶𝗹𝗲($d)};
ok eval{𝗰𝗵𝗲𝗰𝗸𝗙𝗶𝗹𝗲($f)};
Quote a file name.
ok 𝗾𝘂𝗼𝘁𝗲𝗙𝗶𝗹𝗲(fpe(qw(a "b" c))) eq q("a/\"b\".c");
Removes a file $prefix from an array of @files.
Parameter Description 1 $prefix File prefix 2 @files Array of file names
is_deeply [qw(a b)], [&𝗿𝗲𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗣𝗿𝗲𝗳𝗶𝘅(qw(a/ a/a a/b))];
is_deeply [qw(b)],   [&𝗿𝗲𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗣𝗿𝗲𝗳𝗶𝘅("a/", "a/b")];
Swaps the start of a $file name from a $known name to a $new one if the file does in fact start with the $known name otherwise returns the original file name as it is. If the optional $new prefix is omitted then the $known prefix is removed from the $file name.
   Parameter  Description
1  $file      File name
2  $known     Existing prefix
3  $new       Optional new prefix defaults to q()
ok 𝘀𝘄𝗮𝗽𝗙𝗶𝗹𝗲𝗣𝗿𝗲𝗳𝗶𝘅(q(/aaa/bbb.txt), q(/aaa/), q(/AAA/)) eq q(/AAA/bbb.txt);
Given a $file, change its extension to $extension. Removes the extension if no $extension is specified.
Parameter Description 1 $file File name 2 $extension Optional new extension
ok 𝘀𝗲𝘁𝗙𝗶𝗹𝗲𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻(q(.c),    q(d)) eq q(.d);
ok 𝘀𝗲𝘁𝗙𝗶𝗹𝗲𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻(q(b.c),   q(d)) eq q(b.d);
ok 𝘀𝗲𝘁𝗙𝗶𝗹𝗲𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻(q(/a/b.c), q(d)) eq q(/a/b.d);
Given a $file, swap the folder name of the $file from $known to $new if the file $file starts with the $known folder name else return the $file as it is.
   Parameter  Description
1  $file      File name
2  $known     Existing prefix
3  $new       New prefix
my $g = fpd(qw(a b c d));
my $h = fpd(qw(a b cc dd));
my $i = fpe($g, qw(aaa txt));

my $j = 𝘀𝘄𝗮𝗽𝗙𝗼𝗹𝗱𝗲𝗿𝗣𝗿𝗲𝗳𝗶𝘅($i, $g, $h);
ok $j =~ m(a/b/cc/dd/)s;
Check whether a $file name is fully qualified or not and, optionally, whether it is fully qualified with a specified $prefix or not.
Parameter Description 1 $file File name to test 2 $prefix File name prefix
ok  𝗳𝘂𝗹𝗹𝘆𝗤𝘂𝗮𝗹𝗶𝗳𝗶𝗲𝗱𝗙𝗶𝗹𝗲(q(/a/b/c.d));
ok  𝗳𝘂𝗹𝗹𝘆𝗤𝘂𝗮𝗹𝗶𝗳𝗶𝗲𝗱𝗙𝗶𝗹𝗲(q(/a/b/c.d), q(/a/b));
ok !𝗳𝘂𝗹𝗹𝘆𝗤𝘂𝗮𝗹𝗶𝗳𝗶𝗲𝗱𝗙𝗶𝗹𝗲(q(/a/b/c.d), q(/a/c));
ok !𝗳𝘂𝗹𝗹𝘆𝗤𝘂𝗮𝗹𝗶𝗳𝗶𝗲𝗱𝗙𝗶𝗹𝗲(q(c.d));
Return the fully qualified name of a file.
if (0) {ok 𝗳𝘂𝗹𝗹𝘆𝗤𝘂𝗮𝗹𝗶𝗳𝘆𝗙𝗶𝗹𝗲(q(perl/cpan)) eq q(/home/phil/perl/cpan/); }
Remove duplicated leading directory names from a file name.
ok q(a/b.c) eq 𝗿𝗲𝗺𝗼𝘃𝗲𝗗𝘂𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝗣𝗿𝗲𝗳𝗶𝘅𝗲𝘀("a/a/b.c");
ok q(a/b.c) eq 𝗿𝗲𝗺𝗼𝘃𝗲𝗗𝘂𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝗣𝗿𝗲𝗳𝗶𝘅𝗲𝘀("a/b.c");
ok q(b.c)   eq 𝗿𝗲𝗺𝗼𝘃𝗲𝗗𝘂𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝗣𝗿𝗲𝗳𝗶𝘅𝗲𝘀("b.c");
The name of the folder containing a file.
ok 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴𝗙𝗼𝗹𝗱𝗲𝗿𝗡𝗮𝗺𝗲(q(/a/b/c.d)) eq q(b);
Position in the file system.
Get the current working directory.
𝗰𝘂𝗿𝗿𝗲𝗻𝘁𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆;
Get the path to the folder above the current working folder.
𝗰𝘂𝗿𝗿𝗲𝗻𝘁𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆𝗔𝗯𝗼𝘃𝗲;
Parse a file name into (path, name, extension) considering .. to be always part of the path and using undef to mark missing components. This differs from (fp, fn, fe) which return q() for missing components and do not interpret . or .. as anything special.
Parameter Description 1 $file File name to parse
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "/home/phil/test.data"], ["/home/phil/", "test", "data"];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "/home/phil/test"],      ["/home/phil/", "test"];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "phil/test.data"],       ["phil/", "test", "data"];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "phil/test"],            ["phil/", "test"];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "test.data"],            [undef, "test", "data"];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "phil/"],                [qw(phil/)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "/phil"],                [qw(/ phil)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "/"],                    [qw(/)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "/var/www/html/translations/"], [qw(/var/www/html/translations/)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "a.b/c.d.e"],            [qw(a.b/ c.d e)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "./a.b"],                [qw(./ a b)];
is_deeply [𝗽𝗮𝗿𝘀𝗲𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲 "./../../a.b"],          [qw(./../../ a b)];
Full name of a file.
𝗳𝘂𝗹𝗹𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲(fpe(qw(a txt)));
Parameter Description 1 $a Absolute file to be made relative 2 $b Against this absolute file.
ok "bbb.pl"         eq 𝗿𝗲𝗹𝗙𝗿𝗼𝗺𝗔𝗯𝘀𝗔𝗴𝗮𝗶𝗻𝘀𝘁𝗔𝗯𝘀("/home/la/perl/bbb.pl", "/home/la/perl/aaa.pl");
ok "../perl/bbb.pl" eq 𝗿𝗲𝗹𝗙𝗿𝗼𝗺𝗔𝗯𝘀𝗔𝗴𝗮𝗶𝗻𝘀𝘁𝗔𝗯𝘀("/home/la/perl/bbb.pl", "/home/la/java/aaa.jv");
Parameter Description 1 $a Absolute file 2 $r Relative file
ok "/home/la/perl/aaa.pl" eq 𝗮𝗯𝘀𝗙𝗿𝗼𝗺𝗔𝗯𝘀𝗣𝗹𝘂𝘀𝗥𝗲𝗹("/home/la/perl/bbb", "aaa.pl");
ok "/home/la/perl/aaa.pl" eq 𝗮𝗯𝘀𝗙𝗿𝗼𝗺𝗔𝗯𝘀𝗣𝗹𝘂𝘀𝗥𝗲𝗹("/home/il/perl/bbb.pl", "../../la/perl/aaa.pl");
Return the name of the given file if it is a fully qualified file name, else return undef. See: fullyQualifiedFile to check the initial prefix of the file name as well.
Parameter Description 1 $file File to test
ok "/aaa/" eq 𝗮𝗯𝘀𝗙𝗶𝗹𝗲(qw(/aaa/));
Combine zero or more absolute and relative names of @files starting at the current working folder to get an absolute file name.
Parameter Description 1 @files Absolute and relative file names
ok "/aaa/bbb/ccc/ddd.txt" eq 𝘀𝘂𝗺𝗔𝗯𝘀𝗔𝗻𝗱𝗥𝗲𝗹(qw(/aaa/AAA/ ../bbb/bbb/BBB/ ../../ccc/ddd.txt));
Temporary files and folders.
Create a new, empty, temporary file.
my $d = fpd(my $D = temporaryDirectory, qw(a));
my $f = fpe($d, qw(bbb txt));
ok !-d $d;

eval q{checkFile($f)};
my $r = $@;
my $q = quotemeta($D);
ok nws($r) =~ m(Can only find.+?: $q)s;

makePath($f);
ok -d $d;
ok -d $D;
rmdir $_ for $d, $D;

my $e = temporaryFolder;                 # Same as temporaryDirectory
ok -d $e;
clearFolder($e, 2);

my $t = 𝘁𝗲𝗺𝗽𝗼𝗿𝗮𝗿𝘆𝗙𝗶𝗹𝗲;
ok  -f $t;
unlink $t;
ok !-f $t;

if (0)
 {makePathRemote($e);                    # Make a path on the remote system
 }
Create a new, empty, temporary folder.
my $D = 𝘁𝗲𝗺𝗽𝗼𝗿𝗮𝗿𝘆𝗙𝗼𝗹𝗱𝗲𝗿;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [findDirs($D)], [$D, $d];
is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = fileList(qq($D/*/*.txt));
ok @g == 3;

clearFolder($D, 5);
ok !-e $_ for @f;
ok !-d $D;

my $d = fpd(my $D = temporaryDirectory, qw(a));
my $f = fpe($d, qw(bbb txt));
ok !-d $d;

eval q{checkFile($f)};
my $r = $@;
my $q = quotemeta($D);
ok nws($r) =~ m(Can only find.+?: $q)s;

makePath($f);
ok -d $d;
ok -d $D;
rmdir $_ for $d, $D;

my $e = 𝘁𝗲𝗺𝗽𝗼𝗿𝗮𝗿𝘆𝗙𝗼𝗹𝗱𝗲𝗿;                 # Same as temporaryDirectory
ok -d $e;
clearFolder($e, 2);

my $t = temporaryFile;
ok  -f $t;
unlink $t;
ok !-f $t;

if (0)
 {makePathRemote($e);                    # Make a path on the remote system
 }
temporaryDirectory is a synonym for temporaryFolder.
Find files and folders below a folder.
Find all the files under a $folder and optionally $filter the selected files with a regular expression.
Parameter Description 1 $folder Folder to start the search with 2 $filter Optional regular expression to filter files
my $D = temporaryFolder;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} 𝗳𝗶𝗻𝗱𝗙𝗶𝗹𝗲𝘀($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [findDirs($D)], [$D, $d];
is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = fileList(qq($D/*/*.txt));
ok @g == 3;

clearFolder($D, 5);
ok !-e $_ for @f;
ok !-d $D;
Find all the folders under a $folder and optionally $filter the selected folders with a regular expression.
my $D = temporaryFolder;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [𝗳𝗶𝗻𝗱𝗗𝗶𝗿𝘀($D)], [$D, $d];
is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = fileList(qq($D/*/*.txt));
ok @g == 3;

clearFolder($D, 5);
ok !-e $_ for @f;
ok !-d $D;
Files that match a given search pattern interpreted by "bsd_glob" in perlfunc.
Parameter Description 1 $pattern Search pattern
my $D = temporaryFolder;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [findDirs($D)], [$D, $d];
is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} 𝗳𝗶𝗹𝗲𝗟𝗶𝘀𝘁("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = 𝗳𝗶𝗹𝗲𝗟𝗶𝘀𝘁(qq($D/*/*.txt));
ok @g == 3;

clearFolder($D, 5);
ok !-e $_ for @f;
ok !-d $D;
Parameter Description 1 @FoldersandExtensions Mixture of folder names and extensions
my $D = temporaryFolder;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [findDirs($D)], [$D, $d];
is_deeply [sort map {fne $_} 𝘀𝗲𝗮𝗿𝗰𝗵𝗗𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆𝗧𝗿𝗲𝗲𝘀𝗙𝗼𝗿𝗠𝗮𝘁𝗰𝗵𝗶𝗻𝗴𝗙𝗶𝗹𝗲𝘀($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = fileList(qq($D/*/*.txt));
ok @g == 3;

clearFolder($D, 5);
ok !-e $_ for @f;
ok !-d $D;
Hashify a list of file names to get the corresponding folder structure.
is_deeply 𝗵𝗮𝘀𝗵𝗶𝗳𝘆𝗙𝗼𝗹𝗱𝗲𝗿𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲(qw(/a/a/a /a/a/b /a/b/a /a/b/b)),
 {"" => {a => {a => {a => "/a/a/a", b => "/a/a/b"},
               b => {a => "/a/b/a", b => "/a/b/b"},
              },
        },
 };
Return a hash which counts the file extensions in and below the folders in the specified list.
Parameter Description 1 @folders Folders to search
𝗰𝗼𝘂𝗻𝘁𝗙𝗶𝗹𝗲𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻𝘀(q(/home/phil/perl/));
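Assuming, per the description above, that the returned hash maps each extension seen to the number of files bearing it, usage looks like this (the folder name and the counts shown are illustrative):

```perl
use Data::Table::Text qw(:all);

# Count file extensions under a project tree.
my $extensionCounts = countFileExtensions(q(/home/phil/perl/));

# $extensionCounts is then a hash such as:
#   {pl => 12, pm => 3, txt => 7}
for my $ext (sort keys %$extensionCounts)
 {say STDERR "$ext: $$extensionCounts{$ext}";
 }
```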
Return a hash which counts, in parallel with a maximum of $maximumNumberOfProcesses processes, the results of applying the file command to each file in and under the specified @folders.
Parameter Description 1 $maximumNumberOfProcesses Maximum number of processes to run in parallel 2 @folders Folders to search
𝗰𝗼𝘂𝗻𝘁𝗙𝗶𝗹𝗲𝗧𝘆𝗽𝗲𝘀(4, q(/home/phil/perl/));
Return the deepest folder that exists along a given file name path.
my $d = filePath(my @d = qw(a b c d));
ok 𝗺𝗮𝘁𝗰𝗵𝗣𝗮𝘁𝗵($d) eq $d;
Find the first file that exists with a path and name of $file and an extension drawn from @ext.
Parameter Description 1 $file File name minus extensions 2 @ext Possible extensions
my $f = createEmptyFile(fpe(my $d = temporaryFolder, qw(a jpg)));

my $F = 𝗳𝗶𝗻𝗱𝗙𝗶𝗹𝗲𝗪𝗶𝘁𝗵𝗘𝘅𝘁𝗲𝗻𝘀𝗶𝗼𝗻(fpf($d, q(a)), qw(txt data jpg));
ok $F eq "jpg";
   Parameter    Description
1  $folder      Folder
2  $limitCount  Maximum number of files to remove to limit damage
3  $noMsg       No message if the folder cannot be completely removed.
my $D = temporaryFolder;
ok -d $D;

my $d = fpd($D, q(ddd));
ok !-d $d;

my @f = map {createEmptyFile(fpe($d, $_, qw(txt)))} qw(a b c);

is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)];
is_deeply [findDirs($D)], [$D, $d];
is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"];
is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"];
ok -e $_ for @f;

my @g = fileList(qq($D/*/*.txt));
ok @g == 3;

𝗰𝗹𝗲𝗮𝗿𝗙𝗼𝗹𝗱𝗲𝗿($D, 5);
ok !-e $_ for @f;
ok !-d $D;
Read and write strings from and to files creating paths to any created files as needed.
Parameter Description 1 $file Name of file to read
my $f = writeFile(undef, "aaa");
my $s = 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲($f);
ok $s eq "aaa";

appendFile($f, "bbb");
my $S = 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲($f);
ok $S eq "aaabbb";

my $F = writeFile(undef, q(aaaa));
ok 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲($F) eq q(aaaa);

eval{writeFile($F, q(bbbb))};
ok $@ =~ m(\AFile already exists)s;
ok 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲($F) eq q(aaaa);

overWriteFile($F, q(bbbb));
ok 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲($F) eq q(bbbb);
unlink $f, $F;
Parameter Description 1 $file Name of file to read 2 $ip Optional ip address of server
my $f = writeFileToRemote(undef, q(aaaa)); unlink $f; ok 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲𝗙𝗿𝗼𝗺𝗥𝗲𝗺𝗼𝘁𝗲($f) eq q(aaaa); unlink $f;
Read a file containing Unicode content represented as utf8, "eval" in perlfunc the content, confess to any errors and then return any result with lvalue methods to access each hash element.
Parameter Description 1 $file File to read
my $f = dumpFile(undef, {a=>1, b=>2}); my $d = 𝗲𝘃𝗮𝗹𝗙𝗶𝗹𝗲($f); ok $d->a == 1; ok $d->b == 2; unlink $f;
Read a file compressed with gzip containing Unicode content represented as utf8, "eval" in perlfunc the content, confess to any errors and then return any result with lvalue methods to access each hash element. This is slower than using Storable but does produce much smaller files, see also: dumpGZipFile.
my $d = [1, 2, 3=>{a=>4, b=>5}]; my $file = dumpGZipFile(q(zzz.zip), $d); ok -e $file; my $D = 𝗲𝘃𝗮𝗹𝗚𝗭𝗶𝗽𝗙𝗶𝗹𝗲($file); is_deeply $d, $D; unlink $file;
Retrieve a $file created via Storable. This is much faster than evalFile as the stored data is not in text format.
my $f = storeFile(undef, my $d = [qw(aaa bbb ccc)]); my $s = 𝗿𝗲𝘁𝗿𝗶𝗲𝘃𝗲𝗙𝗶𝗹𝗲($f); is_deeply $s, $d; unlink $f;
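Under the hood this relies on the core Storable module; a minimal core-Perl sketch of the same round trip, using in-memory freeze/thaw rather than the module's file-based store/retrieve:

```perl
use strict; use warnings;
use Storable qw(freeze thaw);   # Storable is a core module

my $data   = [qw(aaa bbb ccc)];
my $frozen = freeze($data);     # serialize to a compact binary string
my $copy   = thaw($frozen);     # deserialize back into a structure
die "round trip failed" unless "@$copy" eq "@$data";
```

The binary format is why retrieval is fast: no text has to be parsed or evaled.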
Read a binary file on the local machine.
my $f = writeBinaryFile(undef, 0xff x 8); my $s = 𝗿𝗲𝗮𝗱𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲($f); ok $s eq 0xff x 8;
Read the specified file containing compressed Unicode content represented as utf8 through gzip.
Parameter Description 1 $file File to read.
my $s = '𝝰'x1e3; my $file = writeGZipFile(q(zzz.zip), $s); ok -e $file; my $S = 𝗿𝗲𝗮𝗱𝗚𝗭𝗶𝗽𝗙𝗶𝗹𝗲($file); ok $s eq $S; ok length($s) == length($S); unlink $file;
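The compress/decompress cycle can be sketched with the core IO::Compress modules: encode the Unicode string as utf8 bytes, gzip the bytes, then reverse both steps. This is an in-memory sketch of the technique, not the module's implementation:

```perl
use strict; use warnings;
use Encode qw(encode decode);
use IO::Compress::Gzip     qw(gzip   $GzipError);    # core modules
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

my $text  = "\x{3b1}" x 1e3;                          # Unicode content (Greek alpha)
my $bytes = encode 'UTF-8', $text;                    # utf8-encode before compression
gzip   \$bytes  => \my $zipped   or die $GzipError;   # compress in memory
gunzip \$zipped => \my $unzipped or die $GunzipError; # decompress
my $round = decode 'UTF-8', $unzipped;
die "round trip failed" unless $round eq $text and length($round) == 1e3;
```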
Make the path for the specified file name or folder on the local machine. Confess to any failure.
Parameter Description 1 $file File or folder name
my $d = fpd(my $D = temporaryDirectory, qw(a)); my $f = fpe($d, qw(bbb txt)); ok !-d $d; eval q{checkFile($f)}; my $r = $@; my $q = quotemeta($D); ok nws($r) =~ m(Can only find.+?: $q)s; 𝗺𝗮𝗸𝗲𝗣𝗮𝘁𝗵($f); ok -d $d; ok -d $D; rmdir $_ for $d, $D; my $e = temporaryFolder; # Same as temporaryDirectory ok -d $e; clearFolder($e, 2); my $t = temporaryFile; ok -f $t; unlink $t; ok !-f $t; if (0) {makePathRemote($e); # Make a path on the remote system }
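The behaviour is much like the core File::Path::make_path; a core-only sketch of creating and then clearing a nested path:

```perl
use strict; use warnings;
use File::Path qw(make_path remove_tree);   # core module
use File::Temp qw(tempdir);

my $root = tempdir(CLEANUP => 1);           # scratch folder, removed on exit
my $deep = "$root/a/b/c";
make_path($deep);                           # creates every missing intermediate folder
die "path not created" unless -d $deep;
remove_tree("$root/a");                     # recursive removal, akin to clearFolder
die "path not removed" if -d $deep;
```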
Make the path for the specified $file or folder on the Amazon Web Services instance whose ip address is specified by $ip or returned by awsIp. Confess to any failures.
Parameter Description 1 $file File or folder name 2 $ip Optional ip address
my $d = fpd(my $D = temporaryDirectory, qw(a)); my $f = fpe($d, qw(bbb txt)); ok !-d $d; eval q{checkFile($f)}; my $r = $@; my $q = quotemeta($D); ok nws($r) =~ m(Can only find.+?: $q)s; makePath($f); ok -d $d; ok -d $D; rmdir $_ for $d, $D; my $e = temporaryFolder; # Same as temporaryDirectory ok -d $e; clearFolder($e, 2); my $t = temporaryFile; ok -f $t; unlink $t; ok !-f $t; if (0) {𝗺𝗮𝗸𝗲𝗣𝗮𝘁𝗵𝗥𝗲𝗺𝗼𝘁𝗲($e); # Make a path on the remote system }
Write to a $file, after creating a path to the $file with makePath if necessary, a $string of Unicode content encoded as utf8. Return the name of the $file on success else confess to any failures. If the file already exists it will be overwritten.
Parameter Description 1 $file File to write to or B<undef> for a temporary file 2 $string Unicode string to write
my $f = writeFile(undef, q(aaaa)); ok readFile($f) eq q(aaaa); eval{writeFile($f, q(bbbb))}; ok $@ =~ m(\AFile already exists)s; ok readFile($f) eq q(aaaa); 𝗼𝘃𝗲𝗿𝗪𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲($f, q(bbbb)); ok readFile($f) eq q(bbbb); unlink $f;
owf is a synonym for overWriteFile.
Parameter Description 1 $file New file to write to or B<undef> for a temporary file 2 $string String to write
my $f = 𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲(undef, "aaa"); my $s = readFile($f); ok $s eq "aaa"; appendFile($f, "bbb"); my $S = readFile($f); ok $S eq "aaabbb"; my $f = 𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲(undef, q(aaaa)); ok readFile($f) eq q(aaaa); eval{𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲($f, q(bbbb))}; ok $@ =~ m(\AFile already exists)s; ok readFile($f) eq q(aaaa); overWriteFile($f, q(bbbb)); ok readFile($f) eq q(bbbb); unlink $f;
Parameter Description 1 $file New file to write to or B<undef> for a temporary file 2 $string String to write 3 $ip Optional ip address
my $f = 𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲𝗧𝗼𝗥𝗲𝗺𝗼𝘁𝗲(undef, q(aaaa)); unlink $f; ok readFileFromRemote($f) eq q(aaaa); unlink $f;
Write to $file, after creating a path to the file with makePath if necessary, the binary content in $string. If the $file already exists it is overwritten. Return the name of the $file on success else confess.
Parameter Description 1 $file File to write to or B<undef> for a temporary file 2 $string L<Unicode|https://en.wikipedia.org/wiki/Unicode> string to write
if (1) {vec(my $a, 0, 8) = 254; vec(my $b, 0, 8) = 255; ok dump($a) eq dump("FE"); ok dump($b) eq dump("FF"); ok length($a) == 1; ok length($b) == 1; my $s = $a.$a.$b.$b; ok length($s) == 4; my $f = eval {writeFile(undef, $s)}; ok fileSize($f) == 8; eval {writeBinaryFile($f, $s)}; ok $@ =~ m(Binary file already exists:)s; eval {𝗼𝘃𝗲𝗿𝗪𝗿𝗶𝘁𝗲𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲($f, $s)}; ok !$@; ok fileSize($f) == 4; ok $s eq eval {readBinaryFile($f)}; copyBinaryFile($f, my $F = temporaryFile); ok $s eq readBinaryFile($F); unlink $f, $F; }
Write to a new $file, after creating a path to the file with makePath if necessary, the binary content in $string. Return the name of the $file on success else confess if the file already exists or any other error occurs.
my $f = 𝘄𝗿𝗶𝘁𝗲𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲(undef, 0xff x 8); my $s = readBinaryFile($f); ok $s eq 0xff x 8; if (1) {vec(my $a, 0, 8) = 254; vec(my $b, 0, 8) = 255; ok dump($a) eq dump("FE"); ok dump($b) eq dump("FF"); ok length($a) == 1; ok length($b) == 1; my $s = $a.$a.$b.$b; ok length($s) == 4; my $f = eval {writeFile(undef, $s)}; ok fileSize($f) == 8; eval {𝘄𝗿𝗶𝘁𝗲𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲($f, $s)}; ok $@ =~ m(Binary file already exists:)s; eval {overWriteBinaryFile($f, $s)}; ok !$@; ok fileSize($f) == 4; ok $s eq eval {readBinaryFile($f)}; copyBinaryFile($f, my $F = temporaryFile); ok $s eq readBinaryFile($F); unlink $f, $F; }
Dump to a $file the referenced data $structure.
Parameter Description 1 $file File to write to or B<undef> for a temporary file 2 $structure Address of data structure to write
my $f = 𝗱𝘂𝗺𝗽𝗙𝗶𝗹𝗲(undef, my $d = [qw(aaa bbb ccc)]); my $s = evalFile($f); is_deeply $s, $d; unlink $f;
Store into a $file, after creating a path to the file with makePath if necessary, a data $structure via Storable. This is much faster than dumpFile but the stored results are not easily modified.
my $f = 𝘀𝘁𝗼𝗿𝗲𝗙𝗶𝗹𝗲(undef, my $d = [qw(aaa bbb ccc)]); my $s = retrieveFile($f); is_deeply $s, $d; unlink $f;
Write to a $file, after creating a path to the file with makePath if necessary, through gzip a $string whose content is encoded as utf8.
Parameter Description 1 $file File to write to 2 $string String to write
my $s = '𝝰'x1e3; my $file = 𝘄𝗿𝗶𝘁𝗲𝗚𝗭𝗶𝗽𝗙𝗶𝗹𝗲(q(zzz.zip), $s); ok -e $file; my $S = readGZipFile($file); ok $s eq $S; ok length($s) == length($S); unlink $file;
Write to a $file a data $structure through gzip. This technique produces files that are a lot more compact than those produced by Storable, but the execution time is much longer. See also: evalGZipFile.
Parameter Description 1 $file File to write 2 $structure Reference to data
my $d = [1, 2, 3=>{a=>4, b=>5}]; my $file = 𝗱𝘂𝗺𝗽𝗚𝗭𝗶𝗽𝗙𝗶𝗹𝗲(q(zzz.zip), $d); ok -e $file; my $D = evalGZipFile($file); is_deeply $d, $D; unlink $file;
Write the values of a $hash reference into files identified by the key of each value using overWriteFile optionally swapping the prefix of each file from $old to $new.
Parameter Description 1 $hash Hash of key value pairs representing files and data 2 $old Optional old prefix 3 $new New prefix
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; 𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲𝘀($files); my $ra = readFiles($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); 𝘄𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲𝘀($files); ok !moveFileNoClobber ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Read all the files in the specified list of folders into a hash.
Parameter Description 1 @folders Folders to read
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲𝘀($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = 𝗿𝗲𝗮𝗱𝗙𝗶𝗹𝗲𝘀($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !moveFileNoClobber ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Append to $file a $string of Unicode content encoded with utf8, creating the $file first if necessary. Return the name of the $file on success else confess. The $file being appended to is locked before the write with "flock" in perlfunc to allow multiple processes to append linearly to the same file.
Parameter Description 1 $file File to append to 2 $string String to append
my $f = writeFile(undef, "aaa"); my $s = readFile($f); ok $s eq "aaa"; 𝗮𝗽𝗽𝗲𝗻𝗱𝗙𝗶𝗹𝗲($f, "bbb"); my $S = readFile($f); ok $S eq "aaabbb";
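The locking described above can be sketched in core Perl: each appender opens the file in append mode and takes an exclusive flock before writing, so concurrent appenders serialize cleanly. A sketch of the technique, not the module's code:

```perl
use strict; use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

my (undef, $file) = tempfile();
for my $chunk (qw(aaa bbb)) {
  open my $fh, '>>', $file or die "open $file: $!";
  flock $fh, LOCK_EX or die "flock $file: $!";  # wait for exclusive access
  print {$fh} $chunk;                           # append while holding the lock
  close $fh;                                    # closing releases the lock
}
open my $in, '<', $file or die "open $file: $!";
my $content = do { local $/; <$in> };
die "append failed" unless $content eq "aaabbb";
```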
Create an empty file unless the file already exists and return the name of the file else confess if the file cannot be created.
Parameter Description 1 $file File to create or B<undef> for a temporary file
my $D = temporaryFolder; ok -d $D; my $d = fpd($D, q(ddd)); ok !-d $d; my @f = map {𝗰𝗿𝗲𝗮𝘁𝗲𝗘𝗺𝗽𝘁𝘆𝗙𝗶𝗹𝗲(fpe($d, $_, qw(txt)))} qw(a b c); is_deeply [sort map {fne $_} findFiles($d, qr(txt\Z))], [qw(a.txt b.txt c.txt)]; is_deeply [findDirs($D)], [$D, $d]; is_deeply [sort map {fne $_} searchDirectoryTreesForMatchingFiles($d)], ["a.txt", "b.txt", "c.txt"]; is_deeply [sort map {fne $_} fileList("$d/*.txt")], ["a.txt", "b.txt", "c.txt"]; ok -e $_ for @f; my @g = fileList(qq($D/*/*.txt)); ok @g == 3; clearFolder($D, 5); ok !-e $_ for @f; ok !-d $D;
Apply chmod to a $file to set its $permissions.
Parameter Description 1 $file File 2 $permissions Permissions settings per chmod
if (1) {my $f = temporaryFile(); 𝘀𝗲𝘁𝗣𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻𝘀𝗙𝗼𝗿𝗙𝗶𝗹𝗲($f, q(ugo=r)); my $a = qx(ls -la $f); ok $a =~ m(-r--r--r--)s; 𝘀𝗲𝘁𝗣𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻𝘀𝗙𝗼𝗿𝗙𝗶𝗹𝗲($f, q(u=rwx)); my $b = qx(ls -la $f); ok $b =~ m(-rwxr--r--)s; }
Return the number of lines in a file.
my $f = writeFile(undef, "a\nb\n"); ok 𝗻𝘂𝗺𝗯𝗲𝗿𝗢𝗳𝗟𝗶𝗻𝗲𝘀𝗜𝗻𝗙𝗶𝗹𝗲($f) == 2;
Copy files and folders. The \Acopy.*Md5Normalized.*\Z methods can be used to ensure that files have collision proof names that collapse duplicate content even when copied to another folder.
Copy the $source file encoded in utf8 to the specified $target file and return $target.
Parameter Description 1 $source Source file 2 $target Target file
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = readFiles($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !moveFileNoClobber ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Rename the $source file, which must exist, to the $target file but only if the $target file does not exist already. Returns 1 if the $source file was successfully renamed to the $target file else 0.
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = readFiles($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗡𝗼𝗖𝗹𝗼𝗯𝗯𝗲𝗿 ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok 𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗡𝗼𝗖𝗹𝗼𝗯𝗯𝗲𝗿 ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = readFiles($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !moveFileNoClobber ($a1, $a2); ok 𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗪𝗶𝘁𝗵𝗖𝗹𝗼𝗯𝗯𝗲𝗿($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Copy the file named in $source to the specified $targetFolder/, or, if $targetFolder/ is in fact a file, into the folder containing this file, and return the target file name. Confesses instead of copying if the target already exists.
Parameter Description 1 $source Source file 2 $targetFolder Target folder
my $sd = temporaryFolder; my $td = temporaryFolder; my $sf = writeFile fpe($sd, qw(test data)), q(aaaa); my $tf = 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗧𝗼𝗙𝗼𝗹𝗱𝗲𝗿($sf, $td); ok readFile($tf) eq q(aaaa); ok fp ($tf) eq $td; ok fne($tf) eq q(test.data);
Create a readable name from an arbitrary string of text.
Parameter Description 1 $string String 2 %options Options
ok q(help) eq 𝗻𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝗶𝗻𝗴(q(!@#$%^help___<>?><?>)); ok q(bm_The_skyscraper_analogy) eq 𝗻𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝗶𝗻𝗴(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> </bookmap> END ok q(bm_The_skyscraper_analogy_An_exciting_tale_of_two_skyscrapers_that_meet_in_downtown_Houston) eq 𝗻𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝗶𝗻𝗴(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> An exciting tale of two skyscrapers that meet in downtown Houston <concept><html> </bookmap> END ok q(bm_the_skyscraper_analogy) eq nameFromStringRestrictedToTitle(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> An exciting tale of two skyscrapers that meet in downtown Houston <concept><html> </bookmap> END
Create a readable name from a string of text that might contain a title tag - fall back to nameFromString if that is not possible.
ok q(help) eq nameFromString(q(!@#$%^help___<>?><?>)); ok q(bm_The_skyscraper_analogy) eq nameFromString(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> </bookmap> END ok q(bm_The_skyscraper_analogy_An_exciting_tale_of_two_skyscrapers_that_meet_in_downtown_Houston) eq nameFromString(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> An exciting tale of two skyscrapers that meet in downtown Houston <concept><html> </bookmap> END ok q(bm_the_skyscraper_analogy) eq 𝗻𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝗶𝗻𝗴𝗥𝗲𝘀𝘁𝗿𝗶𝗰𝘁𝗲𝗱𝗧𝗼𝗧𝗶𝘁𝗹𝗲(<<END); <bookmap id="b1"> <title>The skyscraper analogy</title> An exciting tale of two skyscrapers that meet in downtown Houston <concept><html> </bookmap> END
Create a unique name from a file name and the md5 sum of its content.
Parameter Description 1 $source Source file
my $f = owf(q(test.txt), join "", 1..100); ok 𝘂𝗻𝗶𝗾𝘂𝗲𝗡𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗙𝗶𝗹𝗲($f) eq q(test_ef69caaaeea9c17120821a9eb6c7f1de.txt); unlink $f;
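A core-Perl sketch of the naming scheme: splice the Digest::MD5 hex sum of the content between the file's stem and extension. md5Name is a hypothetical helper illustrating the idea, not part of the module:

```perl
use strict; use warnings;
use Digest::MD5 qw(md5_hex);   # core module

sub md5Name {                  # hypothetical helper illustrating the scheme
  my ($file, $content) = @_;
  my ($stem, $ext) = $file =~ m(\A(.*)\.([^.]+)\z) ? ($1, $2) : ($file, undef);
  my $sum = md5_hex($content); # 32 hex digits identifying the content
  return defined $ext ? "${stem}_$sum.$ext" : "${stem}_$sum";
}

# md5("abc") is the RFC 1321 test vector 900150983cd24fb0d6963f7d28e17f72
die unless md5Name("test.txt", "abc") eq "test_900150983cd24fb0d6963f7d28e17f72.txt";
```

Because the sum depends only on the content, two files with identical content collapse to the same name.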
Create a name from the last folder in the path of a file name. Return undef if the file does not have a path.
ok 𝗻𝗮𝗺𝗲𝗙𝗿𝗼𝗺𝗙𝗼𝗹𝗱𝗲𝗿(fpe(qw( a b c d e))) eq q(c);
Normalize the name of the specified $source file to the md5 sum of its content, retaining its current extension, while placing the original file name in a companion file if the companion file does not already exist. If no $target folder is supplied the file is renamed to its normalized form in situ, otherwise it is copied to the target folder and renamed there. A companion file for the $source file is created by removing the extension of the normalized file and writing the original $source file name to it unless such a file already exists as we assume that it contains the 'original' original name of the $source file. If the $source file is copied to a new location then the companion file is copied as well to maintain the link back to the original name of the file.
Parameter Description 1 $source Source file 2 $Target Target folder or a file in the target folder
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = copyFileMd5NormalizedCreate($a, $content, q(jpg), $a); ok readFile($A) eq $content; ok $A eq 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($A); my $B = 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($A, $b); ok readFile($B) eq $content; ok $B eq 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($B); my $C = 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($B, $c); ok readFile($C) eq $content; ok $C eq 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($C); ok fne($A) eq fne($_) for $B, $C; ok readFile($_) eq $content for $A, $B, $C; ok copyFileMd5NormalizedGetCompanionContent($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($A); ok 4 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($B); ok 2 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($C); ok 0 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10); ok 0 == searchDirectoryTreesForMatchingFiles($dir);
Name a file using the GB Standard.
Parameter Description 1 $content Content 2 $extension Extension 3 %options Options
ok 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗡𝗮𝗺𝗲(<<END, q(txt)) eq q(Hello_World_6ba23858c1b4811660896c324acac6fa.txt); <p>Hello<b>World</b></p> END
Create a file in the specified $folder whose name is constructed from the md5 sum of the specified $content, whose content is $content, whose extension is $extension and which has a companion file with the same name minus the extension which contains the specified $companionContent. Such a file can be copied multiple times by copyFileMd5Normalized regardless of the other files in the target folders.
Parameter Description 1 $Folder Target folder or a file in that folder 2 $content Content of the file 3 $extension File extension 4 $companionContent Contents of the companion file 5 %options Options.
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗖𝗿𝗲𝗮𝘁𝗲($a, $content, q(jpg), $a); ok readFile($A) eq $content; ok $A eq copyFileMd5Normalized($A); my $B = copyFileMd5Normalized($A, $b); ok readFile($B) eq $content; ok $B eq copyFileMd5Normalized($B); my $C = copyFileMd5Normalized($B, $c); ok readFile($C) eq $content; ok $C eq copyFileMd5Normalized($C); ok fne($A) eq fne($_) for $B, $C; ok readFile($_) eq $content for $A, $B, $C; ok copyFileMd5NormalizedGetCompanionContent($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($A); ok 4 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($B); ok 2 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($C); ok 0 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10); ok 0 == searchDirectoryTreesForMatchingFiles($dir);
Return the content of the companion file to the specified $source file after it has been normalized via copyFileMd5Normalized or copyFileMd5NormalizedCreate or return undef if the corresponding companion file does not exist.
Parameter Description 1 $source Source file.
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = copyFileMd5NormalizedCreate($a, $content, q(jpg), $a); ok readFile($A) eq $content; ok $A eq copyFileMd5Normalized($A); my $B = copyFileMd5Normalized($A, $b); ok readFile($B) eq $content; ok $B eq copyFileMd5Normalized($B); my $C = copyFileMd5Normalized($B, $c); ok readFile($C) eq $content; ok $C eq copyFileMd5Normalized($C); ok fne($A) eq fne($_) for $B, $C; ok readFile($_) eq $content for $A, $B, $C; ok 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗚𝗲𝘁𝗖𝗼𝗺𝗽𝗮𝗻𝗶𝗼𝗻𝗖𝗼𝗻𝘁𝗲𝗻𝘁($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($A); ok 4 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($B); ok 2 == searchDirectoryTreesForMatchingFiles($dir); copyFileMd5NormalizedDelete($C); ok 0 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10); ok 0 == searchDirectoryTreesForMatchingFiles($dir);
Delete a normalized file and its companion file.
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = copyFileMd5NormalizedCreate($a, $content, q(jpg), $a); ok readFile($A) eq $content; ok $A eq copyFileMd5Normalized($A); my $B = copyFileMd5Normalized($A, $b); ok readFile($B) eq $content; ok $B eq copyFileMd5Normalized($B); my $C = copyFileMd5Normalized($B, $c); ok readFile($C) eq $content; ok $C eq copyFileMd5Normalized($C); ok fne($A) eq fne($_) for $B, $C; ok readFile($_) eq $content for $A, $B, $C; ok copyFileMd5NormalizedGetCompanionContent($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗗𝗲𝗹𝗲𝘁𝗲($A); ok 4 == searchDirectoryTreesForMatchingFiles($dir); 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗗𝗲𝗹𝗲𝘁𝗲($B); ok 2 == searchDirectoryTreesForMatchingFiles($dir); 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗗𝗲𝗹𝗲𝘁𝗲($C); ok 0 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10); ok 0 == searchDirectoryTreesForMatchingFiles($dir);
Copy the binary file $source to a file named $target and return the target file name.
if (1) {vec(my $a, 0, 8) = 254; vec(my $b, 0, 8) = 255; ok dump($a) eq dump("FE"); ok dump($b) eq dump("FF"); ok length($a) == 1; ok length($b) == 1; my $s = $a.$a.$b.$b; ok length($s) == 4; my $f = eval {writeFile(undef, $s)}; ok fileSize($f) == 8; eval {writeBinaryFile($f, $s)}; ok $@ =~ m(Binary file already exists:)s; eval {overWriteBinaryFile($f, $s)}; ok !$@; ok fileSize($f) == 4; ok $s eq eval {readBinaryFile($f)}; 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲($f, my $F = temporaryFile); ok $s eq readBinaryFile($F); unlink $f, $F; }
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = copyBinaryFileMd5NormalizedCreate($a, $content, q(jpg), $a); ok readBinaryFile($A) eq $content; ok $A eq 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($A); my $B = 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($A, $b); ok readBinaryFile($B) eq $content; ok $B eq 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($B); my $C = 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($B, $c); ok readBinaryFile($C) eq $content; ok $C eq 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱($C); ok fne($A) eq fne($_) for $B, $C; ok readBinaryFile($_) eq $content for $A, $B, $C; ok copyBinaryFileMd5NormalizedGetCompanionContent($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10);
Create a file in the specified $folder whose name is constructed from the md5 sum of the specified $content, whose content is $content, whose extension is $extension and which has a companion file with the same name minus the extension which contains the specified $companionContent. Such a file can be copied multiple times by copyBinaryFileMd5Normalized regardless of the other files in the target folders while retaining the original name information.
Parameter Description 1 $Folder Target folder or a file in that folder 2 $content Content of the file 3 $extension File extension 4 $companionContent Optional content of the companion file.
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗖𝗿𝗲𝗮𝘁𝗲($a, $content, q(jpg), $a); ok readBinaryFile($A) eq $content; ok $A eq copyBinaryFileMd5Normalized($A); my $B = copyBinaryFileMd5Normalized($A, $b); ok readBinaryFile($B) eq $content; ok $B eq copyBinaryFileMd5Normalized($B); my $C = copyBinaryFileMd5Normalized($B, $c); ok readBinaryFile($C) eq $content; ok $C eq copyBinaryFileMd5Normalized($C); ok fne($A) eq fne($_) for $B, $C; ok readBinaryFile($_) eq $content for $A, $B, $C; ok copyBinaryFileMd5NormalizedGetCompanionContent($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10);
Return the original name of the specified $source file after it has been normalized via copyBinaryFileMd5Normalized or copyBinaryFileMd5NormalizedCreate or return undef if the corresponding companion file does not exist.
my $dir = temporaryFolder; my $a = fpe($dir, qw(a a jpg)); my $b = fpe($dir, qw(b a jpg)); my $c = fpe($dir, qw(c a jpg)); my $content = join '', 1..1e3; my $A = copyBinaryFileMd5NormalizedCreate($a, $content, q(jpg), $a); ok readBinaryFile($A) eq $content; ok $A eq copyBinaryFileMd5Normalized($A); my $B = copyBinaryFileMd5Normalized($A, $b); ok readBinaryFile($B) eq $content; ok $B eq copyBinaryFileMd5Normalized($B); my $C = copyBinaryFileMd5Normalized($B, $c); ok readBinaryFile($C) eq $content; ok $C eq copyBinaryFileMd5Normalized($C); ok fne($A) eq fne($_) for $B, $C; ok readBinaryFile($_) eq $content for $A, $B, $C; ok 𝗰𝗼𝗽𝘆𝗕𝗶𝗻𝗮𝗿𝘆𝗙𝗶𝗹𝗲𝗠𝗱𝟱𝗡𝗼𝗿𝗺𝗮𝗹𝗶𝘇𝗲𝗱𝗚𝗲𝘁𝗖𝗼𝗺𝗽𝗮𝗻𝗶𝗼𝗻𝗖𝗼𝗻𝘁𝗲𝗻𝘁($_) eq $a for $A, $B, $C; ok 6 == searchDirectoryTreesForMatchingFiles($dir); clearFolder($dir, 10);
Copy the specified local $file to the server whose ip address is specified by $ip or returned by awsIp.
Parameter Description 1 $file Source file 2 $ip Optional ip address
if (0) {𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗧𝗼𝗥𝗲𝗺𝗼𝘁𝗲 (q(/home/phil/perl/cpan/aaa.txt)); copyFileFromRemote (q(/home/phil/perl/cpan/aaa.txt)); copyFolderToRemote (q(/home/phil/perl/cpan/)); mergeFolderFromRemote(q(/home/phil/perl/cpan/)); }
Copy the specified $file from the server whose ip address is specified by $ip or returned by awsIp.
if (0) {copyFileToRemote (q(/home/phil/perl/cpan/aaa.txt)); 𝗰𝗼𝗽𝘆𝗙𝗶𝗹𝗲𝗙𝗿𝗼𝗺𝗥𝗲𝗺𝗼𝘁𝗲 (q(/home/phil/perl/cpan/aaa.txt)); copyFolderToRemote (q(/home/phil/perl/cpan/)); mergeFolderFromRemote(q(/home/phil/perl/cpan/)); }
Copy the $source folder to the $target folder after clearing the $target folder.
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = readFiles($a); is_deeply $files, $ra; 𝗰𝗼𝗽𝘆𝗙𝗼𝗹𝗱𝗲𝗿($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; mergeFolder($a, $b); ok -e $b1; ok -e $b2; 𝗰𝗼𝗽𝘆𝗙𝗼𝗹𝗱𝗲𝗿($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !moveFileNoClobber ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Copy the $source folder into the $target folder retaining any existing files not replaced by copied files.
my $d = temporaryFolder; my $a = fpd($d, q(aaa)); my $b = fpd($d, q(bbb)); my $c = fpd($d, q(ccc)); my ($a1, $a2) = map {fpe($a, $_, q(txt))} 1..2; my ($b1, $b2) = map {fpe($b, $_, q(txt))} 1..2; my $files = {$a1 => "1111", $a2 => "2222"}; writeFiles($files); my $ra = readFiles($a); is_deeply $files, $ra; copyFolder($a, $b); my $rb = readFiles($b); is_deeply [sort values %$ra], [sort values %$rb]; unlink $a2; 𝗺𝗲𝗿𝗴𝗲𝗙𝗼𝗹𝗱𝗲𝗿($a, $b); ok -e $b1; ok -e $b2; copyFolder($a, $b); ok -e $b1; ok !-e $b2; copyFile($a1, $a2); ok readFile($a1) eq readFile($a2); writeFiles($files); ok !moveFileNoClobber ($a1, $a2); ok moveFileWithClobber($a1, $a2); ok !-e $a1; ok readFile($a2) eq q(1111); ok moveFileNoClobber ($a2, $a1); ok !-e $a2; ok readFile($a1) eq q(1111); clearFolder(q(aaa), 11); clearFolder(q(bbb), 11);
Copy the specified local $Source folder to the corresponding remote folder on the server whose ip address is specified by $ip or returned by awsIp. The default userid supplied by .ssh/config will be used on the remote server.
Parameter Description 1 $Source Source file 2 $ip Optional ip address of server
if (0) {copyFileToRemote (q(/home/phil/perl/cpan/aaa.txt)); copyFileFromRemote (q(/home/phil/perl/cpan/aaa.txt)); 𝗰𝗼𝗽𝘆𝗙𝗼𝗹𝗱𝗲𝗿𝗧𝗼𝗥𝗲𝗺𝗼𝘁𝗲 (q(/home/phil/perl/cpan/)); mergeFolderFromRemote(q(/home/phil/perl/cpan/)); }
Merge the specified $Source folder from the corresponding remote folder on the server whose ip address is specified by $ip or returned by awsIp. The default userid supplied by .ssh/config will be used on the remote server.
if (0) {copyFileToRemote (q(/home/phil/perl/cpan/aaa.txt)); copyFileFromRemote (q(/home/phil/perl/cpan/aaa.txt)); copyFolderToRemote (q(/home/phil/perl/cpan/)); 𝗺𝗲𝗿𝗴𝗲𝗙𝗼𝗹𝗱𝗲𝗿𝗙𝗿𝗼𝗺𝗥𝗲𝗺𝗼𝘁𝗲(q(/home/phil/perl/cpan/)); }
Methods to assist with testing
Remove all file paths from a specified $structure to make said $structure testable with "is_deeply" in Test::More.
Parameter Description 1 $structure Data structure reference
if (1) {my $d = {"/home/aaa/bbb.txt"=>1, "ccc/ddd.txt"=>2, "eee.txt"=>3}; my $D = 𝗿𝗲𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝘀𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲($d); is_deeply 𝗿𝗲𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝘀𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲($d), {"bbb.txt"=>1, "ddd.txt"=>2, "eee.txt"=>3}; ok writeStructureTest($d, q($d)) eq <<'END'; is_deeply 𝗿𝗲𝗺𝗼𝘃𝗲𝗙𝗶𝗹𝗲𝗣𝗮𝘁𝗵𝘀𝗙𝗿𝗼𝗺𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲($d), { "bbb.txt" => 1, "ddd.txt" => 2, "eee.txt" => 3 }; END }
Write a test for a data $structure with file names in it.
Parameter Description 1 $structure Data structure reference 2 $expr Expression
if (1) {my $d = {"/home/aaa/bbb.txt"=>1, "ccc/ddd.txt"=>2, "eee.txt"=>3}; my $D = removeFilePathsFromStructure($d); is_deeply removeFilePathsFromStructure($d), {"bbb.txt"=>1, "ddd.txt"=>2, "eee.txt"=>3}; ok 𝘄𝗿𝗶𝘁𝗲𝗦𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲𝗧𝗲𝘀𝘁($d, q($d)) eq <<'END'; is_deeply removeFilePathsFromStructure($d), { "bbb.txt" => 1, "ddd.txt" => 2, "eee.txt" => 3 }; END }
Image operations.
Return (width, height) of an $image.
Parameter Description 1 $image File containing image
my ($width, $height) = 𝗶𝗺𝗮𝗴𝗲𝗦𝗶𝘇𝗲(fpe(qw(a image jpg)));
Convert a docx $inputFile file to a fodt $outputFile using unoconv which must not be running elsewhere at the time. Unoconv can be installed via:
sudo apt install sharutils unoconv
Parameters:
Parameter Description 1 $inputFile Input file 2 $outputFile Output file
𝗰𝗼𝗻𝘃𝗲𝗿𝘁𝗗𝗼𝗰𝘅𝗧𝗼𝗙𝗼𝗱𝘁(fpe(qw(a docx)), fpe(qw(a fodt)));
Cut out the images embedded in a fodt file, perhaps produced via convertDocxToFodt, placing them in the specified folder and replacing them in the source file with:
<image href="$imageFile" outputclass="imageType">.
This conversion requires that you have both ImageMagick and unoconv installed on your system:
sudo apt install sharutils imagemagick unoconv
Parameter Description 1 $inputFile Input file 2 $outputFolder Output folder for images 3 $imagePrefix A prefix to be added to image file names
𝗰𝘂𝘁𝗢𝘂𝘁𝗜𝗺𝗮𝗴𝗲𝘀𝗜𝗻𝗙𝗼𝗱𝘁𝗙𝗶𝗹𝗲(fpe(qw(source fodt)), fpd(qw(images)), q(image));
Encode and decode using Json and Mime.
Convert a Perl $string to Json.
Parameter Description 1 $string Data to encode
my $A = 𝗲𝗻𝗰𝗼𝗱𝗲𝗝𝘀𝗼𝗻(my $a = {a=>1,b=>2, c=>[1..2]}); my $b = decodeJson($A); is_deeply $a, $b;
Convert a Json $string to Perl.
Parameter Description 1 $string Data to decode
my $A = encodeJson(my $a = {a=>1,b=>2, c=>[1..2]}); my $b = 𝗱𝗲𝗰𝗼𝗱𝗲𝗝𝘀𝗼𝗻($A); is_deeply $a, $b;
Encode an Ascii $string in base 64.
Parameter Description 1 $string String to encode
my $A = 𝗲𝗻𝗰𝗼𝗱𝗲𝗕𝗮𝘀𝗲𝟲𝟰(my $a = "Hello World" x 10); my $b = decodeBase64($A); ok $a eq $b;
Decode an Ascii $string in base 64.
Parameter Description 1 $string String to decode
my $A = encodeBase64(my $a = "Hello World" x 10); my $b = 𝗱𝗲𝗰𝗼𝗱𝗲𝗕𝗮𝘀𝗲𝟲𝟰($A); ok $a eq $b;
Convert a $string with Unicode code points that are not directly representable in Ascii into a string that replaces these code points with their Xml representation, making the string usable in Xml documents.
Parameter Description 1 $string String to convert
ok 𝗰𝗼𝗻𝘃𝗲𝗿𝘁𝗨𝗻𝗶𝗰𝗼𝗱𝗲𝗧𝗼𝗫𝗺𝗹('setenta e três') eq q(setenta e três);
Encode an Ascii string as a string of hexadecimal digits.
Parameter Description 1 $ascii Ascii string
ok 𝗮𝘀𝗰𝗶𝗶𝗧𝗼𝗛𝗲𝘅𝗦𝘁𝗿𝗶𝗻𝗴("Hello World!") eq "48656c6c6f20576f726c6421"; ok "Hello World!" eq hexToAsciiString("48656c6c6f20576f726c6421");
Decode a string of hexadecimal digits as an Ascii string.
Parameter Description 1 $hex Hexadecimal string
ok asciiToHexString("Hello World!") eq "48656c6c6f20576f726c6421"; ok "Hello World!" eq 𝗵𝗲𝘅𝗧𝗼𝗔𝘀𝗰𝗶𝗶𝗦𝘁𝗿𝗶𝗻𝗴("48656c6c6f20576f726c6421");
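For comparison, the same round trip can be expressed directly with Perl's built-in pack and unpack; this is a minimal sketch, not the module's implementation:

```perl
# Hex encode and decode an Ascii string with pack/unpack.
my $hex   = unpack "H*", "Hello World!";   # encode: high nibble first, lowercase
my $ascii = pack   "H*", $hex;             # decode back to Ascii
die unless $hex   eq "48656c6c6f20576f726c6421";
die unless $ascii eq "Hello World!";
```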
Percent encode a url per: https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters
ok 𝘄𝘄𝘄𝗘𝗻𝗰𝗼𝗱𝗲(q(a {b} <c>)) eq q(a%20%20%7bb%7d%20%3cc%3e); ok 𝘄𝘄𝘄𝗘𝗻𝗰𝗼𝗱𝗲(q(../)) eq q(%2e%2e/); ok wwwDecode(𝘄𝘄𝘄𝗘𝗻𝗰𝗼𝗱𝗲 $_) eq $_ for q(a {b} <c>), q(a b|c), q(%), q(%%), q(%%.%%); sub 𝘄𝘄𝘄𝗘𝗻𝗰𝗼𝗱𝗲($) {my ($string) = @_; # String join '', map {$translatePercentEncoding{$_}//$_} split //, $string }
Percent decode a url $string per: https://en.wikipedia.org/wiki/Percent-encoding#Percent-encoding_reserved_characters
ok wwwEncode(q(a {b} <c>)) eq q(a%20%20%7bb%7d%20%3cc%3e); ok wwwEncode(q(../)) eq q(%2e%2e/); ok 𝘄𝘄𝘄𝗗𝗲𝗰𝗼𝗱𝗲(wwwEncode $_) eq $_ for q(a {b} <c>), q(a b|c), q(%), q(%%), q(%%.%%); sub 𝘄𝘄𝘄𝗗𝗲𝗰𝗼𝗱𝗲($) {my ($string) = @_; # String my $r = ''; my @s = split //, $string; while(@s) {my $c = shift @s; if ($c eq q(%) and @s >= 2) {$c .= shift(@s).shift(@s); $r .= $TranslatePercentEncoding{$c}//$c; } else {$r .= $c; } } $r =~ s(%0d0a) ( )gs; # Awkward characters that appear in urls $r =~ s(\+) ( )gs; $r }
Numeric operations.
Test whether a number $n is a power of two, returning the power if it is, else undef.
Parameter Description 1 $n Number to check
ok 𝗽𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(1) == 0; ok 𝗽𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(2) == 1; ok !𝗽𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(3); ok 𝗽𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(4) == 2;
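The underlying test can be sketched in plain Perl with the usual bit trick: a positive $n is a power of two exactly when ($n & ($n-1)) == 0. The helper name below is illustrative, not part of the module:

```perl
# Return the power if $n is a power of two, else undef (illustrative sketch).
sub powerOfTwoSketch
 {my ($n) = @_;
  return undef unless $n and $n > 0 and ($n & ($n-1)) == 0;   # power of two test
  my $p = 0; $p++ while 1 << $p < $n;                          # find the exponent
  $p
 }
die unless powerOfTwoSketch(1) == 0;
die unless powerOfTwoSketch(4) == 2;
die unless !defined powerOfTwoSketch(3);
```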
Find log two of the lowest power of two greater than or equal to a number $n.
ok 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴𝗣𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(1) == 0; ok 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴𝗣𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(2) == 1; ok 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴𝗣𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(3) == 2; ok 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗶𝗻𝗴𝗣𝗼𝘄𝗲𝗿𝗢𝗳𝗧𝘄𝗼(4) == 2;
Set operations.
Merge a list of hashes @h by summing their values.
Parameter Description 1 @h List of hashes to be summed
is_deeply +{a=>1, b=>2, c=>3}, 𝗺𝗲𝗿𝗴𝗲𝗛𝗮𝘀𝗵𝗲𝘀𝗕𝘆𝗦𝘂𝗺𝗺𝗶𝗻𝗴𝗩𝗮𝗹𝘂𝗲𝘀 +{a=>1,b=>1, c=>1}, +{b=>1,c=>1}, +{c=>1};
Invert a hash of hashes: given {a}{b} = c return {b}{a} = c.
Parameter Description 1 $h Hash of hashes
my $h = {a=>{A=>q(aA), B=>q(aB)}, b=>{A=>q(bA), B=>q(bB)}}; my $g = {A=>{a=>q(aA), b=>q(bA)}, B=>{a=>q(aB), b=>q(bB)}}; is_deeply 𝗶𝗻𝘃𝗲𝗿𝘁𝗛𝗮𝘀𝗵𝗢𝗳𝗛𝗮𝘀𝗵𝗲𝘀($h), $g; is_deeply 𝗶𝗻𝘃𝗲𝗿𝘁𝗛𝗮𝘀𝗵𝗢𝗳𝗛𝗮𝘀𝗵𝗲𝘀($g), $h;
Form the union of the keys of the specified hashes @h as one hash whose keys represent the union.
Parameter Description 1 @h List of hashes to be united
if (1) {is_deeply 𝘂𝗻𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗞𝗲𝘆𝘀 ({a=>1,b=>2}, {b=>1,c=>1}, {c=>2}), {a=>1, b=>2, c=>2}; is_deeply intersectionOfHashKeys ({a=>1,b=>2},{b=>1,c=>1},{b=>3,c=>2}), {b=>1}; }
Form the intersection of the keys of the specified hashes @h as one hash whose keys represent the intersection.
Parameter Description 1 @h List of hashes to be intersected
if (1) {is_deeply unionOfHashKeys ({a=>1,b=>2}, {b=>1,c=>1}, {c=>2}), {a=>1, b=>2, c=>2}; is_deeply 𝗶𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗞𝗲𝘆𝘀 ({a=>1,b=>2},{b=>1,c=>1},{b=>3,c=>2}), {b=>1}; }
Form the union of the specified hashes @h as one hash whose values are an array of corresponding values from each hash.
if (1) {is_deeply 𝘂𝗻𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗲𝘀𝗔𝘀𝗔𝗿𝗿𝗮𝘆𝘀 ({a=>1,b=>2}, {b=>1,c=>1}, {c=>2}), {a=>[1], b=>[2,1], c=>[undef,1,2]}; is_deeply intersectionOfHashesAsArrays ({a=>1,b=>2},{b=>1,c=>1},{b=>3,c=>2}), {b=>[2,1,3]}; }
Form the intersection of the specified hashes @h as one hash whose values are an array of corresponding values from each hash.
if (1) {is_deeply unionOfHashesAsArrays ({a=>1,b=>2}, {b=>1,c=>1}, {c=>2}), {a=>[1], b=>[2,1], c=>[undef,1,2]}; is_deeply 𝗶𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗲𝘀𝗔𝘀𝗔𝗿𝗿𝗮𝘆𝘀 ({a=>1,b=>2},{b=>1,c=>1},{b=>3,c=>2}), {b=>[2,1,3]}; }
Union of sets @s represented as arrays of strings and/or the keys of hashes.
Parameter Description 1 @s Array of arrays of strings and/or hashes
is_deeply [qw(a b c)], [𝘀𝗲𝘁𝗨𝗻𝗶𝗼𝗻(qw(a b c a a b b b))]; is_deeply [qw(a b c d e)], [𝘀𝗲𝘁𝗨𝗻𝗶𝗼𝗻 {a=>1, b=>2, e=>3}, [qw(c d e)], qw(e)];
Intersection of sets @s represented as arrays of strings and/or the keys of hashes.
is_deeply [qw(a b c)], [𝘀𝗲𝘁𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻[qw(e f g a b c )],[qw(a A b B c C)]]; is_deeply [qw(e)], [𝘀𝗲𝘁𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻 {a=>1, b=>2, e=>3}, [qw(c d e)], qw(e)];
Returns the size of the intersection over the size of the union of one or more sets @s represented as arrays and/or hashes.
my $f = 𝘀𝗲𝘁𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻 {a=>1, b=>2, e=>3}, [qw(c d e)], qw(e); ok $f > 0.199999 && $f < 2.00001;
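The ratio is simply the size of the intersection divided by the size of the union; a minimal sketch for two sets held as hashes (the helper name is illustrative, not part of the module):

```perl
# Intersection over union of two sets held as hashes of members.
sub iouSketch
 {my ($x, $y) = @_;
  my %union = (%$x, %$y);                          # union of the key sets
  my $i = grep {exists $y->{$_}} keys %$x;         # size of the intersection
  my $u = keys %union;                             # size of the union
  $i / $u
 }
my $f = iouSketch({a=>1, b=>1, e=>1}, {c=>1, d=>1, e=>1});
die unless $f > 0.199999 && $f < 0.200001;         # 1 shared member over 5 total
```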
Partition, at a level of $confidence between 0 and 1, a set of sets @sets so that within each partition the setIntersectionOverUnion of any two sets in the partition is never less than the specified level of $confidence**2.
Parameter Description 1 $confidence Minimum setIntersectionOverUnion 2 @sets Array of arrays of strings and/or hashes representing sets
is_deeply [𝘀𝗲𝘁𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗢𝗻𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻 (0.80, [qw(a A b c d e)], [qw(a A B b c d e)], [qw(a A B C b c d)], )], [[["A", "B", "a".."e"], ["A", "a".."e"]], [["A".."C", "a".."d"]], ];
Partition, at a level of $confidence between 0 and 1, a set of sets @sets of words so that within each partition the setIntersectionOverUnion of any two sets of words in the partition is never less than the specified $confidence**2.
is_deeply [𝘀𝗲𝘁𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗢𝗻𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻𝗢𝗳𝗦𝗲𝘁𝘀𝗢𝗳𝗪𝗼𝗿𝗱𝘀 (0.80, [qw(a A b c d e)], [qw(a A B b c d e)], [qw(a A B C b c d)], )], [[["a", "A", "B", "C", "b", "c", "d"]], [["a", "A", "B", "b" .. "e"], ["a", "A", "b" .. "e"]], ];
Partition, at a level of $confidence between 0 and 1, a set of sets @strings, each set represented by a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2.
Parameter Description 1 $confidence Minimum setIntersectionOverUnion 2 @strings Sets represented by strings
is_deeply [𝘀𝗲𝘁𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗢𝗻𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻𝗢𝗳𝗦𝘁𝗿𝗶𝗻𝗴𝗦𝗲𝘁𝘀 (0.80, q(The Emu are seen here sometimes.), q(The Emu, Gnu are seen here sometimes.), q(The Emu, Gnu, Colt are seen here.), )], [["The Emu, Gnu, Colt are seen here."], ["The Emu, Gnu are seen here sometimes.", "The Emu are seen here sometimes.", ]];
Partition, at a level of $confidence between 0 and 1, a set of sets $hashSet represented by a hash, each hash value being a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2 and the partition entries are the hash keys of the string sets.
Parameter Description 1 $confidence Minimum setIntersectionOverUnion 2 $hashSet Sets represented by the hash value strings
is_deeply [𝘀𝗲𝘁𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗢𝗻𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗦𝘁𝗿𝗶𝗻𝗴𝗦𝗲𝘁𝘀 (0.80, {e =>q(The Emu are seen here sometimes.), eg =>q(The Emu, Gnu are seen here sometimes.), egc=>q(The Emu, Gnu, Colt are seen here.), } )], [["e", "eg"], ["egc"]];
Partition, at a level of $confidence between 0 and 1, a set of sets $hashSet represented by a hash, each hash value being a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2 and the partition entries are the hash keys of the string sets. The partition is performed in square root parallel.
my $N = 8; my %s; for my $a('a'..'z') {my @w; for my $b('a'..'e') {for my $c('a'..'e') {push @w, qq($a$b$c); } } for my $i(1..$N) {$s{qq($a$i)} = join ' ', @w; } } my $expected = [["a1" .. "a8"], ["b1" .. "b8"], ["c1" .. "c8"], ["d1" .. "d8"], ["e1" .. "e8"], ["f1" .. "f8"], ["g1" .. "g8"], ["h1" .. "h8"], ["i1" .. "i8"], ["j1" .. "j8"], ["k1" .. "k8"], ["l1" .. "l8"], ["m1" .. "m8"], ["n1" .. "n8"], ["o1" .. "o8"], ["p1" .. "p8"], ["q1" .. "q8"], ["r1" .. "r8"], ["s1" .. "s8"], ["t1" .. "t8"], ["u1" .. "u8"], ["v1" .. "v8"], ["w1" .. "w8"], ["x1" .. "x8"], ["y1" .. "y8"], ["z1" .. "z8"], ]; is_deeply $expected, [setPartitionOnIntersectionOverUnionOfHashStringSets (0.50, \%s)]; my $expectedInParallel = ["a1 a2 a3 a4 a5 a6 a7 a8", # Same strings in multiple parallel processes "b1 b2 b3 b4 b5 b6 b7 b8", "b1 b2 b3 b4 b5 b6 b7 b8", "c1 c2 c3 c4 c5 c6 c7 c8", "d1 d2 d3 d4 d5 d6 d7 d8", "d1 d2 d3 d4 d5 d6 d7 d8", "e1 e2 e3 e4 e5 e6 e7 e8", "f1 f2 f3 f4 f5 f6 f7 f8", "f1 f2 f3 f4 f5 f6 f7 f8", "g1 g2 g3 g4 g5 g6 g7 g8", "h1 h2 h3 h4 h5 h6 h7 h8", "h1 h2 h3 h4 h5 h6 h7 h8", "i1 i2 i3 i4 i5 i6 i7 i8", "j1 j2 j3 j4 j5 j6 j7 j8", "j1 j2 j3 j4 j5 j6 j7 j8", "k1 k2 k3 k4 k5 k6 k7 k8", "l1 l2 l3 l4 l5 l6 l7 l8", "l1 l2 l3 l4 l5 l6 l7 l8", "m1 m2 m3 m4 m5 m6 m7 m8", "n1 n2 n3 n4 n5 n6 n7 n8", "n1 n2 n3 n4 n5 n6 n7 n8", "o1 o2 o3 o4 o5 o6 o7 o8", "p1 p2 p3 p4 p5 p6 p7 p8", "q1 q2 q3 q4 q5 q6 q7 q8", "q1 q2 q3 q4 q5 q6 q7 q8", "r1 r2 r3 r4 r5 r6 r7 r8", "s1 s2 s3 s4 s5 s6 s7 s8", "s1 s2 s3 s4 s5 s6 s7 s8", "t1 t2 t3 t4 t5 t6 t7 t8", "u1 u2 u3 u4 u5 u6 u7 u8", "u1 u2 u3 u4 u5 u6 u7 u8", "v1 v2 v3 v4 v5 v6 v7 v8", "w1 w2 w3 w4 w5 w6 w7 w8", "w1 w2 w3 w4 w5 w6 w7 w8", "x1 x2 x3 x4 x5 x6 x7 x8", "y1 y2 y3 y4 y5 y6 y7 y8", "y1 y2 y3 y4 y5 y6 y7 y8", "z1 z2 z3 z4 z5 z6 z7 z8", ]; if (1) {my @p = 𝘀𝗲𝘁𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗢𝗻𝗜𝗻𝘁𝗲𝗿𝘀𝗲𝗰𝘁𝗶𝗼𝗻𝗢𝘃𝗲𝗿𝗨𝗻𝗶𝗼𝗻𝗢𝗳𝗛𝗮𝘀𝗵𝗦𝘁𝗿𝗶𝗻𝗴𝗦𝗲𝘁𝘀𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 (0.50, \%s); is_deeply $expectedInParallel, [sort map {join ' ', @$_} @p]; }
Returns the indices at which an $item matches elements of the specified @array. If the item is a regular expression it is matched as one; else if it is a number it is matched as a number; otherwise it is matched as a string.
Parameter Description 1 $item Item 2 @array Array
is_deeply [1], [𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝘀(1,0..1)]; is_deeply [1,3], [𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝘀(1, qw(0 1 0 1 0 0))]; is_deeply [0, 5], [𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝘀('a', qw(a b c d e a b c d e))]; is_deeply [0, 1, 5], [𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝘀(qr(a+), qw(a baa c d e aa b c d e))];
Returns the number of occurrences in $inString of $searchFor.
Parameter Description 1 $inString String to search in 2 $searchFor String to search for.
if (1) {ok 𝗰𝗼𝘂𝗻𝘁𝗢𝗰𝗰𝘂𝗿𝗲𝗻𝗰𝗲𝘀𝗜𝗻𝗦𝘁𝗿𝗶𝗻𝗴(q(a<b>c<b><b>d), q(<b>)) == 3; }
Partition a hash of strings and associated sizes into partitions with either a maximum size $maxSize or only one element; the hash %Sizes consisting of a mapping {string=>size}; with each partition being named with the shortest string prefix that identifies just the strings in that partition. Returns a list of {prefix => size}... describing each partition.
if (1) {my $ps = \&𝗽𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻𝗦𝘁𝗿𝗶𝗻𝗴𝘀𝗢𝗻𝗣𝗿𝗲𝗳𝗶𝘅𝗕𝘆𝗦𝗶𝘇𝗲; is_deeply {&$ps(1)}, {}; is_deeply {&$ps(1, 1=>0)}, {q()=>0}; is_deeply {&$ps(1, 1=>1)}, {q()=>1}; is_deeply {&$ps(1, 1=>2)}, {1=>2}; is_deeply {&$ps(1, 1=>1,2=>1)}, {1=>1,2=>1}; is_deeply {&$ps(2, 11=>1,12=>1, 21=>1,22=>1)}, {1=>2, 2=>2}; is_deeply {&$ps(2, 111=>1,112=>1,113=>1, 121=>1,122=>1,123=>1, 131=>1,132=>1,133=>1)}, { 111 => 1, 112 => 1, 113 => 1, 121 => 1, 122 => 1, 123 => 1, 131 => 1, 132 => 1, 133 => 1 }; for(3..8) {is_deeply {&$ps($_, 111=>1,112=>1,113=>1, 121=>1,122=>1,123=>1, 131=>1,132=>1,133=>1)}, { 11 => 3, 12 => 3, 13 => 3 }; } is_deeply {&$ps(9, 111=>1,112=>1,113=>1, 121=>1,122=>1,123=>1, 131=>1,132=>1,133=>1)}, { q()=> 9}; is_deeply {&$ps(3, 111=>1,112=>1,113=>1, 121=>1,122=>1,123=>1, 131=>1,132=>1,133=>2)}, { 11 => 3, 12 => 3, 131 => 1, 132 => 1, 133 => 2 }; is_deeply {&$ps(4, 111=>1,112=>1,113=>1, 121=>1,122=>1,123=>1, 131=>1,132=>1,133=>2)}, { 11 => 3, 12 => 3, 13 => 4 }; }
Find the smallest and largest elements of arrays.
Find the minimum number in a list of numbers, confessing to any ill-defined values.
Parameter Description 1 @m Numbers
ok 𝗺𝗶𝗻(1) == 1; ok 𝗺𝗶𝗻(5,4,2,3) == 2;
Find the index of the minimum number in a list of numbers, confessing to any ill-defined values.
ok 𝗶𝗻𝗱𝗲𝘅𝗢𝗳𝗠𝗶𝗻(qw(2 3 1 2)) == 2;
Find the maximum number in a list of numbers, confessing to any ill-defined values.
ok !𝗺𝗮𝘅; ok 𝗺𝗮𝘅(1) == 1; ok 𝗺𝗮𝘅(1,4,2,3) == 4;
Find the index of the maximum number in a list of numbers, confessing to any ill-defined values.
ok 𝗶𝗻𝗱𝗲𝘅𝗢𝗳𝗠𝗮𝘅(qw(2 3 1 2)) == 1;
Find the sum of any strings that look like numbers in an array.
Parameter Description 1 @a Array to sum
ok 𝗮𝗿𝗿𝗮𝘆𝗦𝘂𝗺 (1..10) == 55;
Find the product of any strings that look like numbers in an array.
Parameter Description 1 @a Array to multiply
ok 𝗮𝗿𝗿𝗮𝘆𝗣𝗿𝗼𝗱𝘂𝗰𝘁(1..5) == 120;
Multiply by $multiplier each element of the array @a and return as the result.
Parameter Description 1 $multiplier Multiplier 2 @a Array to multiply and return
is_deeply[𝗮𝗿𝗿𝗮𝘆𝗧𝗶𝗺𝗲𝘀(2, 1..5)], [qw(2 4 6 8 10)];
Format data structures as tables.
Find the longest line in a $string.
Parameter Description 1 $string String of lines of text
ok 3 == 𝗺𝗮𝘅𝗶𝗺𝘂𝗺𝗟𝗶𝗻𝗲𝗟𝗲𝗻𝗴𝘁𝗵(<<END); a bb ccc END
Tabularize an array of arrays of text.
Parameter Description 1 $data Reference to an array of arrays of data to be formatted as a table.
my $d = [[qw(a 1)], [qw(bb 22)], [qw(ccc 333)], [qw(dddd 4444)]]; ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲𝗕𝗮𝘀𝗶𝗰($d) eq <<END, q(ftb); a 1 bb 22 ccc 333 dddd 4444 END
Parameter Description 1 $data Data to be formatted 2 $columnTitles Optional reference to an array of titles or string of column descriptions 3 @options Options
ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ([[qw(A B C D )], [qw(AA BB CC DD )], [qw(AAA BBB CCC DDD )], [qw(AAAA BBBB CCCC DDDD)], [qw(1 22 333 4444)]], [qw(aa bb cc)]) eq <<END; aa bb cc 1 A B C D 2 AA BB CC DD 3 AAA BBB CCC DDD 4 AAAA BBBB CCCC DDDD 5 1 22 333 4444 END ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ([[qw(1 B C)], [qw(22 BB CC)], [qw(333 BBB CCC)], [qw(4444 22 333)]], [qw(aa bb cc)]) eq <<END; aa bb cc 1 1 B C 2 22 BB CC 3 333 BBB CCC 4 4444 22 333 END ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ([{aa=>'A', bb=>'B', cc=>'C'}, {aa=>'AA', bb=>'BB', cc=>'CC'}, {aa=>'AAA', bb=>'BBB', cc=>'CCC'}, {aa=>'1', bb=>'22', cc=>'333'} ]) eq <<END; aa bb cc 1 A B C 2 AA BB CC 3 AAA BBB CCC 4 1 22 333 END ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ({''=>[qw(aa bb cc)], 1=>[qw(A B C)], 22=>[qw(AA BB CC)], 333=>[qw(AAA BBB CCC)], 4444=>[qw(1 22 333)]}) eq <<END; aa bb cc 1 A B C 22 AA BB CC 333 AAA BBB CCC 4444 1 22 333 END ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ({1=>{aa=>'A', bb=>'B', cc=>'C'}, 22=>{aa=>'AA', bb=>'BB', cc=>'CC'}, 333=>{aa=>'AAA', bb=>'BBB', cc=>'CCC'}, 4444=>{aa=>'1', bb=>'22', cc=>'333'}}) eq <<END; aa bb cc 1 A B C 22 AA BB CC 333 AAA BBB CCC 4444 1 22 333 END ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲({aa=>'A', bb=>'B', cc=>'C'}, [qw(aaaa bbbb)]) eq <<END; aaaa bbbb aa A bb B cc C END my $file = fpe(qw(report txt)); # Create a report my $t = 𝗳𝗼𝗿𝗺𝗮𝘁𝗧𝗮𝗯𝗹𝗲 ([["a",undef], [undef, "b0ac"]], # Data - please replace 0a with a new line [undef, "BC"], # Column titles file=>$file, # Output file head=><<END); # Header Sample report. Table has NNNN rows. END ok -e $file; ok readFile($file) eq $t; unlink $file; ok nws($t) eq nws(<<END); Sample report. Table has 2 rows. This file: report.txt BC 1 a 2 b c END
Report of all the reports created. The optional parameters are the same as for formatTable.
Parameter Description 1 @options Options
@formatTables = (); for my $m(2..8) {formatTable([map {[$_, $_*$m]} 1..$m], [q(Single), qq(* $m)], title=>qq(Multiply by $m)); } ok nws(𝗳𝗼𝗿𝗺𝗮𝘁𝘁𝗲𝗱𝗧𝗮𝗯𝗹𝗲𝘀𝗥𝗲𝗽𝗼𝗿𝘁) eq nws(<<END); Rows Title File 1 2 Multiply by 2 2 3 Multiply by 3 3 4 Multiply by 4 4 5 Multiply by 5 5 6 Multiply by 6 6 7 Multiply by 7 7 8 Multiply by 8 END
Count the number of unique instances of each value that a column in a table assumes.
Parameter Description 1 $data Table == array of arrays 2 $column Column number to summarize.
is_deeply [𝘀𝘂𝗺𝗺𝗮𝗿𝗶𝘇𝗲𝗖𝗼𝗹𝘂𝗺𝗻([map {[$_]} qw(A B D B C D C D A C D C B B D)], 0)], [[5, "D"], [4, "B"], [4, "C"], [2, "A"]]; ok nws(formatTable ([map {[split m//, $_]} qw(AA CB CD BC DC DD CD AD AA DC CD CC BB BB BD)], [qw(Col-1 Col-2)], summarize=>1)) eq nws(<<'END'); Summary_of_column - Count of unique values found in each column Use the Geany flick capability by placing your cursor on the first word Comma_Separated_Values_of_column - Comma separated list of the unique values found in each column of these lines and pressing control + down arrow to see each sub report. Col-1 Col-2 1 A A 2 C B 3 C D 4 B C 5 D C 6 D D 7 C D 8 A D 9 A A 10 D C 11 C D 12 C C 13 B B 14 B B 15 B D Summary_of_column_Col-1 Count Col-1 1 5 C 2 4 B 3 3 A 4 3 D Comma_Separated_Values_of_column_Col-1: "A","B","C","D" Summary_of_column_Col-2 Count Col-2 1 6 D 2 4 C 3 3 B 4 2 A Comma_Separated_Values_of_column_Col-2: "A","B","C","D" END
Count keys down to the specified level.
Parameter Description 1 $maxDepth Maximum depth to count to 2 $ref Reference to an array or a hash
my $a = [[1..3], {map{$_=>1} 1..3}]; my $h = {a=>[1..3], b=>{map{$_=>1} 1..3}}; ok 𝗸𝗲𝘆𝗖𝗼𝘂𝗻𝘁(2, $a) == 6; ok 𝗸𝗲𝘆𝗖𝗼𝘂𝗻𝘁(2, $h) == 6;
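A depth-limited recursive count can be sketched as follows; the helper name is illustrative and this is not the module's implementation:

```perl
# Count elements of nested arrays and hashes down to $maxDepth levels.
sub keyCountSketch
 {my ($maxDepth, $ref, $depth) = @_;
  $depth //= 1;
  my @items = ref($ref) eq 'ARRAY' ? @$ref : values %$ref;
  my $n = 0;
  for my $i(@items)
   {$n += ref($i) && $depth < $maxDepth                        # recurse into nested
        ? keyCountSketch($maxDepth, $i, $depth + 1)            # containers until the
        : 1;                                                   # depth limit is hit
   }
  $n
 }
my $a = [[1..3], {map {$_=>1} 1..3}];
die unless keyCountSketch(2, $a) == 6;   # three array elements plus three hash values
```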
Format an array of arrays of scalars as an html table using the %options described in formatTableCheckKeys.
Parameter Description 1 $data Data to be formatted 2 %options Options
if (1) {my $t = 𝗳𝗼𝗿𝗺𝗮𝘁𝗛𝘁𝗺𝗹𝗧𝗮𝗯𝗹𝗲 ([ [qw(1 a)], [qw(2 b)], ], title => q(Sample html table), head => q(Head NNNN rows), foot => q(Footer), columns=> <<END, source The source number target The target letter END ); my $T = <<'END'; <h1>Sample html table</h1> <p>Head 2 rows</p> <p><table borders="0" cellpadding="10" cellspacing="5"> <tr><th><span title="The source number">source</span><th><span title="The target letter">target</span> <tr><td>1<td>a <tr><td>2<td>b </table></p> <p><pre> source The source number target The target letter </pre></p> <p>Footer</p> <span class="options" style="display: none">{ columns => "source The source number target The target letter ", foot => "Footer", head => "Head NNNN rows", rows => 2, title => "Sample html table", }</span> END ok "$t " eq $T; }
Create an index of html reports.
Parameter Description 1 $reports Reports folder 2 $title Title of report of reports 3 $url $url to get files 4 $columns Number of columns - defaults to 1
if (1) {my $reports = temporaryFolder; formatHtmlAndTextTables ($reports, $reports, q(/cgi-bin/getFile.pl?), q(/a/), [[qw(1 /a/a)], [qw(2 /a/b)], ], title => q(Bad files), head => q(Head NNNN rows), foot => q(Footer), file => q(bad.html), facet => q(files), aspectColor => "red", columns => <<END, source The source number target The target letter END ); formatHtmlAndTextTables ($reports, $reports, q(/cgi-bin/getFile.pl?file=), q(/a/), [[qw(1 /a/a1)], [qw(2 /a/b2)], [qw(3 /a/b3)], ], title => q(Good files), head => q(Head NNNN rows), foot => q(Footer), file => q(good.html), facet => q(files), aspectColor => "green", columns => <<END, source The source number target The target letter END ); formatHtmlAndTextTablesWaitPids; my $result = 𝗳𝗼𝗿𝗺𝗮𝘁𝗛𝘁𝗺𝗹𝗧𝗮𝗯𝗹𝗲𝘀𝗜𝗻𝗱𝗲𝘅($reports, q(TITLE), q(/cgi-bin/getFile.pl?file=)); ok $result =~ m(3.*Good files); ok $result =~ m(2.*Bad files); # ok $result =~ m(green.*>3<.*>Good files); # ok $result =~ m(red.*>2<.*>Bad files); clearFolder($reports, 11); }
Wait on all table formatting pids to complete.
if (1) {my $reports = temporaryFolder; formatHtmlAndTextTables ($reports, $reports, q(/cgi-bin/getFile.pl?), q(/a/), [[qw(1 /a/a)], [qw(2 /a/b)], ], title => q(Bad files), head => q(Head NNNN rows), foot => q(Footer), file => q(bad.html), facet => q(files), aspectColor => "red", columns => <<END, source The source number target The target letter END ); formatHtmlAndTextTables ($reports, $reports, q(/cgi-bin/getFile.pl?file=), q(/a/), [[qw(1 /a/a1)], [qw(2 /a/b2)], [qw(3 /a/b3)], ], title => q(Good files), head => q(Head NNNN rows), foot => q(Footer), file => q(good.html), facet => q(files), aspectColor => "green", columns => <<END, source The source number target The target letter END ); 𝗳𝗼𝗿𝗺𝗮𝘁𝗛𝘁𝗺𝗹𝗔𝗻𝗱𝗧𝗲𝘅𝘁𝗧𝗮𝗯𝗹𝗲𝘀𝗪𝗮𝗶𝘁𝗣𝗶𝗱𝘀; my $result = formatHtmlTablesIndex($reports, q(TITLE), q(/cgi-bin/getFile.pl?file=)); ok $result =~ m(3.*Good files); ok $result =~ m(2.*Bad files); # ok $result =~ m(green.*>3<.*>Good files); # ok $result =~ m(red.*>2<.*>Bad files); clearFolder($reports, 11); }
Create text and html versions of a tabular report.
Parameter Description 1 $reports Folder to contain text reports 2 $html Folder to contain html reports 3 $getFile L<url|https://en.wikipedia.org/wiki/URL> to get files 4 $filePrefix File prefix to be removed from file entries or array of file prefixes 5 $data Data 6 %options Options
if (1) {my $reports = temporaryFolder; 𝗳𝗼𝗿𝗺𝗮𝘁𝗛𝘁𝗺𝗹𝗔𝗻𝗱𝗧𝗲𝘅𝘁𝗧𝗮𝗯𝗹𝗲𝘀 ($reports, $reports, q(/cgi-bin/getFile.pl?), q(/a/), [[qw(1 /a/a)], [qw(2 /a/b)], ], title => q(Bad files), head => q(Head NNNN rows), foot => q(Footer), file => q(bad.html), facet => q(files), aspectColor => "red", columns => <<END, source The source number target The target letter END ); 𝗳𝗼𝗿𝗺𝗮𝘁𝗛𝘁𝗺𝗹𝗔𝗻𝗱𝗧𝗲𝘅𝘁𝗧𝗮𝗯𝗹𝗲𝘀 ($reports, $reports, q(/cgi-bin/getFile.pl?file=), q(/a/), [[qw(1 /a/a1)], [qw(2 /a/b2)], [qw(3 /a/b3)], ], title => q(Good files), head => q(Head NNNN rows), foot => q(Footer), file => q(good.html), facet => q(files), aspectColor => "green", columns => <<END, source The source number target The target letter END ); formatHtmlAndTextTablesWaitPids; my $result = formatHtmlTablesIndex($reports, q(TITLE), q(/cgi-bin/getFile.pl?file=)); ok $result =~ m(3.*Good files); ok $result =~ m(2.*Bad files); # ok $result =~ m(green.*>3<.*>Good files); # ok $result =~ m(red.*>2<.*>Bad files); clearFolder($reports, 11); }
Load data structures from lines.
Load an array from lines of text in a string.
Parameter Description 1 $string The string of lines from which to create an array
my $s = 𝗹𝗼𝗮𝗱𝗔𝗿𝗿𝗮𝘆𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; a a b b END is_deeply $s, [q(a a), q(b b)]; ok formatTable($s) eq <<END; 0 a a 1 b b END
Load a hash: first word of each line is the key and the rest is the value.
Parameter Description 1 $string The string of lines from which to create a hash
my $s = 𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; a 10 11 12 b 20 21 22 END is_deeply $s, {a => q(10 11 12), b =>q(20 21 22)}; ok formatTable($s) eq <<END; a 10 11 12 b 20 21 22 END
Load an array of arrays from lines of text: each line is an array of words.
Parameter Description 1 $string The string of lines from which to create an array of arrays
my $s = 𝗹𝗼𝗮𝗱𝗔𝗿𝗿𝗮𝘆𝗔𝗿𝗿𝗮𝘆𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; A B C AA BB CC END is_deeply $s, [[qw(A B C)], [qw(AA BB CC)]]; ok formatTable($s) eq <<END; 1 A B C 2 AA BB CC END
Load a hash of arrays from lines of text: the first word of each line is the key, the remaining words are the array contents.
Parameter Description 1 $string The string of lines from which to create a hash of arrays
my $s = 𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵𝗔𝗿𝗿𝗮𝘆𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; a A B C b AA BB CC END is_deeply $s, {a =>[qw(A B C)], b => [qw(AA BB CC)] }; ok formatTable($s) eq <<END; a A B C b AA BB CC END
Load an array of hashes from lines of text: each line is a hash of words.
my $s = 𝗹𝗼𝗮𝗱𝗔𝗿𝗿𝗮𝘆𝗛𝗮𝘀𝗵𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; A 1 B 2 AA 11 BB 22 END is_deeply $s, [{A=>1, B=>2}, {AA=>11, BB=>22}]; ok formatTable($s) eq <<END; A AA B BB 1 1 2 2 11 22 END
Load a hash of hashes from lines of text: the first word of each line is the key, the remaining words are the sub hash contents.
my $s = 𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵𝗛𝗮𝘀𝗵𝗙𝗿𝗼𝗺𝗟𝗶𝗻𝗲𝘀 <<END; a A 1 B 2 b AA 11 BB 22 END is_deeply $s, {a=>{A=>1, B=>2}, b=>{AA=>11, BB=>22}}; ok formatTable($s) eq <<END; A AA B BB a 1 2 b 11 22 END
Check that the keys in a hash conform to those $permitted.
Parameter Description 1 $hash The hash to test 2 $permitted A hash of the permitted keys and their meanings
eval q{𝗰𝗵𝗲𝗰𝗸𝗞𝗲𝘆𝘀({a=>1, b=>2, d=>3}, {a=>1, b=>2, c=>3})}; ok nws($@) =~ m(\AInvalid options chosen: d Permitted.+?: a 1 b 2 c 3);
Replace $a->{value} = $b with $a->value = $b which reduces the amount of typing required, is easier to read and provides a hard check that {value} is spelled correctly.
Generate lvalue scalar methods in the current package. A method whose value has not yet been set will return a new scalar with value undef. Suffixing X to the scalar name will confess if a value has not been set.
Parameter Description 1 @names List of method names
package Scalars; my $a = bless{}; Data::Table::Text::𝗴𝗲𝗻𝗟𝗩𝗮𝗹𝘂𝗲𝗦𝗰𝗮𝗹𝗮𝗿𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qw(aa bb cc)); $a->aa = 'aa'; Test::More::ok $a->aa eq 'aa'; Test::More::ok !$a->bb; Test::More::ok $a->bbX eq q(); $a->aa = undef; Test::More::ok !$a->aa;
Generate lvalue scalar methods in the current package if they do not already exist. A method whose value has not yet been set will return a new scalar with value undef. Suffixing X to the scalar name will confess if a value has not been set.
my $class = "Data::Table::Text::Test"; my $a = bless{}, $class; 𝗮𝗱𝗱𝗟𝗩𝗮𝗹𝘂𝗲𝗦𝗰𝗮𝗹𝗮𝗿𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qq(${class}::$_)) for qw(aa bb aa bb); $a->aa = 'aa'; ok $a->aa eq 'aa'; ok !$a->bb; ok $a->bbX eq q(); $a->aa = undef; ok !$a->aa;
Generate lvalue scalar methods with default values in the current package. A reference to a method whose value has not yet been set will return a scalar whose value is the name of the method.
package ScalarsWithDefaults; my $a = bless{}; Data::Table::Text::𝗴𝗲𝗻𝗟𝗩𝗮𝗹𝘂𝗲𝗦𝗰𝗮𝗹𝗮𝗿𝗠𝗲𝘁𝗵𝗼𝗱𝘀𝗪𝗶𝘁𝗵𝗗𝗲𝗳𝗮𝘂𝗹𝘁𝗩𝗮𝗹𝘂𝗲𝘀(qw(aa bb cc)); Test::More::ok $a->aa eq 'aa';
Generate lvalue array methods in the current package. A reference to a method that has not yet been set will return a reference to an empty array.
package Arrays; my $a = bless{}; Data::Table::Text::𝗴𝗲𝗻𝗟𝗩𝗮𝗹𝘂𝗲𝗔𝗿𝗿𝗮𝘆𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qw(aa bb cc)); $a->aa->[1] = 'aa'; Test::More::ok $a->aa->[1] eq 'aa';
Generate lvalue hash methods in the current package. A reference to a method that has not yet been set will return a reference to an empty hash.
Parameter Description 1 @names Method names
package Hashes; my $a = bless{}; Data::Table::Text::𝗴𝗲𝗻𝗟𝗩𝗮𝗹𝘂𝗲𝗛𝗮𝘀𝗵𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qw(aa bb cc)); $a->aa->{a} = 'aa'; Test::More::ok $a->aa->{a} eq 'aa';
Return a blessed hash in package $bless with the specified %attributes and lvalue methods generated to access each attribute.
Parameter Description 1 $bless Package name 2 %attributes Hash of attribute names and values
my $o = 𝗴𝗲𝗻𝗛𝗮𝘀𝗵(q(TestHash), # Definition of a blessed hash. a=>q(aa), # Definition of attribute aa. b=>q(bb), # Definition of attribute bb. ); ok $o->a eq q(aa); is_deeply $o, {a=>"aa", b=>"bb"}; my $p = 𝗴𝗲𝗻𝗛𝗮𝘀𝗵(q(TestHash), c=>q(cc), # Definition of attribute cc. ); ok $p->c eq q(cc); ok $p->a = q(aa); ok $p->a eq q(aa); is_deeply $p, {a=>"aa", c=>"cc"}; loadHash($p, a=>11, b=>22); # Load the hash is_deeply $p, {a=>11, b=>22, c=>"cc"}; my $r = eval {loadHash($p, d=>44)}; # Try to load the hash ok $@ =~ m(Cannot load attribute: d);
Load the specified blessed $hash generated with genHash with %attributes. Confess to any unknown attribute names.
Parameter Description 1 $hash Hash 2 %attributes Hash of attribute names and values to be loaded
my $o = genHash(q(TestHash), # Definition of a blessed hash. a=>q(aa), # Definition of attribute aa. b=>q(bb), # Definition of attribute bb. ); ok $o->a eq q(aa); is_deeply $o, {a=>"aa", b=>"bb"}; my $p = genHash(q(TestHash), c=>q(cc), # Definition of attribute cc. ); ok $p->c eq q(cc); ok $p->a = q(aa); ok $p->a eq q(aa); is_deeply $p, {a=>"aa", c=>"cc"}; 𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵($p, a=>11, b=>22); # Load the hash is_deeply $p, {a=>11, b=>22, c=>"cc"}; my $r = eval {𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵($p, d=>44)}; # Try to load the hash ok $@ =~ m(Cannot load attribute: d);
Ensures that all the hashes within a tower of data structures have lvalue methods to get and set their current keys.
Parameter Description 1 $d Data structure
if (1) {my $a = bless [bless {aaa=>42}, "AAAA"], "BBBB"; eval {$a->[0]->aaa}; ok $@ =~ m(\ACan.t locate object method .aaa. via package .AAAA.); 𝗿𝗲𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵𝗲𝘀($a); ok $a->[0]->aaa == 42; } if (1) {my $a = bless [bless {ccc=>42}, "CCCC"], "DDDD"; eval {$a->[0]->ccc}; ok $@ =~ m(\ACan.t locate object method .ccc. via package .CCCC.); 𝗿𝗲𝗹𝗼𝗮𝗱𝗛𝗮𝘀𝗵𝗲𝘀($a); ok $a->[0]->ccc == 42; }
Set a package search order for methods requested in the current package via AUTOLOAD.
Parameter Description 1 $set Package to set 2 @search Package names in search order.
if (1) {if (1) {package AAAA; sub aaaa{q(AAAAaaaa)} sub bbbb{q(AAAAbbbb)} sub cccc{q(AAAAcccc)} } if (1) {package BBBB; sub aaaa{q(BBBBaaaa)} sub bbbb{q(BBBBbbbb)} sub dddd{q(BBBBdddd)} } if (1) {package CCCC; sub aaaa{q(CCCCaaaa)} sub dddd{q(CCCCdddd)} sub eeee{q(CCCCeeee)} } 𝘀𝗲𝘁𝗣𝗮𝗰𝗸𝗮𝗴𝗲𝗦𝗲𝗮𝗿𝗰𝗵𝗢𝗿𝗱𝗲𝗿(__PACKAGE__, qw(CCCC BBBB AAAA)); ok &aaaa eq q(CCCCaaaa); ok &bbbb eq q(BBBBbbbb); ok &cccc eq q(AAAAcccc); ok &aaaa eq q(CCCCaaaa); ok &bbbb eq q(BBBBbbbb); ok &cccc eq q(AAAAcccc); ok &dddd eq q(CCCCdddd); ok &eeee eq q(CCCCeeee); 𝘀𝗲𝘁𝗣𝗮𝗰𝗸𝗮𝗴𝗲𝗦𝗲𝗮𝗿𝗰𝗵𝗢𝗿𝗱𝗲𝗿(__PACKAGE__, qw(AAAA BBBB CCCC)); ok &aaaa eq q(AAAAaaaa); ok &bbbb eq q(AAAAbbbb); ok &cccc eq q(AAAAcccc); ok &aaaa eq q(AAAAaaaa); ok &bbbb eq q(AAAAbbbb); ok &cccc eq q(AAAAcccc); ok &dddd eq q(BBBBdddd); ok &eeee eq q(CCCCeeee); }
Test whether the specified $package contains the subroutine $sub.
Parameter Description 1 $package Package name 2 $sub Subroutine name
if (1) {sub AAAA::Call {q(AAAA)} sub BBBB::Call {q(BBBB)} sub BBBB::call {q(bbbb)} if (1) {package BBBB; use Test::More; *ok = *Test::More::ok; *𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲 = *Data::Table::Text::𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲; ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(AAAA), q(Call)); ok !𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(AAAA), q(call)); ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(BBBB), q(Call)); ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(BBBB), q(call)); ok Call eq q(BBBB); ok call eq q(bbbb); &Data::Table::Text::overrideMethods(qw(AAAA BBBB Call call)); *𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲 = *Data::Table::Text::𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲; ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(AAAA), q(Call)); ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(AAAA), q(call)); ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(BBBB), q(Call)); ok 𝗶𝘀𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗰𝗸𝗮𝗴𝗲(q(BBBB), q(call)); ok Call eq q(AAAA); ok call eq q(bbbb); package AAAA; use Test::More; *ok = *Test::More::ok; ok Call eq q(AAAA); ok &call eq q(bbbb); } }
For each method, if it exists in package $from then export it to package $to, replacing any existing method in $to; otherwise export the method from package $to to package $from. This merges the behavior of the $from and $to packages with respect to the named methods, with duplicates resolved in favour of package $from.
Parameter Description 1 $from Name of package from which to import methods 2 $to Package into which to import the methods 3 @methods List of methods to try importing.
if (1) {sub AAAA::Call {q(AAAA)} sub BBBB::Call {q(BBBB)} sub BBBB::call {q(bbbb)} if (1) {package BBBB; use Test::More; *ok = *Test::More::ok; *isSubInPackage = *Data::Table::Text::isSubInPackage; ok isSubInPackage(q(AAAA), q(Call)); ok !isSubInPackage(q(AAAA), q(call)); ok isSubInPackage(q(BBBB), q(Call)); ok isSubInPackage(q(BBBB), q(call)); ok Call eq q(BBBB); ok call eq q(bbbb); &Data::Table::Text::𝗼𝘃𝗲𝗿𝗿𝗶𝗱𝗲𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qw(AAAA BBBB Call call)); *isSubInPackage = *Data::Table::Text::isSubInPackage; ok isSubInPackage(q(AAAA), q(Call)); ok isSubInPackage(q(AAAA), q(call)); ok isSubInPackage(q(BBBB), q(Call)); ok isSubInPackage(q(BBBB), q(call)); ok Call eq q(AAAA); ok call eq q(bbbb); package AAAA; use Test::More; *ok = *Test::More::ok; ok Call eq q(AAAA); ok &call eq q(bbbb); } }
This is a static method and so should be invoked as:
Data::Table::Text::overrideMethods
Override methods down the list of @packages, then reabsorb any unused methods back up the list so that all the packages end up with the same methods as the last package, with methods from packages mentioned earlier overriding methods from packages mentioned later. The methods to override and reabsorb are listed by the sub overridableMethods in the last package in the packages list. Confesses on any errors.
Parameter Description 1 @packages List of packages
ok 𝗼𝘃𝗲𝗿𝗿𝗶𝗱𝗲𝗔𝗻𝗱𝗥𝗲𝗮𝗯𝘀𝗼𝗿𝗯𝗠𝗲𝘁𝗵𝗼𝗱𝘀(qw(main Edit::Xml Data::Edit::Xml));
Data::Table::Text::overrideAndReabsorbMethods
Confirm that the specified references are to the specified package.
Parameter Description 1 $package Package 2 @refs References
eval q{𝗮𝘀𝘀𝗲𝗿𝘁𝗣𝗮𝗰𝗸𝗮𝗴𝗲𝗥𝗲𝗳𝘀(q(bbb), bless {}, q(aaa))}; ok $@ =~ m(\AWanted reference to bbb, but got aaa);
Confirm that the specified references are to the package into which this routine has been exported.
Parameter Description 1 @refs References
eval q{𝗮𝘀𝘀𝗲𝗿𝘁𝗥𝗲𝗳(bless {}, q(aaa))}; ok $@ =~ m(\AWanted reference to Data::Table::Text, but got aaa);
Create a hash from an array: the elements of the array become the keys of the hash and each key is given the value 1.
Parameter Description 1 @array Array
is_deeply 𝗮𝗿𝗿𝗮𝘆𝗧𝗼𝗛𝗮𝘀𝗵(qw(a b c)), {a=>1, b=>1, c=>1};
Flatten an array of scalars, array references and hash references into an array of scalars by flattening the array references and hash values.
Parameter Description 1 @array Array to flatten
is_deeply [1..5], [𝗳𝗹𝗮𝘁𝘁𝗲𝗻𝗔𝗿𝗿𝗮𝘆𝗔𝗻𝗱𝗛𝗮𝘀𝗵𝗩𝗮𝗹𝘂𝗲𝘀([1], [[2]], {a=>3, b=>[4, [5]]})], 'ggg';
Returns the (package, name, file, line) of a Perl $sub reference.
Parameter Description 1 $sub Reference to a sub with a name.
is_deeply [(𝗴𝗲𝘁𝗦𝘂𝗯𝗡𝗮𝗺𝗲(\&dateTime))[0,1]], ["Data::Table::Text", "dateTime"];
Actions on strings.
Get the MD5 sum of a $string that might contain utf8 code points.
my $s = join '', 1..100; my $m = q(ef69caaaeea9c17120821a9eb6c7f1de); ok 𝘀𝘁𝗿𝗶𝗻𝗴𝗠𝗱𝟱𝗦𝘂𝗺($s) eq $m; my $f = writeFile(undef, $s); ok fileMd5Sum($f) eq $m; unlink $f; ok guidFromString(join '', 1..100) eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de); ok guidFromMd5(𝘀𝘁𝗿𝗶𝗻𝗴𝗠𝗱𝟱𝗦𝘂𝗺(join('', 1..100))) eq q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de); ok md5FromGuid(q(GUID-ef69caaa-eea9-c171-2082-1a9eb6c7f1de)) eq q(ef69caaaeea9c17120821a9eb6c7f1de); ok 𝘀𝘁𝗿𝗶𝗻𝗴𝗠𝗱𝟱𝗦𝘂𝗺(q(𝝰 𝝱 𝝲)) eq q(3c2b7c31b1011998bd7e1f66fb7c024d); } if (1) {ok arraySum (1..10) == 55; ok arrayProduct(1..5) == 120; is_deeply[arrayTimes(2, 1..5)], [qw(2 4 6 8 10)];
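The reason stringMd5Sum mentions utf8 code points is that Digest::MD5 operates on bytes and croaks on strings containing wide characters, so code points must be encoded first. A sketch of that approach (an assumption about the implementation, shown for illustration; the expected digest is the one asserted in the test above):

```perl
# Encode code points to utf8 bytes before hashing, since Digest::MD5
# works on bytes, not characters. Ascii strings pass through unchanged.
use Digest::MD5 qw(md5_hex);
use Encode      qw(encode_utf8);

my $s = join '', 1..100;
print md5_hex(encode_utf8($s)), "\n";                  # ef69caaaeea9c17120821a9eb6c7f1de
```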
Indent lines contained in a string or formatted table by the specified string.
Parameter Description 1 $string The string of lines to indent 2 $indent The indenting string
my $t = [qw(aa bb cc)]; my $d = [[qw(A B C)], [qw(AA BB CC)], [qw(AAA BBB CCC)], [qw(1 22 333)]]; my $s = 𝗶𝗻𝗱𝗲𝗻𝘁𝗦𝘁𝗿𝗶𝗻𝗴(formatTable($d), ' ')." "; ok $s eq <<END; 1 A B C 2 AA BB CC 3 AAA BBB CCC 4 1 22 333 END
Replace all instances of $source in $string with $target.
Parameter Description 1 $string String in which to replace substrings 2 $source The string to be replaced 3 $target The replacement string
ok 𝗿𝗲𝗽𝗹𝗮𝗰𝗲𝗦𝘁𝗿𝗶𝗻𝗴𝗪𝗶𝘁𝗵𝗦𝘁𝗿𝗶𝗻𝗴(q(abababZ), q(ab), q(c)) eq q(cccZ), 'eee';
Format the specified $string so it can be displayed in $width columns.
Parameter Description 1 $string The string of text to format 2 $width The formatted width.
ok 𝗳𝗼𝗿𝗺𝗮𝘁𝗦𝘁𝗿𝗶𝗻𝗴(<<END, 16) eq <<END, 'fff'; Now is the time for all good men to come to the rescue of the ailing B<party>. END
Test whether a string is blank.
ok 𝗶𝘀𝗕𝗹𝗮𝗻𝗸(""); ok 𝗶𝘀𝗕𝗹𝗮𝗻𝗸(" ");
Remove any white space from the front and back of a string.
ok 𝘁𝗿𝗶𝗺(" a b ") eq join ' ', qw(a b);
Pad the specified $string to a multiple of the specified $length with blanks or the specified padding character.
Parameter Description 1 $string String 2 $length Tab width 3 $padding Padding string
ok 𝗽𝗮𝗱('abc ', 2).'=' eq "abc ="; ok 𝗽𝗮𝗱('abc ', 3).'=' eq "abc="; ok 𝗽𝗮𝗱('abc ', 4, q(.)).'=' eq "abc.=";
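The semantics shown above (trailing blanks ignored, then pad to the next multiple of $length) can be imitated in a few lines of plain Perl. This is an illustrative sketch, not the module's code:

```perl
# Illustrative reimplementation of pad: strip trailing blanks, then
# append padding characters up to the next multiple of $length.
sub padSketch
 {my ($string, $length, $padding) = @_;
  $padding //= q( );                                   # Default padding is a blank
  $string  =~ s/\s+\z//;                               # Ignore existing trailing blanks
  my $n = (-length($string)) % $length;                # Characters still needed
  $string . $padding x $n
 }

print padSketch(q(abc), 2).q(=), "\n";                 # abc =
print padSketch(q(abc), 3).q(=), "\n";                 # abc=
print padSketch(q(abc), 4, q(.)).q(=), "\n";           # abc.=
```

Perl's % operator returns a result with the sign of the right operand, so (-length) % $length directly yields the shortfall to the next multiple.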
Return the first N characters of a string.
Parameter Description 1 $string String 2 $length Length
ok 𝗳𝗶𝗿𝘀𝘁𝗡𝗖𝗵𝗮𝗿𝘀(q(abc), 2) eq q(ab); ok 𝗳𝗶𝗿𝘀𝘁𝗡𝗖𝗵𝗮𝗿𝘀(q(abc), 4) eq q(abc);
Normalize white space in a string to make comparisons easier. Leading and trailing white space is removed; blocks of white space in the interior are reduced to a single space. In effect: this puts everything on one long line with never more than one space at a time. Optionally a maximum length is applied to the normalized string.
Parameter Description 1 $string String to normalize 2 $length Maximum length of result
ok 𝗻𝘄𝘀(qq(a b c)) eq q(a b c);
Remove sequentially duplicated words in a string.
Parameter Description 1 $s String to deduplicate
ok 𝗱𝗲𝗱𝘂𝗽𝗹𝗶𝗰𝗮𝘁𝗲𝗦𝗲𝗾𝘂𝗲𝗻𝘁𝗶𝗮𝗹𝗪𝗼𝗿𝗱𝘀𝗜𝗻𝗦𝘁𝗿𝗶𝗻𝗴(<<END) eq qq(\(aa \[bb \-cc dd ee ); (aa [bb bb -cc cc dd dd dd dd ee ee ee ee END
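Sequential duplicates can be removed with a single backreference substitution; a plain-Perl sketch of the idea (not necessarily the module's implementation):

```perl
# Collapse runs of the same word into a single occurrence: \1 matches
# a repeat of the word just captured, however many times it recurs.
my $s = q(aa aa bb bb bb cc);
(my $d = $s) =~ s/\b(\w+)(\s+\1\b)+/$1/g;
print "$d\n";                                          # aa bb cc
```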
Remove HTML or XML tags from a string.
Parameter Description 1 $string String to detag
ok 𝗱𝗲𝘁𝗮𝗴𝗦𝘁𝗿𝗶𝗻𝗴(q(<a><a href="aaaa">a </a><a/>b </a>c)) eq q(a b c), 'hhh';
Parse a $string into words and quoted strings with an optional $limit on the number of words and strings to parse out.
Parameter Description 1 $string String 2 $limit Optional limit
if (1) {is_deeply [𝗽𝗮𝗿𝘀𝗲𝗜𝗻𝘁𝗼𝗪𝗼𝗿𝗱𝘀𝗔𝗻𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝘀 (q( aa12! a"b "aa !! ++ bb" ' ', '"', "'", ' '. "" ''.)) ], ["aa12!", "a\"b", "aa !! ++ bb", " ", ",", "\"", ",", "'", ",", " ", ".", "", "" ]; }
Return the common start followed by the two non equal tails of two non equal strings or an empty list if the strings are equal.
Parameter Description 1 $a First string 2 $b Second string
ok !𝘀𝘁𝗿𝗶𝗻𝗴𝘀𝗔𝗿𝗲𝗡𝗼𝘁𝗘𝗾𝘂𝗮𝗹(q(abc), q(abc)); ok 𝘀𝘁𝗿𝗶𝗻𝗴𝘀𝗔𝗿𝗲𝗡𝗼𝘁𝗘𝗾𝘂𝗮𝗹(q(abc), q(abd)); is_deeply [𝘀𝘁𝗿𝗶𝗻𝗴𝘀𝗔𝗿𝗲𝗡𝗼𝘁𝗘𝗾𝘂𝗮𝗹(q(abc), q(abd))], [qw(ab c d)]; is_deeply [𝘀𝘁𝗿𝗶𝗻𝗴𝘀𝗔𝗿𝗲𝗡𝗼𝘁𝗘𝗾𝘂𝗮𝗹(q(ab), q(abd))], [q(ab), '', q(d)];
Print an array of words in qw() format.
Parameter Description 1 @words Array of words
ok 𝗽𝗿𝗶𝗻𝘁𝗤𝘄(qw(a b c)) eq q(qw(a b c));
The number of lines in a string.
ok 𝗻𝘂𝗺𝗯𝗲𝗿𝗢𝗳𝗟𝗶𝗻𝗲𝘀𝗜𝗻𝗦𝘁𝗿𝗶𝗻𝗴("a b ") == 2;
Extract the package name from a Java string or file.
Parameter Description 1 $java Java file if it exists else the string of Java
my $j = writeFile(undef, <<END); // Test package com.xyz; END ok 𝗷𝗮𝘃𝗮𝗣𝗮𝗰𝗸𝗮𝗴𝗲($j) eq "com.xyz"; ok javaPackageAsFileName($j) eq "com/xyz"; unlink $j; my $p = writeFile(undef, <<END); package a::b; END ok perlPackage($p) eq "a::b"; unlink $p;
Extract the package name from a Java string or file and convert it to a file name.
my $j = writeFile(undef, <<END); // Test package com.xyz; END ok javaPackage($j) eq "com.xyz"; ok 𝗷𝗮𝘃𝗮𝗣𝗮𝗰𝗸𝗮𝗴𝗲𝗔𝘀𝗙𝗶𝗹𝗲𝗡𝗮𝗺𝗲($j) eq "com/xyz"; unlink $j; my $p = writeFile(undef, <<END); package a::b; END ok perlPackage($p) eq "a::b"; unlink $p;
Extract the package name from a Perl string or file.
Parameter Description 1 $perl Perl file if it exists else the string of Perl
my $j = writeFile(undef, <<END); // Test package com.xyz; END ok javaPackage($j) eq "com.xyz"; ok javaPackageAsFileName($j) eq "com/xyz"; unlink $j; my $p = writeFile(undef, <<END); package a::b; END ok 𝗽𝗲𝗿𝗹𝗣𝗮𝗰𝗸𝗮𝗴𝗲($p) eq "a::b"; unlink $p; my $p = writeFile(undef, <<END); package a::b; END ok 𝗽𝗲𝗿𝗹𝗣𝗮𝗰𝗸𝗮𝗴𝗲($p) eq "a::b";
Extract the Javascript functions marked for export in a file or string. Functions are marked for export by placing the word function in column 1 followed by //E on the same line. The end of the exported function is located by a closing }.
Parameter Description 1 $fileOrString File or string
ok 𝗷𝗮𝘃𝗮𝗦𝗰𝗿𝗶𝗽𝘁𝗘𝘅𝗽𝗼𝗿𝘁𝘀(<<END) eq <<END; function aaa() //E {console.log('aaa');
Choose a string at random from the list of @strings supplied.
Parameter Description 1 @strings Strings to choose from
ok q(a) eq 𝗰𝗵𝗼𝗼𝘀𝗲𝗦𝘁𝗿𝗶𝗻𝗴𝗔𝘁𝗥𝗮𝗻𝗱𝗼𝗺(qw(a a a a));
Translate ASCII alphanumerics in strings to various Unicode blocks.
Convert alphanumerics in a string to Unicode Mathematical Bold.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗕𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝐀𝐏𝐏𝐋𝐄𝐒 𝐚𝐧𝐝 𝐎𝐑𝐀𝐍𝐆𝐄𝐒);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Bold.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗕𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝐀𝐏𝐏𝐋𝐄𝐒 𝐚𝐧𝐝 𝐎𝐑𝐀𝐍𝐆𝐄𝐒)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical Bold Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗕𝗼𝗹𝗱𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝑨𝑷𝑷𝑳𝑬𝑺 𝒂𝒏𝒅 𝑶𝑹𝑨𝑵𝑮𝑬𝑺);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Bold Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗕𝗼𝗹𝗱𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝑨𝑷𝑷𝑳𝑬𝑺 𝒂𝒏𝒅 𝑶𝑹𝑨𝑵𝑮𝑬𝑺)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical Sans Serif.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝖠𝖯𝖯𝖫𝖤𝖲 𝖺𝗇𝖽 𝖮𝖱𝖠𝖭𝖦𝖤𝖲);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝖠𝖯𝖯𝖫𝖤𝖲 𝖺𝗇𝖽 𝖮𝖱𝖠𝖭𝖦𝖤𝖲)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical Sans Serif Bold.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗕𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝗔𝗣𝗣𝗟𝗘𝗦 𝗮𝗻𝗱 𝗢𝗥𝗔𝗡𝗚𝗘𝗦);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Bold.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗕𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝗔𝗣𝗣𝗟𝗘𝗦 𝗮𝗻𝗱 𝗢𝗥𝗔𝗡𝗚𝗘𝗦)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical Sans Serif Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝘈𝘗𝘗𝘓𝘌𝘚 𝘢𝘯𝘥 𝘖𝘙𝘈𝘕𝘎𝘌𝘚);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝘈𝘗𝘗𝘓𝘌𝘚 𝘢𝘯𝘥 𝘖𝘙𝘈𝘕𝘎𝘌𝘚)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical Sans Serif Bold Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗕𝗼𝗹𝗱𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝘼𝙋𝙋𝙇𝙀𝙎 𝙖𝙣𝙙 𝙊𝙍𝘼𝙉𝙂𝙀𝙎);
Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Bold Italic.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗦𝗮𝗻𝘀𝗦𝗲𝗿𝗶𝗳𝗕𝗼𝗹𝗱𝗜𝘁𝗮𝗹𝗶𝗰𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼(q(𝘼𝙋𝙋𝙇𝙀𝙎 𝙖𝙣𝙙 𝙊𝙍𝘼𝙉𝙂𝙀𝙎)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to Unicode Mathematical MonoSpace.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗠𝗼𝗻𝗼𝗦𝗽𝗮𝗰𝗲𝗦𝘁𝗿𝗶𝗻𝗴 (q(APPLES and ORANGES)) eq q(𝙰𝙿𝙿𝙻𝙴𝚂 𝚊𝚗𝚍 𝙾𝚁𝙰𝙽𝙶𝙴𝚂);
Undo the conversion of alphanumerics in a string to Unicode Mathematical MonoSpace.
ok 𝗺𝗮𝘁𝗵𝗲𝗺𝗮𝘁𝗶𝗰𝗮𝗹𝗠𝗼𝗻𝗼𝗦𝗽𝗮𝗰𝗲𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (q(𝙰𝙿𝙿𝙻𝙴𝚂 𝚊𝚗𝚍 𝙾𝚁𝙰𝙽𝙶𝙴𝚂)) eq q(APPLES and ORANGES);
Convert alphanumerics in a string to bold.
ok 𝗯𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴(q(zZ)) eq q(𝘇𝗭);
Undo the conversion of alphanumerics in a string to bold.
if (1) {my $n = 1234567890; ok 𝗯𝗼𝗹𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (boldString($n)) == $n; ok enclosedStringUndo (enclosedString($n)) == $n; ok enclosedReversedStringUndo(enclosedReversedString($n)) == $n; ok superScriptStringUndo (superScriptString($n)) == $n; ok subScriptStringUndo (subScriptString($n)) == $n; }
Convert alphanumerics in a string to enclosed alphanumerics.
ok 𝗲𝗻𝗰𝗹𝗼𝘀𝗲𝗱𝗦𝘁𝗿𝗶𝗻𝗴(q(hello world 1234)) eq q(ⓗⓔⓛⓛⓞ ⓦⓞⓡⓛⓓ ①②③④);
Undo the conversion of alphanumerics in a string to enclosed alphanumerics.
if (1) {my $n = 1234567890; ok boldStringUndo (boldString($n)) == $n; ok 𝗲𝗻𝗰𝗹𝗼𝘀𝗲𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (enclosedString($n)) == $n; ok enclosedReversedStringUndo(enclosedReversedString($n)) == $n; ok superScriptStringUndo (superScriptString($n)) == $n; ok subScriptStringUndo (subScriptString($n)) == $n; }
Convert alphanumerics in a string to enclosed reversed alphanumerics.
ok 𝗲𝗻𝗰𝗹𝗼𝘀𝗲𝗱𝗥𝗲𝘃𝗲𝗿𝘀𝗲𝗱𝗦𝘁𝗿𝗶𝗻𝗴(q(hello world 1234)) eq q(🅗🅔🅛🅛🅞 🅦🅞🅡🅛🅓 ➊➋➌➍);
Undo the conversion of alphanumerics in a string to enclosed reversed alphanumerics.
if (1) {my $n = 1234567890; ok boldStringUndo (boldString($n)) == $n; ok enclosedStringUndo (enclosedString($n)) == $n; ok 𝗲𝗻𝗰𝗹𝗼𝘀𝗲𝗱𝗥𝗲𝘃𝗲𝗿𝘀𝗲𝗱𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼(enclosedReversedString($n)) == $n; ok superScriptStringUndo (superScriptString($n)) == $n; ok subScriptStringUndo (subScriptString($n)) == $n; }
Convert alphanumerics in a string to superscripts.
ok 𝘀𝘂𝗽𝗲𝗿𝗦𝗰𝗿𝗶𝗽𝘁𝗦𝘁𝗿𝗶𝗻𝗴(1234567890) eq q(¹²³⁴⁵⁶⁷⁸⁹⁰);
Undo the conversion of alphanumerics in a string to superscripts.
if (1) {my $n = 1234567890; ok boldStringUndo (boldString($n)) == $n; ok enclosedStringUndo (enclosedString($n)) == $n; ok enclosedReversedStringUndo(enclosedReversedString($n)) == $n; ok 𝘀𝘂𝗽𝗲𝗿𝗦𝗰𝗿𝗶𝗽𝘁𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (superScriptString($n)) == $n; ok subScriptStringUndo (subScriptString($n)) == $n; }
Convert alphanumerics in a string to subscripts.
ok 𝘀𝘂𝗯𝗦𝗰𝗿𝗶𝗽𝘁𝗦𝘁𝗿𝗶𝗻𝗴(1234567890) eq q(₁₂₃₄₅₆₇₈₉₀);
Undo the conversion of alphanumerics in a string to subscripts.
if (1) {my $n = 1234567890; ok boldStringUndo (boldString($n)) == $n; ok enclosedStringUndo (enclosedString($n)) == $n; ok enclosedReversedStringUndo(enclosedReversedString($n)) == $n; ok superScriptStringUndo (superScriptString($n)) == $n; ok 𝘀𝘂𝗯𝗦𝗰𝗿𝗶𝗽𝘁𝗦𝘁𝗿𝗶𝗻𝗴𝗨𝗻𝗱𝗼 (subScriptString($n)) == $n; }
Return the file name quoted if its contents are in utf8, else return undef.
my $f = writeFile(undef, "aaa"); ok 𝗶𝘀𝗙𝗶𝗹𝗲𝗨𝘁𝗳𝟴 $f;
Send messages between processes via a unix domain socket.
Create a communications server - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
Parameter Description 1 @parms Attributes per L<Udsr Definition|/Udsr Definition>
my $N = 20; my $s = 𝗻𝗲𝘄𝗨𝗱𝘀𝗿𝗦𝗲𝗿𝘃𝗲𝗿(serverAction=>sub {my ($u) = @_; my $r = $u->read; $u->write(qq(Hello from server $r)); }); my $p = newProcessStarter(min(100, $N)); # Run some clients for my $i(1..$N) {$p->start(sub {my $count = 0; for my $j(1..$N) {my $c = newUdsrClient; my $m = qq(Hello from client $i x $j); $c->write($m); my $r = $c->read; ++$count if $r eq qq(Hello from server $m); } [$count] }); } my $count; for my $r($p->finish) # Consolidate results {my ($c) = @$r; $count += $c; } ok $count == $N*$N; # Check results and kill $s->kill;
Create a new communications client - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
my $N = 20; my $s = newUdsrServer(serverAction=>sub {my ($u) = @_; my $r = $u->read; $u->write(qq(Hello from server $r)); }); my $p = newProcessStarter(min(100, $N)); # Run some clients for my $i(1..$N) {$p->start(sub {my $count = 0; for my $j(1..$N) {my $c = 𝗻𝗲𝘄𝗨𝗱𝘀𝗿𝗖𝗹𝗶𝗲𝗻𝘁; my $m = qq(Hello from client $i x $j); $c->write($m); my $r = $c->read; ++$count if $r eq qq(Hello from server $m); } [$count] }); } my $count; for my $r($p->finish) # Consolidate results {my ($c) = @$r; $count += $c; } ok $count == $N*$N; # Check results and kill $s->kill;
Write a communications message to the newUdsrServer or the newUdsrClient.
Parameter Description 1 $u Communicator 2 $msg Message
my $N = 20; my $s = newUdsrServer(serverAction=>sub {my ($u) = @_; my $r = $u->read; $u->write(qq(Hello from server $r)); }); my $p = newProcessStarter(min(100, $N)); # Run some clients for my $i(1..$N) {$p->start(sub {my $count = 0; for my $j(1..$N) {my $c = newUdsrClient; my $m = qq(Hello from client $i x $j); $c->write($m); my $r = $c->read; ++$count if $r eq qq(Hello from server $m); } [$count] }); } my $count; for my $r($p->finish) # Consolidate results {my ($c) = @$r; $count += $c; } ok $count == $N*$N; # Check results and kill $s->kill;
Read a message from the newUdsrServer or the newUdsrClient.
Parameter Description 1 $u Communicator
Kill a communications server.
Create a systemd installed server that processes http requests using a specified userid. The systemd and CGI files, plus an installation script, are written to the specified folder after it has been cleared, and are installed on the server unless $noInstall is true. The serverAction string contains the code to be executed by the server: the contained sub genResponse($hash) will be called with a hash of the CGI variables and should return the response to be sent back to the client. Returns the installation script file name.
Parameter Description 1 $u Communicator 2 $folder Folder to contain server code 3 $noInstall Do not install if true optionally
if (0) {my $fold = fpd(qw(/home phil zzz)); # Folder to contain server code my $name = q(test); # Service my $user = q(phil); # User my $udsr = newUdsr # Create a Udsr parameter list (serviceName => $name, serviceUser => $user, socketPath => qq(/home/phil/$name.socket), serverAction=> <<'END' my $user = userId; my $list = qx(ls -l); my $dtts = dateTimeStamp; return <<END2; Content-type: text/html <h1>Hello World to you $user on $dtts!</h1> <pre> $list </pre> END2 END ); 𝗨𝗱𝘀𝗿::𝘄𝗲𝗯𝗨𝘀𝗲𝗿($udsr, $fold); # Create and install web service interface my $ip = awsIp; say STDERR qx(curl http://$ip/cgi-bin/$name/client.pl); # Enable port 80 on AWS first) }
Useful for operating across the cloud.
Force die to confess, showing where the death occurred.
𝗺𝗮𝗸𝗲𝗗𝗶𝗲𝗖𝗼𝗻𝗳𝗲𝘀𝘀
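What makeDieConfess arranges can be approximated by installing a __DIE__ handler that routes through Carp::confess; a hedged sketch of the technique, not the module's exact code:

```perl
# Route die through Carp::confess so every death carries a stack trace.
# Perl disables the __DIE__ hook while the handler runs, so the inner
# die from confess does not recurse.
use Carp;
$SIG{__DIE__} = sub {confess @_};

eval {die "boom"};
print $@ =~ m(\Aboom) ? "died with trace\n" : "missed\n";  # $@ now includes the call stack
```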
Get the IP address of the server at Amazon Web Services.
ok saveAwsIp(q(0.0.0.0)) eq 𝗮𝘄𝘀𝗜𝗽;
Make the server at Amazon Web Services with the given IP address the default primary server as used by all the methods whose names end in r or Remote. Returns the given IP address.
ok 𝘀𝗮𝘃𝗲𝗔𝘄𝘀𝗜𝗽(q(0.0.0.0)) eq awsIp;
Get an item of metadata for the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server, else return a blank string.
Parameter Description 1 $item Meta data field
ok 𝗮𝘄𝘀𝗠𝗲𝘁𝗮𝗗𝗮𝘁𝗮(q(instance-id)) eq q(i-06a4b221b30bf7a37);
Get the IP address of the AWS server we are currently running on if we are running on an Amazon Web Services server, else return a blank string.
𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗜𝗽; confirmHasCommandLineCommand(q(find)); ok 𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗜𝗽 eq q(31.41.59.26);
Get the instance id of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server, else return a blank string.
ok 𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗜𝗱 eq q(i-06a4b221b30bf7a37);
Get the availability zone of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server, else return a blank string.
ok 𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗔𝘃𝗮𝗶𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆𝗭𝗼𝗻𝗲 eq q(us-east-2a);
Get the region of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server, else return a blank string.
ok 𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗥𝗲𝗴𝗶𝗼𝗻 eq q(us-east-2);
Get the instance type of the Amazon Web Services server if we are running on an Amazon Web Services server, else return a blank string.
ok 𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗧𝘆𝗽𝗲 eq q(r4.4xlarge);
Execute an AWS command and return its response.
Parameter Description 1 $command Command to execute 2 %options Aws cli options
ok 𝗮𝘄𝘀𝗘𝘅𝗲𝗰𝗖𝗹𝗶(q(aws s3 ls)) =~ m(ryffine)i; my $p = awsExecCliJson(q(aws ec2 describe-vpcs), region=>q(us-east-1)); ok $p->Vpcs->[0]->VpcId =~ m(\Avpc-)i;
Execute an AWS command and decode the JSON it produces.
ok awsExecCli(q(aws s3 ls)) =~ m(ryffine)i; my $p = 𝗮𝘄𝘀𝗘𝘅𝗲𝗰𝗖𝗹𝗶𝗝𝘀𝗼𝗻(q(aws ec2 describe-vpcs), region=>q(us-east-1)); ok $p->Vpcs->[0]->VpcId =~ m(\Avpc-)i;
Describe the Amazon Web Services instances running in a $region.
Parameter Description 1 %options Options
my %options = (region => q(us-east-2), profile=>q(fmc)); my $r = 𝗮𝘄𝘀𝗘𝗰𝟮𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝘀 (%options); my %i = awsEc2DescribeInstancesGetIPAddresses(%options); is_deeply \%i, { "i-068a7176ba9140057" => { "18.221.162.39" => 1 } };
Return a hash of {instanceId => public ip address} for all running instances on Amazon Web Services with ip addresses.
my %options = (region => q(us-east-2), profile=>q(fmc)); my $r = awsEc2DescribeInstances (%options); my %i = 𝗮𝘄𝘀𝗘𝗰𝟮𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝘀𝗚𝗲𝘁𝗜𝗣𝗔𝗱𝗱𝗿𝗲𝘀𝘀𝗲𝘀(%options); is_deeply \%i, { "i-068a7176ba9140057" => { "18.221.162.39" => 1 } };
Return the IP address of a named instance on Amazon Web Services else return undef.
Parameter Description 1 $instanceId Instance id 2 %options Options
ok q(3.33.133.233) eq 𝗮𝘄𝘀𝗘𝗰𝟮𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗜𝗽𝗔𝗱𝗱𝗿𝗲𝘀𝘀 ("i-xxx", region => q(us-east-2), profile=>q(fmc));
Create an image snapshot with the specified $name of the AWS server we are currently running on if we are running on an AWS server, else return false. It is safe to shut down the instance immediately after initiating the snapshot: the snapshot continues even though the instance has terminated.
Parameter Description 1 $name Image name 2 %options Options
𝗮𝘄𝘀𝗘𝗰𝟮𝗖𝗿𝗲𝗮𝘁𝗲𝗜𝗺𝗮𝗴𝗲(q(099 Gold));
Find images with a tag that matches the specified regular expression $value.
Parameter Description 1 $value Regular expression 2 %options Options
is_deeply [𝗮𝘄𝘀𝗘𝗰𝟮𝗙𝗶𝗻𝗱𝗜𝗺𝗮𝗴𝗲𝘀𝗪𝗶𝘁𝗵𝗧𝗮𝗴𝗩𝗮𝗹𝘂𝗲(qr(boot)i, region=>'us-east-2', profile=>'fmc')], ["ami-011b4273c6123ae76"];
Describe images available.
𝗮𝘄𝘀𝗘𝗰𝟮𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲𝗜𝗺𝗮𝗴𝗲𝘀(region => q(us-east-2), profile=>q(fmc));
Return {instance type} = cheapest spot price in dollars per hour for the given region.
𝗮𝘄𝘀𝗖𝘂𝗿𝗿𝗲𝗻𝘁𝗟𝗶𝗻𝘂𝘅𝗦𝗽𝗼𝘁𝗣𝗿𝗶𝗰𝗲𝘀(region => q(us-east-2), profile=>q(fmc));
Return details of the specified instance type.
Parameter Description 1 $instanceType Instance type name 2 %options Options
my $i = 𝗮𝘄𝘀𝗘𝗰𝟮𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗧𝘆𝗽𝗲 ("m4.large", region=>'us-east-2', profile=>'fmc'); is_deeply $i->{VCpuInfo}, {DefaultCores => 1, DefaultThreadsPerCore => 2, DefaultVCpus => 2, ValidCores => [1], ValidThreadsPerCore => [1, 2], };
Report the prices of all the spot instances whose type matches a regular expression $instanceTypeRe. The report is sorted by price in millidollars per cpu ascending.
Parameter Description 1 $instanceTypeRe Regular expression for instance type name 2 %options Options
my $a = 𝗮𝘄𝘀𝗘𝗰𝟮𝗥𝗲𝗽𝗼𝗿𝘁𝗦𝗽𝗼𝘁𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗣𝗿𝗶𝗰𝗲𝘀 (qr(\.metal), region=>'us-east-2', profile=>'fmc'); ok $a->report eq <<END; CPUs by price 10 instances types found on 2019-12-24 at 22:53:26 Cheapest Instance Type: m5.metal Price Per Cpu hour : 6.65 in millidollars per hour Column Description 1 Instance_Type Instance type name 2 Price Price in millidollars per hour 3 CPUs Number of Cpus 4 Price_per_CPU The price per CPU in millidollars per hour Instance_Type Price CPUs Price_per_CPU 1 m5.metal 638 96 6.65 2 r5.metal 668 96 6.97 3 r5d.metal 668 96 6.97 4 m5d.metal 826 96 8.61 5 c5d.metal 912 96 9.50 6 c5.metal 1037 96 10.81 7 c5n.metal 912 72 12.67 8 i3.metal 1497 72 20.80 9 z1d.metal 1339 48 27.90 10 i3en.metal 3254 96 33.90 END
Request spot instances as long as they can be started within the next minute. Return a list of spot instance request ids, one for each instance requested.
Parameter Description 1 $count Number of instances 2 $instanceType Instance type 3 $ami AMI 4 $price Price in dollars per hour 5 $securityGroup Security group 6 $key Key name 7 %options Options.
my $r = 𝗮𝘄𝘀𝗘𝗰𝟮𝗥𝗲𝗾𝘂𝗲𝘀𝘁𝗦𝗽𝗼𝘁𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝘀 (2, q(t2.micro), "ami-xxx", 0.01, q(xxx), q(yyy), region=>'us-east-2', profile=>'fmc');
Return a hash {spot instance request => spot instance details} describing the status of active spot instances.
Parameter Description 1 %options Options.
my $r = 𝗮𝘄𝘀𝗘𝗰𝟮𝗗𝗲𝘀𝗰𝗿𝗶𝗯𝗲𝗦𝗽𝗼𝘁𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝘀(region => q(us-east-2), profile=>q(fmc));
Tag an Ec2 resource with the supplied tags.
Parameter Description 1 $resource Resource 2 $name Tag name 3 $value Tag value 4 %options Options.
𝗮𝘄𝘀𝗘𝗰𝟮𝗧𝗮𝗴 ("i-xxxx", Name=>q(Conversion), region => q(us-east-2), profile=>q(fmc));
Check that the specified $cmd is present on the current system. Use $ENV{PATH} to add folders containing commands as necessary.
Parameter Description 1 $cmd Command to check for
awsCurrentIp; 𝗰𝗼𝗻𝗳𝗶𝗿𝗺𝗛𝗮𝘀𝗖𝗼𝗺𝗺𝗮𝗻𝗱𝗟𝗶𝗻𝗲𝗖𝗼𝗺𝗺𝗮𝗻𝗱(q(find));
Number of cpus scaled by an optional factor - but only if you have nproc. If you do not have nproc but do have a convenient way for determining the number of cpus on your system please let me know.
Parameter Description 1 $scale Scale factor
ok 𝗻𝘂𝗺𝗯𝗲𝗿𝗢𝗳𝗖𝗽𝘂𝘀(8) >= 8, 'ddd';
Get the IP address of a server on the local network by host name via ARP.
Parameter Description 1 $hostName Host name
𝗶𝗽𝗔𝗱𝗱𝗿𝗲𝘀𝘀𝗩𝗶𝗮𝗔𝗿𝗽(q(secarias));
Parse an S3 bucket/folder name into a bucket and a folder name removing any initial s3://.
Parameter Description 1 $name Bucket/folder name
if (1) {is_deeply [𝗽𝗮𝗿𝘀𝗲𝗦𝟯𝗕𝘂𝗰𝗸𝗲𝘁𝗔𝗻𝗱𝗙𝗼𝗹𝗱𝗲𝗿𝗡𝗮𝗺𝗲(q(s3://bbbb/ffff/dddd/))], [qw(bbbb ffff/dddd/)], q(iii); is_deeply [𝗽𝗮𝗿𝘀𝗲𝗦𝟯𝗕𝘂𝗰𝗸𝗲𝘁𝗔𝗻𝗱𝗙𝗼𝗹𝗱𝗲𝗿𝗡𝗮𝗺𝗲(q(s3://bbbb/))], [qw(bbbb), q()]; is_deeply [𝗽𝗮𝗿𝘀𝗲𝗦𝟯𝗕𝘂𝗰𝗸𝗲𝘁𝗔𝗻𝗱𝗙𝗼𝗹𝗱𝗲𝗿𝗡𝗮𝗺𝗲(q( bbbb/))], [qw(bbbb), q()]; is_deeply [𝗽𝗮𝗿𝘀𝗲𝗦𝟯𝗕𝘂𝗰𝗸𝗲𝘁𝗔𝗻𝗱𝗙𝗼𝗹𝗱𝗲𝗿𝗡𝗮𝗺𝗲(q( bbbb))], [qw(bbbb), q()]; }
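The parse shown above can be imitated with a single regular expression; an illustrative plain-Perl sketch, not the module's code:

```perl
# Strip any leading whitespace and s3:// prefix, then split on the
# first slash into bucket and folder.
sub parseSketch
 {my ($name) = @_;
  my ($bucket, $folder) = $name =~ m(\A\s*(?:s3://)?([^/]+)/?(.*)\z)s;
  ($bucket, $folder)
 }

print join(' ', parseSketch(q(s3://bbbb/ffff/dddd/))), "\n";   # bbbb ffff/dddd/
print parseSketch(q( bbbb)), "\n";                             # bbbb
```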
Save source code every $saveCodeEvery seconds by zipping folder $folder to zip file $zipFileName then saving this zip file in the specified S3 $bucket using any additional S3 parameters in $S3Parms.
Parameter Description 1 $saveCodeEvery Save every seconds 2 $folder Folder to save 3 $zipFileName Zip file name 4 $bucket Bucket/key 5 $S3Parms Additional S3 parameters like profile or region as a string
𝘀𝗮𝘃𝗲𝗖𝗼𝗱𝗲𝗧𝗼𝗦𝟯(1200, q(.), q(projectName), q(bucket/folder), q(--quiet));
Add a certificate to the current ssh session.
Parameter Description 1 $file File containing certificate
𝗮𝗱𝗱𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗰𝗮𝘁𝗲(fpf(qw(.ssh cert)));
The name of the host we are running on.
𝗵𝗼𝘀𝘁𝗡𝗮𝗺𝗲;
Get or confirm the userid we are currently running under.
Parameter Description 1 $user Userid to confirm
𝘂𝘀𝗲𝗿𝗜𝗱;
Translate $text from English to a specified $language using AWS Translate with the specified global $options and return the translated string. Translations are cached in the specified $cacheFolder for reuse where feasible.
Parameter Description 1 $string String to translate 2 $language Language code 3 $cacheFolder Cache folder 4 $Options Aws global options string
ok 𝗮𝘄𝘀𝗧𝗿𝗮𝗻𝘀𝗹𝗮𝘁𝗲𝗧𝗲𝘅𝘁("Hello", "it", ".translations/") eq q(Ciao);
Parallel computing across multiple instances running on Amazon Web Services.
Return 1 if we are on AWS, else return 0.
ok 𝗼𝗻𝗔𝘄𝘀; ok !onAwsSecondary; ok onAwsPrimary;
Return 1 if we are on Amazon Web Services and we are on the primary session instance as defined by awsParallelPrimaryInstanceId, return 0 if we are on a secondary session instance, else return undef if we are not on Amazon Web Services.
ok onAws; ok !onAwsSecondary; ok 𝗼𝗻𝗔𝘄𝘀𝗣𝗿𝗶𝗺𝗮𝗿𝘆;
Return 1 if we are on Amazon Web Services but we are not on the primary session instance as defined by awsParallelPrimaryInstanceId, return 0 if we are on the primary session instance, else return undef if we are not on Amazon Web Services.
ok onAws; ok !𝗼𝗻𝗔𝘄𝘀𝗦𝗲𝗰𝗼𝗻𝗱𝗮𝗿𝘆; ok onAwsPrimary;
Return the instance id of the primary instance. The primary instance is the instance at Amazon Web Services that we communicate with - it controls all the secondary instances that form part of the parallel session. The primary instance is located by finding the first running instance in instance Id order whose Name tag contains the word primary. If no running instance has been identified as the primary instance, then the first viable instance is made the primary. The ip address of the primary is recorded in /tmp/awsPrimaryIpAddress.data so that it can be quickly reused by xxxr, copyFolderToRemote, mergeFolderFromRemote etc. Returns the instanceId of the primary instance or undef if no suitable instance exists.
ok "i-xxx" eq 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗣𝗿𝗶𝗺𝗮𝗿𝘆𝗜𝗻𝘀𝘁𝗮𝗻𝗰𝗲𝗜𝗱 (region => q(us-east-2), profile=>q(fmc));
On Amazon Web Services: copies a specified $folder from the primary instance, see: awsParallelPrimaryInstanceId, in parallel, to all the secondary instances in the session. If running locally: copies the specified folder to all Amazon Web Services session instances both primary and secondary.
Parameter Description 1 $folder Fully qualified folder name 2 %options Options
my $d = temporaryFolder; my ($f1, $f2) = map {fpe($d, $_, q(txt))} 1..2; my $files = {$f1 => "1111", $f2 => "2222"}; writeFiles($files); 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗦𝗽𝗿𝗲𝗮𝗱𝗙𝗼𝗹𝗱𝗲𝗿($d); clearFolder($d, 3); awsParallelGatherFolder($d); my $r = readFiles($d); is_deeply $files, $r; clearFolder($d, 3);
On Amazon Web Services: merges all the files in the specified $folder on each secondary instance to the corresponding folder on the primary instance in parallel. If running locally: merges all the files in the specified folder on each Amazon Web Services session instance (primary and secondary) to the corresponding folder on the local machine. The folder merges are done in parallel which makes it impossible to rely on the order of the merges.
my $d = temporaryFolder; my ($f1, $f2) = map {fpe($d, $_, q(txt))} 1..2; my $files = {$f1 => "1111", $f2 => "2222"}; writeFiles($files); awsParallelSpreadFolder($d); clearFolder($d, 3); 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗚𝗮𝘁𝗵𝗲𝗿𝗙𝗼𝗹𝗱𝗲𝗿($d); my $r = readFiles($d); is_deeply $files, $r; clearFolder($d, 3);
Return the IP address of any primary instance on Amazon Web Services.
ok 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗣𝗿𝗶𝗺𝗮𝗿𝘆𝗜𝗽𝗔𝗱𝗱𝗿𝗲𝘀𝘀 eq q(3.1.4.4); is_deeply [awsParallelSecondaryIpAddresses], [qw(3.1.4.5 3.1.4.6)]; is_deeply [awsParallelIpAddresses], [qw(3.1.4.4 3.1.4.5 3.1.4.6)];
Return a list containing the IP addresses of any secondary instances on Amazon Web Services.
ok awsParallelPrimaryIpAddress eq q(3.1.4.4); is_deeply [𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗦𝗲𝗰𝗼𝗻𝗱𝗮𝗿𝘆𝗜𝗽𝗔𝗱𝗱𝗿𝗲𝘀𝘀𝗲𝘀], [qw(3.1.4.5 3.1.4.6)]; is_deeply [awsParallelIpAddresses], [qw(3.1.4.4 3.1.4.5 3.1.4.6)];
Return the IP addresses of all the Amazon Web Services session instances.
ok awsParallelPrimaryIpAddress eq q(3.1.4.4); is_deeply [awsParallelSecondaryIpAddresses], [qw(3.1.4.5 3.1.4.6)]; is_deeply [𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗜𝗽𝗔𝗱𝗱𝗿𝗲𝘀𝘀𝗲𝘀], [qw(3.1.4.4 3.1.4.5 3.1.4.6)];
Recreate the code context for a referenced sub.
Parameter Description 1 $sub Sub reference
ok 𝗴𝗲𝘁𝗖𝗼𝗱𝗲𝗖𝗼𝗻𝘁𝗲𝘅𝘁(\&𝗴𝗲𝘁𝗖𝗼𝗱𝗲𝗖𝗼𝗻𝘁𝗲𝘅𝘁) =~ m(use strict)ims;
Process the specified $files in parallel across multiple Amazon Web Services instances if we are running on AWS, else in series locally, applying the $parallel sub to each file and merging the outcomes with the $results sub. Parameter Description 1 $userData User data or undef 2 $parallel Parallel sub reference 3 $results Series sub reference 4 $files [files to process] 5 %options Aws cli options.
my $N = 2001; # Number of files to process my $options = q(region => q(us-east-2), profile=>q(fmc)); # Aws cli options my %options = eval "($options)"; for my $dir(q(/home/phil/perl/cpan/DataTableText/lib/Data/Table/), # Folders we will need on aws q(/home/phil/.aws/)) {awsParallelSpreadFolder($dir, %options); } my $d = temporaryFolder; # Create a temporary folder my $resultsFile = fpe($d, qw(results data)); # Save results in this temporary file if (my $r = execPerlOnRemote(join " ", # Execute some code on a server getCodeContext(\&awsParallelProcessFilesTestParallel), # Get code context of the sub we want to call. <<SESSIONLEADER)) # Launch code on session leader use Data::Table::Text qw(:all); my \$r = 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗙𝗶𝗹𝗲𝘀 # Process files on multiple L<Amazon Web Services|http://aws.amazon.com> instances in parallel ({file=>4, time=>timeStamp}, # User data \\\&Data::Table::Text::awsParallelProcessFilesTestParallel, # Reference to code to execute in parallel on each session instance \\\&Data::Table::Text::awsParallelProcessFilesTestResults, # Reference to code to execute in series to merge the results of each parallel computation [map {writeFile(fpe(q($d), \$_, qw(txt)), \$_)} 1..$N], # Files to process $options); # Aws cli options as we will be running on Aws storeFile(q($resultsFile), \$r); # Save results in a file SESSIONLEADER {copyFileFromRemote($resultsFile); # Retrieve user data my $userData = retrieveFile($resultsFile); # Recover user data my @i = awsParallelSecondaryIpAddresses(%options); # Ip addresses of secondary instances my @I = keys $userData->{ip}->%*; is_deeply [sort @i], [sort @I]; # Each secondary ip address was used ok $userData->{file} == 4; # Prove we can pass data in and get it back ok $userData->{merge} == 1 + @i, 'ii'; # Number of merges my %f; my %i; # Files processed on each ip for my $i(sort keys $userData->{ipFile}->%*) # Ip {for my $f(sort keys $userData->{ipFile}{$i}->%*) # File {$f{fn($f)}++; # Files processed $i{$i}++; # Count 
files on each ip } } is_deeply \%f, {map {$_=>1} 1..$N}; # Check each file was processed if (1) {my @rc; my @ra; # Range of number of files processed on each ip - computed, actually counted my $l = $N/@i-1; # Lower limit of number of files per IP address my $h = $N/@i+1; # Upper limit of number of files per IP address for my $i(keys %i) {my $nc = $i{$i}; # Number of files processed on this ip - computed my $na = $userData->{ip}{$i}; # Number of files processed on this ip - actually counted push @rc, ($nc >= $l and $nc <= $h) ? 1 : 0; # 1 - in range, 0 - out of range push @ra, ($na >= $l and $na <= $h) ? 1 : 0; # 1 - in range, 0 - out of range } ok @i == grep {$_} @ra; # Check each ip processed the expected number of files ok @i == grep {$_} @rc; } ok $userData->{files}{&fpe($d, qw(4 txt))} eq # Check the computed MD5 sum for the specified file q(a87ff679a2f3e71d9181a67b7542122c); } if (1) # Process files in series on local machine {my $N = 42; my $d = temporaryFolder; my $r = 𝗮𝘄𝘀𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗙𝗶𝗹𝗲𝘀 # Process files in series on local machine ({file => 4}, # User data \&Data::Table::Text::awsParallelProcessFilesTestParallel, # Code to execute on each session instance including the session leader written as a string because it has to be shipped to each instance \&Data::Table::Text::awsParallelProcessFilesTestResults, # Code to execute in series on the session leader to analyze the results of the parallel runs [map {writeFile(fpe($d, $_, qw(txt)), $_)} 1..$N], # Files to process ()); # No Aws cli options as we are running locally ok $r->{file} == 4, 'aaa'; # Prove we can pass data in and get it back ok $r->{merge} == 1, 'bbb'; # Only one merge as we are running locally ok $r->{ip}{localHost} == $N, 'ccc'; # Number of files processed locally ok keys($r->{files}->%*) == $N; # Number of files processed ok $r->{files}{fpe($d, qw(4 txt))} eq q(a87ff679a2f3e71d9181a67b7542122c); # Check the computed MD5 sum for the specified file clearFolder($d, $N+2); }
Work with S3 as if it were a file system.
Return {file=>size} for all the files in a specified $folderOrFile on S3 using the specified %options if any.
  Parameter      Description
1 $folderOrFile  Source on S3 - which will be truncated to a folder name
2 %options       Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {𝘀𝟯𝗟𝗶𝘀𝘁𝗙𝗶𝗹𝗲𝘀𝗔𝗻𝗱𝗦𝗶𝘇𝗲𝘀(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Return (name, size, date, time) for a $file that exists on S3 else () using the specified %options if any.
  Parameter  Description
1 $file      File on S3 - which will be truncated to a folder name
2 %options   Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = 𝘀𝟯𝗙𝗶𝗹𝗲𝗘𝘅𝗶𝘀𝘁𝘀($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Write to a file $fileS3 on S3 the contents of a local file $fileLocal using the specified %options if any. $fileLocal will be removed if %options contains a key cleanUp with a true value.
  Parameter   Description
1 $fileS3     File to write to on S3
2 $fileLocal  Local file whose contents are to be written
3 %options    Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    𝘀𝟯𝗪𝗿𝗶𝘁𝗲𝗙𝗶𝗹𝗲($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Write to a $file on S3 the contents of $string using the specified %options if any.
  Parameter  Description
1 $file      File to write to on S3
2 $string    String to write into file
3 %options   Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {𝘀𝟯𝗪𝗿𝗶𝘁𝗲𝗦𝘁𝗿𝗶𝗻𝗴($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Read from a $file on S3 and write the contents to a local file $local using the specified %options if any. Any pre-existing version of the local file $local will be deleted. Returns whether the local file exists after completion of the download.
  Parameter  Description
1 $file      File to read from on S3
2 $local     Local file to write to
3 %options   Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    𝘀𝟯𝗥𝗲𝗮𝗱𝗙𝗶𝗹𝗲($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Read from a $file on S3 and return the contents as a string using the specified %options if any.
  Parameter  Description
1 $file      File to read from on S3
2 %options   Options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = 𝘀𝟯𝗥𝗲𝗮𝗱𝗦𝘁𝗿𝗶𝗻𝗴($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Download a specified $folder on S3 to a $local folder using the specified %options if any. Any existing data in the $local folder will be deleted if delete=>1 is specified as an option. Returns undef on failure else the name of the $local folder on success.
  Parameter  Description
1 $folder    Folder to read from on S3
2 $local     Local folder to write to
3 %options   Options
  my %options = (profile => q(fmc));

  𝘀𝟯𝗗𝗼𝘄𝗻𝗹𝗼𝗮𝗱𝗙𝗼𝗹𝗱𝗲𝗿(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Zip the specified $source folder and write it to the named $target file on S3.
  Parameter  Description
1 $source    Source folder
2 $target    Target file on S3
3 %options   S3 options
  𝘀𝟯𝗭𝗶𝗽𝗙𝗼𝗹𝗱𝗲𝗿(q(home/phil/r/), q(s3://bucket/r.zip));

  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  𝘀𝟯𝗭𝗶𝗽𝗙𝗼𝗹𝗱𝗲𝗿 (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  s3ZipFolders({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Zip local folders and upload them to S3 in parallel. $map maps source folder names on the local machine to target folders on S3. %options contains any additional Amazon Web Services cli options.
  Parameter  Description
1 $map       Source folder to S3 mapping
2 %options   S3 options
  my %options = (profile => q(fmc));

  s3DownloadFolder(q(s3://bucket/folder/), q(home/phil/s3/folder/), %options, delete=>1);
  s3ZipFolder (q(home/phil/s3/folder/) => q(s3://bucket/folder/),  %options);
  𝘀𝟯𝗭𝗶𝗽𝗙𝗼𝗹𝗱𝗲𝗿𝘀({q(home/phil/s3/folder/) => q(s3://bucket/folder/)}, %options);

  is_deeply
   {s3ListFilesAndSizes(q(s3://salesforce.dita/originals4/images), %options)},
   {"s3://salesforce.dita/originals4/images/business_plan_sections.png" =>
     ["originals4/images/business_plan_sections.png", 112525, "2019-08-13", "20:01:10"],
    "s3://salesforce.dita/originals4/images/non-referenced.png" =>
     ["originals4/images/non-referenced.png", 19076, "2019-08-20", "01:25:04"],
   };

  my $data = q(0123456789);
  my $file = q(s3://salesforce.dita/zzz/111.txt);

  if (1)
   {s3WriteString($file, $data, %options);
    my $r = s3ReadString($file, %options);
    ok $r eq $data;
   }

  if (1)
   {my @r = s3FileExists($file, %options);
    ok $r[0] eq "zzz/111.txt";
    ok $r[1] == 10;
   }

  if (1)
   {my $d = $data x 2;
    my $f = writeFile(undef, $d);
    s3WriteFile($file, $f, %options);
    unlink $f;
    s3ReadFile($file, $f, %options);
    ok readFile($f) eq $d;
    unlink $f;
   }
Simple interactions with GitHub - for more complex interactions please use GitHub::Crud.
Get the contents of a public repo on GitHub and place them in a temporary folder whose name is returned to the caller or confess if no such repo exists.
  Parameter  Description
1 $user      GitHub user
2 $repo      GitHub repo
𝗱𝗼𝘄𝗻𝗹𝗼𝗮𝗱𝗚𝗶𝘁𝗛𝘂𝗯𝗣𝘂𝗯𝗹𝗶𝗰𝗥𝗲𝗽𝗼(q(philiprbrenan), q(psr));
Get the contents of a $user $repo $file from a public repo on GitHub and return them as a string.
  Parameter  Description
1 $user      GitHub user
2 $repo      GitHub repository
3 $file      File name in repository
ok &𝗱𝗼𝘄𝗻𝗹𝗼𝗮𝗱𝗚𝗶𝘁𝗛𝘂𝗯𝗣𝘂𝗯𝗹𝗶𝗰𝗥𝗲𝗽𝗼𝗙𝗶𝗹𝗲(qw(philiprbrenan pleaseChangeDita index.html));
Start processes, wait for them to terminate and retrieve their results.
Start new processes while the number of child processes recorded in %$pids is less than the specified $maximum. Use waitForAllStartedProcessesToFinish to wait for all these processes to finish.
  Parameter  Description
1 $sub       Sub to start
2 $pids      Hash in which to record the process ids
3 $maximum   Maximum number of processes to run at a time
  my %pids;

  sub {𝘀𝘁𝗮𝗿𝘁𝗣𝗿𝗼𝗰𝗲𝘀𝘀 {} %pids, 1; ok 1 >= keys %pids}->() for 1..8;

  waitForAllStartedProcessesToFinish(%pids);

  ok !keys(%pids)
Wait until all the processes started by startProcess have finished.
  Parameter  Description
1 $pids      Hash of started process ids
  my %pids;

  sub {startProcess {} %pids, 1; ok 1 >= keys %pids}->() for 1..8;

  𝘄𝗮𝗶𝘁𝗙𝗼𝗿𝗔𝗹𝗹𝗦𝘁𝗮𝗿𝘁𝗲𝗱𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗲𝘀𝗧𝗼𝗙𝗶𝗻𝗶𝘀𝗵(%pids);

  ok !keys(%pids)
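The pattern behind the pair of calls above can be sketched with nothing but fork and waitpid: record each child's pid in a hash, block when the concurrency limit is reached, and reap everything at the end. This is a minimal illustration of the technique, not the module's actual implementation.

```perl
use strict; use warnings;

my %pids;                                                                       # Pids of currently running children
my $maximum = 2;                                                                # Maximum number of children at a time
my $done    = 0;                                                                # Children reaped in the final drain

for my $i(1..8)
 {if (keys %pids >= $maximum)                                                   # At the limit: wait for one child to finish
   {my $pid = waitpid -1, 0;
    delete $pids{$pid};
   }
  my $pid = fork // die "Unable to fork";
  if ($pid) {$pids{$pid}++}                                                     # Parent records the child pid
  else      {exit}                                                              # Child would do its work here, then exit
 }

while(keys %pids)                                                               # Wait for all started processes to finish
 {my $pid = waitpid -1, 0;
  delete $pids{$pid};
  $done++;
 }
```

With a limit of 2, exactly two children are still outstanding when the loop ends, so the final drain reaps two processes and leaves %pids empty.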
Create a new process starter with which to run up to $maximumNumberOfProcesses processes in parallel at a time, wait for all the started processes to finish and then optionally retrieve their saved results as an array from the folder named by $transferArea.
  Parameter                  Description
1 $maximumNumberOfProcesses  Maximum number of processes to start
2 %options                   Options
  if (1)
   {my $N = 100;
    my $l = q(logFile.txt);
    unlink $l;

    my $s = 𝗻𝗲𝘄𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗦𝘁𝗮𝗿𝘁𝗲𝗿(4);
    $s->processingTitle   = q(Test processes);
    $s->totalToBeStarted  = $N;
    $s->processingLogFile = $l;

    for my $i(1..$N)
     {Data::Table::Text::Starter::start($s, sub{$i*$i});
     }

    is_deeply [sort {$a <=> $b} Data::Table::Text::Starter::finish($s)],
              [map {$_**2} 1..$N];

    ok readFile($l) =~ m(Finished $N processes for: Test processes)s;

    clearFolder($s->transferArea, 1e3);
    unlink $l;
   }
Start a new process to run the specified $sub.
  Parameter  Description
1 $starter   Starter
2 $sub       Sub to be run.
  if (1)
   {my $N = 100;
    my $l = q(logFile.txt);
    unlink $l;

    my $s = newProcessStarter(4);
    $s->processingTitle   = q(Test processes);
    $s->totalToBeStarted  = $N;
    $s->processingLogFile = $l;

    for my $i(1..$N)
     {𝗗𝗮𝘁𝗮::𝗧𝗮𝗯𝗹𝗲::𝗧𝗲𝘅𝘁::𝗦𝘁𝗮𝗿𝘁𝗲𝗿::𝘀𝘁𝗮𝗿𝘁($s, sub{$i*$i});
     }

    is_deeply [sort {$a <=> $b} Data::Table::Text::Starter::finish($s)],
              [map {$_**2} 1..$N];

    ok readFile($l) =~ m(Finished $N processes for: Test processes)s;

    clearFolder($s->transferArea, 1e3);
    unlink $l;
   }
Wait for all started processes to finish and return their results as an array.
  Parameter  Description
1 $starter   Starter
  if (1)
   {my $N = 100;
    my $l = q(logFile.txt);
    unlink $l;

    my $s = newProcessStarter(4);
    $s->processingTitle   = q(Test processes);
    $s->totalToBeStarted  = $N;
    $s->processingLogFile = $l;

    for my $i(1..$N)
     {Data::Table::Text::Starter::start($s, sub{$i*$i});
     }

    is_deeply [sort {$a <=> $b} 𝗗𝗮𝘁𝗮::𝗧𝗮𝗯𝗹𝗲::𝗧𝗲𝘅𝘁::𝗦𝘁𝗮𝗿𝘁𝗲𝗿::𝗳𝗶𝗻𝗶𝘀𝗵($s)],
              [map {$_**2} 1..$N];

    ok readFile($l) =~ m(Finished $N processes for: Test processes)s;

    clearFolder($s->transferArea, 1e3);
    unlink $l;
   }
Create a two dimensional square array from a one dimensional linear array.
  is_deeply [𝘀𝗾𝘂𝗮𝗿𝗲𝗔𝗿𝗿𝗮𝘆 @{[1..4]} ], [[1, 2], [3, 4]];
  is_deeply [𝘀𝗾𝘂𝗮𝗿𝗲𝗔𝗿𝗿𝗮𝘆 @{[1..22]}], [[1 .. 5], [6 .. 10], [11 .. 15], [16 .. 20], [21, 22]];

  is_deeply [1..$_], [deSquareArray 𝘀𝗾𝘂𝗮𝗿𝗲𝗔𝗿𝗿𝗮𝘆 @{[1..$_]}] for 1..22;
  ok $_ == countSquareArray 𝘀𝗾𝘂𝗮𝗿𝗲𝗔𝗿𝗿𝗮𝘆 @{[1..$_]} for 222;

  is_deeply [rectangularArray(3, 1..11)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9]];
  is_deeply [rectangularArray(3, 1..12)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]];
  is_deeply [rectangularArray(3, 1..13)], [[1, 4, 7, 10, 13], [2, 5, 8, 11], [3, 6, 9, 12]];

  is_deeply [rectangularArray2(3, 1..5)], [[1, 2, 3], [4, 5]];
  is_deeply [rectangularArray2(3, 1..6)], [[1, 2, 3], [4, 5, 6]];
  is_deeply [rectangularArray2(3, 1..7)], [[1, 2, 3], [4, 5, 6], [7]];
Create a one dimensional array from a two dimensional array of arrays.
  Parameter  Description
1 @square    Array of arrays
  is_deeply [squareArray @{[1..4]} ], [[1, 2], [3, 4]];
  is_deeply [squareArray @{[1..22]}], [[1 .. 5], [6 .. 10], [11 .. 15], [16 .. 20], [21, 22]];

  is_deeply [1..$_], [𝗱𝗲𝗦𝗾𝘂𝗮𝗿𝗲𝗔𝗿𝗿𝗮𝘆 squareArray @{[1..$_]}] for 1..22;
  ok $_ == countSquareArray squareArray @{[1..$_]} for 222;

  is_deeply [rectangularArray(3, 1..11)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9]];
  is_deeply [rectangularArray(3, 1..12)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]];
  is_deeply [rectangularArray(3, 1..13)], [[1, 4, 7, 10, 13], [2, 5, 8, 11], [3, 6, 9, 12]];

  is_deeply [rectangularArray2(3, 1..5)], [[1, 2, 3], [4, 5]];
  is_deeply [rectangularArray2(3, 1..6)], [[1, 2, 3], [4, 5, 6]];
  is_deeply [rectangularArray2(3, 1..7)], [[1, 2, 3], [4, 5, 6], [7]];
Create a two dimensional rectangular array whose first dimension is $first from a one dimensional linear array.
  Parameter  Description
1 $first     First dimension size
2 @array     Array
  is_deeply [squareArray @{[1..4]} ], [[1, 2], [3, 4]];
  is_deeply [squareArray @{[1..22]}], [[1 .. 5], [6 .. 10], [11 .. 15], [16 .. 20], [21, 22]];

  is_deeply [1..$_], [deSquareArray squareArray @{[1..$_]}] for 1..22;
  ok $_ == countSquareArray squareArray @{[1..$_]} for 222;

  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆(3, 1..11)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9]];
  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆(3, 1..12)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]];
  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆(3, 1..13)], [[1, 4, 7, 10, 13], [2, 5, 8, 11], [3, 6, 9, 12]];

  is_deeply [rectangularArray2(3, 1..5)], [[1, 2, 3], [4, 5]];
  is_deeply [rectangularArray2(3, 1..6)], [[1, 2, 3], [4, 5, 6]];
  is_deeply [rectangularArray2(3, 1..7)], [[1, 2, 3], [4, 5, 6], [7]];
Create a two dimensional rectangular array whose second dimension is $second from a one dimensional linear array.
  Parameter  Description
1 $second    Second dimension size
2 @array     Array
  is_deeply [squareArray @{[1..4]} ], [[1, 2], [3, 4]];
  is_deeply [squareArray @{[1..22]}], [[1 .. 5], [6 .. 10], [11 .. 15], [16 .. 20], [21, 22]];

  is_deeply [1..$_], [deSquareArray squareArray @{[1..$_]}] for 1..22;
  ok $_ == countSquareArray squareArray @{[1..$_]} for 222;

  is_deeply [rectangularArray(3, 1..11)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9]];
  is_deeply [rectangularArray(3, 1..12)], [[1, 4, 7, 10], [2, 5, 8, 11], [3, 6, 9, 12]];
  is_deeply [rectangularArray(3, 1..13)], [[1, 4, 7, 10, 13], [2, 5, 8, 11], [3, 6, 9, 12]];

  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆𝟮(3, 1..5)], [[1, 2, 3], [4, 5]];
  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆𝟮(3, 1..6)], [[1, 2, 3], [4, 5, 6]];
  is_deeply [𝗿𝗲𝗰𝘁𝗮𝗻𝗴𝘂𝗹𝗮𝗿𝗔𝗿𝗿𝗮𝘆𝟮(3, 1..7)], [[1, 2, 3], [4, 5, 6], [7]];
Call a sub reference in parallel to avoid memory fragmentation and return its results.
  my %a = (a=>1, b=>2);
  my %b = 𝗰𝗮𝗹𝗹𝗦𝘂𝗯𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 {return %a};
  is_deeply \%a, \%b;

  my $f = temporaryFile;
  ok -e $f;

  my $a = callSubInOverlappedParallel
    sub {$a{a}++; owf($f, "Hello World")},
    sub {q(aaaa)};

  ok $a =~ m(aaaa)i;
  ok $a{a} == 1;
  ok readFile($f) =~ m(Hello World)i;
Call the $child sub reference in parallel in a separate child process and ignore its results while calling the $parent sub reference in the parent process and returning its results.
  Parameter  Description
1 $child     Sub reference to call in child process
2 $parent    Sub reference to call in parent process
  my %a = (a=>1, b=>2);
  my %b = callSubInParallel {return %a};
  is_deeply \%a, \%b;

  my $f = temporaryFile;
  ok -e $f;

  my $a = 𝗰𝗮𝗹𝗹𝗦𝘂𝗯𝗜𝗻𝗢𝘃𝗲𝗿𝗹𝗮𝗽𝗽𝗲𝗱𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹
    sub {$a{a}++; owf($f, "Hello World")},
    sub {q(aaaa)};

  ok $a =~ m(aaaa)i;
  ok $a{a} == 1;
  ok readFile($f) =~ m(Hello World)i;
  Parameter                  Description
1 $maximumNumberOfProcesses  Maximum number of processes
2 $parallel                  Parallel sub
3 $results                   Results sub
4 @array                     Array of items to process
  my @N = 1..100;
  my $N = 100;
  my $R = 0; $R += $_*$_ for 1..$N;

  ok 338350 == $R;

  ok $R == runInSquareRootParallel
   (4,
    sub {my ($p) = @_; $p * $p},
    sub {my $p = 0; $p += $_ for @_; $p},
    @{[1..$N]}
   );

  ok $R == 𝗿𝘂𝗻𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹
   (4,
    sub {my ($p) = @_; $p * $p},
    sub {my $p = 0; $p += $_ for @_; $p},
    @{[1..$N]}
   );
Process the elements of an array in square root parallel using a maximum of $maximumNumberOfProcesses processes. Sub &$parallel is forked to process each block of array elements in parallel. The results returned by the forked copies of &$parallel are presented as a single array to sub &$results which is run in series. @array contains the elements to be processed. Returns the result returned by &$results.
  my @N = 1..100;
  my $N = 100;
  my $R = 0; $R += $_*$_ for 1..$N;

  ok 338350 == $R;

  ok $R == 𝗿𝘂𝗻𝗜𝗻𝗦𝗾𝘂𝗮𝗿𝗲𝗥𝗼𝗼𝘁𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹
   (4,
    sub {my ($p) = @_; $p * $p},
    sub {my $p = 0; $p += $_ for @_; $p},
    @{[1..$N]}
   );

  ok $R == runInParallel
   (4,
    sub {my ($p) = @_; $p * $p},
    sub {my $p = 0; $p += $_ for @_; $p},
    @{[1..$N]}
   );
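The idea of "square root parallel" - splitting N items into blocks of roughly sqrt(N) elements, forking a child per block and merging the children's results in series - can be sketched with fork, waitpid and Storable. This is a hedged illustration of the technique, not the module's implementation; mapInForks is a hypothetical name and results travel back through temporary files.

```perl
use strict; use warnings;
use Storable   qw(store retrieve);                                              # Ship each child's results back through a file
use File::Temp qw(tempdir);

sub mapInForks(&@)                                                              # Apply a sub to each item, a block of items per forked child
 {my ($parallel, @items) = @_;
  my $blockSize = int(sqrt(@items)) || 1;                                       # ~sqrt(N) items per child
  my $dir = tempdir(CLEANUP=>1);
  my @files; my @pids;
  while(@items)
   {my @block = splice @items, 0, $blockSize;                                   # Next block of items
    my $file  = $dir."/".scalar(@files);
    push @files, $file;
    my $pid = fork // die "Unable to fork";
    if ($pid) {push @pids, $pid}                                                # Parent: remember the child
    else
     {store [map {&$parallel($_)} @block], $file;                               # Child: process its block and save the results
      exit;
     }
   }
  waitpid $_, 0 for @pids;                                                      # Wait for every child to finish
  map {@{retrieve $_}} @files;                                                  # Merge the per-block results in order
 }

my @squares = mapInForks {$_[0] ** 2} 1..10;                                    # Ten items => blocks of three
```

The merge step here simply concatenates the blocks in order; the module lets you supply your own &$results sub for that stage instead.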
Given $N buckets and a list @sizes of ([size of file, name of file]...) pack the file names into buckets so that each bucket contains approximately the same number of bytes. In general this is an NP problem. Packing largest first into emptiest bucket produces an N**2 heuristic if the buckets are scanned linearly, or N*log(N) if a binary tree is used. This solution is a compromise at N**3/2 which has the benefits of simple code yet good performance. Returns ([file names ...]).
  Parameter  Description
1 $N         Number of buckets
2 @sizes     Sizes
  my $M = 7;
  my $N = 15;
  my @b = 𝗽𝗮𝗰𝗸𝗕𝘆𝗦𝗶𝘇𝗲($M, map {[$_, $_]} 1..$N);

  my @B; my $B = 0;
  for my $b(@b)
   {my $n = 0;
    for(@$b)
     {$n += $_;
      $B += $_;
     }
    push @B, $n;
   }

  ok $B == $N * ($N + 1) / 2;
  is_deeply [@B], [16, 20, 16, 18, 16, 18, 16];
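The "largest first into the emptiest bucket" heuristic described above can be sketched in a few lines. This is a deliberately naive illustration using a linear scan for the emptiest bucket (the N**2 variant mentioned in the description), not the module's actual implementation; packBySizeSketch is a hypothetical name.

```perl
use strict; use warnings;

sub packBySizeSketch                                                            # Pack [size, name] pairs into $N buckets of roughly equal total size
 {my ($N, @sizes) = @_;
  my @buckets = map {[0, []]} 1..$N;                                            # [bytes so far, [names]] per bucket
  for my $s(sort {$$b[0] <=> $$a[0]} @sizes)                                    # Largest item first
   {my ($emptiest) = sort {$$a[0] <=> $$b[0]} @buckets;                         # Linear scan for the emptiest bucket
    $$emptiest[0] += $$s[0];                                                    # Account for the item's bytes
    push @{$$emptiest[1]}, $$s[1];                                              # Place the item's name in the bucket
   }
  map {$$_[1]} @buckets                                                         # ([file names...]...)
 }

my @b = packBySizeSketch(2, map {[$_, "file$_"]} 1..4);                         # Sizes 1..4 into 2 buckets: both buckets total 5 bytes
```

Replacing the inner sort with a binary tree keyed on bucket fill would give the N*log(N) variant; the module's compromise sits between the two.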
Process items of known size in parallel using (8 * the number of CPUs) processes, assigning each item to a process on the basis of its size so that each process is loaded with approximately the same number of bytes of data in total from the items it processes.
Each item is processed by sub $parallel and the results of processing all items is processed by $results where the items are taken from @sizes. Each &$parallel() receives an item from @files. &$results() receives an array of all the results returned by &$parallel().
  Parameter  Description
1 $parallel  Parallel sub
2 $results   Results sub
3 @sizes     Array of [size; item] to process by size
  my $d = temporaryFolder;
  my @f = map {owf(fpe($d, $_, q(txt)), 'X' x ($_ ** 2 % 11))} 1..9;

  my $f = fileLargestSize(@f);
  ok fn($f) eq '3', 'aaa';

  my $b = folderSize($d);
  ok $b > 0, 'bbb';

  my $c = processFilesInParallel(
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12);

  ok 108 == $c, 'cc11';

  my $C = 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗦𝗶𝘇𝗲𝘀𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    map {[fileSize($_), $_]} (@f) x 12;

  ok 108 == $C, 'cc2';

  my $J = processJavaFilesInParallel
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12;

  ok 108 == $J, 'cc3';

  clearFolder($d, 12);
Process files in parallel using (8 * the number of CPUs) processes, assigning each file to a process on the basis of its size so that each process is loaded with approximately the same number of bytes of data in total from the files it processes.
Each file is processed by sub $parallel and the results of processing all files is processed by $results where the files are taken from @files. Each &$parallel receives a file from @files. &$results receives an array of all the results returned by &$parallel.
  Parameter  Description
1 $parallel  Parallel sub
2 $results   Results sub
3 @files     Array of files to process by size
  my $d = temporaryFolder;
  my @f = map {owf(fpe($d, $_, q(txt)), 'X' x ($_ ** 2 % 11))} 1..9;

  my $f = fileLargestSize(@f);
  ok fn($f) eq '3', 'aaa';

  my $b = folderSize($d);
  ok $b > 0, 'bbb';

  my $c = 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗙𝗶𝗹𝗲𝘀𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹(
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12);

  ok 108 == $c, 'cc11';

  my $C = processSizesInParallel
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    map {[fileSize($_), $_]} (@f) x 12;

  ok 108 == $C, 'cc2';

  my $J = processJavaFilesInParallel
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12;

  ok 108 == $J, 'cc3';

  clearFolder($d, 12);
Process java files of known size in parallel using (the number of CPUs) processes, assigning each file to a process on the basis of its size so that each process is loaded with approximately the same number of bytes of data in total from the java files it processes.
Each java item is processed by sub $parallel and the results of processing all java files is processed by $results where the java files are taken from @sizes. Each &$parallel() receives a java item from @files. &$results() receives an array of all the results returned by &$parallel().
  Parameter  Description
1 $parallel  Parallel sub
2 $results   Results sub
3 @files     Array of [size; java item] to process by size
  my $d = temporaryFolder;
  my @f = map {owf(fpe($d, $_, q(txt)), 'X' x ($_ ** 2 % 11))} 1..9;

  my $f = fileLargestSize(@f);
  ok fn($f) eq '3', 'aaa';

  my $b = folderSize($d);
  ok $b > 0, 'bbb';

  my $c = processFilesInParallel(
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12);

  ok 108 == $c, 'cc11';

  my $C = processSizesInParallel
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    map {[fileSize($_), $_]} (@f) x 12;

  ok 108 == $C, 'cc2';

  my $J = 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗝𝗮𝘃𝗮𝗙𝗶𝗹𝗲𝘀𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹
    sub {my ($file) = @_; [&fileSize($file), $file]},
    sub {scalar @_},
    (@f) x 12;

  ok 108 == $J, 'cc3';

  clearFolder($d, 12);
Download from S3 by using "aws s3 sync --exclude '*' --include '...'" in parallel to sync collections of two or more files no greater than $maxSize, or single files greater than $maxSize, from the $source folder on S3 to the local folder $target using the specified $Profile and $options - then execute the entire command again without the --exclude and --include options in series, which might now run faster due to the prior downloads.
  Parameter  Description
1 $maxSize   The maximum collection size
2 $source    The source folder on S3
3 $target    The target folder locally
4 $Profile   Aws cli profile
5 $options   Aws cli options
  if (0)
   {𝘀𝘆𝗻𝗰𝗙𝗿𝗼𝗺𝗦𝟯𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 1e5, q(xxx/originals3/), q(/home/phil/xxx/), q(phil), q(--quiet);

    syncToS3InParallel 1e5, q(/home/phil/xxx/), q(xxx/originals3/), q(phil), q(--quiet);
   }
Upload to S3 by using "aws s3 sync --exclude '*' --include '...'" in parallel to sync collections of two or more files no greater than $maxSize, or single files greater than $maxSize, from the $source folder locally to the target folder $target on S3 using the specified $Profile and $options - then execute the entire command again without the --exclude and --include options in series, which might now run faster due to the prior uploads.
  Parameter  Description
1 $maxSize   The maximum collection size
2 $source    The source folder locally
3 $target    The target folder on S3
4 $Profile   Aws cli profile
5 $options   Aws cli options
  if (0)
   {syncFromS3InParallel 1e5, q(xxx/originals3/), q(/home/phil/xxx/), q(phil), q(--quiet);

    𝘀𝘆𝗻𝗰𝗧𝗼𝗦𝟯𝗜𝗻𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 1e5, q(/home/phil/xxx/), q(xxx/originals3/), q(phil), q(--quiet);
   }
Recursively find the pids of all the sub processes of a $process and all their sub processes and so on returning the specified pid and all its child pids as a list.
  Parameter  Description
1 $p         Process
is_deeply [𝗰𝗵𝗶𝗹𝗱𝗣𝗶𝗱𝘀(2702)], [2702..2705];
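The recursion described above - build a parent-to-children map of the process table, then walk it from the given pid - can be sketched with the output of ps. This is an illustration of the approach, not the module's implementation; childPidsSketch is a hypothetical name and it assumes a Unix-like system with ps available.

```perl
use strict; use warnings;

sub childPidsSketch                                                             # The given pid and all its descendants
 {my ($p) = @_;
  my %children;                                                                 # Parent pid => [child pids]
  for my $line(qx(ps -A -o pid= -o ppid=))                                      # One "pid ppid" pair per process
   {my ($pid, $ppid) = split ' ', $line;
    push @{$children{$ppid}}, $pid;
   }
  my @pids; my @stack = ($p);
  while(@stack)                                                                 # Depth first search of the process tree
   {my $q = pop @stack;
    push @pids,  $q;
    push @stack, @{$children{$q} // []};
   }
  sort {$a <=> $b} @pids
 }

my @p = childPidsSketch($$);                                                    # This process and its descendants
```

The current process always appears in its own result, and any shell spawned by the qx() itself may briefly appear as a descendant.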
Create a new service incarnation to record the start up of a new instance of a service and return the description as a Data::Exchange::Service Definition hash.
  Parameter  Description
1 $service   Service name
2 $file      Optional details file
  if (1)
   {my $s = 𝗻𝗲𝘄𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝗜𝗻𝗰𝗮𝗿𝗻𝗮𝘁𝗶𝗼𝗻("aaa", q(bbb.txt));
    is_deeply $s->check, $s;

    my $t = 𝗻𝗲𝘄𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝗜𝗻𝗰𝗮𝗿𝗻𝗮𝘁𝗶𝗼𝗻("aaa", q(bbb.txt));
    is_deeply $t->check, $t;

    ok $t->start >= $s->start+1;
    ok !$s->check(1);

    unlink q(bbb.txt);
   }
Check that we are the current incarnation of the named service with details obtained from newServiceIncarnation. If the optional $continue flag has been set then return the service details if this is the current service incarnation else undef. Otherwise if the $continue flag is false confess unless this is the current service incarnation thus bringing the earlier version of this service to an abrupt end.
  Parameter  Description
1 $service   Current service details
2 $continue  Return result if B<$continue> is true else confess if the service has been replaced
  if (1)
   {my $s = newServiceIncarnation("aaa", q(bbb.txt));
    is_deeply $s->check, $s;

    my $t = newServiceIncarnation("aaa", q(bbb.txt));
    is_deeply $t->check, $t;

    ok $t->start >= $s->start+1;
    ok !$s->check(1);

    unlink q(bbb.txt);
   }
Extract, format and update documentation for a perl module.
Parse a dita reference $ref into its components (file name, topic id, id). Optionally supply a base file name $File to make the file component absolute and/or a default topic id $TopicId to use if the topic id is not present in the reference.
  Parameter  Description
1 $ref       Reference to parse
2 $File      Default absolute file
3 $TopicId   Default topic id
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(a#b/c))], [qw(a b c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(a#./c))], [q(a), q(), q(c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(a#/c))],  [q(a), q(), q(c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(a#c))],   [q(a), q(), q(c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(#b/c))],  [q(), qw(b c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(#b))],    [q(), q(), q(b)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(#./c))],  [q(), q(), q(c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(#/c))],   [q(), q(), q(c)];
  is_deeply [𝗽𝗮𝗿𝘀𝗲𝗗𝗶𝘁𝗮𝗥𝗲𝗳(q(#c))],    [q(), q(), q(c)];
Parse an Xml DOCTYPE and return a hash indicating its components.
  Parameter  Description
1 $string    String containing a DOCTYPE
  if (1)
   {is_deeply 𝗽𝗮𝗿𝘀𝗲𝗫𝗺𝗹𝗗𝗼𝗰𝗧𝘆𝗽𝗲(<<END),
<!DOCTYPE reference PUBLIC "-//OASIS//DTD DITA Reference//EN" "reference.dtd">
...
END
    {localDtd => "reference.dtd",
     public   => 1,
     publicId => "-//OASIS//DTD DITA Reference//EN",
     root     => "reference",
    };

    is_deeply 𝗽𝗮𝗿𝘀𝗲𝗫𝗺𝗹𝗗𝗼𝗰𝗧𝘆𝗽𝗲(<<END),
...
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Task//EN" "concept.dtd" []>
...
END
    {localDtd => "concept.dtd",
     public   => 1,
     publicId => "-//OASIS//DTD DITA Task//EN",
     root     => "concept",
    };
   }
Report the current values of parameterless subs in a $sourceFile that match \Asub\s+(\w+)\s*\{ and optionally write the report to $reportFile. Return the text of the report.
  Parameter    Description
1 $sourceFile  Source file
2 $reportFile  Optional report file
𝗿𝗲𝗽𝗼𝗿𝘁𝗦𝗲𝘁𝘁𝗶𝗻𝗴𝘀($0);
Report the attributes present in a $sourceFile.
  Parameter    Description
1 $sourceFile  Source file
  my $d = temporaryFile;

  my $f = writeFile(undef, <<'END'.<<END2);
#!perl -I/home/phil/perl/cpan/DataTableText/lib/
use Data::Table::Text qw(reportAttributeSettings);
sub attribute {1}                                                               # An attribute
sub replaceable($)                                                              #r A replaceable method
 {
Report the current values of the attribute methods in the calling file and optionally write the report to $reportFile. Return the text of the report.
  Parameter    Description
1 $reportFile  Optional report file
  my $d = temporaryFile;

  my $f = writeFile(undef, <<'END'.<<END2);
#!perl -I/home/phil/perl/cpan/DataTableText/lib/
use Data::Table::Text qw(𝗿𝗲𝗽𝗼𝗿𝘁𝗔𝘁𝘁𝗿𝗶𝗯𝘂𝘁𝗲𝗦𝗲𝘁𝘁𝗶𝗻𝗴𝘀);
sub attribute {1}                                                               # An attribute
sub replaceable($)                                                              #r A replaceable method
 {
Report the replaceable methods marked with #r in a $sourceFile.
  my $d = temporaryFile;

  my $f = writeFile(undef, <<'END'.<<END2);
#!perl -I/home/phil/perl/cpan/DataTableText/lib/
use Data::Table::Text qw(reportAttributeSettings);
sub attribute {1}                                                               # An attribute
sub replaceable($)                                                              #r A replaceable method
 {

sub 𝗿𝗲𝗽𝗼𝗿𝘁𝗥𝗲𝗽𝗹𝗮𝗰𝗮𝗯𝗹𝗲𝗠𝗲𝘁𝗵𝗼𝗱𝘀($)
 {my ($sourceFile) = @_;                                                        # Source file
  my $s = readFile($sourceFile);
  my %s;
  for my $l(split /\n/, $s)                                                     # Find the attribute subs
   {if ($l =~ m(\Asub\s*(\w+).*?#\w*r\w*\s+(.*)\Z))
     {$s{$1} = $2;
     }
   }
  \%s
 }
Report the exportable methods marked with #e in a $sourceFile
Generate a table of contents for some html.
Parameter Description 1 $replace Sub-string within the html to be replaced with the toc 2 $html String of html
  ok nws(htmlToc("XXXX", <<END)), 'htmlToc'
  <h1 id="1" otherprops="1">Chapter 1</h1>
    <h2 id="11" otherprops="11">Section 1</h1>
  <h1 id="2" otherprops="2">Chapter 2</h1>
  XXXX
  END
  eq nws(<<END);
  <h1 id="1" otherprops="1">Chapter 1</h1>
    <h2 id="11" otherprops="11">Section 1</h1>
  <h1 id="2" otherprops="2">Chapter 2</h1>
  <table cellspacing=10 border=0>
  <tr><td>
  <tr><td align=right>1<td> <a href="#1">Chapter 1</a>
  <tr><td align=right>2<td> <a href="#11">Section 1</a>
  <tr><td>
  <tr><td align=right>3<td> <a href="#2">Chapter 2</a>
  </table>
  END
Expand short url names found in a string in the format L<url-name> using the Perl POD syntax
Parameter Description 1 $string String containing url names to expand
  ok expandWellKnownUrlsInDitaFormat(q(L[github])) eq
    q(<xref scope="external" format="html" href="https://github.com">GitHub</xref>);

  ok expandWellKnownUrlsInHtmlFormat(q(L[github])) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPerlFormat(q(L<GitHub|https://github.com>)) eq
    q(L<GitHub|https://github.com>);

  ok expandWellKnownUrlsInPerlFormat(q(github)) eq q(github);

  ok expandWellKnownUrlsInHtmlFromPerl(q(L<GitHub|https://github.com>)) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPod2Html(<<END) eq eval '"aaa
  bbb
  "';
  aaa
  GitHub
  bbb
  END
Expand short url names found in a string in the format L[url-name] using the html a tag.
  ok expandWellKnownUrlsInDitaFormat(q(L[github])) eq
    q(<xref scope="external" format="html" href="https://github.com">GitHub</xref>);

  ok expandWellKnownUrlsInHtmlFormat(q(L[github])) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPerlFormat(q(L<GitHub|https://github.com>)) eq
    q(L<GitHub|https://github.com>);

  ok expandWellKnownUrlsInPerlFormat(q(github)) eq q(github);

  ok expandWellKnownUrlsInHtmlFromPerl(q(L<GitHub|https://github.com>)) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPod2Html(<<END) eq eval '"aaa
  ok expandWellKnownUrlsInDitaFormat(q(L[github])) eq
    q(<xref scope="external" format="html" href="https://github.com">GitHub</xref>);

  ok expandWellKnownUrlsInHtmlFormat(q(L[github])) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPerlFormat(q(L<GitHub|https://github.com>)) eq
    q(L<GitHub|https://github.com>);

  ok expandWellKnownUrlsInPerlFormat(q(github)) eq q(github);

  ok expandWellKnownUrlsInHtmlFromPerl(q(L<GitHub|https://github.com>)) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPod2Html(<<END) eq eval '"aaa
Expand short url names found in a string in the format L[url-name] using the POD =begin html format.
  ok expandWellKnownUrlsInDitaFormat(q(L[github])) eq
    q(<xref scope="external" format="html" href="https://github.com">GitHub</xref>);

  ok expandWellKnownUrlsInHtmlFormat(q(L[github])) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPerlFormat(q(L<GitHub|https://github.com>)) eq
    q(L<GitHub|https://github.com>);

  ok expandWellKnownUrlsInPerlFormat(q(github)) eq q(github);

  ok expandWellKnownUrlsInHtmlFromPerl(q(L<GitHub|https://github.com>)) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPod2Html(<<END) eq eval '"aaa
Expand short url names found in a string in the format L[url-name] in the L[Dita] xref format.
  ok expandWellKnownUrlsInDitaFormat(q(L[github])) eq
    q(<xref scope="external" format="html" href="https://github.com">GitHub</xref>);

  ok expandWellKnownUrlsInHtmlFormat(q(L[github])) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPerlFormat(q(L<GitHub|https://github.com>)) eq
    q(L<GitHub|https://github.com>);

  ok expandWellKnownUrlsInPerlFormat(q(github)) eq q(github);

  ok expandWellKnownUrlsInHtmlFromPerl(q(L<GitHub|https://github.com>)) eq
    q(<a format="html" href="https://github.com">GitHub</a>);

  ok expandWellKnownUrlsInPod2Html(<<END) eq eval '"aaa
Expand new lines in documentation, converting the sequence \n to one new line and the sequence \m to two new lines.
Parameter Description 1 $s String to be expanded
  ok expandNewLinesInDocumentation(q(a\nb\mc\n)) eq <<END;
  a
  b

  c
  END
Extract the block of code delimited by $comment, starting at qq($comment-begin), ending at qq($comment-end) from the named $file else the current Perl program $0 and return it as a string or confess if this is not possible.
Parameter Description 1 $comment Comment delimiting the block of code 2 $file File to read from if not $0
  ok extractCodeBlock(q(#CODEBLOCK), $INC{"Data/Table/Text.pm"}) eq <<'END';
    my $a = 1;
    my $b = 2;
  END
Update documentation for a Perl module from the comments in its source code. Comments between the lines marked with:
#Dn title # description
and:
#D
where n is either 1, 2 or 3 indicating the heading level of the section and the # is in column 1.
Methods are formatted as:
  sub name(signature)                             #FLAGS comment describing method
   {my ($parameters) = @_;                        # comments for each parameter separated by commas.
FLAGS can be chosen from:
I - method of interest to new users
P - private method
r - optionally replaceable method
R - required replaceable method
S - static method
X - die rather than return an undef result
Other flags will be handed to the method extractDocumentationFlags(flags to process, method name) found in the file being documented; this method should return [the additional documentation for the method, the code to implement the flag].
Text following 'Example:' in the comment (if present) will be placed after the parameters list as an example. Lines containing comments consisting of '#T'.methodName will also be aggregated and displayed as examples for that method.
Lines formatted as:
BEGIN{*source=*target}
starting in column 1 will define a synonym for a method.
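The synonym mechanism is plain symbol-table glob aliasing. This minimal self-contained sketch (the names source and target are illustrative, not taken from the module) shows the effect:

```perl
sub target {42}                  # The method to be aliased

BEGIN{*source = *target}         # Starting in column 1: source becomes a synonym for target

print source(), "\n";            # Calls target via the synonym, printing 42
```

Because the alias is made at compile time in a BEGIN block, the documentation generator can recognise the pattern textually while the running program sees two names for one sub.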
#C emailAddress text
will be aggregated in the acknowledgments section at the end of the documentation.
The character sequence \n in the comment will be expanded to one new line, \m to two new lines and L<$_>,L<confess>,L<die>,L<eval>,L<lvalueMethod> to links to the perl documentation.
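The \n and \m expansion can be sketched as a pair of substitutions; this is an illustrative reimplementation, not the module's code:

```perl
sub expandDocNewLines                             # Expand \n and \m escapes in documentation text
 {my ($s) = @_;
  $s =~ s(\\m) (\n\n)gs;                          # \m becomes two new lines
  $s =~ s(\\n) (\n)gs;                            # \n becomes one new line
  $s
 }

print expandDocNewLines('Line one.\nLine two.\mNew paragraph.');
```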
Search for '#D1' in https://metacpan.org/source/PRBRENAN/Data-Table-Text-20180810/lib/Data/Table/Text.pm to see more examples of such documentation in action, although it is quite difficult to spot as it looks just like normal comments placed in the code.
Parameter Description 1 $perlModule Optional file name with caller's file being the default
  {my $s = updateDocumentation(<<'END' =~ s(#) (#)gsr =~ s(~) ()gsr);
  package Sample::Module;

  #D1 Samples                                     # Sample methods.

  sub sample($@)                                  #R Documentation for the: sample() method. See also L<Data::Table::Text::sample2|/Data::Table::Text::sample2>. #Tsample
   {my ($node, @context) = @_;                    # Node, optional context
    1
   }

  ~BEGIN{*smpl=*sample}

  sub Data::Table::Text::sample2(\&@)             #PS Documentation for the sample2() method.
   {my ($sub, @context) = @_;                     # Sub to call, context.
    1
   }

  ok sample(undef, qw(a b c)) == 1;               #Tsample

  if (1)                                          #Tsample
   {ok sample(q(a), qw(a b c))  == 2;
    ok sample(undef, qw(a b c)) == 1;
   }

  ok sample(<<END2)) == 1;                        #Tsample
  sample data
  END2
  END

  ok $s =~ m/=head2 Data::Table::Text::sample2.\$sub, \@context/;
Service details.
file - The file in which the service start details is being recorded.
service - The name of the service.
start - The time this service was started, plus a minor hack to simplify testing.
Prices of selected aws ec2 instance types
cheapestInstance - The instance type that has the lowest CPU cost
pricePerCpu - The cost of the cheapest CPU In millidollars per hour
report - Report showing the cost of other selected instances
Process starter definition.
processingLogFile - Optional: name of a file to which process start and end information should be appended
processingTitle - Optional: title describing the processing being performed.
totalToBeStarted - Optionally: the total number of processes to be started - if this is supplied then an estimate of the finish time for this processing is printed to the log file every time a process starts or finishes.
autoRemoveTransferArea - If true then automatically clear the transfer area at the end of processing.
maximumNumberOfProcesses - The maximum number of processes to start in parallel at one time. If this limit is exceeded, the start of subsequent processes will be delayed until processes started earlier have finished.
pids - A hash of pids representing processes started but not yet completed.
processFinishTime - {pid} == time the process finished.
processStartTime - {pid} == time the process was started.
processingLogFileHandle - Handle for log file if a log file was supplied
resultsArray - Consolidated array of results.
startTime - Start time
transferArea - The name of the folder in which files transferring results from the child to the parent process will be stored.
Definition of a blessed hash.
a - Definition of attribute aa.
b - Definition of attribute bb.
Package name
headerLength - Length of fixed header which carries the length of the following message
serverAction - Server action sub, which receives a communicator every time a client creates a new connection. If this server is going to be started by systemd as a service with the specified serverName, then this is the actual text of the code that will be installed as a CGI script and run in response to an incoming transaction in a separate process with the userid set to serviceUser. It receives the text of the http request from the browser as parameter 1 and should return the text to be sent back to the browser.
serviceName - Service name for install by systemd
serviceUser - Userid for service
socketPath - Socket file
client - Client socket and connection socket
serverPid - Server pid which can be used to kill the server via kill q(kill), $pid
The following is a list of all the attributes in this package. A method coded with the same name in your package will override the method of the same name in this package, and thus provide your value for the attribute in place of the default value supplied by this package.
awsEc2DescribeInstancesCache awsIpFile nameFromStringMaximumLength
File in which to cache latest results from describe instances to avoid being throttled
File in which to save IP address of primary instance on Aws
Maximum length of a name generated from a string
Remove any trailing folder separator from a folder name.
Parameter Description 1 $name Folder name
Normalize a folder name by ensuring it has a single trailing directory separator.
Parameter Description 1 $name Name
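A minimal sketch of such normalization (an illustrative reimplementation handling only / and \ separators, not the module's code):

```perl
sub normalizeFolderName                           # Ensure exactly one trailing directory separator
 {my ($name) = @_;
  $name =~ s([/\\]+\Z) ();                        # Remove any existing trailing separators
  $name.q(/)                                      # Append exactly one
 }

print normalizeFolderName(q(a/b//)), "\n";        # a/b/
```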
Find all the files and folders under a folder.
Parameter Description 1 $folder Folder to start the search with
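Core Perl's File::Find can sketch the same traversal; this is an illustrative stand-in, not the module's implementation:

```perl
use File::Find;

sub findAllFilesAndFolders                        # Every file and folder under a folder
 {my ($folder) = @_;
  my @found;
  find(sub {push @found, $File::Find::name}, $folder);  # Visit each entry recursively
  sort @found                                     # Deterministic order
 }
```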
Read a file containing Unicode encoded in utf-16.
Set STDOUT and STDERR to accept utf8 without complaint.
  binModeAllUtf8;
Convert a $source image to a $target image in jpx format using Imagemagick version 6.9.0 or above. The size in pixels of each jpx tile may be specified by the optional $Size parameter, which defaults to 256. $Tiles optionally provides an upper limit on the number of tiles in each dimension.
Parameter Description 1 $Source Source file 2 $target Target folder (as multiple files will be created) 3 $Size Optional size of each tile - defaults to 256 4 $Tiles Optional limit on the number of tiles in either dimension
Convert a $source image to a $target image in jpx format. The size in pixels of each jpx tile may be specified by the optional $Size parameter, which defaults to 256. $Tiles optionally provides an upper limit on the number of tiles in each dimension.
Parameter Description 1 $Source Source file 2 $target Target folder (as multiple files will be created) 3 $Size Optional size of each tile - defaults to 256 4 $Tiles Optional limit in either direction on the number of tiles
  convertImageToJpx(fpe(qw(a image jpg)), fpe(qw(a image jpg)), 256);
Count the elements in sets @s, represented as arrays of strings and/or the keys of hashes.
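Counting across mixed array and hash representations can be sketched like this (an illustrative reimplementation, not the module's code):

```perl
sub setCount                                      # Count elements in sets given as array refs or hash refs
 {my $n = 0;
  for my $s(@_)
   {$n += ref($s) eq q(ARRAY) ? scalar(@$s) : scalar(keys %$s);
   }
  $n
 }

print setCount([qw(a b)], {c=>1, d=>2, e=>3}), "\n";   # 5
```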
Tabularize text that has new lines in it.
Parameter Description 1 $data Reference to an array of arrays of data to be formatted as a table 2 $separator Optional line separator to use instead of new line for each row.
Blank identical column values up and left
Parameter Description 1 $data Array of arrays
Tabularize an array of arrays.
Parameter Description 1 $data Data to be formatted 2 $title Reference to an array of titles 3 %options Options
ok formatTable ([[1,1,1],[1,1,2],[1,2,2],[1,2,3]], [], clearUpLeft=>1) eq <<END; # Clear matching columns 1 1 1 1 2 2 3 2 2 4 3 END
Tabularize a hash of arrays.
Parameter Description 1 $data Data to be formatted 2 $title Optional titles
Tabularize an array of hashes.
Parameter Description 1 $data Data to be formatted
Tabularize a hash of hashes.
Tabularize an array.
Parameter Description 1 $data Data to be formatted 2 $title Optional title
Tabularize a hash.
Options available for formatting tables
Parameter Description 1 $d Data structure 2 $progress Progress
Create a map of all the keys within all the hashes within a tower of data structures.
Parameter Description 1 $d Data structure 2 $keys Keys found 3 $progress Progress
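Such a key map can be built with a small recursion over hashes and arrays; this is an illustrative sketch, not the module's implementation:

```perl
sub mapStructureKeys                              # Count every hash key in a tower of hashes and arrays
 {my ($d, $keys) = @_;
  $keys //= {};
  if (ref $d eq q(HASH))
   {for my $k(keys %$d)
     {$keys->{$k}++;                              # Record the key
      mapStructureKeys($d->{$k}, $keys);          # Descend into the value
     }
   }
  elsif (ref $d eq q(ARRAY))
   {mapStructureKeys($_, $keys) for @$d;          # Descend into each element
   }
  $keys
 }

my $m = mapStructureKeys({a=>{b=>1}, c=>[{d=>2}, {d=>3}]});
print join(q( ), map {"$_=$m->{$_}"} sort keys %$m), "\n";   # a=1 b=1 c=1 d=2
```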
Create a communicator - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
Create an instance-id from the specified %options
Create a profile keyword from the specified %options
Create a region keyword from the specified %options
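The profile and region keywords are just optional cli fragments. A hedged sketch (the option key names profile and region are assumptions based on the options used elsewhere in these examples; this is not the module's code):

```perl
sub awsProfileKeyword                             # " --profile p" if a profile was supplied, else ""
 {my (%options) = @_;
  my $p = $options{profile};
  $p ? qq( --profile $p) : q()
 }

sub awsRegionKeyword                              # " --region r" if a region was supplied, else ""
 {my (%options) = @_;
  my $r = $options{region};
  $r ? qq( --region $r) : q()
 }

print q(aws ec2 describe-instances)
     .awsProfileKeyword(profile=>q(fmc))
     .awsRegionKeyword(region=>q(us-east-2)), "\n";
```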
Save source code.
Parameter Description 1 $aws Aws target file and keywords 2 $saveIntervalInSeconds Save interval in seconds
Test running on Amazon Web Services in parallel.
Parameter Description 1 $userData User data 2 $file File to process.
  my $N = 2001;                                          # Number of files to process
  my $options = q(region => q(us-east-2), profile=>q(fmc)); # Aws cli options
  my %options = eval "($options)";

  for my $dir(q(/home/phil/perl/cpan/DataTableText/lib/Data/Table/), # Folders we will need on aws
              q(/home/phil/.aws/))
   {awsParallelSpreadFolder($dir, %options);
   }

  my $d = temporaryFolder;                               # Create a temporary folder
  my $resultsFile = fpe($d, qw(results data));           # Save results in this temporary file

  if (my $r = execPerlOnRemote(join " ",                 # Execute some code on a server
    getCodeContext(\&awsParallelProcessFilesTestParallel), # Get code context of the sub we want to call.
    <<SESSIONLEADER))                                    # Launch code on session leader
  use Data::Table::Text qw(:all);

  my \$r = awsParallelProcessFiles                       # Process files on multiple L<Amazon Web Services|http://aws.amazon.com> instances in parallel
   ({file=>4, time=>timeStamp},                          # User data
    \\\&Data::Table::Text::awsParallelProcessFilesTestParallel, # Reference to code to execute in parallel on each session instance
    \\\&Data::Table::Text::awsParallelProcessFilesTestResults,  # Reference to code to execute in series to merge the results of each parallel computation
    [map {writeFile(fpe(q($d), \$_, qw(txt)), \$_)} 1..$N], # Files to process
    $options);                                           # Aws cli options as we will be running on Aws

  storeFile(q($resultsFile), \$r);                       # Save results in a file
  SESSIONLEADER

   {copyFileFromRemote($resultsFile);                    # Retrieve user data

    my $userData = retrieveFile($resultsFile);           # Recover user data
    my @i = awsParallelSecondaryIpAddresses(%options);   # Ip addresses of secondary instances
    my @I = keys $userData->{ip}->%*;
    is_deeply [sort @i], [sort @I];                      # Each secondary ip address was used

    ok $userData->{file}  == 4;                          # Prove we can pass data in and get it back
    ok $userData->{merge} == 1 + @i, 'ii';               # Number of merges

    my %f; my %i;                                        # Files processed on each ip
    for   my $i(sort keys $userData->{ipFile}->%*)       # Ip
     {for my $f(sort keys $userData->{ipFile}{$i}->%*)   # File
       {$f{fn($f)}++;                                    # Files processed
        $i{$i}++;                                        # Count files on each ip
       }
     }

    is_deeply \%f, {map {$_=>1} 1..$N};                  # Check each file was processed

    if (1)
     {my @rc; my @ra;                                    # Range of number of files processed on each ip - computed, actually counted
      my $l = $N/@i-1;                                   # Lower limit of number of files per IP address
      my $h = $N/@i+1;                                   # Upper limit of number of files per IP address
      for my $i(keys %i)
       {my $nc = $i{$i};                                 # Number of files processed on this ip - computed
        my $na = $userData->{ip}{$i};                    # Number of files processed on this ip - actually counted
        push @rc, ($nc >= $l and $nc <= $h) ? 1 : 0;     # 1 - in range, 0 - out of range
        push @ra, ($na >= $l and $na <= $h) ? 1 : 0;     # 1 - in range, 0 - out of range
       }
      ok @i == grep {$_} @ra;                            # Check each ip processed the expected number of files
      ok @i == grep {$_} @rc;
     }

    ok $userData->{files}{&fpe($d, qw(4 txt))} eq        # Check the computed MD5 sum for the specified file
       q(a87ff679a2f3e71d9181a67b7542122c);
   }
Test results of running on Amazon Web Services in parallel.
Parameter Description 1 $userData User data from primary instance instance or process 2 @results Results from each parallel instance or process
  my $N = 2001;                                          # Number of files to process
  my $options = q(region => q(us-east-2), profile=>q(fmc)); # Aws cli options
  my %options = eval "($options)";

  for my $dir(q(/home/phil/perl/cpan/DataTableText/lib/Data/Table/), # Folders we will need on aws
              q(/home/phil/.aws/))
   {awsParallelSpreadFolder($dir, %options);
   }

  my $d = temporaryFolder;                               # Create a temporary folder
  my $resultsFile = fpe($d, qw(results data));           # Save results in this temporary file

  if (my $r = execPerlOnRemote(join " ",                 # Execute some code on a server
    getCodeContext(\&awsParallelProcessFilesTestParallel), # Get code context of the sub we want to call.
    <<SESSIONLEADER))                                    # Launch code on session leader
  use Data::Table::Text qw(:all);

  my \$r = awsParallelProcessFiles                       # Process files on multiple L<Amazon Web Services|http://aws.amazon.com> instances in parallel
   ({file=>4, time=>timeStamp},                          # User data
    \\\&Data::Table::Text::awsParallelProcessFilesTestParallel, # Reference to code to execute in parallel on each session instance
    \\\&Data::Table::Text::awsParallelProcessFilesTestResults,  # Reference to code to execute in series to merge the results of each parallel computation
    [map {writeFile(fpe(q($d), \$_, qw(txt)), \$_)} 1..$N], # Files to process
    $options);                                           # Aws cli options as we will be running on Aws

  storeFile(q($resultsFile), \$r);                       # Save results in a file
  SESSIONLEADER

   {copyFileFromRemote($resultsFile);                    # Retrieve user data

    my $userData = retrieveFile($resultsFile);           # Recover user data
    my @i = awsParallelSecondaryIpAddresses(%options);   # Ip addresses of secondary instances
    my @I = keys $userData->{ip}->%*;
    is_deeply [sort @i], [sort @I];                      # Each secondary ip address was used

    ok $userData->{file}  == 4;                          # Prove we can pass data in and get it back
    ok $userData->{merge} == 1 + @i, 'ii';               # Number of merges

    my %f; my %i;                                        # Files processed on each ip
    for   my $i(sort keys $userData->{ipFile}->%*)       # Ip
     {for my $f(sort keys $userData->{ipFile}{$i}->%*)   # File
       {$f{fn($f)}++;                                    # Files processed
        $i{$i}++;                                        # Count files on each ip
       }
     }

    is_deeply \%f, {map {$_=>1} 1..$N};                  # Check each file was processed

    if (1)
     {my @rc; my @ra;                                    # Range of number of files processed on each ip - computed, actually counted
      my $l = $N/@i-1;                                   # Lower limit of number of files per IP address
      my $h = $N/@i+1;                                   # Upper limit of number of files per IP address
      for my $i(keys %i)
       {my $nc = $i{$i};                                 # Number of files processed on this ip - computed
        my $na = $userData->{ip}{$i};                    # Number of files processed on this ip - actually counted
        push @rc, ($nc >= $l and $nc <= $h) ? 1 : 0;     # 1 - in range, 0 - out of range
        push @ra, ($na >= $l and $na <= $h) ? 1 : 0;     # 1 - in range, 0 - out of range
       }
      ok @i == grep {$_} @ra;                            # Check each ip processed the expected number of files
      ok @i == grep {$_} @rc;
     }

    ok $userData->{files}{&fpe($d, qw(4 txt))} eq        # Check the computed MD5 sum for the specified file
       q(a87ff679a2f3e71d9181a67b7542122c);
   }
Return an S3 profile keyword from an S3 option set
Return an S3 --delete keyword from an S3 option set
Create a log entry showing progress and eta.
Parameter Description 1 $starter Starter 2 $finish 0 - start; 1 - finish
Average elapsed time spent by each process
Write to the log file if it is available.
Parameter Description 1 $starter Starter 2 @message Text to write to log file.
Wait for at least one process to finish and consolidate its results.
Count the number of elements in a square array
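Counting a square (array of arrays) structure is a single pass; an illustrative sketch, not the module's code:

```perl
sub countSquareArrayElements                      # Total number of elements across the sub arrays
 {my ($a) = @_;
  my $n = 0;
  $n += scalar @$_ for @$a;                       # Add the length of each row
  $n
 }

print countSquareArrayElements([[1,2],[3,4,5]]), "\n";   # 5
```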
Process items of known size in parallel using the specified number $N processes with the process each file is assigned to depending on the size of the file so that each process is loaded with approximately the same number of bytes of data in total from the files it processes.
Parameter Description 1 $N Number of processes 2 $parallel Parallel sub 3 $results Results sub 4 @sizes Array of [size; item] to process by size
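The size-balancing idea can be sketched with a greedy assignment: sort the items by descending size and always hand the next item to the least loaded process. This is an illustrative scheduler only, not the module's implementation:

```perl
sub assignBySize                                  # Partition [size, item] pairs over $N processes by byte load
 {my ($N, @sizes) = @_;
  my @load = (0) x $N;                            # Bytes assigned to each process so far
  my @bins = map {[]} 1..$N;                      # Items assigned to each process
  for my $s(sort {$$b[0] <=> $$a[0]} @sizes)      # Largest first gives a better balance
   {my ($min) = sort {$load[$a] <=> $load[$b]} 0..$N-1;   # Index of the least loaded process
    push @{$bins[$min]}, $$s[1];                  # Give the item to that process
    $load[$min] += $$s[0];                        # Account for its size
   }
  @bins
 }

my @bins = assignBySize(2, [10, q(a)], [9, q(b)], [1, q(c)]);
print scalar(@{$bins[0]}), q( ), scalar(@{$bins[1]}), "\n";   # 1 2
```

With two processes the 10-byte item goes to one process and the 9-byte and 1-byte items to the other, so both carry 10 bytes.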
Short names for some well known urls
Remove example markers from test code.
Parameter Description 1 $string String containing test line
Generate documentation for a method by calling the extractDocumentationFlags method in the package being documented, passing it the flags for a method and the name of the method. The called method should return the documentation to be inserted for the named method.
Parameter Description 1 $flags Flags 2 $perlModule File containing documentation 3 $package Package containing documentation 4 $name Name of method to be processed
Update the documentation in a $perlModule and display said documentation in a web browser.
Parameter Description 1 $perlModule File containing the code of the perl module
fpd is a synonym for filePathDir - Create a folder name from a list of names.
fpe is a synonym for filePathExt - Create a file name from a list of names the last of which is assumed to be the extension of the file name.
fpf is a synonym for filePath - Create a file name from a list of names.
owf is a synonym for overWriteFile - Write to a $file, after creating a path to the $file with makePath if necessary, a $string of Unicode content encoded as utf8.
temporaryDirectory is a synonym for temporaryFolder - Create a new, empty, temporary folder.
1 absFile - Return the name of the given file if it is a fully qualified file name, else return undef.
2 absFromAbsPlusRel - Absolute file from an absolute file $a plus a relative file $r.
3 addCertificate - Add a certificate to the current ssh session.
4 addLValueScalarMethods - Generate lvalue method scalar methods in the current package if they do not already exist.
5 appendFile - Append to $file a $string of Unicode content encoded with utf8, creating the $file first if necessary.
6 arrayProduct - Find the product of any strings that look like numbers in an array.
7 arraySum - Find the sum of any strings that look like numbers in an array.
8 arrayTimes - Multiply by $multiplier each element of the array @a and return as the result.
9 arrayToHash - Create a hash from an array
10 asciiToHexString - Encode an Ascii string as a string of hexadecimal digits.
11 assertPackageRefs - Confirm that the specified references are to the specified package
12 assertRef - Confirm that the specified references are to the package into which this routine has been exported.
13 awsCurrentAvailabilityZone - Get the availability zone of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server else return a blank string.
14 awsCurrentInstanceId - Get the instance id of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server else return a blank string.
15 awsCurrentInstanceType - Get the instance type of the Amazon Web Services server if we are running on an Amazon Web Services server else return a blank string.
16 awsCurrentIp - Get the ip address of the AWS server we are currently running on if we are running on an Amazon Web Services server else return a blank string.
17 awsCurrentLinuxSpotPrices - Return {instance type} = cheapest spot price in dollars per hour for the given region
18 awsCurrentRegion - Get the region of the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server else return a blank string.
19 awsEc2CreateImage - Create an image snap shot with the specified $name of the AWS server we are currently running on if we are running on an AWS server else return false.
20 awsEc2DescribeImages - Describe images available.
21 awsEc2DescribeInstances - Describe the Amazon Web Services instances running in a $region.
22 awsEc2DescribeInstancesGetIPAddresses - Return a hash of {instanceId => public ip address} for all running instances on Amazon Web Services with ip addresses.
23 awsEc2DescribeInstanceType - Return details of the specified instance type.
24 awsEc2DescribeSpotInstances - Return a hash {spot instance request => spot instance details} describing the status of active spot instances.
25 awsEc2FindImagesWithTagValue - Find images with a tag that matches the specified regular expression $value.
26 awsEc2InstanceIpAddress - Return the IP address of a named instance on Amazon Web Services else return undef.
27 awsEc2ReportSpotInstancePrices - Report the prices of all the spot instances whose type matches a regular expression $instanceTypeRe.
28 awsEc2RequestSpotInstances - Request spot instances as long as they can be started within the next minute.
29 awsEc2Tag - Tag an Ec2 resource with the supplied tags.
30 awsExecCli - Execute an AWS command and return its response.
31 awsExecCliJson - Execute an AWS command and decode the json so produced.
32 awsInstanceId - Create an instance-id from the specified %options
33 awsIp - Get ip address of server at Amazon Web Services.
34 awsMetaData - Get an item of meta data for the Amazon Web Services server we are currently running on if we are running on an Amazon Web Services server else return a blank string.
35 awsParallelGatherFolder - On Amazon Web Services: merges all the files in the specified $folder on each secondary instance to the corresponding folder on the primary instance in parallel.
36 awsParallelIpAddresses - Return the IP addresses of all the Amazon Web Services session instances.
37 awsParallelPrimaryInstanceId - Return the instance id of the primary instance.
38 awsParallelPrimaryIpAddress - Return the IP addresses of any primary instance on Amazon Web Services.
39 awsParallelProcessFiles - Process files in parallel across multiple Amazon Web Services instances if available or in series if not.
40 awsParallelProcessFilesTestParallel - Test running on Amazon Web Services in parallel.
41 awsParallelProcessFilesTestResults - Test results of running on Amazon Web Services in parallel.
42 awsParallelSecondaryIpAddresses - Return a list containing the IP addresses of any secondary instances on Amazon Web Services.
43 awsParallelSpreadFolder - On Amazon Web Services: copies a specified $folder from the primary instance, see: awsParallelPrimaryInstanceId, in parallel, to all the secondary instances in the session.
44 awsProfile - Create a profile keyword from the specified %options
45 awsRegion - Create a region keyword from the specified %options
46 awsTranslateText - Translate $text from English to a specified $language using AWS Translate with the specified global $options and return the translated string.
47 binModeAllUtf8 - Set STDOUT and STDERR to accept utf8 without complaint.
48 boldString - Convert alphanumerics in a string to bold.
49 boldStringUndo - Undo the conversion of alphanumerics in a string to bold.
50 call - Call the specified $sub in a separate child process, wait for it to complete, then copy back the named @our variables from the child process to the calling parent process effectively freeing any memory used during the call.
51 callSubInOverlappedParallel - Call the $child sub reference in parallel in a separate child process and ignore its results while calling the $parent sub reference in the parent process and returning its results.
52 callSubInParallel - Call a sub reference in parallel to avoid memory fragmentation and return its results.
53 checkFile - Return the name of the specified file if it exists, else confess the maximum extent of the path that does exist.
54 checkKeys - Check the keys in a hash conform to those $permitted.
55 childPids - Recursively find the pids of all the sub processes of a $process and all their sub processes and so on returning the specified pid and all its child pids as a list.
56 chooseStringAtRandom - Choose a string at random from the list of @strings supplied.
57 clearFolder - Remove all the files and folders under and including the specified $folder as long as the number of files to be removed is less than the specified $limitCount.
58 confirmHasCommandLineCommand - Check that the specified $cmd is present on the current system.
59 containingFolderName - The name of a folder containing a file
60 containingPowerOfTwo - Find log two of the lowest power of two greater than or equal to a number $n.
61 contains - Returns the indices at which an $item matches elements of the specified @array.
62 convertDocxToFodt - Convert a docx $inputFile file to a fodt $outputFile using unoconv which must not be running elsewhere at the time.
63 convertImageToJpx - Convert a $source image to a $target image in jpx format.
64 convertImageToJpx690 - Convert a $source image to a $target image in jpx format using versions of Imagemagick version 6.
65 convertUnicodeToXml - Convert a $string with Unicode codepoints that are not directly representable in Ascii into string that replaces these code points with their representation in Xml making the string usable in Xml documents.
66 copyBinaryFile - Copy the binary file $source to a file named $target and return the target file name.
67 copyBinaryFileMd5Normalized - Normalize the name of the specified $source file to the md5 sum of its content, retaining its current extension, while placing the original file name in a companion file if the companion file does not already exist.
68 copyBinaryFileMd5NormalizedCreate - Create a file in the specified $folder whose name is constructed from the md5 sum of the specified $content, whose content is $content, whose extension is $extension and which has a companion file with the same name minus the extension which contains the specified $companionContent.
69 copyBinaryFileMd5NormalizedGetCompanionContent - Return the original name of the specified $source file after it has been normalized via copyBinaryFileMd5Normalized or copyBinaryFileMd5NormalizedCreate or return undef if the corresponding companion file does not exist.
70 copyFile - Copy the $source file encoded in utf8 to the specified $target file in and return $target.
71 copyFileFromRemote - Copy the specified $file from the server whose ip address is specified by $ip or returned by awsIp.
72 copyFileMd5Normalized - Normalize the name of the specified $source file to the md5 sum of its content, retaining its current extension, while placing the original file name in a companion file if the companion file does not already exist.
73 copyFileMd5NormalizedCreate - Create a file in the specified $folder whose name is constructed from the md5 sum of the specified $content, whose content is $content, whose extension is $extension and which has a companion file with the same name minus the extension which contains the specified $companionContent.
74 copyFileMd5NormalizedDelete - Delete a normalized file and its companion file.
75 copyFileMd5NormalizedGetCompanionContent - Return the content of the companion file to the specified $source file after it has been normalized via copyFileMd5Normalized or copyFileMd5NormalizedCreate or return undef if the corresponding companion file does not exist.
76 copyFileMd5NormalizedName - Name a file using the GB Standard
77 copyFileToFolder - Copy the file named in $source to the specified $targetFolder/ or if $targetFolder/ is in fact a file into the folder containing this file and return the target file name.
78 copyFileToRemote - Copy the specified local $file to the server whose ip address is specified by $ip or returned by awsIp.
79 copyFolder - Copy the $source folder to the $target folder after clearing the $target folder.
80 copyFolderToRemote - Copy the specified local $Source folder to the corresponding remote folder on the server whose ip address is specified by $ip or returned by awsIp.
81 countFileExtensions - Return a hash which counts the file extensions in and below the folders in the specified list.
82 countFileTypes - Return a hash which counts, in parallel with a maximum number of processes: $maximumNumberOfProcesses, the results of applying the file command to each file in and under the specified @folders.
83 countOccurencesInString - Returns the number of occurrences in $inString of $searchFor.
84 countSquareArray - Count the number of elements in a square array
85 createEmptyFile - Create an empty file unless the file already exists and return the name of the file else confess if the file cannot be created.
86 currentDirectory - Get the current working directory.
87 currentDirectoryAbove - Get the path to the folder above the current working folder.
88 cutOutImagesInFodtFile - Cut out the images embedded in a fodt file, perhaps produced via convertDocxToFodt, placing them in the specified folder and replacing them in the source file with:
89 Data::Exchange::Service::check - Check that we are the current incarnation of the named service with details obtained from newServiceIncarnation.
90 Data::Table::Text::Starter::averageProcessTime - Average elapsed time spent by each process
91 Data::Table::Text::Starter::finish - Wait for all started processes to finish and return their results as an array.
92 Data::Table::Text::Starter::logEntry - Create a log entry showing progress and eta.
93 Data::Table::Text::Starter::say - Write to the log file if it is available.
94 Data::Table::Text::Starter::start - Start a new process to run the specified $sub.
95 Data::Table::Text::Starter::waitOne - Wait for at least one process to finish and consolidate its results.
96 dateStamp - Year-monthName-day
97 dateTimeStamp - Year-monthNumber-day at hours:minute:seconds
98 dateTimeStampName - Date time stamp without white space.
99 ddd - Log debug messages with a time stamp and originating file and line number.
100 decodeBase64 - Decode an Ascii $string in base 64.
101 decodeJson - Convert a Json $string to Perl.
102 deduplicateSequentialWordsInString - Remove sequentially duplicate words in a string
103 denormalizeFolderName - Remove any trailing folder separator from a folder name.
104 deSquareArray - Create a one dimensional array from a two dimensional array of arrays
105 detagString - Remove html or Xml tags from a string
106 docUserFlags - Generate documentation for a method by calling the extractDocumentationFlags method in the package being documented, passing it the flags for a method and the name of the method.
107 downloadGitHubPublicRepo - Get the contents of a public repo on GitHub and place them in a temporary folder whose name is returned to the caller or confess if no such repo exists.
108 downloadGitHubPublicRepoFile - Get the contents of a $user $repo $file from a public repo on GitHub and return them as a string.
109 dumpFile - Dump to a $file the referenced data $structure.
110 dumpGZipFile - Write to a $file a data $structure through gzip.
111 enclosedReversedString - Convert alphanumerics in a string to enclosed reversed alphanumerics.
112 enclosedReversedStringUndo - Undo the conversion of alphanumerics in a string to enclosed reversed alphanumerics.
113 enclosedString - Convert alphanumerics in a string to enclosed alphanumerics.
114 enclosedStringUndo - Undo the conversion of alphanumerics in a string to enclosed alphanumerics.
115 encodeBase64 - Encode an Ascii $string in base 64.
116 encodeJson - Convert a Perl $string to Json.
117 evalFile - Read a file containing Unicode content represented as utf8, eval the content, confess to any errors and then return any result with lvalue methods to access each hash element.
118 evalGZipFile - Read a file compressed with gzip containing Unicode content represented as utf8, eval the content, confess to any errors and then return any result with lvalue methods to access each hash element.
119 execPerlOnRemote - Execute some Perl $code on the server whose ip address is specified by $ip or returned by awsIp.
120 expandNewLinesInDocumentation - Expand new line markers in documentation: one marker expands to a single new line and another to two new lines.
121 expandWellKnownUrlsInDitaFormat - Expand short url names found in a string in the format L[url-name] in the L[Dita] xref format.
122 expandWellKnownUrlsInHtmlFormat - Expand short url names found in a string in the format L[url-name] using the html a tag.
123 expandWellKnownUrlsInHtmlFromPerl - Expand short url names found in a string in the format L[url-name] using the html a tag.
124 expandWellKnownUrlsInPerlFormat - Expand short url names found in a string in the format L<url-name> using the Perl POD syntax
125 expandWellKnownUrlsInPod2Html - Expand short url names found in a string in the format L[url-name] using the =begin html POD format.
126 extractCodeBlock - Extract the block of code delimited by $comment, starting at qq($comment-begin), ending at qq($comment-end) from the named $file else the current Perl program $0 and return it as a string or confess if this is not possible.
127 extractTest - Remove example markers from test code.
128 fe - Get the extension of a file name.
129 fff - Confess a message with a line position and a file that Geany will jump to if clicked on.
130 fileInWindowsFormat - Convert a Unix $file name to Windows format.
131 fileLargestSize - Return the largest $file.
132 fileList - Files that match a given search pattern interpreted by bsd_glob.
133 fileMd5Sum - Get the Md5 sum of the content of a $file.
134 fileModTime - Get the modified time of a $file as seconds since the epoch.
135 fileOutOfDate - Calls the specified sub $make for each source file that is missing and then again against the $target file if any of the @source files were missing or the $target file is older than any of the @source files or if the target does not exist.
136 filePath - Create a file name from a list of names.
137 filePathDir - Create a folder name from a list of names.
138 filePathExt - Create a file name from a list of names the last of which is assumed to be the extension of the file name.
139 fileSize - Get the size of a $file in bytes.
140 findAllFilesAndFolders - Find all the files and folders under a folder.
141 findDirs - Find all the folders under a $folder and optionally $filter the selected folders with a regular expression.
142 findFiles - Find all the files under a $folder and optionally $filter the selected files with a regular expression.
143 findFileWithExtension - Find the first file that exists with a path and name of $file and an extension drawn from @ext.
144 firstFileThatExists - Returns the name of the first file from @files that exists or undef if none of the named @files exist.
145 firstNChars - First N characters of a string.
146 flattenArrayAndHashValues - Flatten an array of scalars, array and hash references to make an array of scalars by flattening the array references and hash values.
147 fn - Remove the path and extension from a file name.
148 fne - Remove the path from a file name.
149 folderSize - Get the size of a $folder in bytes.
150 formatHtmlAndTextTables - Create text and html versions of a tabular report
151 formatHtmlAndTextTablesWaitPids - Wait on all table formatting pids to complete
152 formatHtmlTable - Format an array of arrays of scalars as an html table using the %options described in formatTableCheckKeys.
153 formatHtmlTablesIndex - Create an index of html reports.
154 formatString - Format the specified $string so it can be displayed in $width columns.
155 formatTable - Format various $data structures as a table with titles as specified by $columnTitles: either a reference to an array of column titles or a string each line of which contains the column title as the first word with the rest of the line describing that column.
156 formatTableA - Tabularize an array.
157 formatTableAA - Tabularize an array of arrays.
158 formatTableAH - Tabularize an array of hashes.
159 formatTableBasic - Tabularize an array of arrays of text.
160 formatTableCheckKeys - Options available for formatting tables
161 formatTableClearUpLeft - Blank identical column values up and left
162 formatTableH - Tabularize a hash.
163 formatTableHA - Tabularize a hash of arrays.
164 formatTableHH - Tabularize a hash of hashes.
165 formatTableMultiLine - Tabularize text that has new lines in it.
166 formattedTablesReport - Report of all the reports created.
167 fp - Get the path from a file name.
168 fpn - Remove the extension from a file name.
169 fullFileName - Full name of a file.
170 fullyQualifiedFile - Check whether a $file name is fully qualified or not and, optionally, whether it is fully qualified with a specified $prefix or not.
171 fullyQualifyFile - Return the fully qualified name of a file.
172 genHash - Return a $blessed hash with the specified $attributes accessible via lvalue method calls.
173 genLValueArrayMethods - Generate lvalue array methods in the current package.
174 genLValueHashMethods - Generate lvalue hash methods in the current package.
175 genLValueScalarMethods - Generate lvalue scalar methods in the current package. A method whose value has not yet been set will return a new scalar with value undef.
176 genLValueScalarMethodsWithDefaultValues - Generate lvalue scalar methods with default values in the current package.
177 getCodeContext - Recreate the code context for a referenced sub
178 getSubName - Returns the (package, name, file, line) of a perl $sub reference.
179 guidFromMd5 - Create a guid from an md5 hash.
180 guidFromString - Create a guid representation of the md5 sum of the content of a string.
181 hashifyFolderStructure - Hashify a list of file names to get the corresponding folder structure.
182 hexToAsciiString - Decode a string of hexadecimal digits as an Ascii string.
183 hostName - The name of the host we are running on.
184 htmlToc - Generate a table of contents for some html.
185 imageSize - Return (width, height) of an $image.
186 indentString - Indent lines contained in a string or formatted table by the specified string.
187 indexOfMax - Find the index of the maximum number in a list of numbers confessing to any ill defined values.
188 indexOfMin - Find the index of the minimum number in a list of numbers confessing to any ill defined values.
189 intersectionOfHashesAsArrays - Form the intersection of the specified hashes @h as one hash whose values are an array of corresponding values from each hash
190 intersectionOfHashKeys - Form the intersection of the keys of the specified hashes @h as one hash whose keys represent the intersection.
191 invertHashOfHashes - Invert a hash of hashes: given {a}{b} = c return {b}{a} = c.
192 ipAddressViaArp - Get the ip address of a server on the local network by hostname via arp
193 isBlank - Test whether a string is blank.
194 isFileUtf8 - Return the file name quoted if its contents are in utf8 else return undef
195 isSubInPackage - Test whether the specified $package contains the subroutine $sub.
196 javaPackage - Extract the package name from a java string or file.
197 javaPackageAsFileName - Extract the package name from a java string or file and convert it to a file name.
198 javaScriptExports - Extract the Javascript functions marked for export in a file or string.
199 keyCount - Count keys down to the specified level.
200 lll - Log messages with a time stamp and originating file and line number.
201 loadArrayArrayFromLines - Load an array of arrays from lines of text: each line is an array of words.
202 loadArrayFromLines - Load an array from lines of text in a string.
203 loadArrayHashFromLines - Load an array of hashes from lines of text: each line is a hash of words.
204 loadHash - Load the specified blessed $hash generated with genHash with %attributes.
205 loadHashArrayFromLines - Load a hash of arrays from lines of text: the first word of each line is the key, the remaining words are the array contents.
206 loadHashFromLines - Load a hash: first word of each line is the key and the rest is the value.
207 loadHashHashFromLines - Load a hash of hashes from lines of text: the first word of each line is the key, the remaining words are the sub hash contents.
208 makeDieConfess - Force die to confess where the death occurred
209 makePath - Make the path for the specified file name or folder on the local machine.
210 makePathRemote - Make the path for the specified $file or folder on the Amazon Web Services instance whose ip address is specified by $ip or returned by awsIp.
211 matchPath - Return the deepest folder that exists along a given file name path.
212 mathematicalBoldItalicString - Convert alphanumerics in a string to Unicode Mathematical Bold Italic.
213 mathematicalBoldItalicStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Bold Italic.
214 mathematicalBoldString - Convert alphanumerics in a string to Unicode Mathematical Bold.
215 mathematicalBoldStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Bold.
216 mathematicalMonoSpaceString - Convert alphanumerics in a string to Unicode Mathematical MonoSpace.
217 mathematicalMonoSpaceStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical MonoSpace.
218 mathematicalSansSerifBoldItalicString - Convert alphanumerics in a string to Unicode Mathematical Sans Serif Bold Italic.
219 mathematicalSansSerifBoldItalicStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Bold Italic.
220 mathematicalSansSerifBoldString - Convert alphanumerics in a string to Unicode Mathematical Sans Serif Bold.
221 mathematicalSansSerifBoldStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Bold.
222 mathematicalSansSerifItalicString - Convert alphanumerics in a string to Unicode Mathematical Sans Serif Italic.
223 mathematicalSansSerifItalicStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif Italic.
224 mathematicalSansSerifString - Convert alphanumerics in a string to Unicode Mathematical Sans Serif.
225 mathematicalSansSerifStringUndo - Undo the conversion of alphanumerics in a string to Unicode Mathematical Sans Serif.
226 max - Find the maximum number in a list of numbers confessing to any ill defined values.
227 maximumLineLength - Find the longest line in a $string.
228 md5FromGuid - Recover an md5 sum from a guid.
229 mergeFolder - Copy the $source folder into the $target folder retaining any existing files not replaced by copied files.
230 mergeFolderFromRemote - Merge the specified $Source folder from the corresponding remote folder on the server whose ip address is specified by $ip or returned by awsIp.
231 mergeHashesBySummingValues - Merge a list of hashes @h by summing their values
232 microSecondsSinceEpoch - Micro seconds since unix epoch.
233 min - Find the minimum number in a list of numbers confessing to any ill defined values.
234 mmm - Log messages with a differential time in milliseconds and originating file and line number.
235 moveFileNoClobber - Rename the $source file, which must exist, to the $target file but only if the $target file does not exist already.
236 moveFileWithClobber - Rename the $source file, which must exist, to the $target file, overwriting the $target file if it already exists.
237 nameFromFolder - Create a name from the last folder in the path of a file name.
238 nameFromString - Create a readable name from an arbitrary string of text.
239 nameFromStringRestrictedToTitle - Create a readable name from a string of text that might contain a title tag - fall back to nameFromString if that is not possible.
240 newProcessStarter - Create a new process starter with which to start parallel processes up to a specified $maximumNumberOfProcesses maximum number of parallel processes at a time, wait for all the started processes to finish and then optionally retrieve their saved results as an array from the folder named by $transferArea.
241 newServiceIncarnation - Create a new service incarnation to record the start up of a new instance of a service and return the description as a Data::Exchange::Service Definition hash.
242 newUdsr - Create a communicator - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
243 newUdsrClient - Create a new communications client - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
244 newUdsrServer - Create a communications server - a means to communicate between processes on the same machine via Udsr::read and Udsr::write.
245 numberOfCpus - Number of cpus scaled by an optional factor - but only if you have nproc.
246 numberOfLinesInFile - Return the number of lines in a file.
247 numberOfLinesInString - The number of lines in a string.
248 nws - Normalize white space in a string to make comparisons easier.
249 onAws - Returns 1 if we are on AWS else return 0.
250 onAwsPrimary - Return 1 if we are on Amazon Web Services and we are on the primary session instance as defined by awsParallelPrimaryInstanceId, return 0 if we are on a secondary session instance, else return undef if we are not on Amazon Web Services.
251 onAwsSecondary - Return 1 if we are on Amazon Web Services but we are not on the primary session instance as defined by awsParallelPrimaryInstanceId, return 0 if we are on the primary session instance, else return undef if we are not on Amazon Web Services.
252 overrideAndReabsorbMethods - Override methods down the list of @packages then reabsorb any unused methods back up the list of packages so that all the packages have the same methods as the last package with methods from packages mentioned earlier overriding methods from packages mentioned later.
253 overrideMethods - For each method, if it exists in package $from then export it to package $to replacing any existing method in $to, otherwise export the method from package $to to package $from in order to merge the behavior of the $from and $to packages with respect to the named methods, with duplicates resolved in favour of package $from.
254 overWriteBinaryFile - Write to $file, after creating a path to the file with makePath if necessary, the binary content in $string.
255 overWriteFile - Write to a $file, after creating a path to the $file with makePath if necessary, a $string of Unicode content encoded as utf8.
256 packBySize - Given $N buckets and a list @sizes of [size of file, name of file] pairs, pack the file names into the buckets so that each bucket contains approximately the same total file size.
257 pad - Pad the specified $string with blanks or the specified padding character to a multiple of the specified $length.
258 parseCommandLineArguments - Call the specified $sub after classifying the specified array of words in $args into positional and keyword parameters.
259 parseDitaRef - Parse a dita reference $ref into its components (file name, topic id, id) .
260 parseFileName - Parse a file name into (path, name, extension).
261 parseIntoWordsAndStrings - Parse a $string into words and quoted strings with an optional $limit on the number of words and strings to parse out
262 parseS3BucketAndFolderName - Parse an S3 bucket/folder name into a bucket and a folder name removing any initial s3://.
263 parseXmlDocType - Parse an Xml DOCTYPE and return a hash indicating its components
264 partitionStringsOnPrefixBySize - Partition a hash of strings and associated sizes into partitions with either a maximum size $maxSize or only one element; the hash %Sizes consisting of a mapping {string=>size}; with each partition being named with the shortest string prefix that identifies just the strings in that partition.
265 perlPackage - Extract the package name from a perl string or file.
266 powerOfTwo - Test whether a number $n is a power of two, return the power if it is else undef.
267 printQw - Print an array of words in qw() format.
268 processFilesInParallel - Process files in parallel using (8 * the number of CPUs) processes with the process each file is assigned to depending on the size of the file so that each process is loaded with approximately the same number of bytes of data in total from the files it processes.
269 processJavaFilesInParallel - Process java files of known size in parallel using (the number of CPUs) processes with the process each item is assigned to depending on the size of the java item so that each process is loaded with approximately the same number of bytes of data in total from the java files it processes.
270 processSizesInParallel - Process items of known size in parallel using (8 * the number of CPUs) processes with the process each item is assigned to depending on the size of the item so that each process is loaded with approximately the same number of bytes of data in total from the items it processes.
271 processSizesInParallelN - Process items of known size in parallel using the specified number $N processes with the process each file is assigned to depending on the size of the file so that each process is loaded with approximately the same number of bytes of data in total from the files it processes.
272 quoteFile - Quote a file name.
273 readBinaryFile - Read a binary file on the local machine.
274 readFile - Return the content of a file residing on the local machine interpreting the content of the file as utf8.
275 readFileFromRemote - Copy and read a $file from the remote machine whose ip address is specified by $ip or returned by awsIp and return the content of $file interpreted as utf8 .
276 readFiles - Read all the files in the specified list of folders into a hash.
277 readGZipFile - Read the specified file containing compressed Unicode content represented as utf8 through gzip.
278 readUtf16File - Read a file containing Unicode encoded in utf-16.
279 rectangularArray - Create a two dimensional rectangular array whose first dimension is $first from a one dimensional linear array.
280 rectangularArray2 - Create a two dimensional rectangular array whose second dimension is $second from a one dimensional linear array.
281 relFromAbsAgainstAbs - Relative file from one absolute file $a against another $b.
282 reloadHashes - Ensures that all the hashes within a tower of data structures have LValue methods to get and set their current keys.
283 reloadHashes2 - Ensures that all the hashes within a tower of data structures have LValue methods to get and set their current keys.
284 removeDuplicatePrefixes - Remove duplicated leading directory names from a file name.
285 removeFilePathsFromStructure - Remove all file paths from a specified $structure to make said $structure testable with "is_deeply" in Test::More.
286 removeFilePrefix - Removes a file $prefix from an array of @files.
287 renormalizeFolderName - Normalize a folder name by ensuring it has a single trailing directory separator.
288 replaceStringWithString - Replace all instances in $string of $source with $target
289 reportAttributes - Report the attributes present in a $sourceFile
290 reportAttributeSettings - Report the current values of the attribute methods in the calling file and optionally write the report to $reportFile.
291 reportExportableMethods - Report the exportable methods marked with #e in a $sourceFile
292 reportReplacableMethods - Report the replaceable methods marked with #r in a $sourceFile
293 reportSettings - Report the current values of parameterless subs in a $sourceFile that match \Asub\s+(\w+)\s*\{ and optionally write the report to $reportFile.
294 retrieveFile - Retrieve a $file created via Storable.
295 runInParallel - Process the elements of an array in parallel using a maximum of $maximumNumberOfProcesses processes.
296 runInSquareRootParallel - Process the elements of an array in square root parallel using a maximum of $maximumNumberOfProcesses processes.
297 s3Delete - Return an S3 --delete keyword from an S3 option set
298 s3DownloadFolder - Download a specified $folder on S3 to a $local folder using the specified %options if any.
299 s3FileExists - Return (name, size, date, time) for a $file that exists on S3 else () using the specified %options if any.
300 s3ListFilesAndSizes - Return {file=>size} for all the files in a specified $folderOrFile on S3 using the specified %options if any.
301 s3Profile - Return an S3 profile keyword from an S3 option set
302 s3ReadFile - Read from a $file on S3 and write the contents to a local file $local using the specified %options if any.
303 s3ReadString - Read from a $file on S3 and return the contents as a string using specified %options if any.
304 s3WriteFile - Write to a file $fileS3 on S3 the contents of a local file $fileLocal using the specified %options if any.
305 s3WriteString - Write to a $file on S3 the contents of $string using the specified %options if any.
306 s3ZipFolder - Zip the specified $source folder and write it to the named $target file on S3.
307 s3ZipFolders - Zip local folders and upload them to S3 in parallel.
308 saveAwsIp - Make the server at Amazon Web Services with the given IP address the default primary server as used by all the methods whose names end in r or Remote.
309 saveCodeToS3 - Save source code every $saveCodeEvery seconds by zipping folder $folder to zip file $zipFileName then saving this zip file in the specified S3 $bucket using any additional S3 parameters in $S3Parms.
310 saveSourceToS3 - Save source code.
311 searchDirectoryTreesForMatchingFiles - Search the specified directory trees for the files (not folders) that match the specified extensions.
312 setCombination - Count the elements in sets @s represented as arrays of strings and/or the keys of hashes
313 setFileExtension - Given a $file, change its extension to $extension.
314 setIntersection - Intersection of sets @s represented as arrays of strings and/or the keys of hashes
315 setIntersectionOverUnion - Returns the size of the intersection over the size of the union of one or more sets @s represented as arrays and/or hashes
316 setPackageSearchOrder - Set a package search order for methods requested in the current package via AUTOLOAD.
317 setPartitionOnIntersectionOverUnion - Partition, at a level of $confidence between 0 and 1, a set of sets @sets so that within each partition the setIntersectionOverUnion of any two sets in the partition is never less than the specified level of $confidence**2
318 setPartitionOnIntersectionOverUnionOfHashStringSets - Partition, at a level of $confidence between 0 and 1, a set of sets $hashSet represented by a hash, each hash value being a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2 and the partition entries are the hash keys of the string sets.
319 setPartitionOnIntersectionOverUnionOfHashStringSetsInParallel - Partition, at a level of $confidence between 0 and 1, a set of sets $hashSet represented by a hash, each hash value being a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2 and the partition entries are the hash keys of the string sets.
320 setPartitionOnIntersectionOverUnionOfSetsOfWords - Partition, at a level of $confidence between 0 and 1, a set of sets @sets of words so that within each partition the setIntersectionOverUnion of any two sets of words in the partition is never less than the specified $confidence**2
321 setPartitionOnIntersectionOverUnionOfStringSets - Partition, at a level of $confidence between 0 and 1, a set of sets @strings, each set represented by a string containing words and punctuation, each word possibly capitalized, so that within each partition the setPartitionOnIntersectionOverUnionOfSetsOfWords of any two sets of words in the partition is never less than the specified $confidence**2
322 setPermissionsForFile - Apply chmod to a $file to set its $permissions.
323 setUnion - Union of sets @s represented as arrays of strings and/or the keys of hashes
324 showHashes - Create a map of all the keys within all the hashes within a tower of data structures.
325 showHashes2 - Create a map of all the keys within all the hashes within a tower of data structures.
326 squareArray - Create a two dimensional square array from a one dimensional linear array.
327 startProcess - Start new processes while the number of child processes recorded in %$pids is less than the specified $maximum.
328 storeFile - Store into a $file, after creating a path to the file with makePath if necessary, a data $structure via Storable.
329 stringMd5Sum - Get the Md5 sum of a $string that might contain utf8 code points.
330 stringsAreNotEqual - Return the common start followed by the two non equal tails of two non equal strings or an empty list if the strings are equal.
331 subScriptString - Convert alphanumerics in a string to subscripts.
332 subScriptStringUndo - Undo the conversion of alphanumerics in a string to subscripts.
333 sumAbsAndRel - Combine zero or more absolute and relative names of @files starting at the current working folder to get an absolute file name.
334 summarizeColumn - Count the number of unique instances of each value that a column in a table assumes.
335 superScriptString - Convert alphanumerics in a string to superscripts.
336 superScriptStringUndo - Undo the conversion of alphanumerics in a string to superscripts.
337 swapFilePrefix - Swaps the start of a $file name from a $known name to a $new one if the file does in fact start with the $known name otherwise returns the original file name as it is.
338 swapFolderPrefix - Given a $file, swap the folder name of the $file from $known to $new if the file $file starts with the $known folder name else return the $file as it is.
339 syncFromS3InParallel - Download from S3 by using "aws s3 sync --exclude '*' --include '.
340 syncToS3InParallel - Upload to S3 by using "aws s3 sync --exclude '*' --include '.
341 temporaryFile - Create a new, empty, temporary file.
342 temporaryFolder - Create a new, empty, temporary folder.
343 timeStamp - hours:minute:seconds
344 trim - Remove any white space from the front and end of a string.
345 Udsr::kill - Kill a communications server.
346 Udsr::read - Read a message from the newUdsrServer or the newUdsrClient.
347 Udsr::webUser - Create a systemd installed server that processes http requests using a specified userid.
348 Udsr::write - Write a communications message to the newUdsrServer or the newUdsrClient.
349 unionOfHashesAsArrays - Form the union of the specified hashes @h as one hash whose values are an array of corresponding values from each hash.
350 unionOfHashKeys - Form the union of the keys of the specified hashes @h as one hash whose keys represent the union.
351 uniqueNameFromFile - Create a unique name from a file name and the md5 sum of its content
352 updateDocumentation - Update documentation for a Perl module from the comments in its source code.
353 updatePerlModuleDocumentation - Update the documentation in a $perlModule and display said documentation in a web browser.
354 userId - Get or confirm the userid we are currently running under.
355 versionCode - YYYYmmdd-HHMMSS
356 versionCodeDashed - YYYY-mm-dd-HH:MM:SS
357 waitForAllStartedProcessesToFinish - Wait until all the processes started by startProcess have finished.
358 wellKnownUrls - Short names for some well known urls
359 writeBinaryFile - Write to a new $file, after creating a path to the file with makePath if necessary, the binary content in $string.
360 writeFile - Write to a new $file, after creating a path to the $file with makePath if necessary, a $string of Unicode content encoded as utf8.
361 writeFiles - Write the values of a $hash reference into files identified by the key of each value using overWriteFile optionally swapping the prefix of each file from $old to $new.
362 writeFileToRemote - Write to a new $file, after creating a path to the file with makePath if necessary, a $string of Unicode content encoded as utf8 then copy the $file to the remote server whose ip address is specified by $ip or returned by awsIp.
363 writeGZipFile - Write to a $file, after creating a path to the file with makePath if necessary, through gzip a $string whose content is encoded as utf8.
364 writeStructureTest - Write a test for a data $structure with file names in it.
365 wwwDecode - Percent decode a url $string per: https://en.
366 wwwEncode - Percent encode a url per: https://en.
367 xxx - Execute a shell command optionally checking its response.
368 xxxr - Execute a command $cmd via bash on the server whose ip address is specified by $ip or returned by awsIp.
369 yyy - Execute a block of shell commands line by line after removing comments - stop if there is a non zero return code from any command.
370 zzz - Execute lines of commands after replacing new lines with && then check that the pipeline execution results in a return code of zero and that the execution results match the optional regular expression if one has been supplied; confess() to an error if either check fails.
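The central call in the index above is formatTable. The following minimal sketch mirrors the array-of-arrays example from the synopsis; it assumes Data::Table::Text has been installed from CPAN:

```perl
#!/usr/bin/perl
# Format an array of arrays as a table with column titles, as in the
# synopsis. Assumes Data::Table::Text is installed from CPAN.
use strict;
use warnings;
use Data::Table::Text qw(formatTable);

my $table = formatTable
 ([[qw(A   B   C  )],                # Data: one inner array per row
   [qw(AA  BB  CC )],
   [qw(AAA BBB CCC)],
   [qw(1   22  333)]],
  [qw(aa bb cc)]);                   # Column titles

print $table;                        # Rows are numbered automatically
```

The second parameter may instead be a string whose lines carry the column title as the first word, with the rest of each line describing that column, as noted in the formatTable entry above.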
This module is written in 100% Pure Perl and, thus, it is easy to read, comprehend, use, modify and install via cpan:
sudo cpan install Data::Table::Text
philiprbrenan@gmail.com
http://www.appaapps.com
Copyright (c) 2016-2019 Philip R Brenan.
This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
Thanks to the following people for their help with this module:
Testing on Windows
To install Data::Table::Text, copy and paste the appropriate command into your terminal.
cpanm
cpanm Data::Table::Text
CPAN shell
perl -MCPAN -e shell
install Data::Table::Text
For more information on module installation, please visit the detailed CPAN module installation guide.