
NAME

td - Manipulate table data

VERSION

This document describes version 0.111 of td (from Perl distribution App-td), released on 2023-02-11.

SYNOPSIS

td --help (or -h, -?)

td --version (or -v)

td [--case-insensitive|-i] [--detail|-l|--no-detail|--nodetail] [(--exclude-column=str)+|--exclude-columns-json=json|(-E=str)+] [--format=name|--json] [(--include-column=str)+|--include-columns-json=json|(-I=str)+] [--lines=str|-n=str] [--(no)naked-res] [--no-header-column] [--page-result[=program]|--view-result[=program]] [--repeated|-r|--no-repeated|--norepeated] [--weight-column=str] -- <action> [argv] ...

DESCRIPTION

What is td?

td receives table data from standard input and performs an action on it. It has functionality similar to some Unix commands like head, tail, wc, cut, sort except that it operates on table rows/columns instead of lines/characters. This is convenient to use with CLI scripts that output table data.

What is table data?

Table data is JSON-encoded data in one of these forms: hos (hash of scalars, viewed as a two-column table whose columns are key and value), aos (array of scalars, viewed as a one-column table whose column is elem), aoaos (array of arrays of scalars), or aohos (array of hashes of scalars).
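As an illustration, here is a small made-up example of each form (the data values are hypothetical):

```
hos   : {"linux": 1, "freebsd": 2}
aos   : ["linux", "freebsd"]
aoaos : [["linux", 1], ["freebsd", 2]]
aohos : [{"name": "linux", "id": 1}, {"name": "freebsd", "id": 2}]
```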

The input can also be an enveloped table data, where the envelope is an array: [status, message, content, meta] and content is the actual table data. This kind of data is produced by Perinci::CmdLine-based scripts and can contain more detailed table specification in the meta hash, which td can parse.
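For example, an enveloped aohos result might look like the following (the meta content shown here is illustrative; the exact keys depend on the producing script):

```
[200, "OK",
 [{"name": "linux"}, {"name": "freebsd"}],
 {"table.fields": ["name"]}]
```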

What scripts/modules output table data?

CLI scripts written using the Perinci::CmdLine framework output enveloped table data. There are hundreds of such scripts on CPAN. Some examples include: lcpan (from App::lcpan), pmlist (from App::PMUtils), and bencher (from Bencher).

TableData::* modules contain table data. They can easily be output to CLI using the tabledata utility (from App::TableDataUtils).

CSV output from any module/script can be easily converted to table data using the csv2td utility:

 % csv2td YOUR.csv | td ...
 % program-that-outputs-csv | csv2td - | td ...

Table data can also be converted from several other formats e.g. JSON, YAML, XLS/XLSX/ODS.

What scripts/modules accept table data?

This td script, for one, accepts table data.

If a module/script expects CSV, you can feed it table data by first converting the table data to CSV using the td2csv utility.

Several other formats can also be converted to table data, e.g. JSON, YAML, XLS/XLSX/ODS.

Using td

First you might want to use the info action to see if the input is a table data:

 % osnames -l --json | td info

If the input is not valid JSON, a JSON parse error will be displayed. If the input is valid JSON but not table data, another error will be displayed. Otherwise, information about the table will be displayed (form, number of columns, column names, number of rows, and so on).
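As an illustration only (this is a simplified sketch, not td's actual implementation), the kind of classification that info performs can be written as:

```python
import json

def classify(data):
    # An enveloped result looks like [status, message, content, meta];
    # unwrap it so we classify the actual table data.
    if (isinstance(data, list) and len(data) >= 3
            and isinstance(data[0], int) and isinstance(data[1], str)):
        data = data[2]
    if isinstance(data, dict):
        return "hos"     # hash of scalars: two-column key/value table
    if isinstance(data, list):
        if data and all(isinstance(r, list) for r in data):
            return "aoaos"
        if data and all(isinstance(r, dict) for r in data):
            return "aohos"
        return "aos"     # array of scalars: one-column table
    return "not table data"

print(classify(json.loads('[200,"OK",[{"name":"linux"},{"name":"freebsd"}]]')))
# prints "aohos"
```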

Next, you can use these actions:

 # List available actions
 % td actions
 
 # Convert table data (which might be hash, aos, or aohos) to aoaos form
 % list-files -l --json | td as-aoaos
 
 # Convert table data (which might be hash, aos, or aoaos) to aohos form
 % list-files -l --json | td as-aohos
 
 # Display table data on the browser using datatables (to allow interactive sorting and filtering)
 % osnames -l | td cat --format html+datatables
 
 # Convert table data to CSV
 % list-files -l --json | td as-csv
 
 # Calculate arithmetic average of numeric columns
 % list-files -l --json | td avg
 
 # Append a row at the end containing arithmetic average of numeric columns
 % list-files -l --json | td avg-row
 
 # Count number of columns
 % osnames -l --json | td colcount
 
 # Append a single-column row at the end containing number of columns
 % osnames -l --json | td colcount-row
 
 # Return the column names only
 % lcpan related-mods Perinci::CmdLine | td colnames
 
 # Append a row containing column names
 % lcpan related-mods Perinci::CmdLine | td colnames-row
 
 # Only show first 5 rows
 % osnames -l --json | td head -n5
 
 # Show all but the last 5 rows
 % osnames -l --json | td head -n -5
 
 # Check if input is table data and show information about the table
 % osnames -l --json | td info
 
 # Count number of rows
 % osnames -l --json | td rowcount
 % osnames -l --json | td wc            ;# shorter alias
 
 # Append a single-column row containing row count
 % osnames -l --json | td rowcount-row
 % osnames -l --json | td wc-row        ;# shorter alias
 
 # Add a row number column (1, 2, 3, ...)
 % list-files -l --json | td rownum-col
 
 # Select some columns
 % osnames -l --json | td select value description
 
 # Select all columns but some
 % osnames -l --json | td select '*' -E value -E description
 
 # Return the rows in a random order
 % osnames -l --json | td shuf
 
 # Pick 5 random rows from input
 % osnames -l --json | td shuf -n5
 % osnames -l --json | td pick -n5  ;# synonym for 'shuf'
 
 # Sort by column(s) (add a "-" prefix for descending order)
 % osnames -l --json | td sort value tags
 % osnames -l --json | td sort -- -value
 
 # Return sum of all numeric columns
 % list-files -l --json | td sum
 
 # Append a sum row
 % list-files -l --json | td sum-row
 
 # Only show last 5 rows
 % osnames -l --json | td tail -n5
 
 # Show rows from the row 5 onwards
 % osnames -l --json | td tail -n +5
 
 # Remove adjacent duplicate rows:
 % command ... | td uniq
 % command ... | td uniq -i ;# case-insensitive
 % command ... | td uniq --repeated ;# only shows the duplicate rows
 % command ... | td uniq -i C1 -i C2 ;# only use columns C1 & C2 to check uniqueness
 % command ... | td uniq -E C5 -E C6 ;# use all columns but C5 & C6 to check uniqueness
 
 # Remove non-adjacent duplicate rows:
 % command ... | td nauniq
 % command ... | td nauniq -i ;# case-insensitive
 % command ... | td nauniq --repeated ;# only shows the duplicate rows
 % command ... | td nauniq -i C1 -i C2 ;# only use columns C1 & C2 to check uniqueness
 % command ... | td nauniq -E C5 -E C6 ;# use all columns but C5 & C6 to check uniqueness
 
 # Transpose table (make first column of rows as column names in the
 # transposed table)
 
 % osnames -l --json | td transpose
 
 # Transpose table (make columns named 'row1', 'row2', 'row3', ... in the
 # transposed table)
 
 % osnames -l --json | td transpose --no-header-column
 
 # Use Perl code to filter rows. Perl code gets row in $row or $_
 # (scalar/aos/hos) or $rowhash (always a hos) or $rowarray (always aos).
 # There are also $rownum (integer, starts at 0) and $td (table data object).
 # Perl code is eval'ed in the 'main' package with strict/warnings turned
 # off. The example below selects videos that are larger than 480p.
 
 % media-info *.mp4 | td grep 'use List::Util qw(min); min($_->{video_height}, $_->{video_width}) > 480'
 
 # Use Perl code to filter columns. Perl code gets the column name in
 # $colname or $_. There's also $colidx (column index, from 1) and $td (table
 # data object). If the table data form is 'hash' or 'aos', it will be
 # transformed into 'aoaos'. The example below only selects even-numbered
 # columns that match /col/i. Note that most of the time 'td select' is
 # better; but when you have a lot of columns and want to select them
 # programmatically, grep-col is handy.
 
 % somecd --json | td grep-col '$colidx % 2 == 0 && /col/i'
 
 # Use Perl code to transform row. Perl code gets row in $row or $_
 # (scalar/hash/array) and is supposed to return the new row. As in 'grep',
 # $rowhash, $rowarray, $rownum, $td are also available as helper. The
 # example below adds a field called 'is_landscape'.
 
 % media-info *.jpg | td map '$_->{is_landscape} = $_->{video_height} < $_->{video_width} ? 1:0; $_'
 
 # Use Perl code to sort rows. The sorter code gets rows in $a & $b or $_[0]
 # & $_[1] (hash/array). Sorter code, like in Perl's standard sort(), is
 # expected to return -1/0/1. The example below sorts videos by height, then
 # by width, both in descending order.
 
 % media-info *.mp4 | td psort '$b->{video_height} <=> $a->{video_height} || $b->{video_width} <=> $a->{video_width}'
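The difference between uniq (adjacent duplicates only, like the Unix uniq command) and nauniq (duplicates anywhere in the input) can be sketched as follows; this is an illustration, not td's implementation:

```python
def uniq(rows):
    # Drop a row only when it equals the immediately preceding row.
    out = []
    for row in rows:
        if not out or row != out[-1]:
            out.append(row)
    return out

def nauniq(rows):
    # Drop a row when an equal row appeared anywhere earlier.
    seen, out = set(), []
    for row in rows:
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out

rows = [["a", 1], ["a", 1], ["b", 2], ["a", 1]]
print(uniq(rows))    # [['a', 1], ['b', 2], ['a', 1]] -- last duplicate is not adjacent
print(nauniq(rows))  # [['a', 1], ['b', 2]]
```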

OPTIONS

* marks required options.

Main options

--action=s*

Action to perform on input table.

Valid values:

 ["actions","as-aoaos","as-aohos","as-csv","avg","avg-row","cat","colcount","colcount-row","colnames","colnames-row","grep","grep-col","grep-row","head","info","map","map-row","nauniq","pick","psort","rowcount","rowcount-row","rownum-col","select","shuf","sort","sum","sum-row","tail","transpose","uniq","wc","wc-row"]

Can also be specified as the 1st command-line argument.

--argv-json=s

Arguments (JSON-encoded).

See --argv.

Can also be specified as the 2nd command-line argument and onwards.

--argv=s@

Arguments.

Default value:

 []

Can also be specified as the 2nd command-line argument and onwards.

Can be specified multiple times.

Actions action options

--detail, -l

(No description)

Head action options

--lines=s, -n

(No description)

Nauniq action options

--case-insensitive, -i

(No description)

--exclude-column=s@, -E

(No description)

Can be specified multiple times.

--exclude-columns-json=s

See --exclude-column.

--include-column=s@, -I

(No description)

Can be specified multiple times.

--include-columns-json=s

See --include-column.

--repeated, -r

Allow/show duplicates.

For the shuf/pick actions, setting this option means sampling with replacement, which means a single row can be sampled/picked multiple times. The default is to sample without replacement.

For the uniq/nauniq actions, setting this option instructs td to return the duplicate rows instead of the unique rows.

Output options

--format=s

Choose output format, e.g. json, text.

Default value:

 undef

Output can be displayed in multiple formats, and a suitable default format is chosen depending on the application and/or whether the output destination is an interactive terminal (i.e. whether the output is being piped). This option specifically chooses an output format.

--json

Set output format to json.

--naked-res

When outputting as JSON, strip the result envelope.

Default value:

 0

By default, when outputting as JSON, the full enveloped result is returned, e.g.:

 [200,"OK",[1,2,3],{"func.extra":4}]

The reason is so you can get the status (1st element), the status message (2nd element), as well as result metadata/extra result (4th element) instead of just the result (3rd element). However, sometimes you want just the result, e.g. when you want to pipe the result for further post-processing. In this case you can use --naked-res so you just get:

 [1,2,3]

--page-result

Filter output through a pager.

This option will pipe the output to the specified pager program. If a pager program is not specified, a suitable default (e.g. less) is chosen.

--view-result

View output using a viewer.

This option will first save the output to a temporary file, then open a viewer program to view it. If a viewer program is not specified, a suitable default (e.g. the browser) is chosen.

Pick action options

--lines=s, -n

(No description)

--repeated, -r

Allow/show duplicates.

For the shuf/pick actions, setting this option means sampling with replacement, which means a single row can be sampled/picked multiple times. The default is to sample without replacement.

For the uniq/nauniq actions, setting this option instructs td to return the duplicate rows instead of the unique rows.

--weight-column=s

Select a column that contains the weights.
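Conceptually, weighted picking works like the following sketch (illustrative only; the column name w and the data are made up, and this is not td's implementation). With --repeated, sampling is done with replacement:

```python
import random

rows = [{"name": "a", "w": 1}, {"name": "b", "w": 9}]
weights = [r["w"] for r in rows]

# Sampling *with* replacement (analogous to pick --repeated): a row may
# be picked more than once, with probability proportional to its weight.
picked = random.choices(rows, weights=weights, k=5)
print(picked)
```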

Select action options

--exclude-column=s@, -E

(No description)

Can be specified multiple times.

--exclude-columns-json=s

See --exclude-column.

--include-column=s@, -I

(No description)

Can be specified multiple times.

--include-columns-json=s

See --include-column.

Shuf action options

--repeated, -r

Allow/show duplicates.

For the shuf/pick actions, setting this option means sampling with replacement, which means a single row can be sampled/picked multiple times. The default is to sample without replacement.

For the uniq/nauniq actions, setting this option instructs td to return the duplicate rows instead of the unique rows.

--weight-column=s

Select a column that contains the weights.

Tail action options

--lines=s, -n

(No description)

Transpose action options

--no-header-column

Don't use the first column of each row as the column names of the transposed table; instead, create columns named 'row1', 'row2', ....

Uniq action options

--case-insensitive, -i

(No description)

--exclude-column=s@, -E

(No description)

Can be specified multiple times.

--exclude-columns-json=s

See --exclude-column.

--include-column=s@, -I

(No description)

Can be specified multiple times.

--include-columns-json=s

See --include-column.

--repeated, -r

Allow/show duplicates.

For the shuf/pick actions, setting this option means sampling with replacement, which means a single row can be sampled/picked multiple times. The default is to sample without replacement.

For the uniq/nauniq actions, setting this option instructs td to return the duplicate rows instead of the unique rows.

Other options

--help, -h, -?

Display help message and exit.

--version, -v

Display program's version and exit.

COMPLETION

This script has shell tab completion capability with support for several shells.

bash

To activate bash completion for this script, put:

 complete -C td td

in your bash startup file (e.g. ~/.bashrc). Your next shell session will then recognize tab completion for the command. Alternatively, you can execute the line above directly in your shell to activate completion immediately.

It is recommended, however, that you install modules using cpanm-shcompgen, which activates shell completion for installed scripts immediately.

tcsh

To activate tcsh completion for this script, put:

 complete td 'p/*/`td`/'

in your tcsh startup file (e.g. ~/.tcshrc). Your next shell session will then recognize tab completion for the command. Alternatively, you can execute the line above directly in your shell to activate completion immediately.

It is also recommended to install shcompgen (see above).

other shells

For fish and zsh, install shcompgen as described above.

HOMEPAGE

Please visit the project's homepage at https://metacpan.org/release/App-td.

SOURCE

Source repository is at https://github.com/perlancar/perl-App-td.

AUTHOR

perlancar <perlancar@cpan.org>

CONTRIBUTING

To contribute, you can send patches by email/via RT, or send pull requests on GitHub.

Most of the time, you don't need to build the distribution yourself. You can simply modify the code, then test via:

 % prove -l

If you want to build the distribution (e.g. to try to install it locally on your system), you can install Dist::Zilla, Dist::Zilla::PluginBundle::Author::PERLANCAR, Pod::Weaver::PluginBundle::Author::PERLANCAR, and sometimes one or two other Dist::Zilla and/or Pod::Weaver plugins. Any additional steps required beyond that are considered a bug and can be reported to me.

COPYRIGHT AND LICENSE

This software is copyright (c) 2023, 2022, 2021, 2020, 2019, 2017, 2016, 2015 by perlancar <perlancar@cpan.org>.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.

BUGS

Please report any bugs or feature requests on the bugtracker website https://rt.cpan.org/Public/Dist/Display.html?Name=App-td

When submitting a bug or request, please include a test-file or a patch to an existing test-file that illustrates the bug or desired feature.