MarpaX::Grammar::Parser - Converts a Marpa grammar into a tree using Tree::DAG_Node
	use MarpaX::Grammar::Parser;

	my(%option) =
	(	# Inputs:
		marpa_bnf_file   => 'share/metag.bnf',
		user_bnf_file    => 'share/stringparser.bnf',
		# Outputs:
		cooked_tree_file => 'share/stringparser.cooked.tree',
		raw_tree_file    => 'share/stringparser.raw.tree',
	);

	MarpaX::Grammar::Parser -> new(%option) -> run;
For more help, run:
perl scripts/bnf2tree.pl -h
See share/*.bnf for input files and share/*.tree for output files.
Installation includes copying all files from the share/ directory into a directory chosen by File::ShareDir. Run scripts/find.grammars.pl to display the name of that directory.
MarpaX::Grammar::Parser uses Marpa::R2 to convert a user's BNF into a tree of Marpa-style attributes, (see "raw_tree()"), and then post-processes that (see "compress_tree()") to create another tree, this time containing just the original grammar (see "cooked_tree()").
The nature of these trees is discussed in the "FAQ". The trees are managed by Tree::DAG_Node.
Lastly, the major purpose of the cooked tree is to serve as input to MarpaX::Grammar::GraphViz2, which graphs such cooked trees. That module has its own demo page.
Install MarpaX::Grammar::Parser as you would for any Perl module:
Perl
Run:
cpanm MarpaX::Grammar::Parser
or run:
sudo cpan MarpaX::Grammar::Parser
or unpack the distro, and then either:
	perl Build.PL
	./Build
	./Build test
	sudo ./Build install
or:
	perl Makefile.PL
	make (or dmake or nmake)
	make test
	make install
new() is called as:

	my($parser) = MarpaX::Grammar::Parser -> new(k1 => v1, k2 => v2, ...)

It returns a new object of type MarpaX::Grammar::Parser.
Key-value pairs accepted in the parameter list (see also the corresponding methods [e.g. "marpa_bnf_file([$bnf_file_name])"]):
Include (1) or exclude (0) attributes in the tree file(s) output.
Default: 0.
The name of the text file to write containing the grammar as a cooked tree.
If '', the file is not written.
Default: ''.
Note: The bind_attributes option/method affects the output.
By default, an object of type Log::Handler is created which prints to STDOUT.
See maxlevel and minlevel below.
Set logger to '' (the empty string) to stop a logger being created.
Default: undef.
Specify the name of Marpa's own BNF file. This distro ships it as share/metag.bnf.
This option is mandatory.
This option is only used if this module creates an object of type Log::Handler.
See Log::Handler::Levels.
Nothing is printed by default.
Default: 'notice'.
This option affects Log::Handler objects.
See the Log::Handler::Levels docs.
Default: 'error'.
No lower levels are used.
The name of the text file to write containing the grammar as a raw tree.
Specify the name of the file containing your Marpa::R2-style grammar.
See share/stringparser.bnf for a sample.
Here, the [] indicate an optional parameter.
Get or set the option which includes (1) or excludes (0) node attributes from the output cooked_tree_file and raw_tree_file.
Note: bind_attributes is a parameter to new().
Called automatically by "run()".
Converts the raw tree into the cooked tree.
Output is the tree returned by "cooked_tree()".
Returns the root node, of type Tree::DAG_Node, of the cooked tree of items in the user's grammar.
By cooked tree, I mean as post-processed from the raw tree so as to include just the original user's BNF tokens.
The cooked tree is optionally written to the file name given by "cooked_tree_file([$output_file_name])".
The nature of this tree is discussed in the "FAQ".
See also "raw_tree()".
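Since the returned root is a Tree::DAG_Node object, the cooked tree can be traversed with that module's standard methods. A minimal sketch using walk_down() (the 'token' attribute key matches the sample trees shown in the FAQ, but verify the exact attribute layout against your own output):

```perl
use strict;
use warnings;

use MarpaX::Grammar::Parser;

my($parser) = MarpaX::Grammar::Parser -> new
(
	marpa_bnf_file => 'share/metag.bnf',
	user_bnf_file  => 'share/stringparser.bnf',
);

$parser -> run;

# Walk the cooked tree, printing each node's name and its 'token' attribute.

$parser -> cooked_tree -> walk_down
({
	callback => sub
	{
		my($node, $options) = @_;
		my($attributes)     = $node -> attributes;

		printf "%s%s. Attributes: {token => \"%s\"}\n",
			'    ' x $$options{_depth},
			$node -> name,
			$$attributes{token} // '';

		return 1; # Keep walking.
	},
	_depth => 0,
});
```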
Get or set the name of the file to which the cooked tree form of the user's grammar will be written.
If no output file is supplied, nothing is written.
See share/stringparser.cooked.tree for the output of post-processing Marpa's analysis of share/stringparser.bnf.
This latter file is the grammar used in MarpaX::Demo::StringParser.
Note: cooked_tree_file is a parameter to new().
Calls $self -> logger -> log($level => $s) if ($self -> logger).
Get or set the logger object.
To disable logging, just set logger to the empty string.
Note: logger is a parameter to new().
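To supply your own logger rather than the default, pass any object with a log() method. A sketch using Log::Handler (the maxlevel, minlevel and message_layout values here are illustrative, not required settings):

```perl
use strict;
use warnings;

use Log::Handler;

use MarpaX::Grammar::Parser;

my($logger) = Log::Handler -> new;

$logger -> add
(
	screen =>
	{
		maxlevel       => 'debug', # Log everything down to the debug level.
		minlevel       => 'error',
		message_layout => '%m',
	}
);

my($parser) = MarpaX::Grammar::Parser -> new
(
	logger         => $logger,
	marpa_bnf_file => 'share/metag.bnf',
	user_bnf_file  => 'share/stringparser.bnf',
);
```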
Get or set the name of the file to read Marpa's grammar from.
Note: marpa_bnf_file is a parameter to new().
marpa_bnf_file
Get or set the value used by the logger object.
This option is only used if an object of type Log::Handler is created. See Log::Handler::Levels.
Note: maxlevel is a parameter to new().
Note: minlevel is a parameter to new().
The constructor. See "Constructor and Initialization".
Returns the root node, of type Tree::DAG_Node, of the raw tree of items in the user's grammar.
By raw tree, I mean as derived directly from Marpa.
The raw tree is optionally written to the file name given by "raw_tree_file([$output_file_name])".
See also "cooked_tree()".
Get or set the name of the file to which the raw tree form of the user's grammar will be written.
See share/stringparser.raw.tree for the output of Marpa's analysis of share/stringparser.bnf.
Note: raw_tree_file is a parameter to new().
The method which does all the work.
See "Synopsis" and scripts/bnf2tree.pl for sample code.
run() returns 0 for success and 1 for failure.
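Since run() returns 0 for success, a caller can pass its return value straight to exit. A minimal sketch:

```perl
use strict;
use warnings;

use MarpaX::Grammar::Parser;

my($parser) = MarpaX::Grammar::Parser -> new
(
	marpa_bnf_file   => 'share/metag.bnf',
	user_bnf_file    => 'share/stringparser.bnf',
	cooked_tree_file => 'stringparser.cooked.tree',
	raw_tree_file    => 'stringparser.raw.tree',
);

exit $parser -> run;
```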
Get or set the name of the file to read the user's grammar's BNF from. The whole file is slurped in as a single string.
See share/stringparser.bnf for a sample. It is the grammar used in MarpaX::Demo::StringParser.
Note: user_bnf_file is a parameter to new().
user_bnf_file
This is part of MarpaX::Languages::C::AST, by Jean-Damien Durand. It's 1,883 lines long.
The outputs are share/c.ast.cooked.tree and share/c.ast.raw.tree.
This is the output from post-processing Marpa's analysis of share/c.ast.bnf.
The command to generate this file is:
scripts/bnf2tree.sh c.ast
This is the output from processing Marpa's analysis of share/c.ast.bnf. It's 86,057 lines long, which indicates the complexity of Jean-Damien's grammar for C.
It is part of MarpaX::Demo::JSONParser, written as a gist by Peter Stuifzand.
See https://gist.github.com/pstuifzand/4447349.
The command to process this file is:
scripts/bnf2tree.sh json.1
The outputs are share/json.1.cooked.tree and share/json.1.raw.tree.
It also is part of MarpaX::Demo::JSONParser, written by Jeffrey Kegler as a reply to the gist above from Peter.
scripts/bnf2tree.sh json.2
The outputs are share/json.2.cooked.tree and share/json.2.raw.tree.
This is yet another JSON grammar written by Jeffrey Kegler.
scripts/bnf2tree.sh json.3
The outputs are share/json.3.cooked.tree and share/json.3.raw.tree.
This is a copy of Marpa::R2's BNF. That is, it's the file which Marpa uses to validate both its own metag.bnf (self-reflexively), and any user's BNF file.
See "marpa_bnf_file([$bnf_file_name])" above.
scripts/bnf2tree.sh metag
The outputs are share/metag.cooked.tree and share/metag.raw.tree.
This BNF was extracted from MarpaX::Demo::SampleScripts's examples/ambiguous.grammar.01.pl.
It helped me debug the handling of '|' and '||' between right-hand-side alternatives.
This is a copy of MarpaX::Demo::StringParser's BNF.
See "user_bnf_file([$bnf_file_name])" above.
scripts/bnf2tree.sh stringparser
The outputs are share/stringparser.cooked.tree and share/stringparser.raw.tree.
It is part of MarpaX::Database::Terminfo, written by Jean-Damien Durand.
scripts/bnf2tree.sh termcap.info
The outputs are share/termcap.info.cooked.tree and share/termcap.info.raw.tree.
These scripts are all in the scripts/ directory.
This is a neat way of using this module. For help, run:

	perl scripts/bnf2tree.pl -h
Of course you are also encouraged to include the module directly in your own code.
This is a quick way for me to run bnf2tree.pl.
This prints the path to a grammar file. After installation of the module, run it with any of these parameters:
	scripts/find.grammars.pl (Defaults to json.1.bnf)
	scripts/find.grammars.pl c.ast.bnf
	scripts/find.grammars.pl json.1.bnf
	scripts/find.grammars.pl json.2.bnf
	scripts/find.grammars.pl json.3.bnf
	scripts/find.grammars.pl stringparser.bnf
	scripts/find.grammars.pl termcap.info.bnf
It prints the path to the given grammar file.
This is Jeffrey Kegler's code. See the "FAQ" for more.
This lets me quickly proof-read edits to the docs.
Marpa's grammars are written in what we call a SLIF-DSL. Here, SLIF stands for Marpa's Scanless Interface, and DSL for Domain-Specific Language.
Many programmers will have heard of BNF. Well, Marpa's SLIF-DSL is an extended BNF. That is, it includes special tokens which only make sense within the context of a Marpa grammar. Hence the 'Domain Specific' part of the name.
In practice, this means you express your grammar in a string, and Marpa treats that as a set of rules as to how you want Marpa to process your input stream.
Marpa's docs for its SLIF-DSL are here.
The raw tree is generated by processing the output of Marpa's parse of the user's grammar file. It contains Marpa's view of that grammar. This raw tree is output by Tree::DAG_Node.
The cooked tree is generated by post-processing the raw tree, to extract just the user's grammar's tokens. It contains the user's view of their grammar. This cooked tree is output by this module.
And yes, the cooked tree can be used to reproduce (apart from formatting details) the user's BNF file.
The cooked tree can be graphed with MarpaX::Grammar::GraphViz2. That module has its own demo page.
The following items explain this in more detail.
Under the root (whose name is 'Cooked tree'), there is a set of nodes:
Each of these $n1 nodes has the name 'statement', and each also has a sub-tree of $n2 daughter nodes of its own.
So, each 'statement' node is the root of a sub-tree describing that statement (rule).
These sub-trees' nodes are:
So, this node's name is one of: '=', '::=' or '~'.
These nodes' names are the tokens themselves.
So, for a rule like:
array ::= ('[' ']') | ('[') elements (']') action => ::first
The nodes will be (see share/json.2.cooked.tree):
	:
	|--- statement. Attributes: {token => "statement"}
	|   |--- lhs. Attributes: {token => "array"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => "("}
	|   |   |--- rhs. Attributes: {token => "'['"}
	|   |   |--- rhs. Attributes: {token => "']'"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => ")"}
	|   |--- alternative. Attributes: {token => "|"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => "("}
	|   |   |--- rhs. Attributes: {token => "'['"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => ")"}
	|   |--- rhs. Attributes: {token => "elements"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => "("}
	|   |   |--- rhs. Attributes: {token => "']'"}
	|   |--- parenthesized_rhs_primary_list. Attributes: {token => ")"}
	|   |--- action. Attributes: {token => "action"}
	|   |--- reserved_action_name. Attributes: {token => "::first"}
	:
Firstly, strip off the first 2 daughters. They are the rule name and the separator.
Clearly, to process the remaining daughters (if any) you must start by examining them from the end, looking for triplets of the form ($a, '=>', $b). $a will be a reserved word (an adverb). This then is the adverb list.
What's left, if anything, is a '|' or '||' separated list of right-hand side alternatives.
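The backward scan for adverb triplets just described might be sketched like this. This helper is hypothetical, not part of the module's API, and it assumes each daughter node's token is reachable via its attributes as in the sample trees; whether '=>' appears as its own node should be checked against your own cooked tree output.

```perl
# Hypothetical helper: split a statement's daughters into
# (rule name, separator, remaining rhs nodes, adverb triplets).

sub split_statement
{
	my(@daughters) = @_;
	my($lhs)       = shift @daughters; # E.g. the node for 'array'.
	my($separator) = shift @daughters; # One of '=', '::=' or '~'.
	my(@adverbs);

	# Work backwards, peeling off ($a, '=>', $b) triplets.

	while (@daughters >= 3)
	{
		my($token) = ${$daughters[-2] -> attributes}{token} || '';

		last if ($token ne '=>');

		unshift @adverbs, [splice(@daughters, -3)];
	}

	# What's left is the '|' or '||' separated list of rhs alternatives.

	return ($lhs, $separator, \@daughters, \@adverbs);
}
```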
See share/json.2.cooked.tree, or any file share/*.cooked.tree.
E.g.: Parsing share/metag.bnf produces cases like this:
	:
	|--- statement
	|   |--- <start rule>
	|   |--- ::=
	|   |--- (
	|   |--- ':start'
	|   |--- <op declare bnf>
	|   |--- )
	|   |--- symbol
	|--- statement
	|   |--- <start rule>
	|   |--- ::=
	|   |--- (
	|   |--- 'start'
	|   |--- 'symbol'
	|   |--- 'is'
	|   |--- )
	|   |--- symbol
	:
See share/metag.cooked.tree.
	:
	|--- statement
	|   |--- <event initializer>
	|   |--- ::=
	|--- statement
	:
It is an alias for 'latm' (Longest Acceptable Token Match), so this module always outputs 'latm'.
This is deemed to be a feature.
The first few nodes are:
	Marpa value()
	|--- Class = MarpaX::Grammar::Parser::statements [BLESS 1]
	|--- 0 = [] [ARRAY 2]
	|--- 0 = 0 [SCALAR 3]
	|--- 1 = 2460 [SCALAR 4]
This says the input text offsets are from 0 to 2460. I.e. share/stringparser.bnf is 2461 bytes long.
After this there are a set of nodes like this, one per statement:
	|--- Class = MarpaX::Grammar::Parser::statement [BLESS 5]
	|   |--- 2 = [] [ARRAY 6]
	|   |--- 0 = 0 [SCALAR 7]
	|   |--- 1 = 34 [SCALAR 8]
	|   |--- Class = MarpaX::Grammar::Parser::default_rule [BLESS 9]
	:
	:
For complex statements, these nodes can be nested to considerable depth.
This says the first statement in the BNF is at offsets 0 .. 34, and happens to be the default rule (':default ::= action => [values]').
See share/stringparser.raw.tree, or any file share/*.raw.tree.
Jeffrey Kegler wrote it, and posted it on the Google Group dedicated to Marpa, on 2013-07-22, in the thread 'Low-hanging fruit'. I modified it slightly for a module context.
The original code is shipped as scripts/metag.pl.
Of the modules I tested, it offered the output which was most easily parsed. The others were Data::TreeDump, Data::Dumper, Data::TreeDraw, Data::TreeDumper and Data::Printer.
http://savage.net.au/Marpa.html.
MarpaX::Demo::JSONParser.
MarpaX::Demo::SampleScripts.
MarpaX::Demo::StringParser.
MarpaX::Grammar::GraphViz2.
MarpaX::Languages::C::AST.
MarpaX::Languages::Perl::PackUnpack.
MarpaX::Languages::SVG::Parser.
Data::RenderAsTree.
Log::Handler.
The file Changes was converted into Changelog.ini by Module::Metadata::Changes.
Version numbers < 1.00 represent development versions. From 1.00 up, they are production versions.
https://github.com/ronsavage/MarpaX-Grammar-Parser
Email the author, or log a bug on RT:
https://rt.cpan.org/Public/Dist/Display.html?Name=MarpaX::Grammar::Parser.
MarpaX::Grammar::Parser was written by Ron Savage <ron@savage.net.au> in 2013.
Marpa's homepage: http://savage.net.au/Marpa.html.
Homepage: http://savage.net.au/.
Australian copyright (c) 2013, Ron Savage.
All Programs of mine are 'OSI Certified Open Source Software'; you can redistribute them and/or modify them under the terms of The Artistic License 2.0, a copy of which is available at: http://www.opensource.org/licenses/index.html