
NAME

String::Tokenizer - A simple string tokenizer.

SYNOPSIS

  use String::Tokenizer;
  
  # create the tokenizer and tokenize input
  my $tokenizer = String::Tokenizer->new("((5+5) * 10)", '+*()');
  
  # create tokenizer
  my $tokenizer = String::Tokenizer->new();
  # ... then tokenize the string
  $tokenizer->tokenize("((5 + 5) - 10)", '()');
  
  # will print '(, (, 5, +, 5, ), -, 10, )'
  print join ", " => $tokenizer->getTokens();

DESCRIPTION

This is a simple string tokenizer that takes a string and splits it on whitespace. It also optionally takes a string of characters to use as delimiters, and returns them with the token set as well. This allows the string to be split in many different ways.

This is a very basic tokenizer, so more complex needs should be addressed either with a custom-written tokenizer or by post-processing the output of this module. It will not fill everyone's needs, but it fills the gap between a simple split / /, $string and the other options, which involve much larger and more complex modules.

Also note that this is not a lexical analyzer. Many people confuse tokenization with lexical analysis. A tokenizer merely splits its input into specific chunks; a lexical analyzer classifies those chunks. Sometimes these two steps are combined, but not here.

METHODS

new ($string, $delimiters)

If you do not supply any parameters, nothing happens beyond creating the instance. If you do supply parameters, they are passed on to the tokenize method and that method is run. For information about those arguments, see tokenize below.
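
For example, these two forms should be equivalent (a small sketch based on the SYNOPSIS above):

  use String::Tokenizer;

  # passing arguments to new() ...
  my $t1 = String::Tokenizer->new("10 + (4 * 5)", '+*()');

  # ... is the same as creating an empty tokenizer and calling tokenize()
  my $t2 = String::Tokenizer->new();
  $t2->tokenize("10 + (4 * 5)", '+*()');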

tokenize ($string, $delimiters)

Takes a $string to tokenize, and optionally a set of $delimiters characters to facilitate the tokenization. The $string parameter is obvious; the $delimiters parameter is less so. $delimiters is a string of characters, each of which is treated as an individual delimiter and used to split the $string. So given this string:

  (5 + (100 * (20 - 35)) + 4)

The tokenize method without a $delimiters parameter would return the following comma-separated list of tokens:

  (5, +, (100, *, (20, -, 35)), +, 4)

However, if you were to pass the delimiter set '()' to tokenize, you would get the following comma-separated list of tokens:

  (, 5, +, (, 100, *, (, 20, -, 35, ), ), +, 4, )
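
To make the difference concrete, here is a small sketch of both calls (the expected output, per the examples above, is shown in the comments):

  use String::Tokenizer;

  my $expr = "(5 + (100 * (20 - 35)) + 4)";

  # whitespace only: the parens stay glued to the numbers
  my $plain = String::Tokenizer->new($expr);
  print join(", ", $plain->getTokens()), "\n";
  # (5, +, (100, *, (20, -, 35)), +, 4)

  # with '()' as delimiters: each paren becomes its own token
  my $parens = String::Tokenizer->new($expr, '()');
  print join(", ", $parens->getTokens()), "\n";
  # (, 5, +, (, 100, *, (, 20, -, 35, ), ), +, 4, )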

We can now differentiate the parens from the numbers, and no globbing occurs. If you wanted to allow the whitespace in the expression to be left out, as some languages do, like this:

  (5+(100*(20-35))+4)

then you would pass the delimiter set '+*-()' to arrive at the same result.
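
For instance, the following sketch should produce the same token list as the whitespace-separated version above:

  my $tight = String::Tokenizer->new();
  $tight->tokenize("(5+(100*(20-35))+4)", '+*-()');

  print join(", ", $tight->getTokens()), "\n";
  # (, 5, +, (, 100, *, (, 20, -, 35, ), ), +, 4, )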

getTokens

Simply returns the array of tokens. It returns an array-ref in scalar context.
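
For example, using the tokenizer from the SYNOPSIS, both contexts look like this:

  my @tokens    = $tokenizer->getTokens();   # list context: a plain array
  my $token_ref = $tokenizer->getTokens();   # scalar context: an array reference

  print scalar(@tokens), " tokens\n";
  print "first token: ", $token_ref->[0], "\n";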

TO DO

This module is very simple, which is largely the point. But it could still use some other features.

One is a better means of token iteration. Once the tokens are generated they are available as an array, which is good, but a custom token iterator or set of iterators, specifically designed to serve lexical-analysis and parsing needs, might be nice. String::Tokeniser actually provides something like what I am talking about.

Another possible feature is the ability to use regular expressions as delimiters. I gave this a lot of thought originally, but opted not to, since it had the potential to greatly increase the complexity of things, and I was really striving for simplicity. But as they say, easy things should be easy and hard things possible, so we shall see.

BUGS

None that I am aware of. Of course, if you find a bug, let me know, and I will be sure to fix it.

CODE COVERAGE

I use Devel::Cover to measure the code coverage of my tests; below is the Devel::Cover report on this module's test suite.

 ---------------------------- ------ ------ ------ ------ ------ ------ ------
 File                           stmt branch   cond    sub    pod   time  total
 ---------------------------- ------ ------ ------ ------ ------ ------ ------
 /String/Tokenizer.pm          100.0  100.0   50.0  100.0  100.0   27.7   94.9
 t/10_String_Tokenizer_test.t  100.0    n/a    n/a    n/a    n/a  100.0  100.0
 ---------------------------- ------ ------ ------ ------ ------ ------ ------
 Total                         100.0  100.0   50.0  100.0  100.0  100.0   95.9
 ---------------------------- ------ ------ ------ ------ ------ ------ ------

SEE ALSO

The interface and workings of this module are based largely on the StringTokenizer class from the Java standard library.

Below is a short list of other modules that might be considered similar to this one. I include it for two reasons. First, this module might be too simple for your usage, and one of the others listed here might serve you better instead. Second, I feel that people tend to be hasty in declaring something "CPAN pollution" or "reinventing the wheel". There are many good modules out there, but they don't always fit people's needs: some are abandoned and no longer maintained, others lack good test suites, and still more have simply grown too complex with features to be useful in some contexts. With this module I aim to provide a simple, clean string tokenizer, based largely on the one found in the Java standard library.

String::Tokeniser

This module looks as if it hasn't been updated since version 0.01, which was uploaded in 2002, so my guess is it has been abandoned. Along with being a tokenizer, it also provides a means of moving through the resulting tokens, allowing for skipping of tokens and such. These are nice features, I must admit, and these (or similar) features may make it into String::Tokenizer in future releases.

Parse::Tokens

This one hasn't been touched since 2001, although it did get up to version 0.27. It leans more towards the parser side of things than towards a basic tokenizer.

Text::Tokenizer

This one looks more up to date (updated as recently as March 2004), but is both a lexical analyzer and a tokenizer. It also uses XS, whereas this module is pure Perl. It is something to look into if you need a beefier solution than what String::Tokenizer provides.

AUTHOR

stevan little, <stevan@iinteractive.com>

COPYRIGHT AND LICENSE

Copyright 2004 by Infinity Interactive, Inc.

http://www.iinteractive.com

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.