NAME

Math::Macopt - A Perl wrapper for macopt++, a conjugate gradient optimization library.

INSTALLATION

The package can be installed using the standard Perl module installation procedure:

  perl Makefile.PL
  make
  make test
  make install

Please note that the original macopt++ C++ source code is included in this Perl package. Static linking avoids possible conflicts with any pre-installed version of macopt++.

SYNOPSIS

  use strict;
  use warnings;
  use Math::Macopt;
  
  main();
  
  sub main
  {
        # Some settings
        my $N = 10;
        my $epsilon = 0.001;
  
        # Initialize the Macopt optimizer
        my $macopt = Math::Macopt::Base->new($N, 0);
  
        # Setup the function and its gradient
        my $func = sub {
                my $x = shift;
  
                my $size = $macopt->size();
                my $sum = 0;
                foreach my $i (0..$size-1) {
                        $sum += ($x->[$i]-$i)**2;
                }
                
                return $sum;
        };
        my $dfunc = sub {
                my $x = shift;
  
                my $size = $macopt->size();
                my $g = [];
                foreach my $i (0..$size-1) {
                        $g->[$i] = 2*($x->[$i]-$i); 
                }
   
                return $g;
        };
        $macopt->setFunc($func);
        $macopt->setDfunc($dfunc);
  
        # Check the gradient, then optimize using macopt
        my $x = [(1) x $N];
        $macopt->maccheckgrad($x, $N, $epsilon, 0);
        $macopt->macoptII($x, $N);

        # Display the result
        printf "[%s]\n", join(',', @$x);
  }

DESCRIPTION

Overview

Math::Macopt provides a Perl interface to the macopt++ conjugate gradient library, developed in C++ by David MacKay: http://www.inference.phy.cam.ac.uk/mackay/c/macopt.html

The API is generated by SWIG (http://www.swig.org) to interface with the native C++ code of macopt++.

Class Hierarchy

  Math::Macopt
  +- Math::Macopt::Base

Constants

nil.

Member Variables

nil.

Constructor and initialization

new

The constructor takes the same arguments as in the original C++ source code.

Arguments:

  • 0: (Integer) n -- The dimension of the vector.

  • 1: (Boolean) verbose -- Whether the verbose messages are displayed.

  • 2: (Double) tolerance -- Convergence tolerance for the optimization.

  • 3: (Boolean) rich -- Whether to do extra gradient evaluation.

Returns:

  • The blessed object.

Please refer to the original macopt code for details.
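
A minimal construction sketch (the numeric values here are illustrative, not recommended defaults):

  use Math::Macopt;

  # 10-dimensional problem, verbose output off, convergence tolerance
  # of 1e-3, and the extra ("rich") gradient evaluation enabled.
  my $macopt = Math::Macopt::Base->new(10, 0, 0.001, 1);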

Class and Object methods

size

Arguments:

  • nil.

Returns:

  • (Integer) The number of dimensions.
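
For example, a one-line sketch querying the dimension of an optimizer constructed as above:

  # Returns the dimension passed to the constructor.
  my $n = $macopt->size();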

macoptII

Optimize (minimize) the vector based on the function and its gradient.

Arguments:

  • 0: (ARRAY) x -- Starting vector.

  • 1: (Integer) N -- The number of dimensions.

Returns:

  • nil.

Note that the optimized values are written back into the input vector "x" when the call returns.
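
A minimal sketch, assuming the objective and gradient callbacks have already been set with setFunc/setDfunc:

  my $N = $macopt->size();
  my $x = [(0) x $N];              # starting vector
  $macopt->macoptII($x, $N);       # $x now holds the optimized values
  printf "[%s]\n", join(',', @$x);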

maccheckgrad

Examines the objective function and its gradient (d_objective) function to see whether they agree for a step of size epsilon.

Arguments:

  • 0: (ARRAY) x -- Starting vector.

  • 1: (Integer) N -- The number of dimensions.

  • 2: (Double) epsilon -- Step size.

  • 3: (Integer) stopat -- Stop at this component; if 0, check all components.

Returns:

  • nil.
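
A minimal sketch, checking the gradient at the starting vector before optimizing (assuming the callbacks are already set):

  my $N = $macopt->size();
  my $x = [(1) x $N];
  # Compare the analytic gradient against finite differences with a
  # step size of 1e-3; the final 0 means check every component.
  $macopt->maccheckgrad($x, $N, 0.001, 0);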

setFunc
setDfunc

Set the objective function and its gradient function as Perl callbacks.

Arguments:

  • 0: (SV*) callback -- The Perl callback (a code reference).

Returns:

  • nil.
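
A minimal sketch for the objective f(x) = sum of x_i squared: each callback receives the current point as an array reference; the objective returns a scalar and the gradient returns an array reference, as in the SYNOPSIS.

  my $func = sub {
        my $x = shift;                  # ARRAY ref of coordinates
        my $sum = 0;
        $sum += $_**2 foreach @$x;
        return $sum;                    # scalar objective value
  };
  my $dfunc = sub {
        my $x = shift;
        return [ map { 2*$_ } @$x ];    # ARRAY ref of partial derivatives
  };
  $macopt->setFunc($func);
  $macopt->setDfunc($dfunc);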

OTHER ISSUES

Future Plans

  • Support for MS Windows (e.g., SFU or native)

  • Support for the Java language (e.g., using SWIG for Java)

BUGS

No known bugs yet.

RELATED MODULES

nil.

AUTHOR(S)

Tom Chau <tom@cpan.org>

CREDIT(S)

Cluster Technology Limited http://www.clustertech.com