
NAME

AI::PSO - Module for running the Particle Swarm Optimization algorithm

SYNOPSIS

use AI::PSO;

my %params = (
    numParticles   => 4,     # total number of particles involved in search 
    numNeighbors   => 3,     # number of particles with which each particle will share its progress
    maxIterations  => 1000,  # maximum number of iterations before exiting with no solution found
    dimensions     => 4,     # number of parameters you want to optimize
    deltaMin       => -4.0,  # minimum change in velocity during PSO update
    deltaMax       =>  4.0,  # maximum change in velocity during PSO update
    meWeight       => 2.0,   # 'individuality' weighting constant (higher means more individuality)
    meMin          => 0.0,   # 'individuality' minimum random weight
    meMax          => 1.0,   # 'individuality' maximum random weight
    themWeight     => 2.0,   # 'social' weighting constant (higher means trust group more)
    themMin        => 0.0,   # 'social' minimum random weight 
    themMax        => 1.0,   # 'social' maximum random weight
    exitFitness    => 0.9,   # minimum fitness to achieve before exiting
    verbose        => 0,     # 0 prints solution
                             # 1 prints (Y|N):particle:fitness at each iteration
                             # 2 dumps each particle (+1)
    psoRandomRange => 4.0,   # setting this enables the original PSO algorithm and
                             # also subsequently ignores the  me*/them* parameters
);


sub custom_fitness_function {
    my @input = @_;
    # This is a callback function.  @input is passed to it by the module;
    # you do not need to worry about setting it.
    # ... do something with @input, which is an array of floats ...
    # Return a value in [0,1], with 0 being the worst and 1 being the best.
}

pso_set_params(\%params);
pso_register_fitness_function('custom_fitness_function');
pso_optimize();
my @solutionArray = pso_get_solution_array();
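
As a concrete, self-contained example, the following script searches a
four-dimensional space for the point (3, 3, 3, 3).  The target point and
the distance-based fitness mapping are illustrative choices, not part of
the module; the callback receives the particle's position as its argument
list, as in the synopsis above.

    use strict;
    use warnings;
    use AI::PSO;

    my %params = (
        numParticles   => 4,
        numNeighbors   => 3,
        maxIterations  => 1000,
        dimensions     => 4,
        deltaMin       => -4.0,
        deltaMax       =>  4.0,
        meWeight       => 2.0,
        meMin          => 0.0,
        meMax          => 1.0,
        themWeight     => 2.0,
        themMin        => 0.0,
        themMax        => 1.0,
        exitFitness    => 0.99,
        verbose        => 1,
    );

    # Fitness is 1.0 when the particle sits exactly at (3,3,3,3) and
    # decays toward 0 as the particle moves away from that point.
    sub target_fitness {
        my @position = @_;
        my $dist = 0;
        $dist += ($_ - 3.0) ** 2 for @position;
        return 1.0 / (1.0 + sqrt($dist));
    }

    pso_set_params(\%params);
    pso_register_fitness_function('target_fitness');
    pso_optimize();
    my @solution = pso_get_solution_array();
    print "best position found: @solution\n";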

General Guidelines

1. Sociality versus individuality
I suggest that meWeight and themWeight add up to 4.0, or that 
psoRandomRange = 4.0.  Also, you should set meMin and themMin 
to 0, and meMax and themMax to 1, unless you really know what 
you are doing.
2. Search space coverage
If you have a large search space, widening the range between deltaMin 
and deltaMax can help cover more area.  Conversely, if you have a 
small search space, then narrowing that range will fine-tune the search.
3. Swarm Topology
I've personally found that using a global (fully connected) topology 
where each particle is neighbors with all other particles 
(numNeighbors == numParticles - 1) converges more quickly.  However, 
this will drastically increase the number of calls to your fitness 
function.  So, if your fitness function is the bottleneck, then you 
should tune this value for the appropriate time/accuracy trade-off.  
Also, I highly suggest you implement a simple fitness cache so you 
don't end up recomputing fitness values.  This can easily be done 
with a Perl hash keyed on the string concatenation of the array 
values passed to your fitness function.  Note that these are 
floating point values, so decide how many digits are significant 
and use sprintf to limit the precision of the cache keys; a minimal 
sketch appears after this list.
4. Number of particles
Increasing the number of particles improves cooperation and search 
space coverage at the expense of compute time.  Typical applications 
should do fine with 20-40 particles.
  • NOTE:

    I force people to define all parameters, but guidelines 1-4 are 
    standard and pretty safe.
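
The fitness cache mentioned in guideline 3 might look something like the
sketch below.  The helper expensive_fitness() and the choice of four
decimal places of precision are assumptions for illustration; register
'cached_fitness' in place of your raw fitness function.

    my %fitness_cache;

    sub cached_fitness {
        my @input = @_;
        # Round the position to a fixed precision so that nearly identical
        # positions share a single cache entry.
        my $key = join(',', map { sprintf('%.4f', $_) } @input);
        $fitness_cache{$key} = expensive_fitness(@input)
            unless exists $fitness_cache{$key};
        return $fitness_cache{$key};
    }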

DESCRIPTION OF ALGORITHM

Particle Swarm Optimization is an optimization algorithm designed by 
Russell Eberhart and James Kennedy from Purdue University.  The 
algorithm itself is based on the emergent behavior of social groups, 
ranging from the marching of ants to the flocking of birds to the 
swarming of bees.

PSO is a cooperative approach to optimization rather than an 
evolutionary approach which kills off unsuccessful members of the 
search team.  In the swarm framework, each particle is a relatively 
unintelligent search agent.  It is in the collective sharing of 
knowledge that solutions are found.  Each particle simply shares its 
information with its neighboring particles.  So, if one particle is 
not doing too well (has a low fitness), then it looks to its neighbors 
for help and tries to be more like them while still maintaining a 
sense of individuality.

A particle is defined by its position and velocity.  The parameters a 
user wants to optimize define the dimensionality of the problem 
hyperspace.  So, if you want to optimize three variables, a particle 
will be three-dimensional and will have three values that define its 
position and three values that define its velocity.  The quality of a 
particle's position is determined by a user-defined fitness function.  
The velocity of a particle determines how quickly it changes location.  
Larger velocities provide more coverage of hyperspace at the cost of 
solution precision.  With large velocities, a particle may come close 
to a maximum but overshoot it because it is moving too quickly.  With 
smaller velocities, particles can really hone in on a local solution 
and find the best position but they may be missing another, possibly 
even more optimal, solution because a full search of the hyperspace 
was not conducted.  Techniques such as simulated annealing can be 
applied so that the closer a particle gets to a solution, the smaller 
its velocity becomes: in bad areas of the hyperspace the particles 
move quickly, but in good areas they spend some extra time looking 
around.
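
As a purely illustrative sketch of that annealing-style idea (the module
itself may not expose a hook for this, and the linear scaling rule below
is an assumption), a particle's velocity can be shrunk as its fitness
improves:

    # Scale velocity components by (1 - fitness): particles in bad regions
    # (fitness near 0) keep moving fast, while particles in good regions
    # (fitness near 1) slow down and search locally.
    sub damp_velocity {
        my ($fitness, @velocity) = @_;
        my $scale = 1.0 - $fitness;
        return map { $_ * $scale } @velocity;
    }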

In general, particles fly around the problem hyperspace looking for 
local/global maxima.  At each position, a particle computes its 
fitness.  If it does not meet the exit criteria then it gets 
information from neighboring particles about how well they are doing.  
If a neighboring particle is doing better, then the current particle 
tries to move closer to its neighbor by adjusting its position.  As 
mentioned, the velocity controls how quickly a particle changes 
location in the problem hyperspace.  There are also some stochastic 
weights involved in the positional updates so that each particle is 
truly independent and can take its own search path while still 
incorporating good information from other particles.  In this 
particular Perl module, the user is able to choose from two 
implementations of the algorithm.  One is the original implementation 
from Swarm Intelligence, which requires the definition of a 
'random range' to which the two stochastic weights are required to 
sum.  The other implementation allows the user to define the weighting
of how much a particle follows its own path versus following its 
peers.  In both cases there is an element of randomness.
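
To make the positional update concrete, the textbook form of the rule 
that the me*/them* parameters correspond to is sketched below.  This is 
an illustration of the technique, not a copy of this module's internals; 
update_particle_dimension() and its argument names are made up for the 
example, and the exact way deltaMin/deltaMax are applied inside AI::PSO 
may differ.

    # One dimension of one particle, once per iteration:
    #   new velocity = old velocity
    #                + individual pull toward the particle's own best position
    #                + social pull toward the best position among its neighbors
    sub update_particle_dimension {
        my ($pos, $vel, $personal_best, $neighbor_best,
            $me_weight, $them_weight, $delta_min, $delta_max) = @_;

        # rand() plays the role of the stochastic weight drawn between
        # meMin/meMax (and themMin/themMax) in this module's terminology.
        my $me_pull   = $me_weight   * rand() * ($personal_best - $pos);
        my $them_pull = $them_weight * rand() * ($neighbor_best - $pos);

        my $new_vel = $vel + $me_pull + $them_pull;

        # Keep the velocity within the configured delta range.
        $new_vel = $delta_max if $new_vel > $delta_max;
        $new_vel = $delta_min if $new_vel < $delta_min;

        return ($pos + $new_vel, $new_vel);
    }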

Solution convergence is quite fast once one particle gets close to 
a local maximum.  Having more particles active means there is a 
better chance that you will not get stuck in a local maximum.  Often, 
different neighborhoods (when not configured in a global neighborhood 
fashion) will converge to different maxima.  It is quite interesting 
to watch graphically.  If the fitness function is expensive to 
compute, then it is often useful to start out with a small number of
particles first and get a feel for how the algorithm converges.

The algorithm implemented in this module is taken from the book 
Swarm Intelligence by Russell Eberhart and James Kennedy.  
I highly suggest you read the book if you are interested in this 
sort of thing.  

EXPORTED FUNCTIONS

pso_set_params()
Sets the particle swarm configuration parameters to use for the search.
pso_register_fitness_function()
Sets the user-defined fitness function to call.  The fitness function 
should return a value between 0 and 1.  Users may want to look into 
the sigmoid function [1 / (1 + e^(-x))] and its variants to implement 
this; a small sketch appears after this list.  Also, you may want to 
take a look at either t/PSO.t for the simple test or 
examples/NeuralNetwork/pso_ann.pl for an example of how to train a 
simple 3-layer feed-forward neural network.  (Note that a real 
training application would have a real dataset with many input-output 
pairs...pso_ann.pl is a _very_ simple example.  Also note that the 
neural network example requires g++.  Type 'make run' in the 
examples/NeuralNetwork directory to run the example.  Lastly, the 
neural network C++ code is in a very different coding style.  I did 
indeed write this, but it was many years ago when I was striving to 
make my code nicely formatted and good looking :)).
pso_optimize()
Runs the particle swarm optimization algorithm.  This consists of 
running iterations of search and many calls to the fitness function 
you registered with pso_register_fitness_function().
pso_get_solution_array()
By default, pso_optimize() will print out to STDERR the first 
solution, or the best solution so far if the max iterations were 
reached.  This function will simply return an array of the winning 
(or best so far) position of the entire swarm system.  It is an 
array of floats to be used how you wish (like weights in a 
neural network!).
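
As mentioned under pso_register_fitness_function(), a sigmoid is a handy 
way to squash an unbounded score into [0,1].  The sketch below assumes a 
non-negative, lower-is-better error measure; raw_error() is a hypothetical 
stand-in for your own computation.

    # Map a non-negative, lower-is-better error onto [0,1]:
    # an error of 0 gives fitness 1, and large errors approach 0.
    sub error_to_fitness {
        my ($error) = @_;
        return 2.0 / (1.0 + exp($error));
    }

    sub my_fitness {
        my @input = @_;
        return error_to_fitness(raw_error(@input));
    }

Register 'my_fitness' with pso_register_fitness_function() exactly as in 
the SYNOPSIS.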

EXAMPLES

examples/NeuralNetwork/pso_ann.pl
t/PSO.t

SEE ALSO

1. Swarm Intelligence by James Kennedy and Russell C. Eberhart.  ISBN 1-55860-595-9

2. A Hybrid Particle Swarm and Neural Network Approach for Reactive Power Control AI-PSO-0.86/extradocs/ReactivePower-PSO-wks.pdf http://webapps.calvin.edu/~pribeiro/courses/engr302/Samples/ReactivePower-PSO-wks.pdf

AUTHOR

W. Kyle Schlansker kylesch@gmail.com

COPYRIGHT AND LICENSE

Copyright (C) 2006 by W. Kyle Schlansker

This code is released under the Mozilla Public License Version 1.1. A copy of this license may be found along with this module or at: http://www.mozilla.org/MPL/MPL-1.1.txt