

AI::NNEasy - Define, train and use simple Neural Networks of different types, using portable code in Perl and XS.


The main purpose of this module is to make it easy to create Neural Networks in Perl.

The module was designed to be extensible to multiple network types, learning algorithms and activation functions. This architecture was first based on the module AI::NNFlex; I then rewrote it to fix some serialization bugs, optimized the code, and added some XS functions to speed up the learning process. Finally, I added an intuitive interface to create and use the NN, and added a winner algorithm for the output.

I wrote this module because, after testing different NN modules on Perl, I couldn't find one that was portable across Linux and Windows, easy to use and, most importantly, one that really works on a real problem.

With this module you don't need to know much about NNs to be able to build one: you just define the structure of the NN, train it on your set of inputs, and use it.


Here's an example of a NN to compute XOR:

  use AI::NNEasy ;

  ## Our maximal error for the output calculation.
  my $ERR_OK = 0.1 ;

  ## Create the NN:
  my $nn = AI::NNEasy->new(
  'xor.nne' , ## file to save the NN.
  [0,1] ,     ## Output types of the NN.
  $ERR_OK ,   ## Maximal error for output.
  2 ,         ## Number of inputs.
  1 ,         ## Number of outputs.
  [3] ,       ## Hidden layers. (this is setting 1 hidden layer with 3 nodes).
  ) ;

  ## Our set of inputs and outputs to learn:
  my @set = (
  [0,0] => [0],
  [0,1] => [1],
  [1,0] => [1],
  [1,1] => [0],
  ) ;

  ## Calculate the actual error for the set:
  my $set_err = $nn->get_set_error(\@set) ;

  ## If the set error is bigger than the maximal error, let's learn this set:
  if ( $set_err > $ERR_OK ) {
    $nn->learn_set( \@set ) ;
    ## Save the NN:
    $nn->save ;
  }

  ## Use the NN:
  my $out = $nn->run_get_winner([0,0]) ;
  print "0 0 => @$out\n" ; ## 0 0 => 0

  $out = $nn->run_get_winner([0,1]) ;
  print "0 1 => @$out\n" ; ## 0 1 => 1

  $out = $nn->run_get_winner([1,0]) ;
  print "1 0 => @$out\n" ; ## 1 0 => 1

  $out = $nn->run_get_winner([1,1]) ;
  print "1 1 => @$out\n" ; ## 1 1 => 0

  ## ... or just iterate through @set:
  for (my $i = 0 ; $i < @set ; $i += 2) {
    my $out = $nn->run_get_winner($set[$i]) ;
    print "@{$set[$i]} => @$out\n" ;
  }




new (FILE , @OUTPUT_TYPES , ERROR_OK , IN_SIZE , OUT_SIZE , @LAYERS , %CONF)

FILE

The file path to save the NN to. Default: 'nneasy.nne'.


@OUTPUT_TYPES

An array of the output values that the NN can produce, so the NN can find the nearest value in this list to give you the right output.


ERROR_OK

The maximal error of the calculated output.

If not defined, ERROR_OK will be calculated as the minimal difference between 2 types in @OUTPUT_TYPES divided by 2:

  @OUTPUT_TYPES = [0 , 0.5 , 1] ;
  ERROR_OK = (1 - 0.5) / 2 = 0.25 ;
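That default can be sketched in plain Perl (a hypothetical standalone version for illustration; `default_error_ok` is not part of the module's API):

```perl
use strict;
use warnings;

## Hypothetical sketch of how the default ERROR_OK is derived from
## the output types: half of the smallest gap between any 2 types.
sub default_error_ok {
  my @types = sort { $a <=> $b } @_;
  my $min_diff;
  for my $i (1 .. $#types) {
    my $diff = $types[$i] - $types[$i-1];
    $min_diff = $diff if !defined $min_diff || $diff < $min_diff;
  }
  return $min_diff / 2;
}

print default_error_ok(0, 0.5, 1), "\n"; ## prints 0.25
```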

IN_SIZE

The input size (number of nodes in the input layer).


OUT_SIZE

The output size (number of nodes in the output layer).


@LAYERS

A list of hidden layer sizes. By default there is 1 hidden layer, whose size is calculated as (IN_SIZE + OUT_SIZE). So, for a NN with 2 inputs and 1 output, the hidden layer has 3 nodes.
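As a sketch of that default rule (a hypothetical helper, not a module function):

```perl
use strict;
use warnings;

## Hypothetical sketch of the default hidden layer definition:
## one layer with (IN_SIZE + OUT_SIZE) nodes.
sub default_hidden_layers {
  my ($in_size, $out_size) = @_;
  return [ $in_size + $out_size ];
}

my $layers = default_hidden_layers(2, 1);
print "@$layers\n"; ## prints 3
```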


%CONF

A HASH of options that can be used to define special parameters of the NN:


 {networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1}



networktype

The type of the NN. For now, only 'feedforward' is accepted.


random_weights

Maximum value for the initial weights.


learning_algorithm

Algorithm used to train the NN. Accepts 'backprop' and 'reinforce'.


learning_rate

Rate used by the learning_algorithm.


bias

If true, a BIAS node will be created. Useful when you have null inputs, like [0,0].

Here's a complete example of use:

  my $nn = AI::NNEasy->new(
  'xor.nne' , ## file to save the NN.
  [0,1] ,     ## Output types of the NN.
  0.1 ,       ## Maximal error for output.
  2 ,         ## Number of inputs.
  1 ,         ## Number of outputs.
  [3] ,       ## Hidden layers. (this is setting 1 hidden layer with 3 nodes).
  {random_connections=>0 , networktype=>'feedforward' , random_weights=>1 , learning_algorithm=>'backprop' , learning_rate=>0.1 , bias=>1} ,
  ) ;

And a simpler call that will create a NN equal to the one above:

  my $nn = AI::NNEasy->new('xor.nne' , [0,1] , 0.1 , 2 , 1 ) ;


load

Load the NN if it was previously saved.


save

Save the NN to FILE using Storable.

learn (@IN , @OUT , N)

Learn the input.


@IN

The values of one input.


@OUT

The values of the expected output for the input above.


N

Number of times that this input should be learned. Default: 100.


  $nn->learn( [0,1] , [1] , 10 ) ;


learn_set (@SET , OK_OUTPUTS , LIMIT , VERBOSE)

Learn a set of inputs until the outputs are within the accepted error.


@SET

A list of inputs and outputs.


OK_OUTPUTS

Minimal number of outputs that should be OK when calculating the error.

By default, OK_OUTPUTS is the number of different inputs in @SET.


LIMIT

Limit of iterations when learning. Default: 30000.


VERBOSE

If TRUE, turns verbose mode ON when learning.

get_set_error (@SET , OK_OUTPUTS)

Get the actual error of a set in the NN. If the returned error is bigger than the ERROR_OK defined at new(), you should learn or relearn the set.

run (@INPUT)

Run an input and return the output calculated by the NN, based on what the NN has already learned.

run_get_winner (@INPUT)

Same as run(), but the output returned will be the nearest output value based on the @OUTPUT_TYPES defined at new().

For example, an input [0,1] that was learned with the output [1] will actually return something like 0.98324 as output, and not 1, since the error should never be 0. So, with run_get_winner() we take the output of run(), let's say 0.98324, and find the output type nearest to this number, which in this case is 1. An output near [0] will be returned by run() as something like 0.078964, and run_get_winner() returns 0.
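The winner step can be sketched as a nearest-value search over the output types (a hypothetical standalone version for illustration; `nearest_output_type` is not part of the module's API):

```perl
use strict;
use warnings;

## Hypothetical sketch of the "winner" step: snap a raw network
## output to the nearest value in the declared @OUTPUT_TYPES.
sub nearest_output_type {
  my ($raw, @types) = @_;
  my ($winner) = sort { abs($a - $raw) <=> abs($b - $raw) } @types;
  return $winner;
}

print nearest_output_type(0.98324,  0, 1), "\n"; ## prints 1
print nearest_output_type(0.078964, 0, 1), "\n"; ## prints 0
```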


Inside the release sources you can find the directory ./samples, where you have some examples of code using this module.


Some functions of this module have Inline functions written in C.

I have made a C version only for the functions that are heavily called, like:




This gives us the speed that we need to learn the inputs fast, while still being able to create flexible NNs.


I have used Class::HPLOO to write the module quickly, especially the XS support.

Class::HPLOO enables this kind of syntax for Perl classes:

  class Foo {

    sub bar($x , $y) {
      $this->add($x , $y) ;
    }

    sub[C] int add( int x , int y ) {
      int res = x + y ;
      return res ;
    }

  }
Which made it possible to write the module in 2 days! ;-P

Basics of a Neural Network

- This is just a simple text for lay people, to try to make them understand what a Neural Network is and how it works, without the need to read a lot of books -.

A NN is based on nodes/neurons and layers, where we have the input layer, the hidden layers and the output layer.

For example, here we have a NN with 2 inputs, 1 hidden layer, and 2 outputs:

         Input  Hidden  Output
 input1  ---->n1\    /---->n4---> output1
                 \  /
                  n3
                 /  \
 input2  ---->n2/    \---->n5---> output2

Basically, when we have an input, let's say [0,1], it will activate n2; n2 will activate n3, and n3 will activate n4 and n5. But the link between n3 and n4 has a weight, and the link between n3 and n5 has another weight. The idea is to find the weights between the nodes that can give us an output near the real output. So, if the output of [0,1] is [1,1], the nodes output1 and output2 should give us a number near 1, let's say 0.98654. And if the output for [0,0] is [0,0], output1 and output2 should give us a number near 0, let's say 0.078875.
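The activation just described can be sketched for a single neuron (a toy illustration of the general idea, not AI::NNEasy's internals; the weights and bias below are made-up values):

```perl
use strict;
use warnings;

## A neuron's output: weighted sum of its inputs plus a bias,
## squashed into the 0..1 range by a sigmoid activation.
sub sigmoid { my ($x) = @_ ; return 1 / (1 + exp(-$x)) ; }

sub neuron_output {
  my ($inputs, $weights, $bias) = @_;
  my $sum = $bias;
  $sum += $inputs->[$_] * $weights->[$_] for 0 .. $#$inputs;
  return sigmoid($sum);
}

## With strong positive weights the neuron fires near 1 for [1,1]:
printf "%.3f\n", neuron_output([1,1], [4,4], -2); ## prints 0.998
```

Learning is then the process of adjusting those weights (and biases) until the outputs land near the desired values.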

What is hard in a NN is finding these weights. By default AI::NNEasy uses backprop as the learning algorithm. With backprop, the inputs are passed through the Neural Network and the weights are adjusted, using random numbers, until we find a set of weights that gives us the right output.

The secret of a NN is the number of hidden layers and the number of nodes/neurons in each layer. Basically, the best way to define the hidden layers is 1 layer of (INPUT_NODES + OUTPUT_NODES) nodes. So, a NN with 2 input nodes and 1 output node should have 3 nodes in the hidden layer. This definition exists because the number of inputs defines the maximal variability of the inputs (2**N for boolean inputs), and the output defines whether the variability is reduced by some logical restriction, like in the XOR example, where we have 2 inputs and 1 output, so the hidden layer has 3 nodes. And as we can see in the logic we have 3 groups of inputs:

  0 0 => 0 # both false
  0 1 => 1 # only one true
  1 0 => 1 # only one true
  1 1 => 0 # both true

Actually this is not the real explanation, but it is the easiest way to understand that you need a number of nodes/neurons in the hidden layer that can give the right output for your problem.

Another important step of a NN is the learning phase, where we take a set of inputs and pass them through the NN until we have the right output. This process basically adjusts the node weights until we have an output near the real output that we want.

Another important concept is that the inputs and outputs of the NN should be in the range 0 to 1. So, you can define sets like:

  0 0      => 0
  0 0.5    => 0.5
  0.5 0.5  => 1
  1 0.5    => 0
  1 1      => 1

But what is really recommended is to always use boolean values, just 0 or 1, for inputs and outputs, since the learning phase will be faster and work better for complex problems.
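If your raw data is not already in the 0..1 range, a simple linear rescale can be applied before feeding the NN (a hypothetical helper for illustration; the module does not provide this):

```perl
use strict;
use warnings;

## Hypothetical helper: linearly rescale a raw value from a known
## [min, max] range into the 0..1 range that the NN expects.
sub scale_to_unit {
  my ($value, $min, $max) = @_;
  return ($value - $min) / ($max - $min);
}

print scale_to_unit(50, 0, 100), "\n";   ## prints 0.5
print scale_to_unit(-10, -10, 10), "\n"; ## prints 0
```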


AI::NNFlex, AI::NeuralNet::Simple, Class::HPLOO, Inline.


Graciliano M. P. <>

I will appreciate any type of feedback (including your opinions and/or suggestions). ;-P

Thanks a lot to Charles Colbourn <charlesc at>, the author of AI::NNFlex, first for writing NNFlex, which was my starting point for this NN work, and second for being in touch with the development of AI::NNEasy.


This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.