AnyEvent::MP - erlang-style multi-processing/message-passing framework


   use AnyEvent::MP;

   $NODE      # contains this node's node ID
   NODE       # returns this node's node ID

   $SELF      # receiving/own port id in rcv callbacks

   # initialise the node so it can send/receive messages

   # ports are message destinations

   # sending messages
   snd $port, type => data...;
   snd $port, @msg;
   snd @msg_with_first_element_being_a_port;

   # creating/using ports, the simple way
   my $simple_port = port { my @msg = @_ };

   # creating/using ports, tagged message matching
   my $port = port;
   rcv $port, ping => sub { snd $_[0], "pong" };
   rcv $port, pong => sub { warn "pong received\n" };

   # create a port on another node
   my $port = spawn $node, $initfunc, @initdata;

   # destroy a port again
   kil $port;  # "normal" kill
   kil $port, my_error => "everything is broken"; # error kill

   # monitoring
   mon $port, $cb->(@msg)      # callback is invoked on death
   mon $port, $localport       # kill localport on abnormal death
   mon $port, $localport, @msg # send message on death

   # temporarily execute code in port context
   peval $port, sub { die "kill the port!" };

   # execute callbacks in $SELF port context
   my $timer = AE::timer 1, 0, psub {
      die "kill the port, delayed";
   };

   # distributed database - modification
   db_set $family => $subkey [=> $value]  # add a subkey
   db_del $family => $subkey...           # delete one or more subkeys
   db_reg $family => $port [=> $value]    # register a port

   # distributed database - queries
   db_family $family => $cb->(\%familyhash)
   db_keys   $family => $cb->(\@keys)
   db_values $family => $cb->(\@values)

   # distributed database - monitoring a family
   db_mon $family => $cb->(\%familyhash, \@added, \@changed, \@deleted)


This module (-family) implements a simple message passing framework.

Despite its simplicity, you can securely message other processes running on the same or other hosts, and you can supervise entities remotely.

For an introduction to this module family, see the AnyEvent::MP::Intro manual page and the examples under eg/.



Not to be confused with a TCP port, a "port" is something you can send messages to (with the snd function).

Ports allow you to register rcv handlers that can match all or just some messages. Messages sent to ports will not be queued, regardless of whether anything was listening for them or not.

Ports are represented by (printable) strings called "port IDs".

port ID - nodeid#portname

A port ID is the concatenation of a node ID, a hash-mark (#) as separator, and a port name (a printable string of unspecified format created by AnyEvent::MP).


A node is a single process containing at least one port - the node port, which enables nodes to manage each other remotely, and to create new ports.

Nodes are either public (have one or more listening ports) or private (no listening ports). Private nodes cannot talk to other private nodes currently, but all nodes can talk to public nodes.

Nodes are represented by (printable) strings called "node IDs".

node ID - [A-Za-z0-9_\-.:]*

A node ID is a string that uniquely identifies the node within a network. Depending on the configuration used, node IDs can look like a hostname, a hostname and a port, or a random string. AnyEvent::MP itself doesn't interpret node IDs in any way except to uniquely identify a node.

binds - ip:port

Nodes can only talk to each other by creating some kind of connection to each other. To do this, nodes should listen on one or more local transport endpoints - binds.

Currently, only standard ip:port specifications can be used, which specify TCP ports to listen on. So a bind is basically just a TCP socket in listening mode that accepts connections from other nodes.

seed nodes

When a node starts, it knows nothing about the network it is in - it needs to connect to at least one other node that is already in the network. These other nodes are called "seed nodes".

Seed nodes themselves are not special - they are seed nodes only because some other node uses them as such, but any node can be used as seed node for other nodes, and each node can use a different set of seed nodes.

In addition to discovering the network, seed nodes are also used to maintain the network - all nodes using the same seed node are part of the same network. If a network is split into multiple subnets because e.g. the network link between the parts goes down, then using the same seed nodes for all nodes ensures that eventually the subnets get merged again.

Seed nodes are expected to be long-running, and at least one seed node should always be available. They should also be relatively responsive - a seed node that blocks for long periods will slow down everybody else.

For small networks, it's best if every node uses the same set of seed nodes. For large networks, it can be useful to specify "regional" seed nodes for most nodes in an area, and to use all seed nodes as seed nodes for each other. What's important is that all seed node connections form a complete graph, so that the network cannot split into separate subnets forever.

Seed nodes are represented by seed IDs.

seed IDs - host:port

Seed IDs are transport endpoint(s) (usually a hostname/IP address and a TCP port) of nodes that should be used as seed nodes.

global nodes

An AEMP network needs a discovery service - nodes need to know how to connect to other nodes they only know by name. In addition, AEMP offers a distributed "group database", which maps group names to a list of strings - for example, to register worker ports.

A network needs at least one global node to work, and any node can be a global node.

Any node that loads the AnyEvent::MP::Global module becomes a global node and tries to keep connections to all other nodes. So while it can make sense to make every node "global" in small networks, it usually makes sense to only make seed nodes into global nodes in large networks (nodes keep connections to seed nodes and global nodes, so making them the same reduces overhead).
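As a concrete sketch of the above: a node opts in to being global simply by loading the module (the profile name below is only an example).

   use AnyEvent::MP;
   use AnyEvent::MP::Global;   # loading this is what makes the node global

   configure profile => "seed";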


$thisnode = NODE / $NODE

The NODE function returns, and the $NODE variable contains, the node ID of the node running in the current process. This value is initialised by a call to configure.

$nodeid = node_of $port

Extracts and returns the node ID from a port ID or a node ID.
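Since a port ID is nodeid#portname, this simply yields the part before the hash-mark. A small sketch (the port name is invented for illustration):

   use AnyEvent::MP;

   my $nodeid = node_of "example_node#u1.5";   # "example_node"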

$is_local = port_is_local $port

Returns true iff the port is a local port.

configure $profile, key => value...
configure key => value...

Before a node can talk to other nodes on the network (i.e. enter "distributed mode") it has to configure itself - the minimum a node needs to know is its own name, and optionally it should know the addresses of some other nodes in the network to discover other nodes.

This function configures a node - it must be called exactly once (or never) before calling other AnyEvent::MP functions.

The key/value pairs are basically the same ones as documented for the aemp command line utility (sans the set/del prefix), with these additions:

norc => $boolean (default false)

If true, then the rc file (e.g. ~/.perl-anyevent-mp) will not be consulted - all configuration options must be specified in the configure call.

force => $boolean (default false)

If true, then the values specified in the configure call will take precedence over any values configured via the rc file. The default is for the rc file to override any options specified in the program.

step 1, gathering configuration from profiles

The function first looks up a profile in the aemp configuration (see the aemp commandline utility). The profile name can be specified via the named profile parameter or can simply be the first parameter. If it is missing, then the nodename (uname -n) will be used as profile name.

The profile data is then gathered as follows:

First, all remaining key => value pairs (all of which are conveniently undocumented at the moment) will be interpreted as configuration data. Then they will be overwritten by any values specified in the global default configuration (see the aemp utility), then the chain of profiles chosen by the profile name (and any parent attributes).

That means that the values specified in the profile have highest priority and the values specified directly via configure have lowest priority, and can only be used to specify defaults.

If the profile specifies a node ID, then this will become the node ID of this process. If not, then the profile name will be used as node ID, with a unique random string (/%u) appended.

The node ID can contain some % sequences that are expanded: %n is expanded to the local nodename, %u is replaced by a random string to make the node unique. For example, the aemp commandline utility uses aemp/%n/%u as nodename, which might expand to aemp/cerebro/ZQDGSIkRhEZQDGSIkRhE.

step 2, bind listener sockets

The next step is to look up the binds in the profile, followed by binding aemp protocol listeners on all binds specified (it is possible and valid to have no binds, meaning that the node cannot be contacted from the outside. This means the node cannot talk to other nodes that also have no binds, but it can still talk to all "normal" nodes).

If the profile does not specify a binds list, then a default of * is used, meaning the node will bind on a dynamically-assigned port on every local IP address it finds.

step 3, connect to seed nodes

As the last step, the seed ID list from the profile is passed to the AnyEvent::MP::Global module, which will then use it to keep connectivity with at least one node at any point in time.

Example: become a distributed node using the local node name as profile. This should be the most common form of invocation for "daemon"-type nodes.

   configure;

Example: become a semi-anonymous node. This form is often used for commandline clients.

   configure nodeid => "myscript/%n/%u";

Example: configure a node using a profile called seed, which is suitable for a seed node as it binds on all local addresses on a fixed port (4040, customary for aemp).

   # use the aemp commandline utility
   # aemp profile seed binds '*:4040'

   # then use it
   configure profile => "seed";

   # or simply use aemp from the shell again:
   # aemp run profile seed

   # or provide a nicer-to-remember nodeid
   # aemp run profile seed nodeid "$(hostname)"

$SELF

Contains the current port ID while executing rcv callbacks or psub blocks.


Due to some quirks in how perl exports variables, it is impossible to just export $SELF, so all the symbols named SELF are exported by this module, but only $SELF is currently used.

snd $port, type => @data
snd $port, @msg

Send the given message to the given port, which can identify either a local or a remote port, and must be a port ID.

While the message can be almost anything, it is highly recommended to use a string as first element (a port ID, or some word that indicates a request type etc.) and to consist of only simple perl values (scalars, arrays, hashes) - if you think you need to pass an object, think again.

The message data logically becomes read-only after a call to this function: modifying any argument (or values referenced by them) is forbidden, as there can be considerable time between the call to snd and the time the message is actually being serialised - in fact, it might never be copied as within the same process it is simply handed to the receiving port.

The type of data you can transfer depends on the transport protocol: when JSON is used, then only strings, numbers and arrays and hashes consisting of those are allowed (no objects). When Storable is used, then anything that Storable can serialise and deserialise is allowed, and for the local node, anything can be passed. Best rely only on the common denominator of these.
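A short sketch along these lines - a tag first, then a port ID, then plain scalars (all names and the request format are invented for illustration):

   use AnyEvent::MP;

   configure;

   # a worker port ID you would normally obtain elsewhere
   my $scaler_port = "somenode#u1.1";   # placeholder

   # a reply port for the answer
   my $reply = port { warn "scaled: @_\n" };

   # tag first, then a port ID, then plain scalars - no objects
   snd $scaler_port, resize => $reply, "/tmp/img.png", 640, 480;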

$local_port = port

Creates a new local port object and returns its port ID. Initially it has no callbacks set and will throw an error when it receives messages.

$local_port = port { my @msg = @_ }

Creates a new local port, and returns its ID. Semantically the same as creating a port and calling rcv $port, $callback on it.

The block will be called for every message received on the port, with the global variable $SELF set to the port ID. Runtime errors will cause the port to be killed. The message will be passed as-is, no extra argument (i.e. no port ID) will be passed to the callback.

If you want to stop/destroy the port, simply kil it:

   my $port = port {
      my @msg = @_;
      kil $SELF;
   };

rcv $local_port, $callback->(@msg)

Replaces the default callback on the specified port. There is no way to remove the default callback: use sub { } to disable it, or better kil the port when it is no longer needed.

The global $SELF (exported by this module) contains $port while executing the callback. Runtime errors during callback execution will result in the port being killed.

The default callback receives all messages not matched by a more specific tag match.

rcv $local_port, tag => $callback->(@msg_without_tag), ...

Register (or replace) callbacks to be called on messages starting with the given tag on the given port (and return the port), or unregister it (when $callback is undef or missing). There can only be one callback registered for each tag.

The original message will be passed to the callback, after the first element (the tag) has been removed. The callback will use the same environment as the default callback (see above).

Example: create a port and bind receivers on it in one go.

  my $port = rcv port,
     msg1 => sub { ... },
     msg2 => sub { ... },
  ;

Example: create a port, bind receivers and send it in a message elsewhere in one go:

   snd $otherport, reply =>
      rcv port,
         msg1 => sub { ... },
         ...
      ;

Example: temporarily register a rcv callback for a tag matching some port (e.g. for an rpc reply) and unregister it after a message was received.

   rcv $port, $otherport => sub {
      my @reply = @_;

      rcv $SELF, $otherport;
   };

peval $port, $coderef[, @args]

Evaluates the given $coderef within the context of $port, that is, when the code throws an exception, $port will be killed.

Any remaining args will be passed to the callback. Any return values will be returned to the caller.

This is useful when you temporarily want to execute code in the context of a port.

Example: create a port and run some initialisation code in its context.

   my $port = port { ... };

   peval $port, sub {
      init
         or die "unable to init";
   };

$closure = psub { BLOCK }

Remembers $SELF and creates a closure out of the BLOCK. When the closure is executed, sets up the environment in the same way as in rcv callbacks, i.e. runtime errors will cause the port to get killed.

The effect is basically as if it returned sub { peval $SELF, sub { BLOCK }, @_ }.

This is useful when you register callbacks from rcv callbacks:

   rcv delayed_reply => sub {
      my ($delay, @reply) = @_;
      my $timer = AE::timer $delay, 0, psub {
         snd @reply, $SELF;
      };
   };

$guard = mon $port, $rcvport # kill $rcvport when $port dies
$guard = mon $port # kill $SELF when $port dies
$guard = mon $port, $cb->(@reason) # call $cb when $port dies
$guard = mon $port, $rcvport, @msg # send a message when $port dies

Monitor the given port and do something when the port is killed or messages to it were lost, and optionally return a guard that can be used to stop monitoring again.

The first two forms distinguish between "normal" and "abnormal" kil's:

In the first form (another port given), if the $port is kil'ed with a non-empty reason, the other port ($rcvport) will be kil'ed with the same reason. That is, on "normal" kil's nothing happens, while under all other conditions, the other port is killed with the same reason.

The second form (kill self) is the same as the first form, except that $rcvport defaults to $SELF.

The remaining forms don't distinguish between "normal" and "abnormal" kil's - it's up to the callback or receiver to check whether the @reason is empty and act accordingly.

In the third form (callback), the callback is simply called with any number of @reason elements (empty @reason means that the port was deleted "normally"). Note also that the callback must never die, so use eval if unsure.

In the last form (message), a message of the form $rcvport, @msg, @reason will be snd.

Monitoring-actions are one-shot: once messages are lost (and a monitoring alert was raised), they are removed and will not trigger again, even if it turns out that the port is still alive.

As a rule of thumb, monitoring requests should always monitor a remote port locally (using a local $rcvport or a callback). The reason is that kill messages might get lost, just like any other message. Another less obvious reason is that even monitoring requests can get lost (for example, when the connection to the other node goes down permanently). When monitoring a port locally these problems do not exist.

mon effectively guarantees that, in the absence of hardware failures, after starting the monitor, either all messages sent to the port will arrive, or the monitoring action will be invoked after possible message loss has been detected. No messages will be lost "in between" (after the first lost message no further messages will be received by the port). After the monitoring action was invoked, further messages might get delivered again.

Inter-host-connection timeouts and monitoring depend on the transport used. The only transport currently implemented is TCP, and AnyEvent::MP relies on TCP to detect node-downs (this can take 10-15 minutes on a non-idle connection, and usually around two hours for idle connections).

This means that monitoring is good for program errors and cleaning up stuff eventually, but they are no replacement for a timeout when you need to ensure some maximum latency.
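One way to combine both, sketched with invented message tags: monitor the remote port for failures, but enforce the latency limit yourself with a separate timeout message (using the after utility documented below).

   use AnyEvent::MP;

   configure;

   my $server = "somenode#u1.2";   # placeholder for some remote port

   # notice failures eventually...
   mon $server, $SELF => "server_gone";

   # ...but do not wait longer than 30s for a reply either
   after 30, $SELF, "server_timeout";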

Example: call a given callback when $port is killed.

   mon $port, sub { warn "port died because of <@_>\n" };

Example: kill ourselves when $port is killed abnormally.

   mon $port;

Example: send us a restart message when another $port is killed.

   mon $port, $self => "restart";

$guard = mon_guard $port, $ref, $ref...

Monitors the given $port and keeps the passed references. When the port is killed, the references will be freed.

Optionally returns a guard that will stop the monitoring.

This function is useful when you create e.g. timers or other watchers and want to free them when the port gets killed (note the use of psub):

  $port->rcv (start => sub {
     my $timer; $timer = mon_guard $port, AE::timer 1, 1, psub {
        undef $timer if 0.9 < rand;
     };
  });

kil $port[, @reason]

Kill the specified port with the given @reason.

If no @reason is specified, then the port is killed "normally" - monitor callback will be invoked, but the kil will not cause linked ports (mon $mport, $lport form) to get killed.

If a @reason is specified, then linked ports (mon $mport, $lport form) get killed with the same reason.

Runtime errors while evaluating rcv callbacks or inside psub blocks will be reported as reason die => $@.

Transport/communication errors are reported as transport_error => $message.

Common idioms:

   # silently remove yourself, do not kill linked ports
   kil $SELF;

   # report a failure in some detail
   kil $SELF, failure_mode_1 => "it failed with too high temperature";

   # do not waste much time with killing, just die when something goes wrong
   open my $fh, "<file"
      or die "file: $!";

$port = spawn $node, $initfunc[, @initdata]

Creates a port on the node $node (which can also be a port ID, in which case it's the node where that port resides).

The port ID of the newly created port is returned immediately, and it is possible to immediately start sending messages or to monitor the port.

After the port has been created, the init function is called on the remote node, in the same context as a rcv callback. This function must be a fully-qualified function name (e.g. MyApp::Chat::Server::init). To specify a function in the main program, use ::name.

If the function doesn't exist, then the node tries to require the package, then the package above the package and so on (e.g. MyApp::Chat::Server, MyApp::Chat, MyApp) until the function exists or it runs out of package names.

The init function is then called with the newly-created port as context object ($SELF) and the @initdata values as arguments. It must call one of the rcv functions to set callbacks on $SELF, otherwise the port might not get created.

A common idiom is to pass a local port, immediately monitor the spawned port, and in the remote init function, immediately monitor the passed local port. This two-way monitoring ensures that both ports get cleaned up when there is a problem.

spawn guarantees that the $initfunc has no visible effects on the caller before spawn returns (by delaying invocation when spawn is called for the local node).

Example: spawn a chat server port on $othernode.

   # this node, executed from within a port context:
   my $server = spawn $othernode, "MyApp::Chat::Server::connect", $SELF;
   mon $server;

   # init function on C<$othernode>
   sub connect {
      my ($srcport) = @_;

      mon $srcport;

      rcv $SELF, sub {
         ...
      };
   }

after $timeout, @msg
after $timeout, $callback

Either sends the given message, or call the given callback, after the specified number of seconds.

This is simply a utility function that comes in handy at times - the AnyEvent::MP author is not convinced of the wisdom of having it, though, so it may go away in the future.
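Both forms in a short sketch (port and tag invented for illustration):

   use AnyEvent::MP;

   configure;

   my $port = port { warn "got: @_\n" };

   # send a message in 10 seconds...
   after 10, $port, timeout => "no reply";

   # ...or run a callback instead
   after 10, sub { warn "ten seconds passed\n" };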

cal $port, @msg, $callback[, $timeout]

A simple form of RPC - sends a message to the given $port with the given contents (@msg), but adds a reply port to the message.

The reply port is created temporarily just for the purpose of receiving the reply, and will be killed when no longer needed.

A reply message sent to the port is passed to the $callback as-is.

If an optional time-out (in seconds) is given and it is not undef, then the callback will be called without any arguments after the time-out has elapsed, and the port is killed.

If no time-out is given (or it is undef), then the local port will monitor the remote port instead, so it eventually gets cleaned-up.

Currently this function returns the temporary port, but this "feature" might go away in future versions unless you can make a convincing case that this is indeed useful for something.
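A typical round-trip might look like this (the port, tag and key are invented for illustration):

   use AnyEvent::MP;

   configure;

   my $db_port = "somenode#u1.3";   # placeholder for some remote port

   # ask for a value, give up after 30 seconds
   cal $db_port, get => "some_key", sub {
      if (@_) {
         print "reply: @_\n";
      } else {
         print "timed out\n";   # empty reply - only happens with a timeout
      }
   }, 30;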


AnyEvent::MP comes with a simple distributed database. The database will be mirrored asynchronously on all global nodes. Other nodes bind to one of the global nodes for their needs. Every node has a "local database" which contains all the values that are set locally. All local databases are merged together to form the global database, which can be queried.

The database structure is that of a two-level hash - the database hash contains hashes which contain values, similarly to a perl hash of hashes, i.e.:

  $DATABASE{$family}{$subkey} = $value

The top level hash key is called "family", and the second-level hash key is called "subkey" or simply "key".

The family must be alphanumeric, i.e. start with a letter and consist of letters, digits, underscores and colons ([A-Za-z][A-Za-z0-9_:]*), pretty much like Perl module names.

As the family namespace is global, it is recommended to prefix family names with the name of the application or module using it.

The subkeys must be non-empty strings, with no further restrictions.

The values should preferably be strings, but other perl scalars should work as well (such as undef, arrays and hashes).

Every database entry is owned by one node - adding the same family/subkey combination on multiple nodes will not cause discomfort for AnyEvent::MP, but the result might be nondeterministic, i.e. the key might have different values on different nodes.

Different subkeys in the same family can be owned by different nodes without problems, and in fact, this is the common method to create worker pools. For example, a worker port for image scaling might do this:

   db_set my_image_scalers => $port;

And clients looking for an image scaler will want to get the my_image_scalers keys from time to time:

   db_keys my_image_scalers => sub {
      @ports = @{ $_[0] };
   };

Or better yet, they want to monitor the database family, so they always have a reasonable up-to-date copy:

   db_mon my_image_scalers => sub {
      @ports = keys %{ $_[0] };
   };

In general, you can set or delete single subkeys, but query and monitor whole families only.

If you feel the need to monitor or query a single subkey, try giving it its own family.

$guard = db_set $family => $subkey [=> $value]

Sets (or replaces) a key in the database - if $value is omitted, undef is used instead.

When called in non-void context, db_set returns a guard that automatically calls db_del when it is destroyed.
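For example, to keep a subkey in the database only while some resource is alive (family and value are invented for illustration):

   use AnyEvent::MP;

   configure;

   # the entry exists as long as $guard is kept around
   my $guard = db_set my_app_status => $NODE => "booting";

   # ... later: dropping the guard deletes the subkey again
   undef $guard;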

db_del $family => $subkey...

Deletes one or more subkeys from the database family.

$guard = db_reg $family => $port => $value
$guard = db_reg $family => $port
$guard = db_reg $family

Registers a port in the given family and optionally returns a guard to remove it.

This function basically does the same as:

   db_set $family => $port => $value

Except that the port is monitored and automatically removed from the database family when it is kil'ed.

If $value is missing, undef is used. If $port is missing, then $SELF is used.

This function is most useful to register a port in some port group (which is just another name for a database family), and have it removed when the port is gone. This works best when the port is a local port.
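So a worker port joining a pool (family name invented for illustration) could be written as:

   use AnyEvent::MP;

   configure;

   my $port = port { my @msg = @_ };

   # the entry vanishes when the port is kil'ed or when $guard is dropped
   my $guard = db_reg my_image_scalers => $port;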

db_family $family => $cb->(\%familyhash)

Queries the named database $family and calls the callback with the family represented as a hash. You can keep and freely modify the hash.

db_keys $family => $cb->(\@keys)

Same as db_family, except it only queries the family subkeys and passes them as array reference to the callback.

db_values $family => $cb->(\@values)

Same as db_family, except it only queries the family values and passes them as array reference to the callback.

$guard = db_mon $family => $cb->(\%familyhash, \@added, \@changed, \@deleted)

Creates a monitor on the given database family. Each time a key is set or deleted, the callback is called with a hash containing the database family and three lists of added, changed and deleted subkeys, respectively. If no keys have changed, the corresponding array reference might be undef or even missing.

If not called in void context, a guard object is returned that, when destroyed, stops the monitor.

The family hash reference and the key arrays belong to AnyEvent::MP and must not be modified or stored by the callback. When in doubt, make a copy.

As soon as possible after the monitoring starts, the callback will be called with the initial contents of the family, even if it is empty, i.e. there will always be a timely call to the callback with the current contents.

It is possible that the callback is called with a change event even though the subkey is already present and the value has not changed.

The monitoring stops when the guard object is destroyed.

Example: on every change to the family "mygroup", print out all keys.

   my $guard = db_mon mygroup => sub {
      my ($family, $a, $c, $d) = @_;
      print "mygroup members: ", (join " ", keys %$family), "\n";
   };

Example: wait until the family "My::Module::workers" is non-empty.

   my $guard; $guard = db_mon My::Module::workers => sub {
      my ($family, $a, $c, $d) = @_;
      return unless %$family;
      undef $guard;
      print "My::Module::workers now nonempty\n";
   };

Example: print all changes to the family "AnyEvent::Fantasy::Module".

   my $guard = db_mon AnyEvent::Fantasy::Module => sub {
      my ($family, $a, $c, $d) = @_;

      print "+$_=$family->{$_}\n" for @$a;
      print "*$_=$family->{$_}\n" for @$c;
      print "-$_=$family->{$_}\n" for @$d;
   };

AnyEvent::MP vs. Distributed Erlang

AnyEvent::MP got lots of its ideas from distributed Erlang (Erlang node == aemp node, Erlang process == aemp port), so many of the documents and programming techniques employed by Erlang apply to AnyEvent::MP.

Despite the similarities, there are also some important differences:

  • Node IDs are arbitrary strings in AEMP.

    Erlang relies on special naming and DNS to work everywhere in the same way. AEMP relies on each node somehow knowing its own address(es) (e.g. by configuration or DNS), and possibly the addresses of some seed nodes, but will otherwise discover other nodes (and their IDs) itself.

  • Erlang has a "remote ports are like local ports" philosophy, AEMP uses "local ports are like remote ports".

The failure modes for local ports are quite different (runtime errors only) than for remote ports - when a local port dies, you know it died; when a connection to another node dies, you know nothing about the other port.

    Erlang pretends remote ports are as reliable as local ports, even when they are not.

    AEMP encourages a "treat remote ports differently" philosophy, with local ports being the special case/exception, where transport errors cannot occur.

  • Erlang uses processes and a mailbox, AEMP does not queue.

    Erlang uses processes that selectively receive messages out of order, and therefore needs a queue. AEMP is event based, queuing messages would serve no useful purpose. For the same reason the pattern-matching abilities of AnyEvent::MP are more limited, as there is little need to be able to filter messages without dequeuing them.

    This is not a philosophical difference, but simply stems from AnyEvent::MP being event-based, while Erlang is process-based.

    You can have a look at Coro::MP for a more Erlang-like process model on top of AEMP and Coro threads.

  • Erlang sends are synchronous, AEMP sends are asynchronous.

    Sending messages in Erlang is synchronous and blocks the process until a connection has been established and the message sent (and so does not need a queue that can overflow). AEMP sends return immediately, connection establishment is handled in the background.

  • Erlang suffers from silent message loss, AEMP does not.

Erlang implements few guarantees on message delivery - messages can get lost without any of the processes realising it (i.e. you send messages a, b, and c, and the other side only receives messages a and c).

    AEMP guarantees (modulo hardware errors) correct ordering, and the guarantee that after one message is lost, all following ones sent to the same port are lost as well, until monitoring raises an error, so there are no silent "holes" in the message sequence.

    If you want your software to be very reliable, you have to cope with corrupted and even out-of-order messages in both Erlang and AEMP. AEMP simply tries to work better in common error cases, such as when a network link goes down.

  • Erlang can send messages to the wrong port, AEMP does not.

    In Erlang it is quite likely that a node that restarts reuses an Erlang process ID known to other nodes for a completely different process, causing messages destined for that process to end up in an unrelated process.

    AEMP does not reuse port IDs, so old messages or old port IDs floating around in the network will not be sent to an unrelated port.

  • Erlang uses unprotected connections, AEMP uses secure authentication and can use TLS.

    AEMP can use a proven protocol - TLS - to protect connections and securely authenticate nodes.

  • The AEMP protocol is optimised for both text-based and binary communications.

    The AEMP protocol, unlike the Erlang protocol, supports both programming language independent text-only protocols (good for debugging), and binary, language-specific serialisers (e.g. Storable). By default, unless TLS is used, the protocol is actually completely text-based.

    It has also been carefully designed to be implementable in other languages with a minimum of work while gracefully degrading functionality to make the protocol simple.

  • AEMP has more flexible monitoring options than Erlang.

    In Erlang, you can choose to receive all exit signals as messages or none - there is no in-between - so monitoring a single Erlang process is difficult to implement.

    Monitoring in AEMP is more flexible than in Erlang, as one can choose between automatic kill, exit message or callback on a per-port basis.

  • Erlang tries to hide remote/local connections, AEMP does not.

    Monitoring in Erlang is not an indicator of process death/crashes, in the same way as linking is (except linking is unreliable in Erlang).

    In AEMP, you don't "look up" registered port names or send to named ports that might or might not be persistent. Instead, you normally spawn a port on the remote node. The init function monitors you, and you monitor the remote port. Since both monitors are local to the node, they are much more reliable (no need for spawn_link).

    This also saves round-trips and avoids sending messages to the wrong port (hard to do in Erlang).
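    The spawn-and-monitor pattern above might look like this in code - a hedged sketch in which worker_init and do_work are hypothetical names, and $node is assumed to hold the remote node's ID:

    ```perl
    use AnyEvent::MP;

    # runs on the remote node, with $SELF set to the newly spawned port
    sub worker_init {
       my ($owner) = @_;
       mon $owner;   # kill this port when the owner port dies
       rcv $SELF, work => sub {
          my (undef, @args) = @_;
          snd $owner, result => do_work (@args);   # do_work is hypothetical
       };
    }

    my $me = port;
    rcv $me, result => sub { warn "result: @_\n" };

    my $worker = spawn $node, "main::worker_init", $me;
    mon $worker, sub { warn "worker died: @_\n" };
    snd $worker, work => 1, 2, 3;
    ```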


Why strings for port and node IDs, why not objects?

We considered "objects", but found that the actual number of methods that would be called on them is quite low. Since port and node IDs travel over the network frequently, serialising and deserialising them would add a lot of overhead, and every node would have to keep proxy objects around.

Strings can easily be printed, easily serialised etc. and need no special procedures to be "valid".

And as a result, a port with just a default receiver consists of a single code reference stored in a global hash - it can't become much cheaper.
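For illustration, a port ID really is just a string - this short sketch assumes a running node, and the ID shape shown in the comment is only an example:

```perl
use AnyEvent::MP;

my $port = port { warn "received: @_\n" };

warn "my port ID: $port\n";   # a plain string, e.g. "nodeid#portname"
snd $port, "hello";           # strings work directly as destinations
```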

Why favour JSON, why not a real serialising format such as Storable?

In fact, any AnyEvent::MP node will happily accept Storable as framing format, but currently there is no way to make a node use Storable by default (although all nodes will accept it).

The default framing protocol is JSON because a) JSON::XS is many times faster for small messages and b) most importantly, after years of experience we found that object serialisation causes more problems than it solves: just like function calls, objects simply do not travel easily over a network, mostly because the receiver only ever gets a copy - so you have to re-think your design anyway.

Keeping your messages simple, concentrating on data structures rather than objects, will keep your messages clean, tidy and efficient.
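In practice that advice amounts to sending named fields instead of blessed objects - a hedged sketch (the tag and fields here are made up):

```perl
# clean: plain tagged data, language- and serialiser-independent
snd $port, set_user => $id, $name, $email;

# problematic: a blessed object would arrive as a mere copy,
# and ties the message format to one language and serialiser
# snd $port, set_user => $user_object;
```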


AEMP version 2 has a few major incompatible changes compared to version 1:

AnyEvent::MP::Global no longer has group management functions.

At least not officially - the grp_* functions are still exported and might work, but they will be removed in some later release.

AnyEvent::MP now comes with a distributed database that is more powerful. Its database families map closely to port groups, but the API has changed (the functions are also now exported by AnyEvent::MP). Here is a rough porting guide:

  grp_reg $group, $port                      # old
  db_reg $group, $port                       # new

  $list = grp_get $group                     # old
  db_keys $group, sub { my $list = shift }   # new

  grp_mon $group, $cb->(\@ports, $add, $del) # old
  db_mon $group, $cb->(\%ports, $add, $change, $del) # new

grp_reg is a no-brainer (just replace by db_reg), but grp_get is no longer instant, because the local node might not have a copy of the group. You can either modify your code to allow for a callback, or use db_mon to keep an updated copy of the group:

  my $local_group_copy;
  db_mon $group => sub { $local_group_copy = $_[0] };

  # now "keys %$local_group_copy" always returns the most up-to-date
  # list of ports in the group.

grp_mon can be replaced by db_mon with minor changes - db_mon passes a hash as first argument, and an extra $chg argument that can be ignored:

  db_mon $group => sub {
     my ($ports, $add, $chg, $del) = @_;
     $ports = [keys %$ports];

     # now $ports, $add and $del are the same as
     # now $ports, $add and $del are the same as
     # were originally passed by grp_mon.
  };

Nodes no longer connect to all other nodes.

In AEMP 1.x, every node automatically loaded the AnyEvent::MP::Global module, which in turn created connections to all other nodes in the network (helped by the seed nodes).

In version 2.x, global nodes still connect to all other global nodes, but other nodes don't - now every node either is a global node itself, or attaches itself to another global node.

If a node isn't a global node itself, then it attaches itself to one of its seed nodes. If that seed node isn't a global node yet, it will automatically be upgraded to a global node.

So in many cases, nothing needs to be changed - one just has to make sure that all seed nodes are meshed together (as with AEMP 1.x), and that all other nodes specify them as seed nodes. This is most easily achieved by specifying the same set of seed nodes for all nodes in the network.

Not opening a connection to every other node is usually an advantage, except when you need the lower latency of an already established connection. To ensure a node establishes a connection to another node, you can monitor the node port (mon $node, ...), which will attempt to create the connection (and notify you when the connection fails).
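A minimal sketch of forcing a connection this way, assuming $node_id holds the ID of the node you want a low-latency link to:

```perl
use AnyEvent::MP;

# monitoring the node causes AEMP to establish a connection to it,
# and notifies us if that fails or the link later goes down
mon $node_id, sub {
   warn "connection to $node_id failed or was lost: @_\n";
};
```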

Listener-less nodes (nodes without binds) are gone.

And are not coming back, at least not in their old form. If no binds are specified for a node, AnyEvent::MP assumes a default of *:*.

There are vague plans to implement some form of routing domains, which might or might not bring back listener-less nodes, but don't count on it.

The fact that most connections are now optional somewhat mitigates this, as a node can be effectively unreachable from the outside without any problems, as long as it isn't a global node and only reaches out to other nodes (as opposed to being contacted from other nodes).

$AnyEvent::MP::Kernel::WARN has gone.

AnyEvent has acquired a logging framework (AnyEvent::Log); AEMP now uses it, and so should your programs.

Every module now documents what kinds of messages it generates, with AnyEvent::MP acting as a catch all.

On the positive side, this means that instead of setting PERL_ANYEVENT_MP_WARNLEVEL, you can get away with setting AE_VERBOSE - much less to type.


AnyEvent::MP does not normally log anything by itself, but since it is the root of the context hierarchy for AnyEvent::MP modules, it will receive all log messages by submodules.


AnyEvent::MP::Intro - a gentle introduction.

AnyEvent::MP::Kernel - more, lower-level, stuff.

AnyEvent::MP::Global - network maintenance and port groups, to find your applications.

AnyEvent::MP::DataConn - establish data connections between nodes.

AnyEvent::MP::LogCatcher - simple service to display log messages from all nodes.



 Marc Lehmann <>