Object::Remote - Call methods on objects in other processes or on other hosts


Creating a connection:

  use Object::Remote;

  my $conn = Object::Remote->connect('myserver'); # invokes ssh

Calling a subroutine:

  my $capture = IPC::System::Simple->can::on($conn, 'capture');

  warn $capture->('uptime');

Using an object:

  my $eval = Eval::WithLexicals->new::on($conn);

  $eval->eval(q{my $x = `uptime`});

  warn $eval->eval(q{$x});

Importantly: 'myserver' only requires perl 5.8+ - no non-core modules need to be installed on the far side; Object::Remote takes care of that for you!


Object::Remote allows you to create an object in another process - usually one running on another machine you can connect to via ssh, although there are other connection mechanisms available.

The idea here is that in many cases one wants to be able to run a piece of code on another machine, or perhaps many other machines - but without having to install anything on the far side.



Object::Remote

The "main" API, which provides the "connect" method to create a connection to a remote process/host, "new::on" to create an object on a connection, and "can::on" to retrieve a subref over a connection.


Object::Remote::Connection

The object representing a connection, which provides the "remote_object" and "remote_sub" methods used by "new::on" and "can::on" to return proxies for objects and subroutines on the far side.
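These lower-level connection methods can also be called directly. A rough sketch, assuming "remote_sub" takes a fully qualified subroutine name (see Object::Remote::Connection for the actual signatures):

```perl
use Object::Remote;

# Assumed sketch: calling the connection method that can::on is built on.
# remote_sub is assumed here to take a fully qualified subroutine name.
my $conn = Object::Remote->connect('myserver');

my $capture = $conn->remote_sub('IPC::System::Simple::capture');
warn $capture->('uptime');
```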


Object::Remote::Future

Code for dealing with asynchronous operations, which provides the "start::method" syntax for calling a possibly asynchronous method without blocking, plus "await_future" and "await_all" to block until an asynchronous call completes or fails.
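A sketch of the asynchronous calling convention; Some::Class and its slow_method are hypothetical stand-ins for a class of your own:

```perl
use Object::Remote;
use Object::Remote::Future;  # exports await_future and await_all

# Hypothetical example - Some::Class and slow_method are stand-ins.
my $conn = Object::Remote->connect('myserver');
my $obj  = Some::Class->new::on($conn);

# start:: returns a future immediately instead of blocking on the result
my $future = $obj->start::slow_method('some_argument');

# ... do other work while the remote call is in flight ...

my $result = await_future($future);  # block until it completes or fails
```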



  my $conn = Object::Remote->connect('-'); # fork()ed connection

  my $conn = Object::Remote->connect('myserver'); # connection over ssh

  my $conn = Object::Remote->connect('user@myserver'); # connection over ssh

  my $conn = Object::Remote->connect('root@'); # connection over sudo


  my $eval = Eval::WithLexicals->new::on($conn);

  my $eval = Eval::WithLexicals->new::on('myserver'); # implicit connect

  my $obj = Some::Class->new::on($conn, %args); # with constructor arguments


  my $hostname = Sys::Hostname->can::on($conn, 'hostname');

  my $hostname = Sys::Hostname->can::on('myserver', 'hostname');



OBJECT_REMOTE_PERL_BIN

When starting a new Perl interpreter, the contents of this environment variable will be used as the path to the executable. If the variable is not set, the path defaults to 'perl'.


OBJECT_REMOTE_LOG_LEVEL

Setting this environment variable enables logging and sends all log messages at the specified level or higher to STDERR. Valid level names are: trace, debug, verbose, info, warn, error, fatal.


OBJECT_REMOTE_LOG_FORMAT

The format of the logging output is configurable. Setting this environment variable controls the format via printf-style position variables. See Object::Remote::Logging::Logger.


OBJECT_REMOTE_LOG_FORWARDING

Forward log events from remote connections to the local Perl interpreter. Set to 1 to enable this feature, which is disabled by default. See Object::Remote::Logging.


OBJECT_REMOTE_LOG_SELECTIONS

Space-separated list of class names to display logs for, if logging output is enabled. The default value is "Object::Remote::Logging", which selects all logs generated by Object::Remote. See Object::Remote::Logging.
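For instance, assuming these variables are read when the logging subsystem initializes, they can be set from within the program before Object::Remote is loaded:

```perl
# Sketch: set logging-related environment variables before Object::Remote
# loads, assuming they are read at initialization time.
BEGIN {
  $ENV{OBJECT_REMOTE_LOG_LEVEL}      = 'debug';
  $ENV{OBJECT_REMOTE_LOG_FORWARDING} = 1;
}

use Object::Remote;
```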


Large data structures

Object::Remote communication is encapsulated with JSON, and values passed to remote objects will be serialized with it. When sending large data structures, or data structures with a lot of deep complexity (hashes in arrays in hashes in arrays), the processor time and memory requirements for serialization and deserialization can be painful or outright unworkable. While serialization is in progress the local or remote node is blocked, which under worst-case conditions can cause all remote interpreters to block as well.

To help deal with this issue, it is possible to configure resource ulimits for a Perl interpreter executed by Object::Remote. See Object::Remote::Role::Connector::PerlInterpreter for details on the perl_command attribute.
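A sketch of what that might look like, assuming connect forwards extra arguments to the connector; the sh wrapper and the ~512 MB address-space limit shown here are assumptions, not defaults:

```perl
use Object::Remote;

# Assumed sketch: wrap the remote perl in a shell that applies a ulimit
# (value in KB) before exec'ing the interpreter. See
# Object::Remote::Role::Connector::PerlInterpreter for perl_command.
my $conn = Object::Remote->connect(
  'myserver',
  perl_command => [ 'sh', '-c', 'ulimit -v 524288; exec perl -' ],
);
```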

User can starve run loop of execution opportunities

The Object::Remote run loop is responsible for performing I/O and managing timers in a cooperative multitasking way, but it can only do these tasks when the user has given control to Object::Remote. There are times when Object::Remote must wait for the user to return control to the run loop, and during these times no I/O can be performed and no timers can be executed.

As an end user of Object::Remote, if you depend on connection timeouts, the watchdog, or timely results from remote objects, be sure to hand control back to Object::Remote as soon as you can.

Run loop favors certain filehandles/connections
High levels of load can starve timers of execution opportunities

These are issues that only become a problem at large scales. The end result of the two issues is quite similar: some remote objects may block while the local run loop is either busy servicing a different connection or is not executing because control has not yet been returned to it. For the same reasons, timers may not get an opportunity to execute in a timely way.

Internally, Object::Remote uses timers managed by the run loop for control tasks. Under high load the timers can be preempted by servicing I/O on the filehandles, and execution can be severely delayed. This can lead to connection watchdogs not being updated or connection timeouts taking longer than configured.


Deadlocks can happen quite easily because of flaws in programs that use Object::Remote, or in Object::Remote itself, so the Object::Remote::WatchDog is available. When it is used, the run loop will periodically update the watchdog object on the remote Perl interpreter. If the watchdog goes longer than the configured interval without being updated, it will terminate the Perl process. The watchdog will terminate the process even if a deadlock condition has occurred.
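A sketch of enabling the watchdog at connect time; the watchdog_timeout attribute name and the way connect forwards it are assumptions here - see Object::Remote::Connection for the authoritative interface:

```perl
use Object::Remote;

# Assumed sketch: ask the remote interpreter to terminate itself if it
# goes more than 120 seconds without a watchdog update.
my $conn = Object::Remote->connect('myserver', watchdog_timeout => 120);
```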

Log forwarding at scale can starve timers of execution opportunities

Currently, log forwarding can be problematic at large scales. When there is a large volume of log events, the load produced by log forwarding can be high enough that it starves the timers, and the remote object watchdogs (if in use) don't get updated in a timely way, causing them to erroneously terminate the Perl process. If the watchdog is not in use, connection timeouts can be delayed, but will execute when load settles down enough.

Because of these load-related issues, Object::Remote disables log forwarding by default. See Object::Remote::Logging for information on log forwarding.


IRC: #web-simple on


mst - Matt S. Trout (cpan:MSTROUT)


bfwg - Colin Newell (cpan:NEWELLC)

phaylon - Robert Sedlacek (cpan:PHAYLON) <>

triddle - Tyler Riddle (cpan:TRIDDLE) <>


Parts of this code were paid for by

  Socialflow

  Shadowcat Systems


Copyright (c) 2012 the Object::Remote "AUTHOR", "CONTRIBUTORS" and "SPONSORS" as listed above.


This library is free software and may be distributed under the same terms as perl itself.