NAME

IO::AIO - Asynchronous Input/Output

SYNOPSIS

 use IO::AIO;

 aio_open "/etc/passwd", O_RDONLY, 0, sub {
    my ($fh) = @_;
    ...
 };

 aio_unlink "/tmp/file", sub { };

 aio_read $fh, 30000, 1024, $buffer, 0, sub {
    $_[0] > 0 or die "read error: $!";
 };

 # version 2+ has request and group objects
 use IO::AIO 2;

 aioreq_pri 4; # give next request a very high priority
 my $req = aio_unlink "/tmp/file", sub { };
 $req->cancel; # cancel request if still in queue

 my $grp = aio_group sub { print "all stats done\n" };
 add $grp aio_stat "..." for ...;

 # AnyEvent integration
 open my $fh, "<&=" . IO::AIO::poll_fileno or die "$!";
 my $w = AnyEvent->io (fh => $fh, poll => 'r', cb => sub { IO::AIO::poll_cb });

 # Event integration
 Event->io (fd => IO::AIO::poll_fileno,
            poll => 'r',
            cb => \&IO::AIO::poll_cb);

 # Glib/Gtk2 integration
 add_watch Glib::IO IO::AIO::poll_fileno,
           in => sub { IO::AIO::poll_cb; 1 };

 # Tk integration
 Tk::Event::IO->fileevent (IO::AIO::poll_fileno, "",
                           readable => \&IO::AIO::poll_cb);

 # Danga::Socket integration
 Danga::Socket->AddOtherFds (IO::AIO::poll_fileno =>
                             \&IO::AIO::poll_cb);

DESCRIPTION

This module implements asynchronous I/O using whatever means your operating system supports.

In this version, a number of threads are started that execute your requests and signal their completion. You don't need thread support in perl, and the threads created by this module will not be visible to perl. In the future, this module might make use of the native aio functions available on many operating systems. However, they are often not well-supported or restricted (Linux doesn't allow them on normal files currently, for example), and they would only support aio_read and aio_write, so the remaining functionality would have to be implemented using threads anyway.

Although the module will work in the presence of other (Perl-) threads, it is currently not reentrant in any way, so use appropriate locking yourself, always call poll_cb from within the same thread, or never call poll_cb (or other aio_ functions) recursively.

REQUEST ANATOMY AND LIFETIME

Every aio_* function creates a request, which is a C data structure not directly visible to Perl.

If called in non-void context, every request function returns a Perl object representing the request. In void context, nothing is returned, which saves a bit of memory.

The perl object is a fairly standard ref-to-hash object. The hash contents are not used by IO::AIO so you are free to store anything you like in it.

During their existence, aio requests travel through the following states, in order:

ready

Immediately after a request is created it is put into the ready state, waiting for a thread to execute it.

execute

A thread has accepted the request for processing and is currently executing it (e.g. blocking in read).

pending

The request has been executed and is waiting for result processing.

While request submission and execution is fully asynchronous, result processing is not and relies on the perl interpreter calling poll_cb (or another function with the same effect).

result

The request results are processed synchronously by poll_cb.

The poll_cb function will process all outstanding aio requests by calling their callbacks, freeing memory associated with them and managing any groups they are contained in.

done

Request has reached the end of its lifetime and holds no resources anymore (except possibly for the Perl object, but its connection to the actual aio request is severed and calling its methods will either do nothing or result in a runtime error).

FUNCTIONS

AIO FUNCTIONS

All the aio_* calls are more or less thin wrappers around the syscall with the same name (sans aio_). The arguments are similar or identical, and they all accept an additional (and optional) $callback argument which must be a code reference. This code reference will get called with the syscall return code (e.g. most syscalls return -1 on error, unlike perl, which usually delivers "false") as its sole argument when the given syscall has been executed asynchronously.

All functions expecting a filehandle keep a copy of the filehandle internally until the request has finished.

All requests return objects of type IO::AIO::REQ that allow further manipulation of those requests while they are in-flight.

The pathnames you pass to these routines must be absolute and encoded in byte form. The reason for the former is that at the time the request is being executed, the current working directory could have changed. Alternatively, you can make sure that you never change the current working directory.

To encode pathnames to byte form, make sure you either: a) always pass in filenames you got from outside (command line, readdir etc.), b) only use filenames that are ASCII or ISO 8859-1, c) use the Encode module and encode your pathnames to the locale (or other) encoding in effect in the user environment, d) use Glib::filename_from_unicode on unicode filenames, or e) use something else.
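
For example, a hedged sketch of option c), using the Encode module to turn a Unicode filename into bytes before handing it to an aio_* call (the UTF-8 encoding is an assumption; use whatever encoding is in effect in your environment):

   use Encode ();

   my $name = "...";   # some Unicode filename, e.g. from user input
   my $path = Encode::encode "utf-8", "/srv/data/$name";

   aio_stat $path, sub {
      $_[0] and die "stat failed: $!";
      print "size is ", -s _, "\n";
   };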

$prev_pri = aioreq_pri [$pri]

Returns the priority value that would be used for the next request and, if $pri is given, sets the priority for the next aio request.

The default priority is 0, the minimum and maximum priorities are -4 and 4, respectively. Requests with higher priority will be serviced first.

The priority will be reset to 0 after each call to one of the aio_* functions.

Example: open a file with low priority, then read something from it with higher priority so the read request is serviced before other low priority open requests (potentially spamming the cache):

   aioreq_pri -3;
   aio_open ..., sub {
      return unless $_[0];

      aioreq_pri -2;
      aio_read $_[0], ..., sub {
         ...
      };
   };

aioreq_nice $pri_adjust

Similar to aioreq_pri, but subtracts the given value from the current priority, so effects are cumulative.
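
For example, a small sketch that pushes a housekeeping request a couple of priority levels below whatever priority is currently in effect:

   aioreq_nice 2;   # two levels below the current priority
   aio_unlink "/tmp/some-old-tempfile", sub {
      warn "unlink failed: $!" if $_[0];
   };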

aio_open $pathname, $flags, $mode, $callback->($fh)

Asynchronously open or create a file and call the callback with a newly created filehandle for the file.

The pathname passed to aio_open must be absolute. See API NOTES, above, for an explanation.

The $flags argument is a bitmask. See the Fcntl module for a list. They are the same as used by sysopen.

Likewise, $mode specifies the mode of the newly created file, if it didn't exist and O_CREAT has been given, just like perl's sysopen, except that it is mandatory (i.e. use 0 if you don't create new files, and 0666 or 0777 if you do).

Example:

   aio_open "/etc/passwd", O_RDONLY, 0, sub {
      if ($_[0]) {
         print "open successful, fh is $_[0]\n";
         ...
      } else {
         die "open failed: $!\n";
      }
   };

aio_close $fh, $callback->($status)

Asynchronously close a file and call the callback with the result code. WARNING: although accepted, you should not pass in a perl filehandle here, as perl will likely close the file descriptor another time when the filehandle is destroyed. Normally, you can safely call perl's close or just let filehandles go out of scope.

This is considered a bug in the API and might change. It is therefore best to avoid this function.
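
A hedged sketch of the recommended pattern: skip aio_close entirely and let perl close the handle once nothing references it anymore (IO::AIO keeps its own copy of the filehandle while a request is in flight):

   aio_open "/etc/passwd", O_RDONLY, 0, sub {
      my $fh = shift
         or die "open failed: $!";

      my $buf = "";
      aio_read $fh, 0, 1024, $buf, 0, sub {
         $_[0] >= 0 or die "read error: $!";
         # no aio_close: when $fh goes out of scope, perl closes
         # the descriptor as usual
      };
   };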

aio_read $fh,$offset,$length, $data,$dataoffset, $callback->($retval)
aio_write $fh,$offset,$length, $data,$dataoffset, $callback->($retval)

Reads or writes $length bytes from the specified $fh and $offset into the scalar given by $data and offset $dataoffset and calls the callback with the actual number of bytes read (or -1 on error, just like the syscall).

The $data scalar MUST NOT be modified in any way while the request is outstanding. Modifying it can result in segfaults or WW3 (if the necessary/optional hardware is installed).

Example: Read 15 bytes at offset 7 into scalar $buffer, starting at offset 0 within the scalar:

   aio_read $fh, 7, 15, $buffer, 0, sub {
      $_[0] > 0 or die "read error: $!";
      print "read $_[0] bytes: <$buffer>\n";
   };
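
A corresponding aio_write sketch (hedged), writing a scalar at file offset 0 and checking for a short write:

   my $data = "hello, world\n";

   aio_write $fh, 0, length $data, $data, 0, sub {
      $_[0] == length $data
         or die "write failed or was short: $!";
   };
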
aio_move $srcpath, $dstpath, $callback->($status)

Try to move the file (directories are not supported as either source or destination) from $srcpath to $dstpath and call the callback with a status of 0 (ok) or -1 (error, see $!).

This is a composite request that tries to rename(2) the file first. If rename fails with EXDEV, it creates the destination file with mode 0200 and copies the contents of the source file into it using aio_sendfile, followed by restoring atime, mtime, access mode and uid/gid, in that order, and unlinking the $srcpath.

If an error occurs, the partial destination file will be unlinked if possible. Errors while restoring atime, mtime, access mode and uid/gid are ignored, however.
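
A minimal usage sketch with made-up paths:

   aio_move "/var/tmp/upload.tmp", "/var/www/files/upload.dat", sub {
      $_[0] and die "move failed: $!";
      print "move complete\n";
   };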

aio_sendfile $out_fh, $in_fh, $in_offset, $length, $callback->($retval)

Tries to copy $length bytes from $in_fh to $out_fh. It starts reading at byte offset $in_offset, and starts writing at the current file offset of $out_fh. Because of that, it is not safe to issue more than one aio_sendfile per $out_fh, as they will interfere with each other.

This call tries to make use of a native sendfile syscall to provide zero-copy operation. For this to work, $out_fh should refer to a socket, and $in_fh should refer to an mmap'able file.

If the native sendfile call fails or is not implemented, it will be emulated, so you can call aio_sendfile on any type of filehandle regardless of the limitations of the operating system.

Please note, however, that aio_sendfile can read more bytes from $in_fh than are written, and there is no way to find out how many bytes have been read from aio_sendfile alone, as aio_sendfile only provides the number of bytes written to $out_fh. Only if the result value equals $length can one assume that $length bytes have been read.
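
A hedged sketch, assuming $socket is a connected socket and $fh an open file; only the complete-transfer case is treated as success:

   my $size = -s $fh;

   aio_sendfile $socket, $fh, 0, $size, sub {
      $_[0] == $size
         or warn "sendfile wrote only $_[0] of $size bytes: $!";
   };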

aio_readahead $fh,$offset,$length, $callback->($retval)

aio_readahead populates the page cache with data from a file so that subsequent reads from that file will not block on disk I/O. The $offset argument specifies the starting point from which data is to be read and $length specifies the number of bytes to be read. I/O is performed in whole pages, so that offset is effectively rounded down to a page boundary and bytes are read up to the next page boundary greater than or equal to (offset+length). aio_readahead does not read beyond the end of the file. The current file offset of the file is left unchanged.

If that syscall doesn't exist (likely if your OS isn't Linux) it will be emulated by simply reading the data, which would have a similar effect.
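
A minimal sketch that asks the kernel to prefetch the first megabyte of a file:

   aio_readahead $fh, 0, 1024 * 1024, sub {
      # nothing to do here; the data is (hopefully) in the page
      # cache now, so subsequent reads should not block on disk
   };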

aio_stat $fh_or_path, $callback->($status)
aio_lstat $fh, $callback->($status)

Works like perl's stat or lstat in void context. The callback will be called after the stat and the results will be available using stat _ or -s _ etc...

The pathname passed to aio_stat must be absolute. See API NOTES, above, for an explanation.

Currently, the stats are always 64-bit-stats, i.e. instead of returning an error when stat'ing a large file, the results will be silently truncated unless perl itself is compiled with large file support.

Example: Print the length of /etc/passwd:

   aio_stat "/etc/passwd", sub {
      $_[0] and die "stat failed: $!";
      print "size is ", -s _, "\n";
   };

aio_unlink $pathname, $callback->($status)

Asynchronously unlink (delete) a file and call the callback with the result code.

aio_link $srcpath, $dstpath, $callback->($status)

Asynchronously create a new link to the existing object at $srcpath at the path $dstpath and call the callback with the result code.

aio_symlink $srcpath, $dstpath, $callback->($status)

Asynchronously create a new symbolic link to the existing object at $srcpath at the path $dstpath and call the callback with the result code.
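
For example, a hedged sketch with made-up paths:

   aio_symlink "/var/www/htdocs-2006", "/var/www/htdocs", sub {
      $_[0] and die "symlink failed: $!";
   };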

aio_rename $srcpath, $dstpath, $callback->($status)

Asynchronously rename the object at $srcpath to $dstpath, just as rename(2) and call the callback with the result code.

aio_rmdir $pathname, $callback->($status)

Asynchronously rmdir (delete) a directory and call the callback with the result code.

aio_readdir $pathname, $callback->($entries)

Unlike the POSIX call of the same name, aio_readdir reads an entire directory (i.e. opendir + readdir + closedir). The entries will not be sorted, and will NOT include the . and .. entries.

The callback is passed a single argument, which is either undef or an array-ref with the filenames.
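
A small usage sketch:

   aio_readdir "/etc", sub {
      my $entries = shift
         or die "readdir failed: $!";

      print scalar @$entries, " entries found\n";
   };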

aio_scandir $path, $maxreq, $callback->($dirs, $nondirs)

Scans a directory (similar to aio_readdir) but additionally tries to efficiently separate the entries of directory $path into two sets of names, directories you can recurse into (directories), and ones you cannot recurse into (everything else, including symlinks to directories).

aio_scandir is a composite request that consists of many sub-requests. $maxreq specifies the maximum number of outstanding aio requests that this function generates. If it is <= 0, then a suitable default will be chosen (currently 6).

On error, the callback is called without arguments, otherwise it receives two array-refs with path-relative entry names.

Example:

   aio_scandir $dir, 0, sub {
      my ($dirs, $nondirs) = @_;
      print "real directories: @$dirs\n";
      print "everything else: @$nondirs\n";
   };

Implementation notes.

The aio_readdir cannot be avoided, but stat()'ing every entry can.

After reading the directory, the modification time, size etc. of the directory before and after the readdir is checked, and if they match (and the modification time isn't the current time), the link count will be used to decide how many entries are directories (if >= 2). Otherwise, no knowledge of the number of subdirectories will be assumed.

The entries will then be sorted into likely directories (currently everything without a non-initial dot) and likely non-directories (everything else). Every entry plus an appended /. will then be stat'ed, likely directories first. If that succeeds, the entry is assumed to be a directory or a symlink to a directory (which will be checked separately). This is often faster than stat'ing the entry itself because filesystems might detect the type of the entry without reading the inode data (e.g. the ext2fs filetype feature).

If the known number of directories (link count - 2) has been reached, the rest of the entries is assumed to be non-directories.

This only works with certainty on POSIX (= UNIX) filesystems, which fortunately are the vast majority of filesystems around.

It will also likely work on non-POSIX filesystems with reduced efficiency as those tend to return 0 or 1 as link counts, which disables the directory counting heuristic.

aio_fsync $fh, $callback->($status)

Asynchronously call fsync on the given filehandle and call the callback with the fsync result code.

aio_fdatasync $fh, $callback->($status)

Asynchronously call fdatasync on the given filehandle and call the callback with the fdatasync result code.

If this call isn't available because your OS lacks it or it couldn't be detected, it will be emulated by calling fsync instead.
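
A minimal sketch, assuming $fh is a filehandle that has just been written to:

   aio_fdatasync $fh, sub {
      $_[0] and die "fdatasync failed: $!";
      print "file data flushed to disk\n";
   };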

aio_group $callback->(...)

This is a very special aio request: Instead of doing something, it is a container for other aio requests, which is useful if you want to bundle many requests into a single, composite, request with a definite callback and the ability to cancel the whole request with its subrequests.

Returns an object of class IO::AIO::GRP. See its documentation below for more info.

Example:

   my $grp = aio_group sub {
      print "all stats done\n";
   };

   add $grp
      (aio_stat ...),
      (aio_stat ...),
      ...;

aio_nop $callback->()

This is a special request - it does nothing in itself and is only used for side effects, such as when you want to add a dummy request to a group so that finishing the requests in the group depends on executing the given code.

While this request does nothing, it still goes through the execution phase and still requires a worker thread. Thus, the callback will not be executed immediately but only after other requests in the queue have entered their execution phase. This can be used to measure request latency.
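
A hedged sketch, adding a nop to a group so the group's completion also waits for the nop to pass through the request queue:

   my $grp = aio_group sub { print "done\n" };

   add $grp aio_nop sub {
      # runs only after a worker thread has picked up the nop, i.e.
      # after earlier queued requests have entered execution
      print "nop executed\n";
   };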

IO::AIO::aio_busy $fractional_seconds, $callback->() *NOT EXPORTED*

Mainly used for debugging and benchmarking, this aio request puts one of the request workers to sleep for the given time.

While it is theoretically handy to have simple I/O scheduling requests like sleep and file handle readable/writable, the overhead this creates is immense (it blocks a thread for a long time) so do not use this function except to put your application under artificial I/O pressure.

IO::AIO::REQ CLASS

All non-aggregate aio_* functions return an object of this class when called in non-void context.

cancel $req

Cancels the request, if possible. Has the effect of skipping execution when entering the execute state and skipping calling the callback when entering the result state, but will leave the request otherwise untouched. That means that requests that currently execute will not be stopped and resources held by the request will not be freed prematurely.

cb $req $callback->(...)

Replace (or simply set) the callback registered to the request.
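
A hedged sketch of setting the callback after creating the request; this works because results are only processed when poll_cb runs, so attach the callback before your event loop gets a chance to call it:

   my $req = aio_stat "/etc/passwd";   # created without a callback

   $req->cb (sub {
      $_[0] and die "stat failed: $!";
      print "size is ", -s _, "\n";
   });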

IO::AIO::GRP CLASS

This class is a subclass of IO::AIO::REQ, so all its methods apply to objects of this class, too.

A IO::AIO::GRP object is a special request that can contain multiple other aio requests.

You create one by calling the aio_group constructing function with a callback that will be called when all contained requests have entered the done state:

   my $grp = aio_group sub {
      print "all requests are done\n";
   };

You add requests by calling the add method with one or more IO::AIO::REQ objects:

   $grp->add (aio_unlink "...");

   add $grp aio_stat "...", sub {
      $_[0] or return $grp->result ("error");

      # add another request dynamically, if first succeeded
      add $grp aio_open "...", sub {
         $grp->result ("ok");
      };
   };

This makes it very easy to create composite requests (see the source of aio_move for an application) that work and feel like simple requests.

  • The IO::AIO::GRP objects will be cleaned up during calls to IO::AIO::poll_cb, just like any other request.

  • They can be canceled like any other request. Canceling will cancel not only the request itself, but also all requests it contains.

  • They can also be added to other IO::AIO::GRP objects.

  • You must not add requests to a group from within the group callback (or any later time).

Their lifetime, simplified, looks like this: when they are empty, they will finish very quickly. If they contain only requests that are in the done state, they will also finish. Otherwise they will continue to exist.

That means after creating a group you have some time to add requests. And in the callbacks of those requests, you can add further requests to the group. And only when all those requests have finished will the group itself finish.

add $grp ...
$grp->add (...)

Add one or more requests to the group. Any type of IO::AIO::REQ can be added, including other groups, as long as you do not create circular dependencies.

Returns all its arguments.

$grp->cancel_subs

Cancels all subrequests and clears any feeder, but not the group request itself. Useful when you queued a lot of events but got a result early.

$grp->result (...)

Set the result value(s) that will be passed to the group callback when all subrequests have finished and set the group's errno to the current value of errno (just like calling errno without an error number). By default, no argument will be passed and errno is zero.

$grp->errno ([$errno])

Sets the group errno value to $errno, or the current value of errno when the argument is missing.

Every aio request has an associated errno value that is restored when the callback is invoked. This method lets you change this value from its default (0).

Calling result will also set errno, so make sure you either set $! before the call to result, or call errno after it.

feed $grp $callback->($grp)

Sets a feeder/generator on this group: every group can have an attached generator that generates requests if idle. The idea behind this is that, although you could just queue as many requests as you want in a group, this might starve other requests for a potentially long time. For example, aio_scandir might generate hundreds of thousands of aio_stat requests, delaying any later requests for a long time.

To avoid this, and allow incremental generation of requests, you can instead create a group and set a feeder on it that generates those requests. The feed callback will be called whenever there are few enough (see limit, below) requests active in the group itself and is expected to queue more requests.

The feed callback can queue as many requests as it likes (i.e. add does not impose any limits).

If the feed does not queue more requests when called, it will be automatically removed from the group.

If the feed limit is 0, it will be set to 2 automatically.

Example:

   # stat all files in @files, but only ever use four aio requests concurrently:

   my $grp = aio_group sub { print "finished\n" };
   limit $grp 4;
   feed $grp sub {
      my $file = pop @files
         or return;

      add $grp aio_stat $file, sub { ... };
   };

limit $grp $num

Sets the feeder limit for the group: The feeder will be called whenever the group contains less than this many requests.

Setting the limit to 0 will pause the feeding process.

SUPPORT FUNCTIONS

$fileno = IO::AIO::poll_fileno

Return the request result pipe file descriptor. This filehandle must be polled for reading by some mechanism outside this module (e.g. Event or select, see below or the SYNOPSIS). If the pipe becomes readable you have to call poll_cb to check the results.

See poll_cb for an example.

IO::AIO::poll_cb

Process all outstanding events on the result pipe. You have to call this regularly. Returns the number of events processed. Returns immediately when no events are outstanding.

If not all requests were processed for whatever reason, the filehandle will still be ready when poll_cb returns.

Example: Install an Event watcher that automatically calls IO::AIO::poll_cb with high priority:

   Event->io (fd => IO::AIO::poll_fileno,
              poll => 'r', async => 1,
              cb => \&IO::AIO::poll_cb);

IO::AIO::poll_some $max_requests

Similar to poll_cb, but only processes up to $max_requests requests at a time.

Useful if you want to ensure some level of interactiveness when perl is not fast enough to process all requests in time.

Example: Install an Event watcher that automatically calls IO::AIO::poll_some with low priority, to ensure that other parts of the program get the CPU sometimes even under high AIO load.

   Event->io (fd => IO::AIO::poll_fileno,
              poll => 'r', nice => 1,
              cb => sub { IO::AIO::poll_some 256 });

IO::AIO::poll_wait

Wait till the result filehandle becomes ready for reading (simply does a select on the filehandle). This is useful if you want to synchronously wait for some requests to finish.

See nreqs for an example.

IO::AIO::nreqs

Returns the number of requests currently in the ready, execute or pending states (i.e. for which their callback has not been invoked yet).

Example: wait till there are no outstanding requests anymore:

   IO::AIO::poll_wait, IO::AIO::poll_cb
      while IO::AIO::nreqs;

IO::AIO::nready

Returns the number of requests currently in the ready state (not yet executed).

IO::AIO::npending

Returns the number of requests currently in the pending state (executed, but not yet processed by poll_cb).

IO::AIO::flush

Wait till all outstanding AIO requests have been handled.

Strictly equivalent to:

   IO::AIO::poll_wait, IO::AIO::poll_cb
      while IO::AIO::nreqs;

IO::AIO::poll

Waits until some requests have been handled.

Strictly equivalent to:

   IO::AIO::poll_wait, IO::AIO::poll_cb
      if IO::AIO::nreqs;

IO::AIO::min_parallel $nthreads

Set the minimum number of AIO threads to $nthreads. The current default is 8, which means eight asynchronous operations can execute concurrently at any one time (the number of outstanding requests, however, is unlimited).

IO::AIO starts threads only on demand, when an AIO request is queued and no free thread exists.

It is recommended to keep the number of threads relatively low, as some Linux kernel versions will scale negatively with the number of threads (higher parallelism => MUCH higher latency). With current Linux 2.6 versions, 4-32 threads should be fine.

Under most circumstances you don't need to call this function, as the module selects a default that is suitable for low to moderate load.

IO::AIO::max_parallel $nthreads

Sets the maximum number of AIO threads to $nthreads. If more than the specified number of threads are currently running, this function kills them. This function blocks until the limit is reached.

While $nthreads is zero, aio requests get queued but not executed until the number of threads has been increased again.

This module automatically runs max_parallel 0 at program end, to ensure that all threads are killed and that there are no outstanding requests.

Under normal circumstances you don't need to call this function.

$oldmaxreqs = IO::AIO::max_outstanding $maxreqs

This is a very bad function to use in interactive programs because it blocks, and a bad way to reduce concurrency because it is inexact: Better use an aio_group together with a feed callback.

Sets the maximum number of outstanding requests to $maxreqs. If you try to queue up more than this number of requests, the next call to poll_cb (and poll_some and other functions calling poll_cb) will block until the limit is no longer exceeded.

The default value is very large, so there is no practical limit on the number of outstanding requests.

You can still queue as many requests as you want. Therefore, max_outstanding is mainly useful in simple scripts (with low values) or as a stopgap to shield against fatal memory overflow (with large values).

FORK BEHAVIOUR

This module should do "the right thing" when the process using it forks:

Before the fork, IO::AIO enters a quiescent state where no requests can be added in other threads and no results will be processed. After the fork the parent simply leaves the quiescent state and continues request/result processing, while the child frees the request/result queue (so that the requests started before the fork will only be handled in the parent). Threads will be started on demand until the limit set in the parent process has been reached again.

In short: the parent will, after a short pause, continue as if fork had not been called, while the child will act as if IO::AIO has not been used yet.

MEMORY USAGE

Per-request usage:

Each aio request uses - depending on your architecture - around 100-200 bytes of memory. In addition, stat requests need a stat buffer (possibly a few hundred bytes), readdir requires a result buffer and so on. Perl scalars and other data passed into aio requests will also be locked and will consume memory till the request has entered the done state.

This is not awfully much, so queuing lots of requests is not usually a problem.

Per-thread usage:

In the execution phase, some aio requests require more memory for temporary buffers, and each thread requires a stack and other data structures (usually around 16k-128k, depending on the OS).

KNOWN BUGS

Known bugs will be fixed in the next release.

SEE ALSO

Coro::AIO.

AUTHOR

 Marc Lehmann <schmorp@schmorp.de>
 http://home.schmorp.de/
