AI::PredictionClient - A Perl Prediction client for Google TensorFlow Serving.


version 0.05


This is a package for creating Perl clients for TensorFlow Serving model servers. TensorFlow Serving is the system that allows TensorFlow neural network AI models to be moved from the research environment to your production environment.

Currently this package implements a client for the Predict service and a model-specific Inception client.

The Predict service is the most versatile of the TensorFlow Serving prediction services; a large portion of the model-specific clients are built on top of it.

The model-specific client 'AI::PredictionClient::InceptionClient' is implemented; Inception is the most popular of the TensorFlow Serving example models.

Additionally, a command line Inception client is included as an example of a complete client built from this package.

Using the example client

The example client is installed in your local bin directory and will allow you to send an image to an Inception model server and display the classifications of what the Inception neural network model "thought" it saw.

This client implements a command line interface to the InceptionClient module 'AI::PredictionClient::InceptionClient', and provides a working example of using this module for building your own clients.
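
As a starting point, a bare-bones client built on this module might look like the sketch below. The constructor arguments and the call_inception/inception_results methods mirror the way the example client uses the module; treat them as illustrative and consult the module's documentation for the authoritative API.

 use strict;
 use warnings;
 use AI::PredictionClient::InceptionClient;

 # Illustrative sketch: host and port values are examples.
 my $client = AI::PredictionClient::InceptionClient->new(
     host => '127.0.0.1',
     port => '9000',
 );

 $client->model_name('inception');

 # Slurp the image to be classified.
 open my $fh, '<:raw', 'grace_hopper.jpg' or die "open: $!";
 my $image = do { local $/; <$fh> };
 close $fh;

 # call_inception/inception_results are assumptions based on the
 # bundled example client; the result layout may differ.
 if ($client->call_inception($image)) {
     my $results = $client->inception_results;
     print "$_\n" for @{ $results->{classes} };
 }
 else {
     die "Prediction failed\n";
 }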

The commands for the Inception client can be displayed by running the client with no arguments.

 image_file is missing
 USAGE: [-h] [long options ...]

    --debug_camel               Test using camel image
    --debug_loopback_interface  Test loopback through dummy server
    --debug_verbose             Verbose output
    --host=String               IP address of the server [Default:
    --image_file=String         * Required: Path to image to be processed
    --model_name=String         Model to process image [Default: inception]
    --model_signature=String    API signature for model [Default:
    --port=String               Port number of the server [Default: 9000]
    -h                          show a compact help message

Some typical command line examples include:

 --image_file=anything --debug_camel --host=xx7.x11.xx3.x14 --port=9000
 --image_file=grace_hopper.jpg --host=xx7.x11.xx3.x14 --port=9000
 --image_file=anything --debug_camel --debug_loopback --port 2004 --host technologic

In the examples above, the following points are demonstrated:

If you don't have an image handy, --debug_camel will provide a sample image to send to the server. The image file argument still needs to be provided to keep the command line parser happy.

If you don't have a server to talk to, but want to see whether everything else is working, use the --debug_loopback_interface option. It provides a canned response you can test the client with. The module can use the same loopback interface for debugging your bespoke clients, as sketched below.
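
For instance, a bespoke client under development could flip the same switches programmatically. This is a sketch only; the loopback and debug_verbose attribute names are assumptions modeled on the example client's options.

 use AI::PredictionClient::InceptionClient;

 my $client = AI::PredictionClient::InceptionClient->new(
     host => '127.0.0.1',
     port => '9000',
 );

 $client->loopback(1);       # route the request to the dummy server (assumed name)
 $client->debug_verbose(1);  # dump the request and response structures

 # With the loopback enabled, no live server is needed.
 $client->call_inception('fake image data');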

The --debug_verbose option dumps the request and response data structures so you can see what is going on.

The response from a live server to the camel image looks like this:

 --image_file=zzzzz --debug_camel --port=9000

 Sending image zzzzz to server at  port:9000

 Classification Results for zzzzz
 | Class                                                     | Score         |
 | Arabian camel, dromedary, Camelus dromedarius             | 11.968746     |
 | triumphal arch                                            |  4.0692205    |
 | panpipe, pandean pipe, syrinx                             |  3.4675434    |
 | thresher, thrasher, threshing machine                     |  3.4537551    |
 | sorrel                                                    |  3.1359406    |


You can set up a server by following the instructions on the TensorFlow Serving site:

 https://www.tensorflow.org/serving

I have a prebuilt Docker container available here:

 docker pull mountaintom/tensorflow-serving-inception-docker-swarm-demo

This container has the Inception model already loaded and ready to go.

Start this container and run the following commands within it to get the server running:

 $ cd /serving
 $ bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=inception --model_base_path=inception-export &> inception_log &

A longer article on setting up a server is here:


This client is designed so that a developer can fairly easily see how the data is formed and received. The TensorFlow interface is based on Protocol Buffers and gRPC, and that implementation is built on a complex architecture of nested proto files.

In this design I flattened that architecture out; where Perl's native data handling serves best, the modules use plain old Perl data structures rather than adding another layer of accessors.

The Tensor interface is used repeatedly, so this package includes a simplified Tensor class to pack and unpack data going to and from the models.

Most clients simply send and receive rank-one tensors, that is, vectors. Higher-rank tensors are sent and received flattened, with the shape recorded in the tensor's size property so the data can be imported into and exported out of a math package. A sketch of that round trip follows.
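
For example, flattening and rebuilding a rank-two tensor in plain Perl looks like this. No package-specific API is involved; the @shape bookkeeping stands in for the Tensor class's size property.

 # A 2 x 3 matrix to be sent as a flattened tensor.
 my @matrix = ( [ 1, 2, 3 ], [ 4, 5, 6 ] );

 # Record the shape, then flatten row by row for transport.
 my @shape = ( scalar @matrix, scalar @{ $matrix[0] } );   # (2, 3)
 my @flat  = map { @$_ } @matrix;                          # 1 .. 6

 # On the receiving side, rebuild the rows from the recorded shape.
 my @rebuilt;
 push @rebuilt, [ splice @flat, 0, $shape[1] ] while @flat;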

The design takes advantage of the native JSON serialization capabilities built into the C++ Protocol Buffers library. Serialization allows a much simpler and more robust interface between the Perl environment and the C++ environment. One of the biggest advantages is for the developer who would like to quickly extend what this package does: you can see how the data structures are built and manipulate them directly in Perl. Of course, if you would like to be more forward looking, building proper roles and classes and contributing them would be great.
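
To make that concrete, here is a sketch of a Predict request expressed as a plain Perl structure and serialized to JSON. The field names follow the JSON mapping of TensorFlow Serving's public PredictRequest protobuf; the exact structure this package builds internally may differ.

 use JSON;

 my $encoded_image = 'image-bytes-go-here';   # placeholder only

 # Illustrative request shaped like PredictRequest's JSON mapping.
 my $request = {
     modelSpec => {
         name          => 'inception',
         signatureName => 'predict_images',    # example signature
     },
     inputs => {
         images => {
             dtype       => 'DT_STRING',
             tensorShape => { dim => [ { size => 1 } ] },
             stringVal   => [ $encoded_image ],
         },
     },
 };

 # The serialized string is what crosses into the C++ gRPC layer.
 my $json = JSON->new->encode($request);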


This module depends on gRPC. It uses the CPAN module Alien::Google::GRPC to find an existing gRPC installation on your system; if none is found, Alien::Google::GRPC downloads and builds a private copy.

The system dependencies needed to build this module are most often already installed. If not, install the following packages.

 $ [sudo] apt-get install build-essential make g++ curl
 $ [sudo] apt-get install git

Installing autotools is optional; if they are installed, this package will use them, otherwise it will build and install its own local copies.

 $ [sudo] apt-get install autoconf automake libtool

See the Alien::Google::GRPC documentation for potential additional build dependencies.

At this time only Linux builds are supported.

CPAN Testers Note

This module may fail CPAN Testers' tests, since the build support tools needed by this module, and especially by Alien::Google::GRPC, are normally installed on the CPAN Testers' machines but not always.

The system build tool dependencies have been reduced, so hopefully a large number of machines will build without manually installing system dependencies.


This is a complex package with a lot of moving parts. Please pardon me if this early release has a minor bug or missing dependency that went undiscovered in my testing.


Tom Stall <>


This software is copyright (c) 2017 by Tom Stall.

This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.