
NAME

AI::TensorFlow::Libtensorflow::Manual::Quickstart - Start here for an overview of the library

DESCRIPTION

This document provides a tour of libtensorflow to help you get started with using the library.

CONVENTIONS

The library uses the UpperCamelCase naming convention for method names in order to match the underlying C library (for compatibility with future API changes) and to make translating code from C easier, as this is a low-level API.

As such, constructors for objects that correspond to libtensorflow data structures are typically called New. For example, a new AI::TensorFlow::Libtensorflow::Status object can be created as follows:

  use AI::TensorFlow::Libtensorflow::Status;
  my $status = AI::TensorFlow::Libtensorflow::Status->New;

  ok defined $status, 'Created new Status';

These libtensorflow data structures use destructors where necessary.

OBJECT TYPES

AI::TensorFlow::Libtensorflow::Status

Used for error-handling. Many methods take this as the final argument which is then checked after the method call to ensure that it completed successfully.
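
For example, a small helper in the style used by the tutorials can check a TFStatus after each call. This is only a sketch of the pattern; the code after the helper shows the typical call-then-check usage.

  use AI::TensorFlow::Libtensorflow::Status;

  # Die unless the given TFStatus reports success.
  sub AssertOK {
      my ($status) = @_;
      die 'Error: ' . $status->Message
          unless $status->GetCode == AI::TensorFlow::Libtensorflow::Status::OK;
      return;
  }

  # Typical usage: pass a TFStatus as the final argument to a libtensorflow
  # call, then check it immediately afterwards, e.g.
  #
  #   $session->Run( @args, $status );
  #   AssertOK($status);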

AI::TensorFlow::Libtensorflow::Tensor, AI::TensorFlow::Libtensorflow::DataType

A TFTensor is a multi-dimensional data structure that stores the data for inputs and outputs. Each element has the same data type, which is defined by AI::TensorFlow::Libtensorflow::DataType; thus a TFTensor is considered to be a "homogeneous data structure". See Introduction to Tensors for more.
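
As a sketch of how a TFTensor can be created from PDL data (this mirrors the helper used in the image classification tutorial; FLOAT is exported by AI::TensorFlow::Libtensorflow::DataType):

  use PDL;
  use AI::TensorFlow::Libtensorflow::Tensor;
  use AI::TensorFlow::Libtensorflow::DataType qw(FLOAT);

  # Wrap the data of a PDL ndarray in a TFTensor of FLOAT elements. The
  # dimensions are reversed because PDL and TensorFlow store dimensions in
  # opposite orders, and the deallocator closure keeps the ndarray alive
  # until libtensorflow no longer needs its data.
  my $pdl = sequence( float, 3, 2 );
  my $t   = AI::TensorFlow::Libtensorflow::Tensor->New(
      FLOAT, [ reverse $pdl->dims ], $pdl->get_dataref,
      sub { undef $pdl }
  );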

AI::TensorFlow::Libtensorflow::OperationDescription, AI::TensorFlow::Libtensorflow::Operation

An operation is a function that has inputs and outputs. It has a user-defined name (such as MyAdder) and a library-defined type (such as AddN). AI::TensorFlow::Libtensorflow::OperationDescription is used to build an operation that will be added to a graph of other operations; those other operations can then set the operation's inputs and get the operation's outputs. These inputs and outputs have types and dimension specifications, so that the operations only accept and emit certain TFTensors.
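
When using a pre-trained model, you will usually look up existing operations in a graph rather than build new ones with an OperationDescription. A minimal sketch of that lookup, assuming a $graph that already contains the model's operations (the operation name below is a placeholder that depends on the model's signature):

  use AI::TensorFlow::Libtensorflow::Output;

  # Look up an existing operation by name and wrap its 0th output so that
  # it can be passed to Session->Run. The name 'serving_default_inputs' is
  # a placeholder; the actual name depends on the model.
  my $op     = $graph->OperationByName('serving_default_inputs');
  my $output = AI::TensorFlow::Libtensorflow::Output->New(
      { oper => $op, index => 0 }
  );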

AI::TensorFlow::Libtensorflow::Graph

A set of operations with inputs and outputs linked together. This computation can be serialized along with parameters as part of a SavedModel.

AI::TensorFlow::Libtensorflow::Session, AI::TensorFlow::Libtensorflow::SessionOptions

A session drives the execution of an AI::TensorFlow::Libtensorflow::Graph. Specifics of how the session executes can be set via AI::TensorFlow::Libtensorflow::SessionOptions.
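
As a rough sketch of the overall flow (here $saved_model_dir, $input, and $input_t are placeholders, AssertOK is the helper from the Status example above, and the exact arguments are best checked against the tutorials below):

  use AI::TensorFlow::Libtensorflow::Graph;
  use AI::TensorFlow::Libtensorflow::Session;
  use AI::TensorFlow::Libtensorflow::SessionOptions;
  use AI::TensorFlow::Libtensorflow::Status;

  my $status  = AI::TensorFlow::Libtensorflow::Status->New;
  my $options = AI::TensorFlow::Libtensorflow::SessionOptions->New;
  my $graph   = AI::TensorFlow::Libtensorflow::Graph->New;

  # $saved_model_dir is a placeholder for the path to an exported SavedModel.
  my $session = AI::TensorFlow::Libtensorflow::Session->LoadFromSavedModel(
      $options, undef, $saved_model_dir, [ 'serve' ], $graph, undef, $status
  );
  AssertOK($status);

  # Feed $input_t (a TFTensor) into the graph at $input (a TF_Output, built
  # as in the Operation sketch above), run the graph, and collect the
  # resulting TFTensors into @output_t.
  $session->Run(
      undef,
      [ $input ], [ $input_t ],
      [ $output ], \my @output_t,
      undef, undef,
      $status
  );
  AssertOK($status);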

TUTORIALS

The object types in "OBJECT TYPES" are used in the following tutorials:

InferenceUsingTFHubMobileNetV2Model: image classification tutorial

This tutorial demonstrates using a pre-trained SavedModel and creating a AI::TensorFlow::Libtensorflow::Session with the LoadFromSavedModel method. It also demonstrates how to prepare image data for use as an input TFTensor.

InferenceUsingTFHubEnformerGeneExprPredModel: gene expression prediction tutorial

This tutorial builds on InferenceUsingTFHubMobileNetV2Model. It shows how to convert a pre-trained SavedModel from one that does not have a usable signature to a new model that does. It also demonstrates how to prepare genomic data for use as an input TFTensor.

DOCKER IMAGES

Docker (or equivalent runtime) images for the library, along with all of the dependencies needed to run the above tutorials, are available at Quay.io under various tags and can be run as

  docker run --rm -it -p 8888:8888 quay.io/entropyorg/perl-ai-tensorflow-libtensorflow:latest-nb-omnibus

in order to connect to the Jupyter Notebook interface via the web browser.

latest: base image with only libtensorflow installed.

latest-nb-image-class: image containing the dependencies needed to run InferenceUsingTFHubMobileNetV2Model.

latest-nb-gene-expr-pred: image containing the dependencies needed to run InferenceUsingTFHubEnformerGeneExprPredModel.

latest-nb-omnibus: image containing the dependencies for both of the above notebooks.

SEE ALSO

TensorFlow home page: https://www.tensorflow.org/

AUTHOR

Zakariyya Mughal <zmughal@cpan.org>

COPYRIGHT AND LICENSE

This software is Copyright (c) 2022-2023 by Auto-Parallel Technologies, Inc.

This is free software, licensed under:

  The Apache License, Version 2.0, January 2004