NAME

Plack::App::RDF::LinkedData - A Plack application for running RDF::LinkedData

SYNOPSIS

  my $linkeddata = Plack::App::RDF::LinkedData->new();
  $linkeddata->configure($config);
  my $rdf_linkeddata = $linkeddata->to_app;

  builder {
        enable "Head";
        enable "ContentLength";
        enable "ConditionalGET";
        $rdf_linkeddata;
  };

DESCRIPTION

This module sets up a basic Plack application that uses RDF::LinkedData to serve Linked Data, while making sure it follows best practices for doing so.

MAKE IT RUN

Quick setup for a demo

One-liner

It is possible to make it run with a single command line, e.g.:

  PERLRDF_STORE="Memory;path/to/some/data.ttl" plackup -host localhost script/linked_data.psgi

This will start a server with the default config on localhost, port 5000, so the URIs you're going to serve from the file data.ttl will have to have the base URI http://localhost:5000/.
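For anything to actually be served, the Turtle file must contain triples whose subject URIs start with that base URI. A minimal, hypothetical data.ttl (the resource name and properties are made up for illustration):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://localhost:5000/alice>
    a foaf:Person ;
    foaf:name "Alice" .
```

With a file like this, http://localhost:5000/alice would be a servable resource URI.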

Using perlrdf command line tool

A slightly longer example requires App::perlrdf, but sets up a persistent SQLite-based triple store, parses a file and gets the server with the default config running:

  export PERLRDF_STORE="DBI;mymodel;DBI:SQLite:database=rdf.db"
  perlrdf make_store
  perlrdf store_load path/to/some/data.ttl
  plackup -host localhost script/linked_data.psgi

Configuration

To configure the system for production use, create a configuration file rdf_linkeddata.json that looks something like:

  {
        "base_uri"  : "http://localhost:3000/",
        "store" : {
                   "storetype"  : "Memory",
                   "sources" : [ {
                                "file" : "/path/to/your/data.ttl",
                                "syntax" : "turtle"
                               } ]

                   },
        "endpoint": {
                "html": {
                         "resource_links": true
                        }
                    },
        "cors": {
                  "origins": "*"
                },
        "void": {
                  "pagetitle": "VoID Description for my dataset"
                }
  }

In your shell set

  export RDF_LINKEDDATA_CONFIG=/to/where/you/put/rdf_linkeddata.json
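A stray comma or an unquoted key will make the config loader fail at startup, so it can be worth validating the JSON before starting the server. A quick sketch, assuming python3 is available and using an example path:

```shell
# Write a minimal config (paths here are examples) and check that it parses as JSON.
cat > /tmp/rdf_linkeddata.json <<'EOF'
{
  "base_uri": "http://localhost:3000/",
  "store": {
    "storetype": "Memory",
    "sources": [ { "file": "/path/to/your/data.ttl", "syntax": "turtle" } ]
  }
}
EOF
python3 -m json.tool < /tmp/rdf_linkeddata.json > /dev/null && echo "config OK"
```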

Then, figure out where your install method installed the linked_data.psgi script, e.g. by using locate. If it was installed in /usr/local/bin, run:

  plackup /usr/local/bin/linked_data.psgi --host localhost --port 3000

The endpoint part of the config sets up a SPARQL Endpoint. This requires the RDF::Endpoint module, which is a recommended dependency of this module. To use it, the section needs to be present with some config, but defaults will be used for anything left unset.

The last part of the config, cors, enables Cross-Origin Resource Sharing, a W3C Recommendation for relaxing security constraints so that data can be shared across domains. In most cases, this is what you want when you are serving open data, but in some cases, notably intranets, it should be turned off by removing this part.
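If you do want CORS, but not for everyone, it may be possible to restrict it instead of removing the section. A hypothetical variation (the origin is a placeholder, and whether multiple origins can be listed depends on the underlying CORS middleware):

```json
"cors": {
          "origins": "http://trusted.example.org"
        }
```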

Details of the implementation

This server is a minimal Plack script that should be sufficient for most Linked Data usages, and serve as an example for most others.

A minimal example of the required config file is provided above. There are longer examples in the distribution, which are used to run tests. In the config file, there is a store parameter, which must contain the RDF::Trine::Store config hashref. It may also have a base_uri URI and a namespace hashref which may contain prefix-to-URI mappings to be used in serializations.

Note that this is a server that can only serve URIs of hosts you control, it is not a general purpose Linked Data manipulation tool, nor is it a full implementation of the Linked Data API.

The configuration is done using Config::JFDI and all its features can be used. Importantly, you can set the RDF_LINKEDDATA_CONFIG environment variable to point to the config file you want to use. See also Catalyst::Plugin::ConfigLoader for more information on how to use this config system.

Behaviour

The following documentation is adapted from RDF::LinkedData::Apache, which preceded this script.

  • http://host.name/rdf/example

    Will return an HTTP 303 redirect based on the value of the request's Accept header. If the Accept header contains a recognized RDF media type or there is no Accept header, the redirect will be to http://host.name/rdf/example/data, otherwise to http://host.name/rdf/example/page. If the URI has a foaf:homepage or foaf:page predicate, the redirect will in the latter case instead use the first encountered object URI.

  • http://host.name/rdf/example/data

    Will return a bounded description of the http://host.name/rdf/example resource in an RDF serialization based on the Accept header. If the Accept header does not contain a recognized media type, RDF/XML will be returned.

  • http://host.name/rdf/example/page

    Will return an HTML description of the http://host.name/rdf/example resource including RDFa markup, or, if the URI has a foaf:homepage or foaf:page predicate, a 301 redirect to that object.

If the RDF resource for which data is requested is not the subject of any RDF triples in the underlying triplestore, the /page and /data redirects will not take place, and an HTTP 404 (Not Found) will be returned.
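The content negotiation described above can be illustrated with a hypothetical request/response exchange (host and resource names are examples, as elsewhere in this section):

```
GET /rdf/example HTTP/1.1
Host: host.name
Accept: text/turtle

HTTP/1.1 303 See Other
Location: http://host.name/rdf/example/data
```

The same request with Accept: text/html would instead be redirected to http://host.name/rdf/example/page.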

The HTML description of resources will be enhanced by having metadata about the predicates of RDF triples loaded into the same triplestore. Currently, only an rdfs:label predicate will be used for a title, as in this version generation of HTML is done by RDF::RDFa::Generator.

Endpoint Usage

As stated earlier, this module can set up a SPARQL Endpoint for the data using RDF::Endpoint. Often, that's what you want, but if you don't want your users to have that kind of power, or you're worried it may overload your system, you may turn it off simply by having no endpoint section in your config. To use it, you just need an endpoint section with something in it; it doesn't really matter what, as defaults will be used for everything that isn't set.

RDF::Endpoint is recommended by this module, but as it is optional, you may have to install it separately. It has many configuration options, please see its documentation for details.

You may also need to set the RDF_ENDPOINT_SHAREDIR environment variable to wherever the endpoint's shared files are installed. These are some CSS and Javascript files that enhance the user experience. They are not strictly necessary, but they sure make it pretty! RDF::Endpoint should do the right thing, though, so setting it shouldn't usually be necessary.

Finally, note that while RDF::Endpoint can serve these files for you, this module doesn't help you do that. That's mostly because this author thinks you should serve them using some other parts of the deployment stack. For example, to use Apache, put this in your Apache config in the appropriate VirtualHost section:

  Alias /js/ /path/to/share/www/js/
  Alias /favicon.ico /path/to/share/www/favicon.ico
  Alias /css/ /path/to/share/www/css/

VoID Generator Usage

Like a SPARQL Endpoint, this is something most users would want; in fact, it is an even stronger recommendation than an endpoint. To enable it, you must have RDF::Generator::Void installed, and have a void section with just anything in it in the config file, as in the configuration example above.

You can set several things in the config; the property attributes of RDF::Generator::Void can all be set there. You can also set pagetitle, which sets the title for the RDFa page that can be generated. Moreover, you can set titles in several languages for the dataset, using titles as the key, pointing to an arrayref of titles, where each title is a two-element arrayref whose first element is the title itself and whose second is the language tag for that title.
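The titles structure described above could, for example, look like this in the JSON config (the titles and language tags are placeholders):

```json
"void": {
          "pagetitle": "VoID Description for my dataset",
          "titles": [ [ "My dataset", "en" ],
                      [ "Mon jeu de données", "fr" ] ]
        }
```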

Please refer to the RDF::Generator::Void documentation for more details about what can be set, and to the rdf_linkeddata_void.json test config in the distribution for an example.

By adding an add_void config key, you can pass a file to the generator so that arbitrary RDF can be added to the VoID description. It will check the last modification time of the file and only update the VoID description if it has been modified. This is useful, since much of the VoID description cannot simply be generated. To use it, the configuration would in JSON look something like this:

        "add_void": { "file": "/data/add.ttl", "syntax": "turtle" }

where file is the full path to the RDF that should be added, and syntax tells the parser how to parse it.
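Such a file would typically carry the hand-maintained parts of the description, such as licensing, which cannot be derived from the data. A hypothetical /data/add.ttl (the dataset URI the generator uses depends on your configuration, so the subject below is only an example):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://localhost:3000/#dataset-0>
    dcterms:license <http://creativecommons.org/licenses/by/4.0/> .
```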

Normally, the VoID description is cached in RAM and the store ETag is checked on every request to see if the description must be regenerated. If you use the add_void feature, you can force regeneration on the next request by touching the file.

FEEDBACK WANTED

Please contact the author if this documentation is unclear. It is really very simple to get it running, so if it appears difficult, this documentation is most likely to blame.

METHODS

You would most likely not need to call these yourself, but rather use the linked_data.psgi script supplied with the distribution.

configure

This is the only method you would call manually, as it can be used to pass a hashref with configuration to the application.

prepare_app

Will be called by Plack to set the application up.

call

Will be called by Plack to process the request.

AUTHOR

Kjetil Kjernsmo, <kjetilk@cpan.org>

COPYRIGHT & LICENSE

Copyright 2010,2011,2012,2013,2014 Kjetil Kjernsmo

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.