NAME

Developer::Dashboard - a local home for development work

VERSION

0.94

INTRODUCTION

Developer::Dashboard gives a developer one place to organize the moving parts of day-to-day work.

Without it, local development usually ends up spread across shell history, ad-hoc scripts, browser bookmarks, half-remembered file paths, one-off health checks, and project-specific Docker commands. With it, those pieces can live behind one entrypoint: a browser home, a prompt status layer, and a CLI toolchain that all read from the same runtime.

It brings together browser pages, saved notes, helper actions, collectors, prompt indicators, path aliases, open-file shortcuts, data query tools, and Docker Compose helpers so local development can stay centered around one consistent home instead of a pile of disconnected scripts and tabs.

When the current project contains ./.developer-dashboard, that tree becomes the first runtime lookup root for dashboard-managed files. The home runtime under ~/.developer-dashboard stays as the fallback base, so project-local bookmarks, config, CLI hooks, helper users, sessions, and isolated docker service folders can override home defaults without losing shared fallback data that is not redefined locally.

Release tarballs contain installable runtime artifacts only; local Dist::Zilla release-builder configuration is kept out of the shipped archive. Frequently used built-in commands such as of, open-file, pjq, pyq, ptomq, and pjp are also installed as standalone executables so they can run directly without loading the full dashboard runtime. Before publishing a release, the built tarball should be smoke-tested with cpanm from the artifact itself so the shipped archive matches the fixed source tree. Repository metadata should also keep explicit repository links, shipped module provides, and root SECURITY.md / CONTRIBUTING.md policy files aligned for CPAN and Kwalitee consumers.

It provides a small ecosystem for:

  • saved and transient dashboard pages built from the original bookmark-file shape

  • legacy bookmark syntax compatibility using the original :--------------------------------------------------------------------------------: separator plus directives such as TITLE:, STASH:, HTML:, FORM.TT:, FORM:, and CODE1:

  • Template Toolkit rendering for HTML: and FORM.TT:, with access to stash, ENV, and SYSTEM

  • legacy CODE* execution with captured STDOUT rendered into the page and captured STDERR rendered as visible errors

  • legacy-style per-page sandpit isolation so one bookmark run can share runtime variables across CODE* blocks without leaking them into later page runs

  • old-style root editor behavior with a free-form bookmark textarea when no path is provided

  • file-backed collectors and indicators

  • prompt rendering for PS1

  • project/path discovery helpers

  • a lightweight local web interface

  • action execution with trusted and safer page boundaries

  • config-backed providers, path aliases, and compose overlays

  • update scripts and release packaging for CPAN distribution

Developer Dashboard is meant to become the developer's working home:

  • shared nav fragments from saved nav/*.tt bookmarks rendered between the top chrome and the main page body on other saved pages

  • a local dashboard page that can hold links, notes, forms, actions, and rendered output

  • a prompt layer that shows live status for the things you care about

  • a command surface for opening files, jumping to known paths, querying data, and running repeatable local tasks

  • a configurable runtime that can adapt to each codebase without losing one familiar entrypoint

Shared Nav Fragments

If nav/*.tt files exist under the saved bookmark root, every non-nav page render includes them between the top chrome and the main page body.

For the default runtime that means files such as:

  • ~/.developer-dashboard/dashboards/nav/foo.tt

  • ~/.developer-dashboard/dashboards/nav/bar.tt

And with route access such as:

  • /app/nav/foo.tt

  • /page/nav/foo.tt/edit

  • /page/nav/foo.tt/source

The bookmark editor can save those nested ids directly, for example BOOKMARK: nav/foo.tt. On a page like /app/index, the direct nav/*.tt files are loaded in sorted filename order, rendered through the normal page runtime, and inserted above the page body. Non-.tt files and subdirectories under nav/ are ignored by that shared-nav renderer.

Shared nav fragments and normal bookmark pages both render through Template Toolkit with env.current_page set to the active request path, such as /app/index. The same path is also available as env.runtime_context.current_page, alongside the rest of the request-time runtime context.
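Assuming the default home runtime, a nav fragment can be seeded straight from the shell. The file name foo.tt comes from the examples above; the markup itself is illustrative:

```shell
# Seed one shared nav fragment under the default home runtime.
# The <div> content is a made-up example, not a required shape.
mkdir -p ~/.developer-dashboard/dashboards/nav
printf '<div class="nav">[%% env.current_page %%] | <a href="/app/index">home</a></div>\n' \
  > ~/.developer-dashboard/dashboards/nav/foo.tt
```

After a save like this, non-nav saved pages would render the fragment between the top chrome and the page body, with [% env.current_page %] filled in per request.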

What You Get

  • a browser interface on port 7890 for pages, status, editing, and helper access

  • a shell entrypoint for file navigation, page operations, collectors, indicators, auth, and Docker Compose

  • saved runtime state that lets the browser, prompt, and CLI all see the same prepared information

  • a place to collect project-specific shortcuts without rebuilding your daily workflow for every repo

Web Interface And Access Model

Run the web interface with:

dashboard serve

By default it listens on 0.0.0.0:7890, so you can open it in a browser at:

http://127.0.0.1:7890/

The access model is deliberate:

  • exact numeric loopback admin access on 127.0.0.1 does not require a password

  • helper access is for everyone else, including localhost, other hosts, and other machines on the network

  • helper logins let you share the dashboard safely without turning every browser request into full local-admin access

In practice that means the developer at the machine gets friction-free local admin access, while shared or forwarded access is forced through explicit helper accounts.

Collectors, Indicators, And PS1

Collectors are background or on-demand jobs that prepare state for the rest of the dashboard. A collector can run a shell command or a Perl snippet, then store stdout, stderr, exit code, and timestamps as file-backed runtime data.

That prepared state drives indicators. Indicators are the short status records used by:

  • the shell prompt rendered by dashboard ps1

  • the top-right status strip in the web interface

  • CLI inspection commands such as dashboard indicator list

This matters because prompt and browser status should be cheap to render. Instead of re-running a Docker check, VPN probe, or project health command every time the prompt draws, a collector prepares the answer once and the rest of the system reads the cached result.
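The prepare-once, read-many idea can be sketched in plain shell. The state file path and JSON shape below are illustrative, not the dashboard's real file layout:

```shell
# Illustrative only: the real dashboard manages its own file-backed state.
state=/tmp/dd-sketch-indicator.json

# "Collector" side: run the expensive check once and cache the outcome.
if docker info >/dev/null 2>&1; then code=0; else code=1; fi
printf '{"name":"docker","exit":%d,"ts":%d}\n' "$code" "$(date +%s)" > "$state"

# "Prompt" side: every later render just reads the cached file,
# so drawing PS1 never re-runs the Docker probe itself.
cat "$state"
```

The same separation is why indicators can appear in the prompt, the web status strip, and the CLI without multiplying the cost of the underlying checks.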

Why It Works As A Developer Home

The pieces are designed to reinforce each other:

  • pages give you a browser home for links, notes, forms, and actions

  • collectors prepare state for indicators and prompt rendering

  • indicators summarize that state in both the browser and the shell

  • path aliases, open-file helpers, and data query commands shorten the jump from "I know what I need" to "I am at the file or value"

  • Docker Compose helpers keep recurring container workflows behind the same dashboard entrypoint

That combination makes the dashboard useful as a real daily base instead of just another utility script.

Not Just For Perl

Developer Dashboard is implemented in Perl, but it is not only for Perl developers.

It is useful anywhere a developer needs:

  • a local browser home

  • repeatable health checks and status indicators

  • path shortcuts and file-opening helpers

  • JSON, YAML, TOML, or properties inspection from the CLI

  • a consistent Docker Compose wrapper

The toolchain already understands Perl module names, Java class names, direct files, structured-data formats, and project-local compose flows, so it suits mixed-language teams and polyglot repositories as well as Perl-heavy work.

Project-specific behavior is added through configuration, saved pages, and user CLI extensions.

DOCUMENTATION

Main Concepts

  • Path Registry

    Developer::Dashboard::PathRegistry resolves the runtime roots that everything else depends on, such as dashboards, config, collectors, indicators, CLI hooks, logs, and cache.

  • File Registry

    Developer::Dashboard::FileRegistry resolves stable file locations on top of the path registry so the rest of the system can read and write well-known runtime files without duplicating path logic.

  • Page Model

    Developer::Dashboard::PageDocument and Developer::Dashboard::PageStore implement the saved and transient page model, including bookmark-style source documents, encoded transient pages, and persistent bookmark storage.

  • Page Resolver

    Developer::Dashboard::PageResolver resolves saved pages and provider pages so browser pages and actions can come from both built-in and config-backed sources.

  • Actions

    Developer::Dashboard::ActionRunner executes built-in actions and trusted local command actions with cwd, env, timeout, background support, and encoded action transport, letting pages act as operational dashboards instead of static documents.

  • Collectors

    Developer::Dashboard::Collector and Developer::Dashboard::CollectorRunner implement file-backed prepared-data jobs with managed loop metadata, timeout/env handling, interval and cron-style scheduling, process-title validation, duplicate prevention, and collector inspection data. This is the prepared-state layer that feeds indicators, prompt status, and operational pages.

  • Indicators and Prompt

    Developer::Dashboard::IndicatorStore and Developer::Dashboard::Prompt expose cached state to shell prompts and dashboards, including compact versus extended prompt rendering, stale-state marking, generic built-in indicator refresh, and page-header status payloads for the web UI.

  • Web Layer

    Developer::Dashboard::Web::App and Developer::Dashboard::Web::Server provide the browser interface on port 7890, including the root editor, page rendering, login/logout, helper sessions, and the exact-loopback admin trust model.

  • Open File Commands

    dashboard of and dashboard open-file resolve direct files, file:line references, Perl module names, Java class names, and recursive file-pattern matches under a resolved scope so the dashboard can shorten navigation work across different stacks.

  • Data Query Commands

    dashboard pjq, dashboard pyq, dashboard ptomq, and dashboard pjp parse JSON, YAML, TOML, and Java properties input, then optionally extract a dotted path and print a scalar or canonical JSON, giving the CLI a small data-inspection toolkit that fits naturally into shell workflows.

  • Standalone CLI Commands

    Standalone of, open-file, pjq, pyq, ptomq, and pjp provide the same behavior directly, without proxying through the main dashboard command, for lighter-weight shell usage.

  • Runtime Manager

    Developer::Dashboard::RuntimeManager manages the background web service and collector lifecycle with process-title validation, pkill-style fallback shutdown, and restart orchestration, tying the browser and prepared-state loops together as one runtime.

  • Update Manager

    Developer::Dashboard::UpdateManager runs ordered update scripts and restarts validated collector loops when needed, giving the runtime a controlled bootstrap and upgrade path.

  • Docker Compose Resolver

    Developer::Dashboard::DockerCompose resolves project-aware compose files, explicit overlay layers, services, addons, modes, env injection, and the final docker compose command so container workflows can live inside the same dashboard ecosystem instead of in separate wrapper scripts.

Environment Variables

The distribution supports these compatibility-style customization variables:

  • DEVELOPER_DASHBOARD_BOOKMARKS

    Override the saved page root.

  • DEVELOPER_DASHBOARD_CHECKERS

    Filter enabled collector/checker names.

  • DEVELOPER_DASHBOARD_CONFIGS

    Override the config root.

User CLI Extensions

Unknown top-level subcommands can be provided by executable files, looked up first under ./.developer-dashboard/cli and then under ~/.developer-dashboard/cli. For example, dashboard foobar a b will exec the first matching cli/foobar with a b as argv, while preserving stdin, stdout, and stderr.

Per-command hook files can live under ./.developer-dashboard/cli/<command> or ./.developer-dashboard/cli/<command>.d, which are checked first, then under the same paths in ~/.developer-dashboard/cli/. Executable files in the selected directory are run in sorted filename order before the real command runs, non-executable files are skipped, and each hook streams its own stdout and stderr live to the terminal while still accumulating those channels into RESULT as JSON. Built-in commands such as dashboard pjq use the same hook directory. A directory-backed custom command can provide its real executable as ~/.developer-dashboard/cli/<command>/run, and that runner receives the final RESULT environment variable. After each hook finishes, the updated RESULT JSON is written back into the environment before the next sorted hook starts, so later hook scripts can react to earlier hook output.
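A later hook reacting to earlier hook output can be sketched like this. The command name status and the RESULT payload shape used in the demonstration are illustrative assumptions, not the documented schema:

```shell
# Hypothetical hook for a made-up "status" command.
mkdir -p ~/.developer-dashboard/cli/status.d
cat > ~/.developer-dashboard/cli/status.d/10-check-earlier <<'EOF'
#!/bin/sh
# RESULT carries accumulated JSON from hooks that already ran.
case "$RESULT" in
  *seed*) echo "earlier hook printed seed" ;;
  *)      echo "no earlier seed output" ;;
esac
EOF
chmod +x ~/.developer-dashboard/cli/status.d/10-check-earlier

# Standalone run with a hand-built RESULT value (shape is illustrative):
RESULT='{"hooks":[{"name":"00-seed","stdout":"seed\n"}]}' \
  ~/.developer-dashboard/cli/status.d/10-check-earlier
```

In real use the dashboard, not the developer, supplies RESULT; the hook only needs to read the environment variable.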

Perl hook code can use Runtime::Result to decode RESULT safely and read per-hook stdout, stderr, exit codes, or the last recorded hook entry.

Open File Commands

dashboard of is the shorthand name for dashboard open-file.

These commands support:

  • direct file paths

  • file:line references

  • Perl module names such as My::Module

  • Java class names such as com.example.App

  • recursive pattern searches inside a resolved directory alias or path

If VISUAL or EDITOR is set, dashboard of and dashboard open-file will exec that editor unless --print is used.

Data Query Commands

These built-in commands parse structured text and optionally extract a dotted path:

  • dashboard pjq [path] [file] for JSON

  • dashboard pyq [path] [file] for YAML

  • dashboard ptomq [path] [file] for TOML

  • dashboard pjp [path] [file] for Java properties

If the selected value is a hash or array, the command prints canonical JSON. If the selected value is a scalar, it prints the scalar plus a trailing newline.

The file path and query path are order-independent, and $d selects the whole parsed document. For example, cat file.json | dashboard pjq '$d' and dashboard pjq file.json '$d' return the same result. The same contract applies to pyq, ptomq, and pjp.

MANUAL

Installation

Install from CPAN with:

cpanm Developer::Dashboard

Or install from a checkout with:

perl Makefile.PL
make
make test
make install

Local Development

Build the distribution:

rm -rf Developer-Dashboard-* Developer-Dashboard-*.tar.gz
dzil build

Run the CLI directly from the repository:

perl -Ilib bin/dashboard init
perl -Ilib bin/dashboard auth add-user <username> <password>
perl -Ilib bin/dashboard version
perl -Ilib bin/dashboard of --print My::Module
perl -Ilib bin/dashboard open-file --print com.example.App
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta
printf 'alpha:\n  beta: 3\n' | perl -Ilib bin/dashboard pyq alpha.beta
mkdir -p ~/.developer-dashboard/cli/update.d
printf '#!/usr/bin/env perl\nuse Runtime::Result;\nprint Runtime::Result::stdout(q{01-runtime});\nprint $ENV{RESULT} // q{}\n' > ~/.developer-dashboard/cli/update
chmod +x ~/.developer-dashboard/cli/update
printf '#!/bin/sh\necho runtime-update\n' > ~/.developer-dashboard/cli/update.d/01-runtime
chmod +x ~/.developer-dashboard/cli/update.d/01-runtime
perl -Ilib bin/dashboard update
perl -Ilib bin/dashboard serve
perl -Ilib bin/dashboard stop
perl -Ilib bin/dashboard restart

User CLI extensions can be tested from the repository too:

mkdir -p ~/.developer-dashboard/cli
printf '#!/bin/sh\ncat\n' > ~/.developer-dashboard/cli/foobar
chmod +x ~/.developer-dashboard/cli/foobar
printf 'hello\n' | perl -Ilib bin/dashboard foobar

mkdir -p ~/.developer-dashboard/cli/pjq
printf '#!/usr/bin/env perl\nprint "seed\\n";\n' > ~/.developer-dashboard/cli/pjq/00-seed.pl
chmod +x ~/.developer-dashboard/cli/pjq/00-seed.pl
printf '{"alpha":{"beta":2}}' | perl -Ilib bin/dashboard pjq alpha.beta

Each top-level dashboard command can also use an optional hook directory at ~/.developer-dashboard/cli/<command>. Executable files from that directory run in sorted filename order before the real command starts, non-executable files are skipped, and the captured stdout/stderr from the hook files are accumulated into $ENV{RESULT} as JSON for later hooks and the final command. Directory-backed custom commands can use ~/.developer-dashboard/cli/<command>/run as the actual executable. If a subcommand does not have a built-in implementation, the real command can be supplied as ~/.developer-dashboard/cli/<command> or ~/.developer-dashboard/cli/<command>/run.

If you want dashboard update, provide it as a normal user command at ~/.developer-dashboard/cli/update or ~/.developer-dashboard/cli/update/run. Its hook files can live under ~/.developer-dashboard/cli/update or ~/.developer-dashboard/cli/update.d, and the real command receives the final RESULT JSON through the environment after those hook files run. Each later hook also sees the latest rewritten RESULT from the earlier hook set, and Perl code can read that payload through Runtime::Result.

Use dashboard version to print the installed Developer Dashboard version.

The blank-container integration harness applies fake-project dashboard override environment variables only after cpanm finishes installing the tarball so the shipped test suite still runs against a clean runtime.

First Run

Initialize the runtime:

dashboard init

Inspect resolved paths:

dashboard paths
dashboard path resolve bookmarks_root
dashboard path add foobar /tmp/foobar
dashboard path del foobar

Custom path aliases are stored in the effective dashboard config root so shell helpers such as cdr foobar and which_dir foobar keep working across sessions. When a project-local ./.developer-dashboard tree exists, alias writes go there first; otherwise they go to the home runtime. When an alias points inside the current home directory, the stored config uses $HOME/... instead of a hard-coded absolute home path so a shared fallback runtime remains portable across different developer accounts. Re-adding an existing alias updates it without error, and deleting a missing alias is also safe.
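The stored shape sketched below is an assumption (the real file name and top-level key may differ); it only illustrates the $HOME rewrite described above, where an alias pointing inside the home directory is stored portably while other aliases keep absolute paths:

```json
{
  "paths": {
    "foobar": "/tmp/foobar",
    "notes": "$HOME/projects/notes"
  }
}
```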

Legacy Folder compatibility also accepts the modern root-style names through AUTOLOAD, so older code can use either Folder->dd or Folder->runtime_root, and likewise bookmarks_root and config_root. Before Folder->configure(...) runs, those runtime-backed names lazily bootstrap a default dashboard path registry from $HOME instead of dying.

Render shell bootstrap:

dashboard shell bash

Resolve or open files from the CLI:

dashboard of --print My::Module
dashboard open-file --print com.example.App
dashboard open-file --print path/to/file.txt
dashboard open-file --print bookmarks welcome

Query structured files from the CLI:

printf '{"alpha":{"beta":2}}' | dashboard pjq alpha.beta
printf 'alpha:\n  beta: 3\n' | dashboard pyq alpha.beta
printf '[alpha]\nbeta = 4\n' | dashboard ptomq alpha.beta
printf 'alpha.beta=5\n' | dashboard pjp alpha.beta
dashboard pjq file.json '$d'

Start the local app:

dashboard serve

Open the root path with no bookmark path to get the free-form bookmark editor directly.

Stop the local app and collector loops:

dashboard stop

Restart the local app and configured collector loops:

dashboard restart

Create a helper login user:

dashboard auth add-user <username> <password>

Remove a helper login user:

dashboard auth remove-user helper

Helper sessions show a Logout link in the page chrome. Logging out removes both the helper session and that helper account. Helper page views also show the helper username in the top-right chrome instead of the local system account. Exact-loopback admin requests do not show a Logout link.

Working With Pages

Create a starter page document:

dashboard page new sample "Sample Page"

Save a page:

dashboard page new sample "Sample Page" | dashboard page save sample

List saved pages:

dashboard page list

Render a saved page:

dashboard page render sample

Encode and decode transient pages:

dashboard page show sample | dashboard page encode
dashboard page show sample | dashboard page encode | dashboard page decode

Run a page action:

dashboard action run system-status paths

Bookmark documents use the original separator-line format with directive headers such as TITLE:, STASH:, HTML:, FORM.TT:, FORM:, and CODE1:.

Posting a bookmark document with BOOKMARK: some-id back through the root editor saves it to the bookmark store so /app/some-id resolves it immediately.

The browser editor highlights directive sections, HTML, CSS, JavaScript, and Perl CODE* content directly inside the editing surface rather than in a separate preview pane.

Edit and source views preserve raw Template Toolkit placeholders inside HTML: and FORM.TT: sections, so values such as [% title %] are kept in the bookmark source instead of being rewritten to rendered HTML after a browser save.

Template Toolkit rendering exposes the page title as title, so a bookmark with TITLE: Sample Dashboard can reference it directly inside HTML: or FORM.TT: with [% title %]. Transient play and view-source links are also encoded from the raw bookmark instruction text when it is available, so [% stash.foo %] stays in source views instead of being baked into the rendered scalar value after a render pass.

Legacy CODE* blocks run before Template Toolkit rendering during prepare_page, so a block such as CODE1: { a = 1 }> can feed [% stash.a %] in the page body. Returned hash and array values are also dumped into the runtime output area, so CODE1: { a = 1 }> both populates stash and shows the legacy-style dumped value below the rendered page body. The hide helper preserves already-printed STDOUT, so CODE2: hide print $a keeps the printed value while suppressing the Perl return value from affecting later merge logic.

Page TITLE: values only populate the HTML <title> element. If a bookmark should show its title in the page body, add it explicitly inside HTML:, for example with [% title %].
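Putting those directives together, a minimal bookmark source might look like the sketch below. The section ordering is an assumption assembled from the directives documented above (the directive names, separator line, and CODE1 body are taken verbatim from this manual); a seeded starter page such as welcome shows the canonical layout:

```text
BOOKMARK: sample
:--------------------------------------------------------------------------------:
TITLE: Sample Dashboard
:--------------------------------------------------------------------------------:
CODE1: { a = 1 }>
:--------------------------------------------------------------------------------:
HTML:
<h1>[% title %]</h1>
<p>Prepared value: [% stash.a %]</p>
```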

/apps redirects to /app/index, and /app/<name> can load either a saved bookmark document or a saved ajax/url bookmark file.

Working With Collectors

Initialize example collector config:

dashboard config init

Run a collector once:

dashboard collector run example.collector

List collector status:

dashboard collector list

Collector jobs support two execution fields:

  • command runs a shell command string through sh -c

  • code runs Perl code directly inside the collector runtime

Example collector definitions:

{
  "collectors": [
    {
      "name": "shell.example",
      "command": "printf 'shell collector\n'",
      "cwd": "home",
      "interval": 60
    },
    {
      "name": "perl.example",
      "code": "print qq(perl collector\n); return 0;",
      "cwd": "home",
      "interval": 60,
      "indicator": {
        "icon": "P"
      }
    }
  ]
}

Collector indicators follow the collector exit code automatically: 0 stores an ok indicator state and any non-zero exit code stores error. When indicator.name is omitted, the collector name is reused automatically. When indicator.label is omitted, it defaults to that same name. Configured collector indicators are seeded immediately, so prompt and page status strips show them before the first collector run. Before a collector has produced real output it appears as missing.

Prompt output renders an explicit status glyph in front of the collector icon, so successful checks show fragments such as ✅🔑 while failing or not-yet-run checks show fragments such as 🚨🔑. The blank-environment integration flow also keeps a regression check for mixed collector health: one intentionally broken Perl collector must stay red without stopping a second healthy collector from staying green in dashboard indicator list, dashboard ps1, and /system/status.

Docker Compose

Inspect the resolved compose stack without running Docker:

dashboard docker compose --dry-run config

Include addons or modes:

dashboard docker compose --addon mailhog --mode dev up -d
dashboard docker compose config green
dashboard docker compose config

The resolver also supports old-style isolated service folders without adding entries to dashboard JSON config. If ./.developer-dashboard/docker/green/compose.yml exists in the current project it wins; otherwise the resolver falls back to ~/.developer-dashboard/config/docker/green/compose.yml. dashboard docker compose config green or dashboard docker compose up green will pick it up automatically by inferring service names from the passthrough compose args before the real docker compose command is assembled. If no service name is passed, the resolver scans isolated service folders and preloads every non-disabled folder. If a folder contains disabled.yml it is skipped. Each isolated folder contributes development.compose.yml when present, otherwise compose.yml.
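An isolated service folder can be created like this. The folder name green comes from the examples above; the service definition itself is illustrative:

```shell
# Project-local isolated service folder; no dashboard JSON config needed.
# A development.compose.yml would take precedence over compose.yml,
# and a disabled.yml file in the folder would skip it entirely.
mkdir -p ./.developer-dashboard/docker/green
cat > ./.developer-dashboard/docker/green/compose.yml <<'EOF'
services:
  green:
    image: nginx:alpine
    volumes:
      # DDDC is exported by the dashboard as the effective docker config root.
      - ${DDDC}/green/conf:/etc/nginx/conf.d:ro
EOF
```

With that file in place, dashboard docker compose config green should resolve the folder without any extra configuration.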

During compose execution the dashboard exports DDDC as the effective config-root docker directory for the current runtime, so compose YAML can keep using ${DDDC} paths inside the YAML itself. Wrapper flags such as --service, --addon, --mode, --project, and --dry-run are consumed first, and all remaining docker compose flags such as -d and --build pass straight through to the real docker compose command. When --dry-run is omitted, the dashboard hands off with exec so the terminal sees the normal streaming output from docker compose itself instead of a dashboard JSON wrapper.

Prompt Integration

Render prompt text directly:

dashboard ps1 --jobs 2

Generate bash bootstrap:

dashboard shell bash

Browser Access Model

The browser security model follows the legacy local-first trust concept:

  • requests from exact 127.0.0.1 with a numeric Host of 127.0.0.1 are treated as local admin

  • requests from other IPs or from hostnames such as localhost are treated as helper access

  • helper access requires a login backed by local file-based user and session records

  • helper sessions are file-backed, bound to the originating remote address, and expire automatically

  • helper passwords must be at least 8 characters long

This keeps the fast path for exact loopback access while making non-canonical or remote access explicit.

The editor and rendered pages also include a shared top chrome with share and source links on the left and the original status-plus-alias indicator strip on the right, refreshed from /system/status. That top-right area also includes the local username, the current host or IP link, and the current date/time in the same spirit as the old local dashboard chrome. The displayed address is discovered from the machine interfaces, preferring a VPN-style address when one is active, and the date/time is refreshed in the browser with JavaScript. The bookmark editor also follows the old auto-submit flow, so the form submits when the textarea changes and loses focus instead of showing a manual update button.

The default web bind is 0.0.0.0:7890. Trust is still decided from the request origin and host header, not from the listen address.

Runtime Lifecycle

The runtime manager follows the legacy local-service pattern:

  • dashboard serve starts the web service in the background by default

  • dashboard serve --foreground keeps the web service attached to the terminal

  • dashboard stop stops both the web service and managed collector loops

  • dashboard restart stops both, starts configured collector loops again, then starts the web service

  • web shutdown and duplicate detection do not trust pid files alone; they validate managed processes by environment marker or process title and use a pkill-style scan fallback when needed

Environment Customization

After installing with cpanm, the runtime can be customized with these environment variables:

  • DEVELOPER_DASHBOARD_BOOKMARKS

    Overrides the saved page or bookmark directory.

  • DEVELOPER_DASHBOARD_CHECKERS

    Limits enabled collector or checker jobs to a colon-separated list of names.

  • DEVELOPER_DASHBOARD_CONFIGS

    Overrides the config directory.

Collector definitions come only from dashboard configuration JSON, so config remains the single source of truth for path aliases, providers, collectors, and Docker compose overlays.

Testing And Coverage

Run the test suite:

prove -lr t

Measure library coverage with Devel::Cover:

cpanm --local-lib-contained ./.perl5 Devel::Cover
export PERL5LIB="$PWD/.perl5/lib/perl5${PERL5LIB:+:$PERL5LIB}"
export PATH="$PWD/.perl5/bin:$PATH"
cover -delete
HARNESS_PERL_SWITCHES=-MDevel::Cover prove -lr t
cover -report text -select_re '^lib/' -coverage statement -coverage subroutine

The repository target is 100% statement and subroutine coverage for lib/.

The coverage-closure suite includes managed collector loop start/stop paths under Devel::Cover, including wrapped fork coverage in t/14-coverage-closure-extra.t, so the covered run stays green without breaking TAP from daemon-style child processes.

Updating Runtime State

Run your user-provided update command:

dashboard update

If ./.developer-dashboard/cli/update or ./.developer-dashboard/cli/update/run exists in the current project it is used first; otherwise the home runtime fallback is used. dashboard update runs that command after any sorted hook files from update/ or update.d.

dashboard init seeds three editable starter bookmarks when they are missing: welcome, api-dashboard, and db-dashboard.

Blank Environment Integration

Run the host-built tarball integration flow with:

integration/blank-env/run-host-integration.sh

This integration path builds the distribution tarball on the host with dzil build, starts a blank container with only that tarball mounted into it, installs the tarball with cpanm, and then exercises the installed dashboard command inside the clean Perl container.

Before uploading a release artifact, remove older tarballs first so only the current release artifact remains, then validate the exact tarball that will ship:

rm -f Developer-Dashboard-*.tar.gz
dzil build
tar -tzf Developer-Dashboard-0.94.tar.gz | grep run-host-integration.sh
cpanm ./Developer-Dashboard-0.94.tar.gz -v

The harness also:

  • creates a fake project with its own ./.developer-dashboard runtime tree

  • verifies the installed CLI works against that fake project through the mounted tarball install

  • seeds a user-provided fake-project ./.developer-dashboard/cli/update command plus update.d hooks inside the container so dashboard update exercises the same top-level command-hook path as every other subcommand, including later-hook reads through Runtime::Result

  • verifies collector failure isolation with one intentionally broken Perl config collector and one healthy config collector, and confirms the healthy indicator still stays green after dashboard restart

  • starts the installed web service

  • uses headless Chromium to verify the root editor, a saved fake-project bookmark page from the fake project bookmark directory, and the helper login page

  • verifies helper logout cleanup and runtime restart and stop behavior

FAQ

Is this tied to a specific company or codebase?

No. The core distribution is intended to be reusable for any project.

Where should project-specific behavior live?

In configuration, saved pages, and user CLI extensions. The core should stay generic.

Is the software spec implemented?

The current distribution implements the core runtime, page engine, action runner, provider loader, prompt and collector system, web lifecycle manager, and Docker Compose resolver described by the software spec.

What remains intentionally lightweight is breadth, not architecture:

  • provider pages and action handlers are implemented in a compact v1 form

  • legacy bookmarks are supported, with Template Toolkit rendering and one clean sandpit package per page run so CODE* blocks can share state within a bookmark render without leaking runtime globals into later requests

Does it require a web framework?

No. The current distribution includes a minimal HTTP layer implemented with core Perl-oriented modules.

Why does localhost still require login?

This is intentional. The trust rule is exact and conservative: only numeric loopback on 127.0.0.1 receives local-admin treatment.

Why is the runtime file-backed?

Because prompt rendering, dashboards, and wrappers should consume prepared state quickly instead of re-running expensive checks inline.

How are CPAN releases built?

The repository is set up to build release artifacts with Dist::Zilla, including explicit provides metadata generation, and upload them to PAUSE from GitHub Actions.

Runtime release metadata pins JSON::XS explicitly in Dist::Zilla so built tarballs declare the JSON backend dependency even if automatic prerequisite scanning misses it during PAUSE installs.

What JSON implementation does the project use?

The project uses JSON::XS for JSON encoding and decoding, including shell helper decoding paths.

What does the project use for command capture and HTTP clients?

The project uses Capture::Tiny for command-output capture via capture, with exit codes returned from the capture block rather than read separately. There is currently no outbound HTTP client in the core runtime, so LWP::UserAgent is not yet required by an active code path.

SEE ALSO

Developer::Dashboard::PathRegistry, Developer::Dashboard::PageStore, Developer::Dashboard::CollectorRunner, Developer::Dashboard::Prompt

AUTHOR

Developer Dashboard Contributors

LICENSE

This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.