NAME
Langertha::Knarr - LLM Proxy with Langfuse Tracing
VERSION
version 0.004
SYNOPSIS
# 1. Create a .env with your Langfuse credentials
# (free tier at https://cloud.langfuse.com)
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com
# 2. Start the proxy
docker run --env-file .env -p 8080:8080 raudssus/langertha-knarr
# 3. Point your client at it
ANTHROPIC_BASE_URL=http://localhost:8080 claude # Claude Code
OPENAI_BASE_URL=http://localhost:8080/v1 my-app # OpenAI SDK apps
# Every API call is now traced in Langfuse.
# The proxy forwards requests 1:1 using the client's own API key.
# Knarr doesn't need one.
DESCRIPTION
Knarr is an LLM proxy that sits between your client and the API, forwarding requests transparently while recording everything in Langfuse. The client's own API key is used — Knarr doesn't need API keys for the LLM providers, only Langfuse credentials to write the traces.
The simplest use case: debug your AI coding agent. Start Knarr, point Claude Code or any other LLM client at it, and see every prompt, response, token count and error in your Langfuse dashboard.
Named after the Norse cargo ship, Knarr carries your LLM calls safely to their destination — with full cargo documentation.
Quick Start
Create a .env file with your Langfuse credentials:
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com
Start the proxy:
docker run --env-file .env -p 8080:8080 raudssus/langertha-knarr
Use it with Claude Code:
ANTHROPIC_BASE_URL=http://localhost:8080 claude
Use it with any OpenAI SDK application:
OPENAI_BASE_URL=http://localhost:8080/v1 python my_app.py
Every API call now shows up in your Langfuse dashboard with full input, output, token usage, latency, and error tracking. The proxy doesn't touch the API key — it just passes it through to the upstream API.
Additional Docker Examples
With provider API keys for engine routing (not just passthrough):
docker run --env-file .env \
  -e OPENAI_API_KEY=sk-... \
  -p 8080:8080 \
  raudssus/langertha-knarr
With a mounted config file:
docker run --env-file .env \
  -v ./knarr.yaml:/app/knarr.yaml \
  -p 8080:8080 \
  raudssus/langertha-knarr \
  container
Local usage (without Docker):
knarr init > knarr.yaml
knarr start
Programmatic usage:
use Langertha::Knarr;
my $app = Langertha::Knarr->build_app(config_file => 'knarr.yaml');
Request Flow
┌─────────────────────────────────┐
Client │ Knarr Proxy │ Backend
────── │ ──────────── │ ───────
OpenAI format ───► │ /v1/chat/completions │
Anthropic format───► │ /v1/messages ──Router──►│ ──► Langertha Engine ──► API
Ollama format ───► │ /api/chat │
│ │ │
│ ▼ │
│ Langfuse Tracing │
└─────────────────────────────────┘
Every request is traced: the model name, engine used, full message input, output text, token usage, and any errors are sent to Langfuse automatically.
API Formats and Routes
Knarr listens on port 8080 for OpenAI and Anthropic requests, and port 11434 for Ollama requests (matching the Ollama default).
OpenAI format (port 8080):
POST /v1/chat/completions — Chat completions
POST /v1/embeddings — Embeddings
GET /v1/models — List available models
Anthropic format (port 8080):
POST /v1/messages — Messages API
Ollama format (port 11434):
POST /api/chat — Chat
GET /api/tags — List models
GET /api/ps — Running models (always returns empty)
Health check (any port):
GET /health — Returns {"status":"ok","proxy":"knarr"}
Passthrough Mode
When passthrough: true is set (the default in container mode), requests are forwarded transparently to the upstream API using the client's own API key. Any OpenAI or Anthropic client pointed at Knarr will just work, while Knarr adds Langfuse tracing on top.
Passthrough defaults:
openai passthrough → https://api.openai.com
anthropic passthrough → https://api.anthropic.com
Ollama requests are never passed through (no upstream Ollama passthrough URL).
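A sketch of what passthrough header handling could look like, in Python for illustration: the client's own credentials (Authorization or x-api-key) are forwarded untouched, hop-by-hop headers are dropped, and Host is rewritten for the upstream. The function name and exact header set are assumptions, not Knarr's internals:

```python
# Hypothetical sketch of passthrough-style header forwarding (illustrative,
# not Knarr's actual implementation): keep the client's own credentials,
# drop hop-by-hop headers, and point Host at the upstream API.
from urllib.parse import urlparse

HOP_BY_HOP = {"connection", "keep-alive", "transfer-encoding",
              "upgrade", "proxy-authorization", "te", "trailer"}

def passthrough_headers(client_headers: dict, upstream_url: str) -> dict:
    out = {k: v for k, v in client_headers.items()
           if k.lower() not in HOP_BY_HOP}
    out["Host"] = urlparse(upstream_url).netloc  # rewrite for the upstream
    return out

headers = passthrough_headers(
    {"Authorization": "Bearer sk-client-key", "Connection": "keep-alive"},
    "https://api.openai.com",
)
# The client's key survives the hop; hop-by-hop headers do not.
```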
Engine Routing
When a model is explicitly configured in the config file (or discovered via auto_discover), Knarr routes requests through the corresponding Langertha engine. This allows routing to alternative backends, local models, or services that do not natively speak the protocol the client is using.
Example: an Ollama client can request gpt-4o, and Knarr will route it through the OpenAI Langertha engine, returning an Ollama-formatted response.
Routing Priority
For each incoming request, Knarr resolves the target in this order:
1. Explicit model config or auto-discovered model → route via Langertha engine
2. Passthrough enabled for this format → forward to upstream API
3. Default engine configured → route via default Langertha engine
4. None of the above → 404 error
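The priority chain above can be sketched as a simple fall-through, shown here in Python with hypothetical names (this is an illustration of the documented order, not Knarr's router code):

```python
# Sketch of Knarr's documented routing priority (names are illustrative).
def resolve_target(model, configured_models, passthrough_url, default_engine):
    if model in configured_models:          # 1. explicit or auto-discovered
        return ("engine", configured_models[model])
    if passthrough_url:                     # 2. passthrough for this format
        return ("passthrough", passthrough_url)
    if default_engine:                      # 3. default engine
        return ("engine", default_engine)
    return ("error", 404)                   # 4. nothing matched

print(resolve_target("gpt-4o", {"gpt-4o": "OpenAI"},
                     "https://api.openai.com", None))
# → ('engine', 'OpenAI')
```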
Streaming
All three formats support streaming:
OpenAI — SSE (Server-Sent Events), ends with data: [DONE]
Anthropic — SSE, ends with event: message_stop
Ollama — NDJSON (newline-delimited JSON), ends with {"done":true}
For passthrough requests, the stream is piped byte-for-byte from the upstream API to the client with no buffering.
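A minimal sketch of detecting each format's end-of-stream marker, using the terminators listed above (the parsing here is deliberately simplified; a real client would use a proper SSE/NDJSON parser):

```python
# Sketch: detect the documented end-of-stream marker per format.
import json

def stream_done(fmt: str, chunk: str) -> bool:
    if fmt == "openai":                       # SSE, terminated by [DONE]
        return chunk.strip() == "data: [DONE]"
    if fmt == "anthropic":                    # SSE, terminated by message_stop
        return chunk.startswith("event: message_stop")
    if fmt == "ollama":                       # NDJSON, terminated by done:true
        try:
            return json.loads(chunk).get("done") is True
        except json.JSONDecodeError:
            return False
    return False
```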
Configuration File
The config file is YAML. All string values support ${ENV_VAR} interpolation.
listen:
  - "127.0.0.1:8080"
  - "127.0.0.1:11434"

models:
  gpt-4o:
    engine: OpenAI
    model: gpt-4o
    api_key_env: OPENAI_API_KEY
  local:
    engine: OllamaOpenAI
    url: http://localhost:11434/v1
    model: llama3.2

default:
  engine: OpenAI

auto_discover: true

passthrough:
  openai: https://api.openai.com
  anthropic: https://api.anthropic.com

proxy_api_key: ${KNARR_API_KEY}

langfuse:
  url: https://cloud.langfuse.com
  public_key: ${LANGFUSE_PUBLIC_KEY}
  secret_key: ${LANGFUSE_SECRET_KEY}
  trace_name: my-app
Model config keys:
engine (required) — Langertha engine name (e.g. OpenAI, Anthropic, OllamaOpenAI)
model — Model name to pass to the engine
api_key_env — Environment variable name holding the API key
api_key — Literal API key (prefer api_key_env)
url — Custom base URL (for self-hosted or OpenAI-compatible endpoints)
system_prompt — Default system prompt for all requests to this model
temperature — Default temperature
response_size — Default max response tokens
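The ${ENV_VAR} interpolation mentioned above can be sketched as a one-line substitution; the regex and empty-string fallback here are assumptions for illustration, not Knarr's exact semantics:

```python
# Sketch of ${ENV_VAR} interpolation for config string values
# (fallback-to-empty behaviour is an assumption, not Knarr's).
import os
import re

def interpolate(value: str) -> str:
    return re.sub(r"\$\{([A-Za-z0-9_]+)\}",
                  lambda m: os.environ.get(m.group(1), ""), value)

os.environ["KNARR_API_KEY"] = "secret"
print(interpolate("proxy_api_key: ${KNARR_API_KEY}"))
# → proxy_api_key: secret
```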
Langfuse Tracing
When LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY are set (or configured in the config file), Knarr automatically traces every request:
Trace created with model name, engine, format, and input messages
Generation recorded with start time, end time, output, and token usage
Errors recorded with level ERROR and the error message
Tags:
knarr — added to every trace
Traces are sent synchronously after each request. Configure the trace name with KNARR_TRACE_NAME or langfuse.trace_name in the config file.
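For reference, the Langfuse credential pair maps to HTTP Basic auth (public key as username, secret key as password) when talking to the Langfuse API; the helper below is an illustrative sketch, not code from Langertha::Knarr::Tracing:

```python
# Sketch: build the HTTP Basic auth header Langfuse expects from the
# public/secret key pair (illustrative helper, not Knarr internals).
import base64

def langfuse_auth_header(public_key: str, secret_key: str) -> str:
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return f"Basic {token}"
```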
Programmatic Usage
use Langertha::Knarr;
use Langertha::Knarr::Config;
# From a config file
my $app = Langertha::Knarr->build_app(config_file => 'knarr.yaml');
# From a pre-built config object
my $config = Langertha::Knarr::Config->new(file => 'knarr.yaml');
my $app = Langertha::Knarr->build_app(config => $config);
# Use with any Mojolicious server
use Mojo::Server::Daemon;
my $daemon = Mojo::Server::Daemon->new(
  app    => $app,
  listen => ['http://127.0.0.1:8080'],
);
$daemon->run;
Environment Variables
OPENAI_API_KEY — OpenAI API key (auto-detected by knarr container)
ANTHROPIC_API_KEY — Anthropic API key (auto-detected)
GROQ_API_KEY — Groq API key (auto-detected)
MISTRAL_API_KEY — Mistral API key (auto-detected)
DEEPSEEK_API_KEY — DeepSeek API key (auto-detected)
GEMINI_API_KEY — Google Gemini API key (auto-detected)
OPENROUTER_API_KEY — OpenRouter API key (auto-detected)
LANGFUSE_PUBLIC_KEY — Langfuse public key (enables tracing)
LANGFUSE_SECRET_KEY — Langfuse secret key (enables tracing)
LANGFUSE_URL — Langfuse server URL (default: https://cloud.langfuse.com)
KNARR_TRACE_NAME — Name for Langfuse traces (default: knarr-proxy)
KNARR_API_KEY — Require this key in Authorization or x-api-key headers
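The auto-detection of provider keys can be sketched as a simple environment scan; the variable names are the ones listed above, but the detection function itself is illustrative, not Knarr's:

```python
# Sketch: which providers would be auto-detected from the environment,
# using the variable names documented above (logic is illustrative).
import os

PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
    "Groq": "GROQ_API_KEY",
    "Mistral": "MISTRAL_API_KEY",
    "DeepSeek": "DEEPSEEK_API_KEY",
    "Gemini": "GEMINI_API_KEY",
    "OpenRouter": "OPENROUTER_API_KEY",
}

def detected_providers(env) -> list:
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

print(detected_providers({"OPENAI_API_KEY": "sk-..."}))
# → ['OpenAI']
```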
For CLI documentation, see knarr.
SEE ALSO
knarr — Command-line interface
Langertha::Knarr::Config — Configuration loading and validation
Langertha::Knarr::Router — Model-to-engine routing
Langertha::Knarr::Tracing — Langfuse tracing
Langertha::Knarr::Proxy::OpenAI — OpenAI format handler
Langertha::Knarr::Proxy::Anthropic — Anthropic format handler
Langertha::Knarr::Proxy::Ollama — Ollama format handler
Langertha::Knarr::CLI — CLI entry point
build_app
my $app = Langertha::Knarr->build_app(%opts);
Build and return a Mojolicious application with all proxy routes wired up.
Options:
config — A pre-built Langertha::Knarr::Config object
config_file — Path to a YAML config file (used if config not given)
Returns a Mojolicious application ready to be passed to Mojo::Server::Daemon or any other Mojolicious-compatible server.
SUPPORT
Issues
Please report bugs and feature requests on GitHub at https://github.com/Getty/langertha-knarr/issues.
CONTRIBUTING
Contributions are welcome! Please fork the repository and submit a pull request.
AUTHOR
Torsten Raudssus <torsten@raudssus.de> https://raudssus.de/
COPYRIGHT AND LICENSE
This software is copyright (c) 2026 by Torsten Raudssus.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.