Keep agents, shells, and previews visible.
Run multiple AI CLIs, regular shells, prompt pads, and localhost web panes in one native split workspace.
$ pnpm test:ui
Error: CheckoutPanel overflow on mobile
badge created - route context?
send-to-ai.ready
watching vitest
12 passed, 1 failed
mcp grant: read_pane
cost: $0.42
$ git status --short
clean
$ pnpm build
ready for local verification
Local-first AI agent workstation
Tokenburner holds AI CLIs, regular shells, web previews, error context, screenshot routing, MCP grants, and cost visibility in one local desktop app.
BYO-AI. Use Claude Code, Codex CLI, Aider, Gemini CLI, OpenCode, or a custom command. Tokenburner does not bundle model usage or upload your project data.
The loop
The multi-agent coding space is crowded. Tokenburner is deliberately narrower: it keeps the local surfaces that agents need in one place, then makes context routing fast, visible, and permissioned.
Split one native workspace into multiple AI CLIs, regular shells, prompt pads, and localhost web panes.
PTY output is scanned locally for error patterns so broken test runs and stack traces surface without tab hunting.
Route captured window context and terminal snippets to a chosen AI pane. Pane and region capture are planned next.
A loopback MCP server lets agents list panes, read permitted context, send messages, spawn panes, and capture panes with an audit trail.
Per-pane token and cost visibility keeps multi-agent work from turning into a mystery bill.
Truth check: v0.4 ships window screenshot capture today. Pane and region capture are planned because they are the modes that make screenshot routing feel complete.
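The error-detection idea above can be sketched in a few lines. This is a hypothetical illustration, not Tokenburner's implementation: the pattern list and the PaneBadge shape are assumptions, chosen to match the kinds of output shown in the mockup panes (test summaries, thrown errors, stack frames).

```typescript
// Sketch: scan a chunk of PTY output for common failure patterns so a
// pane can raise an error badge without the user hunting through tabs.
// Patterns and types are illustrative assumptions.

type PaneBadge = { paneId: string; label: string; line: string };

const ERROR_PATTERNS: RegExp[] = [
  /\b\d+ failed\b/,                      // test summaries like "12 passed, 1 failed"
  /^(Error|TypeError|ReferenceError):/,  // thrown JS/TS errors
  /^\s+at .+:\d+:\d+$/,                  // stack trace frames
  /\bpanicked at\b/,                     // Rust panics
];

function scanChunk(paneId: string, chunk: string): PaneBadge[] {
  const badges: PaneBadge[] = [];
  for (const line of chunk.split("\n")) {
    if (ERROR_PATTERNS.some((p) => p.test(line))) {
      badges.push({ paneId, label: "error", line: line.trim() });
    }
  }
  return badges;
}
```

A real scanner would run incrementally over the PTY stream and debounce repeated hits, but the core is just local pattern matching: nothing leaves the machine to decide that a pane is failing.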
Why it exists
Tokenburner is the local cockpit that sits between your agents and the rest of your tooling. It does not try to be your editor, your cloud runtime, or your model provider.
A recursive split workspace for AI agents, shells, prompt pads, and web previews.
Error badges and routing shortcuts turn failing output into a directed debugging prompt.
Agents can coordinate through Tokenburner tools only after explicit local grants.
Use the CLIs and accounts you already pay for. Tokenburner does not bundle model usage.
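For a sense of how agents coordinate through Tokenburner tools, here is a sketch of the JSON-RPC 2.0 envelope an agent would send to a loopback MCP server. MCP frames tool invocations as "tools/call" requests; the tool name "list_panes" is an assumption drawn from the feature list above, not a documented Tokenburner tool.

```typescript
// Sketch: build an MCP "tools/call" request. The envelope follows
// JSON-RPC 2.0 as used by the Model Context Protocol; the specific
// tool name and arguments are hypothetical.

type McpToolCall = {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
};

let nextId = 1;

function makeToolCall(
  name: string,
  args: Record<string, unknown>
): McpToolCall {
  return {
    jsonrpc: "2.0",
    id: nextId++,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// e.g. an agent asks which panes exist before routing context:
const req = makeToolCall("list_panes", {});
```

Because the server binds to localhost, a request like this never crosses the network boundary, and the grant check happens before the tool body runs.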
Privacy posture
Tokenburner is built around the assumption that terminal output, screenshots, costs, file paths, and project metadata are sensitive.
Terminal output is used inside the app for rendering, error detection, and optional local diagnostics.
Screenshot bytes remain on the machine unless the user deliberately shares or routes them.
Token and dollar estimates are local app data, not remote analytics.
The MCP server binds to localhost and every cross-pane action requires a grant.
The desktop app has no analytics package or hidden phone-home path.
Sentry is optional and configured to scrub paths, hostnames, and IP addresses.
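The grant-plus-audit posture can be illustrated with a minimal sketch. The store shape and the "agent:action" key format are assumptions; the grant name "read_pane" echoes the mockup pane above.

```typescript
// Sketch: per-agent grants with an audit trail. Every cross-pane
// action is recorded whether it was allowed or denied. Types and
// key format are illustrative assumptions.

type AuditEntry = {
  agent: string;
  action: string;
  allowed: boolean;
  at: number;
};

class GrantStore {
  private grants = new Set<string>(); // "agent:action" keys
  readonly audit: AuditEntry[] = [];

  grant(agent: string, action: string): void {
    this.grants.add(`${agent}:${action}`);
  }

  check(agent: string, action: string): boolean {
    const allowed = this.grants.has(`${agent}:${action}`);
    // denied attempts are logged too, so the audit trail shows probing
    this.audit.push({ agent, action, allowed, at: Date.now() });
    return allowed;
  }
}
```

Logging denials alongside approvals is the part that makes the trail useful: it shows not just what an agent did, but what it tried to do.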
Current status
v0.4 is the first installable release line: multi-pane workspace, auto error detection, web panes, cost tracking, loopback MCP tools, per-agent grants, and an audit log. The next work is about completing the local loop cleanly.