
Local-first AI agent workstation

Run your coding agents in one native workspace.

Tokenburner holds AI CLIs, regular shells, web previews, error context, screenshot routing, MCP grants, and cost visibility in one local desktop app.

BYO-AI. Use Claude Code, Codex CLI, Aider, Gemini CLI, OpenCode, or a custom command. Tokenburner does not bundle model usage or upload your project data.

The loop

The product is not another agent. It is the feedback loop.

Multi-agent coding is crowded now. Tokenburner is narrower: it keeps the local surfaces that agents need in one place, then makes context routing fast, visible, and permissioned.

Watch

Keep agents, shells, and previews visible.

Run multiple AI CLIs, regular shells, prompt pads, and localhost web panes in one native split workspace.

Catch

Detect failures where they happen.

PTY output is scanned locally for error patterns so broken test runs and stack traces surface without tab hunting.
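The scan described above can be sketched as local regex checks over each PTY chunk. The patterns below are illustrative examples, not Tokenburner's actual detection rules:

```typescript
// Hypothetical error patterns; real detection rules would be broader.
const ERROR_PATTERNS: RegExp[] = [
  /\b\d+ (failed|failing)\b/,           // test runner summaries
  /\bError: /,                          // generic runtime errors
  /Traceback \(most recent call last\)/ // Python stack traces
];

// Return the lines in a PTY output chunk that match a known error shape,
// so the pane can surface a badge without any network round trip.
function detectErrors(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter(line => ERROR_PATTERNS.some(p => p.test(line)));
}
```

Because the scan runs over text the terminal already holds, detection adds no upload path: the output never has to leave the machine to become a badge.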

Route

Send the right context to the right agent.

Route captured window context and terminal snippets to a chosen AI pane. Pane and region capture are planned next.

Govern

Let agents coordinate with explicit grants.

A loopback MCP server lets agents list panes, read permitted context, send messages, spawn panes, and capture panes with an audit trail.
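As a rough sketch of that grant model — the tool names, grant shape, and audit format here are assumptions for illustration, not Tokenburner's real API:

```typescript
// Hypothetical grant check: a cross-pane MCP tool call succeeds only if
// the calling agent holds an explicit grant, and every decision is logged.
type Grant = { agent: string; tool: "read_pane" | "send_message" | "capture_pane" };

class GrantTable {
  private grants = new Set<string>();
  private audit: string[] = [];

  // Record an explicit local grant for one agent and one tool.
  allow(g: Grant): void {
    this.grants.add(`${g.agent}:${g.tool}`);
  }

  // Decide an incoming tool call and append the decision to the audit trail.
  check(g: Grant): boolean {
    const ok = this.grants.has(`${g.agent}:${g.tool}`);
    this.audit.push(`${ok ? "ALLOW" : "DENY"} ${g.agent} ${g.tool}`);
    return ok;
  }

  log(): string[] {
    return [...this.audit];
  }
}
```

The key property is that denial is the default: an agent that was never granted a tool gets a refusal and an audit entry, not silent access.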

Budget

See what each agent is costing.

Per-pane token and cost visibility keeps multi-agent work from turning into a mystery bill.
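A minimal sketch of what per-pane cost visibility involves: attribute token counts to the pane that produced them, then price them locally. The flat rate below is a made-up number, not real provider pricing:

```typescript
const RATE_PER_1K_TOKENS = 0.01; // hypothetical flat rate in USD, for illustration only

// Local ledger mapping pane IDs to accumulated token counts.
class CostLedger {
  private tokens = new Map<string, number>();

  // Attribute a batch of tokens to the pane that consumed them.
  record(paneId: string, tokenCount: number): void {
    this.tokens.set(paneId, (this.tokens.get(paneId) ?? 0) + tokenCount);
  }

  // Estimated spend for one pane, computed entirely from local data.
  costOf(paneId: string): number {
    return ((this.tokens.get(paneId) ?? 0) / 1000) * RATE_PER_1K_TOKENS;
  }
}
```

Because the ledger is plain local app data, showing a per-pane dollar figure requires no remote analytics at all.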

Truth check: v0.4 ships window screenshot capture today. Pane and region capture are planned because they are the modes that make screenshot routing feel complete.

Why it exists

Editors, terminals, cloud agents, and CLIs all solve different pieces.

Tokenburner is the local cockpit between them. It does not try to be your editor, your cloud runtime, or your model provider.

Multi-agent cockpit

A recursive split workspace for AI agents, shells, prompt pads, and web previews.

Local error feedback

Error badges and routing shortcuts turn failing output into a directed debugging prompt.

Loopback MCP control

Agents can coordinate through Tokenburner tools only after explicit local grants.

BYO-AI

Use the CLIs and accounts you already pay for. Tokenburner does not bundle model usage.

Privacy posture

Local by default is the product boundary, not a buried setting.

Tokenburner is built around the assumption that terminal output, screenshots, costs, file paths, and project metadata are sensitive.

PTY content stays local

Terminal output is used only inside the app, for rendering, error detection, and optional local diagnostics.

Screenshots stay local

Screenshot bytes remain on the machine unless the user deliberately shares or routes them.

Cost data stays local

Token and dollar estimates are local app data, not remote analytics.

MCP is loopback only

The MCP server binds to localhost and every cross-pane action requires a grant.

No telemetry SDK

The desktop app has no analytics package or hidden phone-home path.

Opt-in crash reports only

Sentry is optional and configured to scrub paths, hostnames, and IP addresses.
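Sentry's standard beforeSend hook supports this kind of scrubbing. The sketch below shows the idea with illustrative field names and a simplified path pattern; a real configuration would scrub more fields:

```typescript
// Hypothetical scrub function of the kind passed to Sentry's beforeSend:
// strip hostnames and redact absolute file paths before any event leaves
// the machine. Field names and the path regex are simplified examples.
interface CrashEvent {
  message?: string;
  server_name?: string;
}

function scrubEvent(event: CrashEvent): CrashEvent {
  delete event.server_name; // drop the hostname entirely
  if (event.message) {
    // Replace absolute paths so project structure is not leaked.
    event.message = event.message.replace(/\/[\w./-]+/g, "[path]");
  }
  return event;
}
```

Because the hook runs before transmission, an opted-in crash report carries the stack shape but not the machine's identity or file layout.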

Current status

Useful now, still honest about the gaps.

v0.4 is the first installable release line: multi-pane workspace, auto error detection, web panes, cost tracking, loopback MCP tools, per-agent grants, and an audit log. The next work is about completing the local loop cleanly.