Stop guessing what your AI costs. Tinker shows every token, every dollar, every context byte — in real time.
You're running Opus through OpenClaw. A single deep conversation burns $20+ in tokens with zero warning. You check your provider dashboard three days later and wonder what happened.
That's not a billing problem. That's a visibility problem.
Tinker is a real-time command center that sits on top of your OpenClaw gateway. Not a chat skin — a control panel for operators who want to see what's actually happening.
Interactive squarified treemap of your context window. See exactly what takes up space: system prompt, workspace files, conversation history, tool results. Drill down from categories → messages → raw text. When you wonder "why is my context 180K tokens?", this tells you in one glance.
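Under the hood, a treemap like this only needs a tree of labeled token counts. A minimal sketch of that data shape — the type and function names are illustrative, not Tinker's actual code:

```typescript
// Hypothetical node shape for the context treemap.
interface TreemapNode {
  label: string;          // e.g. "System prompt", "Tool results"
  tokens: number;         // tokens attributed directly to this node
  children?: TreemapNode[];
}

/** Total tokens for a node: its own count plus all descendants. */
function totalTokens(node: TreemapNode): number {
  const kids = (node.children ?? []).reduce((sum, c) => sum + totalTokens(c), 0);
  return node.tokens + kids;
}
```

The squarified layout then sizes each rectangle proportionally to `totalTokens` of its subtree.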
Same visualization for model output. How much is text, how much is thinking, how much goes to tool calls? Broken down per LLM call within a run, so you see the real cost of that 8-step tool loop.
Stacked bar chart showing context composition over time. Watch your conversation grow, see compaction events, identify which turns are the token hogs.
Sub-agent health monitoring. See which sub-agents are running, their progress, staleness detection — all in a force-directed graph.
Per-provider token usage. Daily and monthly estimates. The 5-hour Claude rate-limit window with countdown timer. Per-auth-key model rows with provider logos and breathing glow on the active model.
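The rate-limit countdown is just fixed-window arithmetic. A sketch, assuming a window anchored at a known start time (function names are illustrative, not Tinker's actual code):

```typescript
// Claude's 5-hour rate-limit window, in milliseconds.
const WINDOW_MS = 5 * 60 * 60 * 1000;

/** Milliseconds left in the window that started at `windowStartMs`. */
function msRemaining(windowStartMs: number, nowMs: number): number {
  return Math.max(0, WINDOW_MS - (nowMs - windowStartMs));
}

/** Format as H:MM:SS for a countdown display. */
function formatCountdown(ms: number): string {
  const totalSec = Math.floor(ms / 1000);
  const h = Math.floor(totalSec / 3600);
  const m = Math.floor((totalSec % 3600) / 60);
  const s = totalSec % 60;
  return `${h}:${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}`;
}
```

One hour into a window, `formatCountdown(msRemaining(start, start + 3_600_000))` reads `4:00:00`.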
Not just a dashboard — it's a complete webchat with session switching, markdown rendering, tool call inspection (expand inline, never in a sidebar), and real-time streaming. Use it as your daily driver or just for monitoring.
Tinker connects to your running OpenClaw gateway WebSocket.
```bash
# Clone this repo
git clone https://github.com/globalcaos/tinker.git
cd tinker

# Install deps
pnpm install

# Development (hot reload)
pnpm dev
# → http://localhost:18790

# Production build
pnpm build
# → dist/ folder, serve however you like
```

If you're using the globalcaos fork, Tinker ships as a built-in plugin served directly from the gateway:

```
http://localhost:18789/tinker/
```
No separate server needed.
```
tinker/
├── src/
│   ├── app.ts                   ← Main shell: sidebar, sessions, WebSocket, chat
│   ├── styles/
│   │   └── base.css             ← Dark theme, information-dense
│   └── panels/
│       ├── context-treemap.ts   ← What fills your context window
│       ├── response-treemap.ts  ← What each response costs
│       ├── context-timeline.ts  ← Context usage over time (stacked bars)
│       └── overseer-graph.ts    ← Sub-agent health graph
├── dist/                        ← Pre-built production bundle
├── index.html
├── vite.config.ts
└── package.json
```
Stack: TypeScript + Lit + Vite. No React. No heavy frameworks. ~5,700 lines of focused code.
Zero upstream overlap — nothing in this repo exists in OpenClaw's ui/ directory. No merge conflicts, ever.
Tinker connects to the OpenClaw gateway WebSocket (default ws://localhost:18789/ws).
Authentication: reads the gateway token from your OpenClaw config, or accepts it via URL parameter.
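Grabbing the token from the page URL and building the gateway endpoint can be sketched with the standard `URL` API — the `token` query-parameter name here is an assumption for illustration:

```typescript
/** Extract the gateway token from the page URL, if present.
 *  The `token` parameter name is an assumption, not Tinker's documented API. */
function tokenFromUrl(href: string): string | undefined {
  return new URL(href).searchParams.get("token") ?? undefined;
}

/** Build the gateway WebSocket URL, appending the token when provided. */
function gatewayUrl(token?: string): string {
  const url = new URL("ws://localhost:18789/ws");
  if (token) url.searchParams.set("token", token);
  return url.toString();
}

// Usage in the browser:
// const ws = new WebSocket(gatewayUrl(tokenFromUrl(location.href)));
```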
Key API methods used:
- `chat.history` — message history per session
- `chat.send` — send messages
- `sessions.list` — list all sessions
- `sessions.usage` — per-provider token usage
- `usage.cost` — daily cost breakdown
- `status` / `health` — gateway status
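The wire format isn't documented here; assuming a simple JSON request envelope with string method names and incrementing ids, a call helper might look like this (the envelope shape is an assumption, not the gateway's documented protocol):

```typescript
// Hypothetical request envelope; the real gateway protocol may differ.
interface GatewayRequest {
  id: number;
  method: string; // e.g. "sessions.list", "chat.send"
  params?: Record<string, unknown>;
}

let nextId = 1;

/** Build a request frame with a fresh id. */
function makeRequest(method: string, params?: Record<string, unknown>): GatewayRequest {
  return { id: nextId++, method, params };
}

// Usage:
// ws.send(JSON.stringify(makeRequest("chat.send", { sessionId: "s1", text: "hi" })));
```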
Events:
- `chat` — real-time message streaming (deltas, finals, errors)
- `agent` — tool calls, lifecycle events
- `anatomy` — context window composition data (for treemaps)
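Incoming frames can be routed to panels with a simple dispatch on the event name. A sketch, assuming each frame carries an `event` field — the field name and panel mapping are illustrative:

```typescript
// Hypothetical event frames; field names are assumptions for illustration.
type GatewayEvent =
  | { event: "chat"; data: unknown }     // streaming deltas, finals, errors
  | { event: "agent"; data: unknown }    // tool calls, lifecycle events
  | { event: "anatomy"; data: unknown }; // context composition for treemaps

/** Route a frame to the panel that consumes it. */
function routeEvent(frame: GatewayEvent): string {
  switch (frame.event) {
    case "chat":
      return "chat-panel";
    case "agent":
      return "overseer-graph";
    case "anatomy":
      return "context-treemap";
  }
}
```

The discriminated union lets TypeScript check the switch is exhaustive, so a new event type fails at compile time instead of being silently dropped.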
These are the API costs Tinker tracks:
| Model | Input (per 1M) | Output (per 1M) | Watch out? |
|---|---|---|---|
| Claude Opus 4 / 4.5 | $15.00 | $75.00 | Most expensive |
| Claude Sonnet 4 / 3.5 | $3.00 | $15.00 | Sweet spot |
| Claude Haiku 3.5 | $0.80 | $4.00 | Background tasks |
| GPT-5.2 Pro | $2.50 | $10.00 | Good failover |
| Gemini 3 Pro | $1.25 | $5.00 | Large context window |
| Gemini Flash | $0.10 | $0.40 | Near-free |
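Turning these per-million-token rates into dollar estimates is simple arithmetic. A sketch using a subset of the table above (the model keys are illustrative, not Tinker's actual identifiers):

```typescript
// USD per 1M tokens, taken from the pricing table above.
const RATES: Record<string, { input: number; output: number }> = {
  "claude-opus":   { input: 15.0, output: 75.0 },
  "claude-sonnet": { input: 3.0,  output: 15.0 },
  "claude-haiku":  { input: 0.8,  output: 4.0 },
  "gemini-flash":  { input: 0.1,  output: 0.4 },
};

/** Estimated cost in USD for one call. */
function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`unknown model: ${model}`);
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

// A 180K-token Opus context plus a 2K-token reply:
// estimateCostUSD("claude-opus", 180_000, 2_000) → 2.85 (dollars)
```

That $2.85-per-turn figure is exactly why a long Opus conversation quietly crosses $20.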
- Dark theme, information-dense — this is for operators, not consumers
- Panels, not pages — everything visible at once, resize and collapse
- Inline expansion — clicking expands detail in place, never opens a useless sidebar
- Real-time — WebSocket-driven, no polling
- Operator-first — you should know what your agent is spending before the bill arrives
- OpenClaw gateway running (any version)
- Node.js 22+
- pnpm (for development)
- OpenClaw — the AI agent framework
- ClawHub — agent skills marketplace
- The Field Guide for New AI Agents — everything we learned running agents 24/7
MIT
Built by globalcaos. Because your AI shouldn't cost more than your rent — and if it does, you should at least know about it.