Dynamic resource limit management for Docker containers. Set, monitor, and enforce cumulative limits on CPU time, RAM, disk, network, I/O, and API spending per container — with automatic enforcement and in-container self-querying.
| Limit type | Unit | Enforcement |
|---|---|---|
| CPU | Cumulative seconds | Container pause |
| RAM | Bytes | cgroup memory.max + Docker API |
| Disk | Bytes (writable layer) | Container pause |
| Network | Cumulative bytes | Network disconnect |
| Disk I/O bytes | Cumulative bytes | cgroup io.max throttle |
| Disk I/O ops | Cumulative operations | cgroup io.max throttle |
| Spending | USD milli-cents | HTTP proxy budget block |
| RAM usage B·s | Byte-seconds (actual RAM × time) | Container kill |
| Disk usage B·s | Byte-seconds (actual disk × time) | Container kill |
| RAM request B·s | Byte-seconds (ddl RAM limit × time) | Container kill |
| Disk request B·s | Byte-seconds (ddl disk limit × time) | Container kill |
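The four byte-second limits integrate bytes over wall-clock time. A minimal sketch of that accounting, assuming a fixed sampling interval (the daemon's real sampler is not shown here; `accumulateByteSeconds` is illustrative):

```go
package main

import "fmt"

// accumulateByteSeconds integrates a series of byte samples taken at a
// fixed interval: each sample contributes bytes × intervalSeconds.
func accumulateByteSeconds(samples []uint64, intervalSeconds uint64) uint64 {
	var total uint64
	for _, b := range samples {
		total += b * intervalSeconds
	}
	return total
}

func main() {
	// A container holding 512 MiB of RAM across 3 one-second samples
	// accrues 3 × 512 MiB = 1536 MiB·s.
	const mib = 1024 * 1024
	samples := []uint64{512 * mib, 512 * mib, 512 * mib}
	fmt.Println(accumulateByteSeconds(samples, 1)) // 1610612736
}
```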
- Per-container limits — set, increase, or decrease any limit at any time
- Automatic enforcement — daemon polls every second and applies/releases enforcement actions
- Spending tracking — transparent HTTP proxy intercepts OpenAI and Anthropic API calls, extracts token usage from responses, and calculates costs using built-in model pricing
- Container cloning — clone a running container with all its limits copied over
- In-container self-query — containers can check their own limits and usage via REST API or the ddl-guest binary
- Web dashboard — real-time browser UI for monitoring and managing containers
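The automatic enforcement described above reduces to a per-tick decision: compare cumulative usage to the limit and apply or release the enforcement action. A hedged sketch of that decision (the daemon's actual types and actions differ; `decide` and its names are illustrative):

```go
package main

import "fmt"

type action int

const (
	none    action = iota
	enforce        // e.g. pause container, disconnect network
	release        // limit was raised; undo the enforcement action
)

// decide returns what the poller should do this tick, given cumulative
// usage, the current limit (0 = unlimited), and whether enforcement is
// already active for this limit.
func decide(usage, limit uint64, enforced bool) action {
	over := limit > 0 && usage >= limit
	switch {
	case over && !enforced:
		return enforce
	case !over && enforced:
		return release // limit was increased, so usage is no longer over
	default:
		return none
	}
}

func main() {
	fmt.Println(decide(3600, 3600, false) == enforce) // true: hit CPU limit
	fmt.Println(decide(3600, 7200, true) == release)  // true: limit was raised
	fmt.Println(decide(100, 200, false) == none)      // true: under limit
}
```

Because the poller also evaluates the release branch, raising a limit on a paused container un-pauses it on the next tick.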
```
                         ┌──────────────────────────────────────────┐
                         │              ddld (daemon)               │
┌──────────┐  unix sock  │                                          │
│ ddl CLI  ├─────────────┤ Full API (unix socket /run/ddl/ddl.sock) │
└──────────┘             │   register, limits, clone, delete, ...   │
                         │                                          │
┌──────────┐  TCP :7123  │ Read-only API (TCP)                      │
│containers├─────────────┤   GET /containers, /usage, /limits       │
└──────────┘ (by src IP) │   Container identified by source IP      │
                         │                                          │
┌──────────┐  TCP :7124  │ ┌──────────┐   ┌──────────────────┐      │
│ dashboard├─────────────┤ │  SQLite  │   │ Enforcement Mgr  │      │
└──────────┘             │ │  Store   │   └───┬───────────┬──┘      │
                         │ └──────────┘ ┌───┴────┐ ┌────┴───┐       │
                         │              │ Docker │ │ cgroup │       │
                         │              │ Client │ │ Reader │       │
                         │              └────────┘ └────────┘       │
                         │ ┌──────────────────┐                     │
                         │ │  Spending Proxy  │                     │
                         │ │ (per-container)  │                     │
                         │ └──────────────────┘                     │
                         └──────────────────────────────────────────┘
```
On macOS (Docker Desktop), the unix socket is not accessible from the host. The CLI automatically falls back to docker exec to reach the daemon's socket from inside the container.
```bash
go install github.com/keneo/docker-dynamic-limits/cmd/ddl@latest
go install github.com/keneo/docker-dynamic-limits/cmd/ddld@latest
go install github.com/keneo/docker-dynamic-limits/cmd/ddl-guest@latest
```

Or build from source:

```bash
git clone https://github.com/keneo/docker-dynamic-limits.git
cd docker-dynamic-limits
make install       # builds and installs to /usr/local/bin
```

To install to a different location:

```bash
make install PREFIX=~/.local/bin
```

To build without installing:

```bash
make build         # produces ./ddl, ./ddld, ./ddl-guest in the repo root
```

The recommended way to run ddld is as a Docker container:
```bash
ddl daemon start           # build image (first time) and start container
ddl daemon start --build   # force rebuild of the image
ddl daemon status          # check if running
ddl daemon stop            # stop and remove container
```

This starts ddld in a container named ddl-daemon with:

- TCP API on port 7123 (read-only, for containers)
- Unix socket at /run/ddl/ddl.sock (full API, for host management)
- SQLite database on a persistent Docker volume
```bash
# Default: TCP on :7123, socket at /run/ddl/ddl.sock
sudo ddld

# Custom options
ddld -addr :8080 -db ./ddl.db -sock /tmp/ddl.sock

# No socket (full API on TCP, useful for development)
ddld -addr :8080 -db ./ddl.db -sock ""
```

Register a container:

```bash
ddl register <container_id>
```

Set limits:

```bash
ddl limits set <container> cpu 1h                  # 1 hour of CPU time
ddl limits set <container> ram 512m                # 512 MiB RAM
ddl limits set <container> disk 10g                # 10 GiB disk
ddl limits set <container> net 1g                  # 1 GiB network transfer
ddl limits set <container> disk-io-bytes 5g
ddl limits set <container> disk-io-ops 1000000
ddl limits set <container> spending 10.00          # $10.00 USD
ddl limits set <container> ram-usage-bsec 100g     # 100 GB·s of RAM usage over time
ddl limits set <container> disk-usage-bsec 500g    # 500 GB·s of disk usage over time
ddl limits set <container> ram-request-bsec 1t     # 1 TB·s of RAM reservation over time
ddl limits set <container> disk-request-bsec 1t    # 1 TB·s of disk reservation over time
```

Adjust limits incrementally:

```bash
ddl limits increase <container> cpu 30m
ddl limits decrease <container> ram 128m
```

Inspect and manage:

```bash
ddl usage <container>        # usage vs limits with percentages
ddl limits get <container>
ddl ls                       # list all managed containers
ddl clone <container> [new-name]
ddl remove <container>
```

A browser-based UI for real-time monitoring and management:

```bash
ddl dashboard          # start on :7124
ddl dashboard --open   # start and open browser
ddl dashboard stop     # stop the dashboard
```

The dashboard shows all containers with their limits, usage, and enforcement status. You can register, clone, and remove containers and set limits directly from the UI. An offline banner appears when the daemon is unreachable.
Containers are automatically identified by their source IP address (refreshed every 5 seconds). No tokens or headers needed.
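Source-IP identification can be sketched as a periodic refresh of an IP-to-container map plus a lookup on each request (the names here are illustrative, not ddld's internals):

```go
package main

import (
	"fmt"
	"net"
)

// containerByIP maps container IPs to container IDs. In the real daemon
// this map would be refreshed from the Docker API every few seconds;
// here it is a static example.
var containerByIP = map[string]string{
	"172.17.0.2": "c0ffee",
	"172.17.0.3": "deadbeef",
}

// identify resolves an HTTP request's RemoteAddr ("ip:port") to a
// container ID, or "" if the caller is unknown.
func identify(remoteAddr string) string {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return ""
	}
	return containerByIP[host]
}

func main() {
	fmt.Println(identify("172.17.0.2:53211")) // c0ffee
	fmt.Println(identify("10.0.0.9:1234"))    // (empty: unknown caller)
}
```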
The ddl-guest binary is included in the daemon container image and can be copied into managed containers:
```bash
# Copy ddl-guest into a running container
docker cp ddl-daemon:/ddl-guest /tmp/ddl-guest
docker cp /tmp/ddl-guest <container>:/usr/local/bin/ddl-guest

# Inside the container
ddl-guest         # formatted table output
ddl-guest -json   # raw JSON
```

ddl-guest auto-discovers the daemon by trying host.docker.internal:7123 and 172.17.0.1:7123; set DDL_API_URL to override.
```bash
# From inside a container (identified by source IP)
curl http://host.docker.internal:7123/usage
curl http://host.docker.internal:7123/limits
```

The daemon exposes two interfaces:
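The self-query response can also be consumed programmatically inside the container. The JSON shape below is a guess for illustration only (the field names are hypothetical; inspect the real /usage output before relying on them):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// usageReport mirrors a hypothetical /usage response shape; the real
// field names may differ.
type usageReport struct {
	ContainerID string `json:"container_id"`
	CPUSeconds  uint64 `json:"cpu_seconds"`
	RAMBytes    uint64 `json:"ram_bytes"`
}

// parseUsage decodes a /usage response body.
func parseUsage(body []byte) (usageReport, error) {
	var u usageReport
	err := json.Unmarshal(body, &u)
	return u, err
}

func main() {
	// In a container this body would come from
	// http.Get("http://host.docker.internal:7123/usage").
	body := []byte(`{"container_id":"c0ffee","cpu_seconds":120,"ram_bytes":268435456}`)
	u, err := parseUsage(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: cpu=%ds ram=%d bytes\n", u.ContainerID, u.CPUSeconds, u.RAMBytes)
}
```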
Full API (unix socket — for host management):
| Method | Endpoint | Description |
|---|---|---|
| GET | /containers | List all managed containers with status |
| POST | /register | Register a container: {"container_id": "..."} |
| GET | /containers/{id} | Get container status (limits, usage, enforcement) |
| DELETE | /containers/{id} | Stop managing a container |
| GET | /containers/{id}/limits | Get all limits |
| PUT | /containers/{id}/limits | Set/increase/decrease a limit |
| GET | /containers/{id}/usage | Get current usage |
| POST | /containers/{id}/clone | Clone container with limits |
| GET | /usage | In-container usage self-query |
| GET | /limits | In-container limits self-query |
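Talking to the full API from a host-side program means speaking HTTP over the unix socket. A self-contained sketch of the client side, with a throwaway in-process stub standing in for ddld (the socket path and endpoint match the docs; `startStub` and `queryUnix` are illustrative helpers, not part of this project):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"os"
	"path/filepath"
)

// startStub listens on a unix socket and answers every request with "[]",
// standing in for ddld in this sketch.
func startStub(sock string) (net.Listener, error) {
	os.Remove(sock)
	ln, err := net.Listen("unix", sock)
	if err != nil {
		return nil, err
	}
	go http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `[]`)
	}))
	return ln, nil
}

// queryUnix performs an HTTP GET over a unix socket. Against the real
// daemon, sock would be /run/ddl/ddl.sock; the host part of the URL is
// ignored because the custom dialer always connects to the socket.
func queryUnix(sock, url string) (int, string, error) {
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", sock)
		},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), err
}

func main() {
	sock := filepath.Join(os.TempDir(), "ddl-demo.sock")
	ln, err := startStub(sock)
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	code, body, err := queryUnix(sock, "http://ddl/containers")
	if err != nil {
		panic(err)
	}
	fmt.Println(code, body) // 200 []
}
```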
Read-only API (TCP — for containers):
| Method | Endpoint | Description |
|---|---|---|
| GET | /containers | List all managed containers |
| GET | /usage | Self-query usage + limits (by source IP) |
| GET | /limits | Self-query limits (by source IP) |
The daemon includes a per-container HTTP proxy that tracks and enforces spending budgets on LLM API calls. Containers make plain HTTP requests through the proxy; the proxy upgrades them to HTTPS, injects the real API key, tracks token usage from responses, and blocks requests once the budget is exhausted.
```
Container                     ddld Spending Proxy             LLM API
    │                              │                              │
    │ curl http://api.anthropic    │                              │
    │ .com/v1/messages             │                              │
    │ (no API key, plain HTTP)     │                              │
    │─────────────────────────────>│                              │
    │                              │ POST https://api.anthropic   │
    │                              │ .com/v1/messages             │
    │                              │ x-api-key: sk-ant-...        │
    │                              │─────────────────────────────>│
    │                              │                              │
    │                              │         200 OK {usage: ...}  │
    │                              │<─────────────────────────────│
    │                              │                              │
    │ 200 OK {usage: ...}          │    (track tokens + cost)     │
    │<─────────────────────────────│                              │
```
The proxy:
- Receives HTTP requests from the container
- Upgrades the connection to HTTPS for the real API
- Strips any auth headers the container sent and injects the daemon-configured API key
- Forwards the request and reads the response to extract token usage
- Calculates costs using built-in model pricing and accumulates spending
- Returns HTTP 429 with {"error":"spending budget exceeded"} once the budget is hit
This means containers never see the real API key and cannot bypass the spending limit.
| Provider | Host | Auth header | Models with built-in pricing |
|---|---|---|---|
| Anthropic | api.anthropic.com | x-api-key | claude-3-opus, claude-3-sonnet, claude-3-haiku, claude-haiku-4-5 |
| OpenAI | api.openai.com | Authorization: Bearer | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o-mini, gpt-3.5-turbo |
Unknown models are charged at a conservative default rate. Custom pricing can be loaded via LoadPrices().
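Cost accounting reduces to tokens × per-token price, accumulated in milli-cents (1/1000 of a cent, so $1.00 = 100 000 milli-cents). A sketch with illustrative per-million-token prices (the daemon's built-in table may differ; `costMilliCents` is not its real API):

```go
package main

import "fmt"

// price in milli-cents per million tokens. The figures below are for
// illustration, not the daemon's built-in pricing.
type price struct{ inPerM, outPerM uint64 }

var pricing = map[string]price{
	"claude-3-haiku": {25_000, 125_000}, // $0.25 in / $1.25 out per 1M tokens
}

// costMilliCents computes the charge for one response's token usage,
// falling back to a conservative default for unknown models.
func costMilliCents(model string, inTok, outTok uint64) uint64 {
	p, ok := pricing[model]
	if !ok {
		p = price{1_000_000, 1_000_000} // $10 per 1M tokens, in or out
	}
	return (inTok*p.inPerM + outTok*p.outPerM) / 1_000_000
}

func main() {
	// 16 input + 10 output tokens on the haiku pricing above.
	fmt.Println(costMilliCents("claude-3-haiku", 16, 10)) // 1
}
```

Integer division truncates fractional milli-cents here; a real accumulator would round or carry the remainder.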
Pass API keys as environment variables when starting the daemon:
```bash
# Anthropic only
DDL_ANTHROPIC_API_KEY=sk-ant-... ddl daemon start --build

# OpenAI only
DDL_OPENAI_API_KEY=sk-... ddl daemon start --build

# Both
DDL_ANTHROPIC_API_KEY=sk-ant-... DDL_OPENAI_API_KEY=sk-... ddl daemon start --build
```

The keys are forwarded into the daemon container automatically.
After registering a container, the API returns a proxy_addr field. Configure the container to use this as its HTTP proxy:
```bash
# Register and get proxy address
ddl register <container>
# Response includes: "proxy_addr": "0.0.0.0:12345"

# Set a spending budget
ddl limits set <container> spending 1.00   # $1.00

# From inside the container, use the proxy (no API key needed):
export http_proxy=http://<daemon-ip>:<proxy-port>
curl -X POST http://api.anthropic.com/v1/messages \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-haiku-4-5-20251001","max_tokens":100,"messages":[{"role":"user","content":"Hello"}]}'
```

The container sends plain HTTP with no API key. The proxy handles HTTPS and authentication transparently.
A self-contained demo script shows the full flow — start daemon with API key, create a container, make API calls through the proxy, and watch spending accumulate until the budget is hit with a 429:
```bash
# First run (builds Docker image + CLI):
BUILD=1 ANTHROPIC_API_KEY=sk-ant-... bash examples/llm-budget-demo.sh

# Subsequent runs (reuses Docker image, only rebuilds CLI):
ANTHROPIC_API_KEY=sk-ant-... bash examples/llm-budget-demo.sh
```

Example output:

```
Request #1: "What is the tallest mountain on Earth? Answer in one sentence."
  => Mount Everest is the tallest mountain on Earth.
  [model=claude-haiku-4-5-20251001 tokens: 16 in / 10 out]
  Spending: $0.0001 / $0.0005 (13 / 50 milli-cents)
...
Request #4: HTTP 429 — Budget exceeded!
{"error":"spending budget exceeded"}
```
Requests to non-tracked hosts (anything other than api.openai.com and api.anthropic.com) pass through the proxy unmodified and are never blocked by the spending budget. Only LLM API calls count toward spending.
| Type | Format examples |
|---|---|
| CPU time | 3600s, 60m, 1h |
| Bytes (RAM, disk, network, I/O) | 1024, 512k, 256m, 1g, 1.5t |
| Byte-seconds (usage/request B·s) | 100g, 1.5t (same byte suffixes, displayed as e.g. 1.5G·s) |
| I/O operations | Plain integer |
| Spending | 10.00 (USD, stored as milli-cents) |
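The byte formats above can be parsed with a small suffix table. This sketch assumes binary units, since the docs describe 512m as 512 MiB; `parseBytes` is illustrative, not ddl's actual parser:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBytes parses values like "512k", "256m", "1g", "1.5t" into bytes,
// treating suffixes as binary multiples (k = 1024, m = 1024², ...).
func parseBytes(s string) (uint64, error) {
	mult := map[byte]float64{
		'k': 1 << 10, 'm': 1 << 20, 'g': 1 << 30, 't': 1 << 40,
	}
	s = strings.ToLower(strings.TrimSpace(s))
	factor := 1.0
	if len(s) > 0 {
		if m, ok := mult[s[len(s)-1]]; ok {
			factor = m
			s = s[:len(s)-1]
		}
	}
	n, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return 0, err
	}
	return uint64(n * factor), nil
}

func main() {
	for _, v := range []string{"1024", "512k", "256m", "1g", "1.5t"} {
		b, err := parseBytes(v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d bytes\n", v, b)
	}
}
```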
- Linux with cgroup v2 (for enforcement; daemon runs in Docker)
- Docker Engine
- Go 1.21+
```bash
# Unit tests (80 tests across 8 packages)
go test ./...

# E2E: CLI + spending proxy
bash e2e/cli_proxy_test.sh

# E2E: Docker-in-Docker (full integration)
docker build -t ddl-e2e -f e2e/Dockerfile . && docker run --privileged ddl-e2e
```

MIT