Problem
After running a workflow, the two most important questions are:
- "What did it create?" — resource IDs, IPs, names, statuses
- "What went wrong?" — which API call failed, why, and what to do about it
Neither question is answerable from the current workflow run output without a multi-step scavenger hunt.
Current experience: getting resource data
After deploying infrastructure (e.g. a load balancer), getting its IP requires:
1. `swamp model output search web-lb --json` — find the output UUID
2. `swamp model output data <uuid> --json` — get the actual data
3. Manually parse nested JSON to find the field (e.g. `ip`)
That's 3 commands to answer "what's my LB IP?" — and the data is often stale from creation time, so you need a 4th command (`swamp model method run web-lb get`) to refresh it first.
Current experience: understanding errors
When a step fails, the error is a log line with nested JSON:
```
DigitalOcean API error: POST /v2/vpcs returned 422: {"id":"unprocessable_entity","message":"This range/size overlaps with another VPC network...","request_id":"46901edf-..."}
```
The actual useful message is buried inside JSON inside a log line. In a multi-step workflow with parallel jobs, correlating which step produced which error requires reading wall-of-text logs.
Current experience: "View produced data" section
The workflow run result includes a data artifacts summary, but it only shows internal metadata (data IDs, version numbers, spec names). It doesn't show the actual resource attributes that the user cares about — the IPs, the status, the resource IDs in the cloud provider. You get:
"dataArtifacts": [
{ "dataId": "acabaedb-...", "name": "web-1", "version": 3 }
]When what you want is: "web-1: id=556223958, ip=10.10.0.5, status=active".
Proposed solution
1. Workflow run summary table
After a workflow completes, show a structured summary instead of (or in addition to) raw logs:
deploy-web-infra completed in 45s
foundations
✓ create-tag managed-by-swamp-staging (0.8s)
✓ create-vpc staging-web-vpc → id: 379edfed (1.2s)
compute
✓ create-droplet[0] staging-web-1 → id: 556223958, ip: 10.10.0.2 (32s)
✓ create-droplet[1] staging-web-2 → id: 556223959, ip: 10.10.0.3 (31s)
✓ create-droplet[2] staging-web-3 → id: 556223960, ip: 10.10.0.4 (33s)
networking
✓ create-lb staging-web-lb → id: d8624a18, ip: pending (12s)
security
✓ create-firewall staging-web-firewall → id: 883d89d4 (2s)
On failure:
networking
✗ create-lb 422: check interval must be between 3 and 300 (0.1s)
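A grouped summary like the one above could be rendered from per-step records. The following is an illustrative Python sketch; the step-record shape (`phase`, `step`, `resource`, `attrs`, `duration_s`, `error`) is a hypothetical structure, not swamp's real internal model:

```python
def render_summary(steps):
    """Render a grouped pass/fail summary for completed workflow steps.

    `steps` is a list of dicts with assumed keys:
    phase, step, resource, attrs, duration_s, and (on failure) error.
    """
    lines = []
    current_phase = None
    for s in steps:
        # Print a phase header whenever the phase changes.
        if s["phase"] != current_phase:
            current_phase = s["phase"]
            lines.append(current_phase)
        if s.get("error"):
            lines.append(f'  ✗ {s["step"]}  {s["error"]} ({s["duration_s"]}s)')
            continue
        # Surface key resource attributes (id, ip, ...) inline with the step.
        attrs = ", ".join(f"{k}: {v}" for k, v in s.get("attrs", {}).items())
        tail = f" → {attrs}" if attrs else ""
        lines.append(f'  ✓ {s["step"]}  {s["resource"]}{tail} ({s["duration_s"]}s)')
    return "\n".join(lines)
```

The key design point is that resource attributes are pulled into the summary line itself, so the answer to "what did it create?" never requires a second command.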
2. Direct data access by model name
```
swamp data get web-lb                          # latest data, full JSON
swamp data get web-lb --field ip               # single field
swamp data get web-droplet web-1 --field id    # specific instance
```

Skip the output-ID indirection entirely. The data is already stored and indexed by model name + instance name — expose that directly.
3. Error extraction from API responses
When an API error is a JSON body, extract the `message` field and show it as the primary error. Keep the full JSON in `--verbose` or `--json` output, but don't force users to visually parse nested JSON to understand what failed.
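A minimal sketch of that extraction, assuming the error line ends with a JSON body as in the DigitalOcean example above (the parsing heuristic is illustrative, not the proposed implementation):

```python
import json

def primary_error(log_line: str) -> str:
    """Pull the human-readable message out of an API error log line."""
    # Split at the first ': {' on the assumption that everything after
    # it is the JSON error body.
    prefix, sep, body = log_line.partition(": {")
    try:
        payload = json.loads("{" + body)
    except json.JSONDecodeError:
        return log_line  # no JSON body; show the raw line unchanged
    msg = payload.get("message")
    return f"{prefix}: {msg}" if msg else log_line
```

Falling back to the raw line when parsing fails keeps the behavior safe for non-JSON errors, while the full payload stays available under `--verbose`/`--json`.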
Impact
This affects every workflow run. The current UX means that after every deployment, users must run 3-4 follow-up commands per resource to confirm what was actually created. In a workflow with 5+ resources, that's 15-20 commands to answer "did it work and what are my endpoints?" — turning a 45-second deployment into a 10-minute investigation.
Summary
Changes would be needed in:
- Workflow execution output rendering — add a structured summary view after run completion that surfaces key resource attributes (IDs, IPs, status) inline with step results
- Data access CLI — add a direct `swamp data get <model> [instance]` path that bypasses the output-ID indirection
- Error formatting — extract `message` from JSON API error bodies and surface it as the primary error text, keeping raw JSON for `--json`/`--verbose` modes