feat: DMX (Art-Net) In/Out support for live show professionals (#641)
livepeer-tessa wants to merge 1 commit into main.
Conversation
Implements H174: DMX In/Out support to unlock adoption among live show
professionals by making Scope a controllable visual processing layer
inside existing lighting pipelines.
## Features
### DMX Input (Console → Scope)
- Art-Net UDP listener on port 6454 (standard Art-Net port)
- Channel-to-parameter mapping with category grouping
- Parameters grouped by: Generation, LoRA, Color, Analysis
- Universe and start channel configuration
- Live activity indicator when receiving Art-Net signal
- Value scaling from 0-255 to parameter ranges
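The input scaling above can be sketched as follows. This is an illustrative re-statement of the behavior, not the PR's code; the function name and signature are assumptions.

```python
def scale_dmx_to_param(raw: int, min_value: float, max_value: float) -> float:
    """Map a raw DMX byte (0-255) onto a parameter's [min_value, max_value] range."""
    normalized = max(0, min(255, raw)) / 255.0  # clamp, then normalize to 0..1
    return min_value + normalized * (max_value - min_value)

print(scale_dmx_to_param(255, 0.0, 2.0))  # full fader -> 2.0 (upper bound)
```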
### DMX Output (Scope → Fixtures)
- Art-Net output to control fixtures reactively
- Analysis values: color RGB, brightness, motion, beat
- HTP/LTP merge modes for multi-source environments
- Output enable toggle (safe default: disabled)
- Test ramp function (0→255→0 over 2 seconds)
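The HTP/LTP merge modes listed above follow standard lighting semantics, sketched here for clarity (illustrative only, not the PR's implementation): HTP (highest takes precedence) keeps the larger value per channel, while LTP (latest takes precedence) lets the most recent writer win.

```python
def merge_htp(current: int, incoming: int) -> int:
    # Highest Takes Precedence: the brighter of two sources wins.
    return max(current, incoming)

def merge_ltp(current: int, incoming: int) -> int:
    # Latest Takes Precedence: the most recent write wins outright.
    return incoming

print(merge_htp(200, 120))  # prints 200
print(merge_ltp(200, 120))  # prints 120
```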
### UI/UX
- DMX tab in Settings alongside OSC
- Direction tabs for In/Out configuration
- Grouped parameter lists by category
- Channel mapping dialogs with parameter dropdowns
- Status indicator showing connection state
- Warning when no Art-Net signal detected
### API Endpoints
- GET /api/v1/dmx/status - Server status
- GET /api/v1/dmx/config - Configuration
- PUT /api/v1/dmx/config - Update configuration
- POST /api/v1/dmx/input-mappings - Add input mapping
- DELETE /api/v1/dmx/input-mappings/{id} - Remove input mapping
- POST /api/v1/dmx/output-mappings - Add output mapping
- DELETE /api/v1/dmx/output-mappings/{id} - Remove output mapping
- POST /api/v1/dmx/test-output - Test output ramp
- GET /api/v1/dmx/parameters - Available parameters (grouped)
- GET /api/v1/dmx/analysis-sources - Available analysis sources
### Architecture
- Follows existing OSC server pattern
- Uses broadcast_parameter_update() for pipeline integration
- Persistent configuration in ~/.daydream-scope/dmx_config.json
- Rate-limited parameter broadcasts (~60fps max)
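For context, an ArtDmx packet per the public Art-Net spec can be built as below. This is a hedged sketch of the wire format; the PR's own `_build_artnet_dmx_packet()` may differ in details such as sequence handling.

```python
import struct

ARTNET_PORT = 6454

def build_artdmx(universe: int, data: bytes, sequence: int = 0) -> bytes:
    """Build an Art-Net ArtDmx packet (18-byte header + up to 512 slots)."""
    assert 0 < len(data) <= 512
    return (
        b"Art-Net\x00"                          # 8-byte packet ID
        + struct.pack("<H", 0x5000)             # OpDmx opcode, little-endian
        + struct.pack(">H", 14)                 # protocol version 14, big-endian
        + bytes([sequence, 0])                  # sequence, physical input port
        + struct.pack("<H", universe & 0x7FFF)  # 15-bit port-address, little-endian
        + struct.pack(">H", len(data))          # slot count, big-endian
        + data
    )

pkt = build_artdmx(0, bytes(512))
print(len(pkt))  # prints 530 (18-byte header + 512 slots)
```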
Related: H174
Closes #621
Signed-off-by: livepeer-robot <robot@livepeer.org>
📝 Walkthrough

This PR introduces comprehensive Art-Net DMX server support for external parameter control. It adds a DMX settings UI component, a complete backend DMX server implementation with UDP Art-Net protocol handling, and multiple API endpoints for status, configuration, mapping management, and testing. Both input and output DMX capabilities are included, with parameter scaling and merge mode support.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client as DMX Console
    participant UDP as UDP Socket
    participant DMXServer as DMX Server
    participant ParamMgr as Parameter Manager
    participant WebRTC as WebRTC Manager
    Client->>UDP: Art-Net DMX Packet
    UDP->>DMXServer: _handle_artnet_packet()
    DMXServer->>DMXServer: _handle_dmx_packet()
    DMXServer->>DMXServer: _process_dmx_input()
    DMXServer->>DMXServer: scale(raw_value) for each mapping
    DMXServer->>ParamMgr: broadcast_parameter_update()
    ParamMgr->>WebRTC: send to subscribers
    DMXServer->>DMXServer: emit SSE event
```

```mermaid
sequenceDiagram
    participant Frontend as DmxTab UI
    participant API as FastAPI App
    participant DMXServer as DMX Server
    participant UDP as UDP Socket
    participant Monitor as Analysis Monitor
    Frontend->>API: update_dmx_config() / add_output_mapping()
    API->>DMXServer: update_config() / add_output_mapping()
    DMXServer->>DMXServer: persist to dmx_config.json
    Monitor->>DMXServer: update_analysis_values()
    DMXServer->>DMXServer: scale() & merge (HTP/LTP)
    DMXServer->>DMXServer: _send_dmx_output()
    DMXServer->>DMXServer: _build_artnet_dmx_packet()
    DMXServer->>UDP: broadcast Art-Net packet
    UDP->>Frontend: SSE status update
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (1 warning)
Actionable comments posted: 9
ℹ️ Review info

Configuration used: defaults · Review profile: CHILL · Plan: Pro
Run ID: 13cab6fe-5677-44a7-9033-6371575948d1

📒 Files selected for processing (4)

- frontend/src/components/SettingsDialog.tsx
- frontend/src/components/settings/DmxTab.tsx
- src/scope/server/app.py
- src/scope/server/dmx_server.py
```tsx
useEffect(() => {
  if (isActive) {
    setIsLoading(true);
    Promise.all([
      fetchStatus(),
      fetchConfig(),
      fetchParameters(),
      fetchAnalysisSources(),
    ]).finally(() => setIsLoading(false));
  }
}, [
  isActive,
  fetchStatus,
  fetchConfig,
  fetchParameters,
  fetchAnalysisSources,
]);
```
The live signal indicator won't actually stay live.
This effect only loads /api/v1/dmx/status once when the tab opens. Since the backend flips input_active based on a 5-second timeout, the green/yellow state goes stale immediately afterward and the “no signal” warning won't update unless something else refetches.
Suggested fix:

```diff
 useEffect(() => {
-  if (isActive) {
-    setIsLoading(true);
-    Promise.all([
-      fetchStatus(),
-      fetchConfig(),
-      fetchParameters(),
-      fetchAnalysisSources(),
-    ]).finally(() => setIsLoading(false));
-  }
+  if (!isActive) return;
+
+  setIsLoading(true);
+  Promise.all([
+    fetchStatus(),
+    fetchConfig(),
+    fetchParameters(),
+    fetchAnalysisSources(),
+  ]).finally(() => setIsLoading(false));
+
+  const statusTimer = window.setInterval(fetchStatus, 1000);
+  return () => window.clearInterval(statusTimer);
 }, [
```
```tsx
<Button
  variant="ghost"
  size="icon"
  className="h-8 w-8 text-muted-foreground hover:text-destructive"
  onClick={() => handleDeleteInputMapping(mapping.id)}
>
  <Trash2 className="h-4 w-4" />
</Button>
```
Give the icon-only delete buttons an accessible name.
Right now assistive tech will announce these as a generic “button”, which makes removing mappings guesswork for screen-reader users.
Suggested fix:

```diff
 <Button
   variant="ghost"
   size="icon"
+  aria-label={`Delete DMX input mapping for ${mapping.param_key}`}
   className="h-8 w-8 text-muted-foreground hover:text-destructive"
   onClick={() => handleDeleteInputMapping(mapping.id)}
 >
@@
 <Button
   variant="ghost"
   size="icon"
+  aria-label={`Delete DMX output mapping for ${mapping.source_key}`}
   className="h-8 w-8 text-muted-foreground hover:text-destructive"
   onClick={() =>
     handleDeleteOutputMapping(mapping.id)
```

Also applies to: 731-739
```tsx
<label className="text-sm font-medium">Max Value</label>
<Input
  type="number"
  step="0.01"
  value={newInputMapping.max_value}
  onChange={e =>
    setNewInputMapping(m => ({
      ...m,
      max_value: parseFloat(e.target.value) || 1,
    }))
```
max_value can never be set to 0.
`parseFloat(...) || 1` treats a legitimate 0 as falsy and rewrites it back to 1. That blocks ranges like -1 → 0 and any other mapping whose upper bound is zero.
Suggested fix:

```diff
 onChange={e =>
-  setNewInputMapping(m => ({
-    ...m,
-    max_value: parseFloat(e.target.value) || 1,
-  }))
+  setNewInputMapping(m => {
+    const value = parseFloat(e.target.value);
+    return {
+      ...m,
+      max_value: Number.isNaN(value) ? 1 : value,
+    };
+  })
 }
@@
 onChange={e =>
-  setNewOutputMapping(m => ({
-    ...m,
-    max_value: parseFloat(e.target.value) || 1,
-  }))
+  setNewOutputMapping(m => {
+    const value = parseFloat(e.target.value);
+    return {
+      ...m,
+      max_value: Number.isNaN(value) ? 1 : value,
+    };
+  })
 }
```

Also applies to: 972-981
```python
# Start DMX (Art-Net) server on standard port 6454
from .dmx_server import DMXServer

dmx_config_dir = Path.home() / ".daydream-scope"
dmx_server = DMXServer(port=6454, config_dir=dmx_config_dir)
dmx_server.set_managers(pipeline_manager, webrtc_manager)
await dmx_server.start()
```
This initializes a local dmx_server, not the shared instance the API reads.
Because dmx_server is missing from the global declaration in lifespan(), this assignment only updates a local variable. get_dmx_server() keeps returning the module-level None, so the entire /api/v1/dmx/* surface behaves as if the server never started.
Suggested fix:

```diff
-    global \
-        webrtc_manager, \
-        pipeline_manager, \
-        cloud_connection_manager, \
-        kafka_publisher, \
-        osc_server
+    global \
+        webrtc_manager, \
+        pipeline_manager, \
+        cloud_connection_manager, \
+        kafka_publisher, \
+        osc_server, \
+        dmx_server
```
+ dmx_server🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/app.py` around lines 362 - 368, The lifespan() function
assigns a new DMXServer to a local name instead of the module-level dmx_server,
so get_dmx_server() still sees None; add a global declaration for dmx_server at
the top of lifespan() (or otherwise assign to the module-level symbol) before
creating the instance so the module-level variable used by get_dmx_server() is
updated, i.e., ensure the symbol dmx_server in lifespan() refers to the shared
module-level dmx_server rather than a local variable.
```python
@app.get("/api/v1/dmx/config")
async def dmx_config():
    """Return current DMX configuration."""
    srv = get_dmx_server()
    if srv is None:
        return {"error": "DMX server not running"}
    return srv.get_config()
```
Return a real 503 here instead of a 200 error payload.
frontend/src/components/settings/DmxTab.tsx lines 137-145 treat any 2xx response as a valid config object. Returning {"error": ...} with status 200 hides the startup failure and leaves the client with the wrong shape in state.
Suggested fix:

```diff
 @app.get("/api/v1/dmx/config")
 async def dmx_config():
     """Return current DMX configuration."""
     srv = get_dmx_server()
     if srv is None:
-        return {"error": "DMX server not running"}
+        raise HTTPException(status_code=503, detail="DMX server not running")
     return srv.get_config()
```
Verify each finding against the current code and only fix it if needed.
In `@src/scope/server/app.py` around lines 807 - 813, The endpoint dmx_config
currently returns a JSON error payload with HTTP 200 when get_dmx_server() is
None; change it to return an actual 503 Service Unavailable status instead.
Update the dmx_config handler to raise an HTTPException(status_code=503,
detail="DMX server not running") or return a Response with status_code=503 so
callers see a non-2xx response; keep the existing success path returning
srv.get_config() unchanged. Ensure you reference get_dmx_server and the
dmx_config function when making the change.
```python
class DMXConfigUpdateRequest(BaseModel):
    input_universe: int | None = None
    input_start_channel: int | None = None
    output_universe: int | None = None
    output_enabled: bool | None = None
    output_merge_mode: str | None = None


@app.put("/api/v1/dmx/config")
async def update_dmx_config(request: DMXConfigUpdateRequest):
    """Update DMX configuration."""
    srv = get_dmx_server()
    if srv is None:
        raise HTTPException(status_code=503, detail="DMX server not running")

    updates = request.model_dump(exclude_none=True)
    return srv.update_config(updates)
```
Invalid enum values will currently bubble out as 500s.
These request models accept raw strings for output_merge_mode and category, but the handlers later coerce them with MergeMode(...) / ParameterCategory(...). Any unexpected value becomes an unhandled ValueError instead of a 4xx validation error.
Suggested fix:

```diff
+from .dmx_server import MergeMode, ParameterCategory
+
 class DMXConfigUpdateRequest(BaseModel):
     input_universe: int | None = None
     input_start_channel: int | None = None
     output_universe: int | None = None
     output_enabled: bool | None = None
-    output_merge_mode: str | None = None
+    output_merge_mode: MergeMode | None = None
@@
 class DMXInputMappingRequest(BaseModel):
     id: str
     universe: int
     channel: int
     param_key: str
-    category: str = "generation"
+    category: ParameterCategory = ParameterCategory.GENERATION
@@
 class DMXOutputMappingRequest(BaseModel):
     id: str
     universe: int
     channel: int
     source_key: str
-    category: str = "analysis"
+    category: ParameterCategory = ParameterCategory.ANALYSIS
```

Then pass request.category / request.output_merge_mode through directly instead of re-wrapping them.

Also applies to: 835-918
```python
def scale(self, value: float) -> int:
    """Convert source value to 0-255 DMX value."""
    # Clamp and normalize
    clamped = max(self.min_value, min(self.max_value, value))
    if self.max_value == self.min_value:
        normalized = 0.0
    else:
        normalized = (clamped - self.min_value) / (self.max_value - self.min_value)
    return int(normalized * 255)
```
Reversed output ranges currently pin the DMX slot to zero.
The UI lets users edit min_value/max_value freely, but this clamp only works when min_value <= max_value. With a descending range, clamped always lands on self.min_value, so the mapping silently stops driving the channel.
Suggested fix:

```diff
 def scale(self, value: float) -> int:
     """Convert source value to 0-255 DMX value."""
-    # Clamp and normalize
-    clamped = max(self.min_value, min(self.max_value, value))
-    if self.max_value == self.min_value:
+    low = min(self.min_value, self.max_value)
+    high = max(self.min_value, self.max_value)
+    clamped = max(low, min(high, value))
+
+    if high == low:
         normalized = 0.0
+    elif self.max_value >= self.min_value:
+        normalized = (clamped - self.min_value) / (self.max_value - self.min_value)
     else:
-        normalized = (clamped - self.min_value) / (self.max_value - self.min_value)
+        normalized = (self.min_value - clamped) / (self.min_value - self.max_value)
     return int(normalized * 255)
```
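A quick standalone check of the reversed-range behavior this fix targets, written as a free function re-implementation of the suggested logic (not the file itself):

```python
def scale(value: float, min_value: float, max_value: float) -> int:
    """Suggested-fix logic: supports both ascending and descending ranges."""
    low, high = min(min_value, max_value), max(min_value, max_value)
    clamped = max(low, min(high, value))
    if high == low:
        normalized = 0.0
    elif max_value >= min_value:
        normalized = (clamped - min_value) / (max_value - min_value)
    else:
        # Descending range: invert so the range's "min" end drives full output.
        normalized = (min_value - clamped) / (min_value - max_value)
    return int(normalized * 255)

print(scale(0.0, 1.0, 0.0))  # prints 255 — reversed range, low input drives full DMX
print(scale(1.0, 1.0, 0.0))  # prints 0
```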
return int(normalized * 255)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| def scale(self, value: float) -> int: | |
| """Convert source value to 0-255 DMX value.""" | |
| # Clamp and normalize | |
| clamped = max(self.min_value, min(self.max_value, value)) | |
| if self.max_value == self.min_value: | |
| normalized = 0.0 | |
| else: | |
| normalized = (clamped - self.min_value) / (self.max_value - self.min_value) | |
| return int(normalized * 255) | |
| def scale(self, value: float) -> int: | |
| """Convert source value to 0-255 DMX value.""" | |
| low = min(self.min_value, self.max_value) | |
| high = max(self.min_value, self.max_value) | |
| clamped = max(low, min(high, value)) | |
| if high == low: | |
| normalized = 0.0 | |
| elif self.max_value >= self.min_value: | |
| normalized = (clamped - self.min_value) / (self.max_value - self.min_value) | |
| else: | |
| normalized = (self.min_value - clamped) / (self.min_value - self.max_value) | |
| return int(normalized * 255) |
```python
for mapping in self._config.input_mappings.values():
    if not mapping.enabled:
        continue
    if mapping.universe != universe:
        continue

    idx = mapping.channel - 1  # 0-indexed
    if idx < 0 or idx >= len(dmx_data):
        continue
```
input_start_channel is persisted but never affects slot lookup.
This still indexes the incoming frame with mapping.channel - 1, so changing the Start Channel setting has no runtime effect. Right now that control is dead unless the mapping coordinates are meant to be relative and the offset is applied here.
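One possible shape of the offset-aware lookup, assuming both the mapping channel and the configured start channel are 1-based (names here are hypothetical, not the PR's):

```python
def slot_index(mapping_channel: int, input_start_channel: int) -> int:
    """Combine the global start channel with the per-mapping channel, 0-indexed."""
    # Channel 1 at start channel 1 lands on slot 0.
    return (input_start_channel - 1) + (mapping_channel - 1)

dmx_data = bytes(512)
idx = slot_index(mapping_channel=3, input_start_channel=10)
if 0 <= idx < len(dmx_data):  # preserve the existing bounds check
    print(idx)  # prints 11
```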
```python
for mapping in self._config.output_mappings.values():
    if not mapping.enabled:
        continue

    source_value = self._analysis_values.get(mapping.source_key, 0.0)
    dmx_value = mapping.scale(source_value)

    universe = mapping.universe
    if universe not in universes:
        universes[universe] = bytearray(512)

    idx = mapping.channel - 1
    if 0 <= idx < 512:
        current = universes[universe][idx]
        if self._config.output_merge_mode == MergeMode.HTP:
            universes[universe][idx] = max(current, dmx_value)
        else:  # LTP
            universes[universe][idx] = dmx_value

# Send Art-Net packets
for universe, data in universes.items():
    packet = self._build_artnet_dmx_packet(universe, bytes(data))
    # Broadcast to network
    self._transport.sendto(packet, ("255.255.255.255", ARTNET_PORT))
```
The global output universe setting is never used when sending DMX.
Both the normal output path and the test ramp route exclusively by mapping.universe. Updating output_universe through /api/v1/dmx/config therefore does not redirect existing output, even though the UI exposes it as a live setting.
Also applies to: 533-546
🚀 fal.ai Preview Deployment

Testing: connect to this preview deployment by running this on your branch. 🧪 E2E tests will run automatically against this deployment.

❌ E2E Tests failed

Test artifacts: check the workflow run for screenshots, traces, and failure details.
Summary

This is a complete rewrite based on the detailed spec. The features, safe defaults, API endpoints, and architecture match the description above.

Closes #621

/cc @thomshutt