19 changes: 15 additions & 4 deletions template/server/messaging.py
@@ -294,6 +294,14 @@ async def execute(
if self._ws is None:
raise Exception("WebSocket not connected")

message_id = str(uuid.uuid4())

# Phase A: prepare and send to kernel (under lock)
# The lock serializes the prepare+send phase to protect shared state
# (_global_env_vars init, _cleanup_task lifecycle, WebSocket sends).
# It is released before streaming results so that a client disconnect
# during streaming does not orphan the lock and block subsequent
# executions. See https://github.com/e2b-dev/code-interpreter/issues/213
async with self._lock:
# Wait for any pending cleanup task to complete
if self._cleanup_task and not self._cleanup_task.done():
@@ -327,7 +335,6 @@ async def execute(
)
complete_code = f"{indented_env_code}\n{complete_code}"

message_id = str(uuid.uuid4())
execution = Execution()
self._executions[message_id] = execution

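The two-phase pattern introduced by this hunk — hold the lock only for the prepare-and-send phase, release it before streaming so a disconnected client cannot orphan the lock — can be sketched in isolation. This is an illustrative reduction, not the server's actual code; `Executor` and `_send_to_kernel` are hypothetical names, and a plain `asyncio.Queue` stands in for WebSocket result routing:

```python
import asyncio
import uuid


class Executor:
    """Minimal sketch of the lock-for-send, lock-free-streaming pattern."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._executions = {}  # message_id -> result queue

    async def execute(self, code):
        message_id = str(uuid.uuid4())

        # Phase A: mutate shared state and "send" under the lock.
        async with self._lock:
            queue = asyncio.Queue()
            self._executions[message_id] = queue
            await self._send_to_kernel(message_id, code)

        # Phase B: stream results with the lock released, so a slow or
        # disconnected consumer cannot block other execute() calls.
        try:
            while True:
                item = await queue.get()
                if item is None:  # sentinel: execution finished
                    break
                yield item
        finally:
            # Runs even if the generator is abandoned mid-stream.
            self._executions.pop(message_id, None)

    async def _send_to_kernel(self, message_id, code):
        # Stand-in for the WebSocket send; echoes one result then finishes.
        queue = self._executions[message_id]
        queue.put_nowait(f"result of {code!r}")
        queue.put_nowait(None)
```

Because results are routed by the unique `message_id`, Phase B touches no shared state beyond its own registry entry, which is exactly why the lock can be dropped before streaming.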
@@ -362,11 +369,15 @@ async def execute(
)
await execution.queue.put(UnexpectedEndOfExecution())

# Stream the results
# Phase B: stream results (no lock held)
# Results are routed by unique message_id in _process_message(),
# so no shared state is accessed during streaming.
Comment on lines +372 to +374


P1: Serialize env-var cleanup before admitting next execution

Releasing self._lock before result streaming allows a second execute() call to enter Phase A and send its request before the first call schedules _cleanup_task in finally. When the first call used env_vars, that cleanup request is now queued after the second execute request, so the second execution can run with stale per-request env vars from the first execution. This is a correctness/isolation regression for overlapping requests on the same context; keep cleanup scheduling serialized with the next execute admission (or otherwise gate new requests until prior env cleanup is registered).
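One way to close the window the comment describes is to gate admission of the next execution on the previous execution having queued its env-var cleanup. The sketch below is illustrative, not the project's actual fix; `Session`, `kernel_log`, and `_env_clean` are hypothetical names, and a list of log tuples stands in for requests reaching the kernel:

```python
import asyncio


class Session:
    """Sketch: gate new executions until prior env-var cleanup is queued."""

    def __init__(self):
        self._lock = asyncio.Lock()
        # Set while no per-request env vars are awaiting cleanup.
        self._env_clean = asyncio.Event()
        self._env_clean.set()
        self.kernel_log = []  # order in which requests reach the "kernel"

    async def execute(self, code, env_vars=None):
        async with self._lock:
            # Admission gate: block until the previous execution's env-var
            # cleanup has been queued. Waiting under the lock is safe here
            # because the event is set without taking the lock.
            await self._env_clean.wait()
            if env_vars:
                self._env_clean.clear()
                self.kernel_log.append(("set_env", code))
            self.kernel_log.append(("run", code))
        # Lock released; result streaming would happen at this point.
        await asyncio.sleep(0)
        if env_vars:
            # Queue the cleanup request, then admit the next execution.
            self.kernel_log.append(("cleanup_env", code))
            self._env_clean.set()
```

With this gate, a second `execute()` that overlaps the first can never send its request ahead of the first execution's env cleanup, so it cannot observe stale per-request env vars.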


try:
async for item in self._wait_for_result(message_id):
yield item

del self._executions[message_id]
finally:
# Clean up execution entry even if generator is abandoned
self._executions.pop(message_id, None)

# Clean up env vars in a separate request after the main code has run
if env_vars:
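The switch from a bare `del self._executions[message_id]` after the loop to a `try`/`finally` with `pop(..., None)` matters because an async generator can be abandoned mid-stream: code after the loop never runs, but `finally` clauses do. A self-contained sketch (the `results` function and `registry` dict are illustrative, not the server's code):

```python
import asyncio


async def results(registry, message_id):
    """Async generator that must not leak its registry entry."""
    registry[message_id] = "pending"
    try:
        for i in range(3):
            await asyncio.sleep(0)
            yield i
        # Cleanup placed here (after the loop) would be skipped if the
        # consumer stops iterating early.
    finally:
        # Runs on normal completion AND on GeneratorExit.
        registry.pop(message_id, None)


async def main():
    registry = {}
    gen = results(registry, "m1")
    async for item in gen:
        break  # client "disconnects" after the first result
    # Closing the generator throws GeneratorExit at the suspended yield,
    # which triggers the finally block. (Garbage collection would
    # eventually do this too, but at an unpredictable time.)
    await gen.aclose()
    return registry
```

After `aclose()`, the registry entry is gone even though the consumer never drained the stream, which is the leak the diff's `finally` block prevents.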