
feat: Add Ultralytics SAM3 backend with enhanced features #5

Open

Rajkisan wants to merge 5 commits into agfianf:main from Rajkisan:feature/ultralytics-sam3-enhancements

Conversation

Rajkisan commented Feb 7, 2026

  • Implement api-inference-yolo backend using Ultralytics SAM3SemanticPredictor
  • Add FP16 inference support for faster processing on CUDA
  • Enable batch processing mode for both text and bbox prompts
  • Add auto-apply mode for automatic image processing
  • Implement per-label text prompt memory using localStorage
  • Add bbox exemplar-based segmentation for finding similar objects
  • Create setup and run scripts for both Ultralytics and HuggingFace backends
  • Update .gitignore to exclude venv and model weights
  • Enhance README with new features and expanded roadmap
  • Fix modal rendering issues in BboxPromptPanel

Features:

  • Text prompt segmentation with semantic understanding
  • Bounding box exemplar-based segmentation
  • Single, Auto-Apply, and Batch processing modes
  • Smart prompt memory per label class
  • Cross-platform setup scripts (Linux/Mac/Windows)


Review Summary by Qodo

Add Ultralytics SAM3 backend with enhanced segmentation features and dual-backend support

✨ Enhancement 📝 Documentation


Walkthroughs

Description
• Implement complete Ultralytics SAM3 backend with FastAPI REST API for semantic segmentation
• Add text prompt and bounding box-based inference endpoints with batch processing support
• Implement mask-to-polygon conversion with Douglas-Peucker simplification and visualization
  utilities
• Add per-label text prompt memory using localStorage for improved user experience
• Implement auto-apply and batch processing modes in BboxPromptPanel component
• Create cross-platform setup and run scripts for both Ultralytics and HuggingFace backends
  (Linux/Mac/Windows)
• Add comprehensive documentation for SAM3 model usage, API endpoints, and deployment
• Configure Docker support with NVIDIA CUDA 12.1 for GPU acceleration
• Implement standardized JSON response formatting and comprehensive error handling
• Add application configuration management with environment variable support
Diagram
flowchart LR
  A["User Interface<br/>TextPromptPanel<br/>BboxPromptPanel"] -->|"text/bbox prompts"| B["FastAPI Router<br/>sam3.py"]
  B -->|"inference requests"| C["SAM3 Inference<br/>inference.py"]
  C -->|"model prediction"| D["Ultralytics<br/>SAM3SemanticPredictor"]
  C -->|"mask processing"| E["Mask Utils<br/>mask_utils.py"]
  E -->|"polygon coords"| F["Visualizer<br/>visualizer.py"]
  F -->|"visualization"| B
  B -->|"JSON response"| A
  G["Config<br/>config.py"] -->|"settings"| C
  H["Schemas<br/>schemas.py"] -->|"validation"| B


File Changes

1. apps/api-inference-yolo/src/app/integrations/sam3/inference.py ✨ Enhancement +332/-0

SAM3 inference implementation with text and bbox prompts

• Implements SAM3 inference class using Ultralytics SAM3SemanticPredictor with model path resolution
 and device detection
• Provides text-based and bounding box-based inference methods with confidence threshold support
• Includes batch processing capability for multiple images with sequential execution
• Generates polygon mask coordinates and optional visualization images with base64 encoding

apps/api-inference-yolo/src/app/integrations/sam3/inference.py


2. apps/api-inference-yolo/src/app/routers/sam3.py ✨ Enhancement +262/-0

FastAPI REST API endpoints for SAM3 segmentation

• Creates FastAPI router with three main inference endpoints: text prompt, bounding box, and batch
 processing
• Implements request validation, JSON parsing for bounding boxes and text prompts
• Provides comprehensive error handling with HTTP exceptions for validation and runtime errors
• Includes health check endpoint and proper response formatting with JsonResponse wrapper

apps/api-inference-yolo/src/app/routers/sam3.py


3. apps/api-inference-yolo/src/app/integrations/sam3/visualizer.py ✨ Enhancement +190/-0

Mask and bounding box visualization utilities

• Implements visualization utilities for drawing masks and bounding boxes on images using PIL and
 matplotlib
• Supports customizable colormaps, transparency levels, and selective drawing of masks/boxes
• Provides image encoding to bytes with format and quality options (PNG/JPEG)
• Creates complete visualization pipeline combining mask overlay and box drawing

apps/api-inference-yolo/src/app/integrations/sam3/visualizer.py


4. apps/api-inference-yolo/src/app/main.py ✨ Enhancement +179/-0

FastAPI application setup with model lifecycle management

• Sets up FastAPI application with lifespan context manager for model initialization and cleanup
• Implements CORS middleware configuration and custom exception handlers for validation and general
 errors
• Loads SAM3 model during startup and makes it available via application state
• Includes root endpoint and integrates SAM3 router with comprehensive logging

apps/api-inference-yolo/src/app/main.py


5. apps/api-inference-yolo/src/app/schemas/sam3.py ✨ Enhancement +91/-0

Pydantic schemas for SAM3 API validation

• Defines Pydantic models for API request/response validation including bounding boxes, masks, and
 detection results
• Provides separate schemas for single image inference and batch processing with metadata
• Includes form parameter models for multipart/form-data requests
• Structures polygon representation with area calculation and confidence scores

apps/api-inference-yolo/src/app/schemas/sam3.py


6. apps/api-inference-yolo/src/app/integrations/sam3/mask_utils.py ✨ Enhancement +95/-0

Mask to polygon coordinate conversion utilities

• Converts binary mask tensors to polygon coordinates using OpenCV contour detection
• Implements polygon simplification using Douglas-Peucker algorithm to reduce point count
• Calculates mask area and handles disconnected regions or holes in masks
• Provides batch processing for multiple masks with consistent output format

apps/api-inference-yolo/src/app/integrations/sam3/mask_utils.py
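The summary above mentions Douglas-Peucker simplification; the PR's implementation reportedly uses OpenCV's contour tools, but the algorithm itself can be sketched in pure Python (function names here are illustrative, not the PR's actual code, which likely calls cv2.approxPolyDP):

```python
import math


def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / norm


def douglas_peucker(points, epsilon):
    """Simplify a polyline: drop points closer than epsilon to the chord."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Recurse on the two halves, splicing out the duplicated split point.
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```

Points that deviate from the chord by less than epsilon are discarded, which is what keeps the polygon payloads small.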


7. apps/api-inference-yolo/src/app/config.py ⚙️ Configuration changes +45/-0

Application configuration and environment settings

• Defines Settings class with Pydantic for environment variable configuration
• Includes SAM3 model settings (path, device, thresholds) and API limits (image size, batch size,
 dimensions)
• Configures visualization format/quality, logging level, and CORS settings
• Uses .env file support with case-sensitive environment variable mapping

apps/api-inference-yolo/src/app/config.py
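The PR's config.py uses Pydantic for env-var-backed settings; as a rough stdlib-only analogue of the same pattern (field names here are illustrative, not necessarily the PR's), the shape is:

```python
import os
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Settings:
    # Illustrative fields; the PR's config.py defines its own set via Pydantic.
    SAM3_MODEL_PATH: str = field(
        default_factory=lambda: os.environ.get("SAM3_MODEL_PATH", "sam3.pt"))
    DEVICE: str = field(
        default_factory=lambda: os.environ.get("DEVICE", "cpu"))
    MAX_BATCH_SIZE: int = field(
        default_factory=lambda: int(os.environ.get("MAX_BATCH_SIZE", "8")))


settings = Settings()
```

Pydantic's Settings adds type coercion, .env file loading, and validation errors on bad values on top of this basic lookup-with-default behavior.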


8. apps/api-inference-yolo/src/app/helpers/response_api.py ✨ Enhancement +48/-0

Standardized JSON response formatting helpers

• Implements standardized JSON response format with generic data and metadata support
• Provides error response structure with detailed error information and field-level error tracking
• Includes pagination metadata model for list responses
• Uses Pydantic BaseModel for type-safe response serialization

apps/api-inference-yolo/src/app/helpers/response_api.py
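As a sketch of what a standardized envelope like this looks like (the PR uses Pydantic BaseModel; this stdlib dataclass analogue only illustrates the shape, and the field names are assumptions):

```python
from dataclasses import asdict, dataclass, field
from typing import Any


@dataclass
class JsonResponse:
    """Illustrative standardized response envelope (the PR's version is Pydantic)."""
    success: bool
    data: Any = None
    message: str = ""
    meta: dict = field(default_factory=dict)

    def to_dict(self) -> dict:
        # Serialize to a plain dict, ready for json.dumps or a JSON response body.
        return asdict(self)
```

Every endpoint returning the same top-level keys means clients can parse success/error uniformly instead of special-casing each route.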


9. apps/api-inference-yolo/example_sam.py 📝 Documentation +42/-0

Example SAM3 inference with HuggingFace transformers

• Demonstrates SAM3 usage with HuggingFace transformers for text-based image segmentation
• Shows image loading, text prompt processing, and post-processing of segmentation results
• Includes mask extraction and visualization example with PIL

apps/api-inference-yolo/example_sam.py


10. apps/api-inference-yolo/src/app/helpers/logger.py ⚙️ Configuration changes +27/-0

Logging configuration setup

• Configures standard Python logging with customizable log level from settings
• Sets up logging format with timestamp, logger name, level, and message
• Returns configured logger instance for application-wide use

apps/api-inference-yolo/src/app/helpers/logger.py


11. apps/api-inference-yolo/docs/sam3.md 📝 Documentation +716/-0

SAM3 model documentation and usage examples

• Comprehensive documentation for SAM3 model with usage examples for images and videos
• Covers text prompts, bounding box prompts, and combined prompt strategies
• Includes batch processing examples and semantic segmentation output
• Documents video tracking with single and multi-object support, streaming inference

apps/api-inference-yolo/docs/sam3.md


12. apps/web/src/components/BboxPromptPanel.tsx ✨ Enhancement +202/-12

Bbox prompt panel with auto-apply and batch modes

• Adds auto-apply mode that automatically runs bbox inference when image changes after first
 execution
• Implements batch processing mode with image selector modal and progress tracking
• Tracks processed images to avoid re-processing and handles per-label bbox grouping
• Adds batch progress modal showing processing status for each image with cancel support

apps/web/src/components/BboxPromptPanel.tsx


13. README.md 📝 Documentation +203/-26

Enhanced README with dual backend setup and features

• Adds comprehensive documentation for two backend options (Ultralytics and HuggingFace)
• Includes detailed setup instructions with setup/run scripts for both backends
• Documents SAM3 model weights download process and troubleshooting guide
• Expands roadmap with enhanced annotation tools, active learning, and collaboration features

README.md


14. apps/api-inference-yolo/README.md 📝 Documentation +180/-0

Ultralytics SAM3 backend documentation

• Provides backend-specific documentation for Ultralytics SAM3 implementation
• Documents API endpoints for text, bbox, and batch inference with curl examples
• Includes configuration options, Docker deployment, and performance notes
• Covers HuggingFace token requirements and local development setup

apps/api-inference-yolo/README.md


15. apps/web/src/components/TextPromptPanel.tsx ✨ Enhancement +35/-10

Text prompt memory per label class

• Implements per-label text prompt memory using localStorage with labelTextPrompts key
• Loads and saves text prompts for each label class automatically
• Saves prompt after successful inference (single, auto-apply, and batch modes)
• Syncs label selection with saved prompt loading

apps/web/src/components/TextPromptPanel.tsx


16. setup-yolo.sh ⚙️ Configuration changes +135/-0

Setup script for Ultralytics backend (Linux/Mac)

• Bash setup script for Ultralytics SAM3 backend on Linux/Mac
• Checks Python and Node.js versions, creates virtual environment, installs dependencies
• Handles CLIP package conflict resolution and .env file creation
• Validates SAM3 model weights presence with download instructions

setup-yolo.sh


17. setup-yolo.bat ⚙️ Configuration changes +138/-0

Setup script for Ultralytics backend (Windows)

• Windows batch setup script for Ultralytics SAM3 backend
• Performs same checks and setup as bash version (Python, Node.js, venv, dependencies)
• Handles CLIP package conflict and .env configuration
• Provides SAM3 model weights validation with download guidance

setup-yolo.bat


18. setup-hf.sh ⚙️ Configuration changes +113/-0

Setup script for HuggingFace backend (Linux/Mac)

• Bash setup script for HuggingFace SAM3 backend on Linux/Mac
• Validates Python and Node.js, creates virtual environment, installs HF dependencies
• Creates .env file with HuggingFace token placeholder and configuration
• Provides instructions for obtaining HuggingFace token and model access

setup-hf.sh


19. run-yolo.bat ⚙️ Configuration changes +71/-0

Run script for Ultralytics backend (Windows)

• Windows batch script to start both backend and frontend services
• Validates SAM3 model weights and virtual environment existence
• Kills existing processes on ports 8000 and 5173, starts backend and frontend in separate windows
• Provides URLs for accessing frontend, API, and documentation

run-yolo.bat


20. apps/api-inference-yolo/.python-version ⚙️ Configuration changes +1/-0

Python version specification

• Specifies Python version requirement as 3.12

apps/api-inference-yolo/.python-version


21. setup-hf.bat ⚙️ Configuration changes +115/-0

Windows setup script for HuggingFace SAM3 backend

• Windows batch script for setting up AnnotateANU with HuggingFace SAM3 backend
• Validates Python 3 and Node.js installation prerequisites
• Creates Python virtual environment and installs backend/frontend dependencies
• Generates .env file with HuggingFace token configuration and model settings
• Provides setup instructions and next steps for users

setup-hf.bat


22. run-yolo.sh ⚙️ Configuration changes +98/-0

Linux/Mac run script for Ultralytics SAM3 backend

• Bash script to start AnnotateANU with Ultralytics SAM3 backend on Linux/Mac
• Validates SAM3 model weights existence and virtual environment setup
• Manages backend and frontend processes with automatic port cleanup
• Implements graceful shutdown handling with Ctrl+C trap
• Provides user feedback with health checks and service status information

run-yolo.sh


23. run-hf.sh ⚙️ Configuration changes +94/-0

Linux/Mac run script for HuggingFace SAM3 backend

• Bash script to start AnnotateANU with HuggingFace SAM3 backend on Linux/Mac
• Validates virtual environment and .env file configuration
• Manages backend and frontend processes with automatic port cleanup
• Implements graceful shutdown handling with Ctrl+C trap
• Provides user feedback about model download time and service endpoints

run-hf.sh


24. run-hf.bat ⚙️ Configuration changes +69/-0

Windows run script for HuggingFace SAM3 backend

• Windows batch script to start AnnotateANU with HuggingFace SAM3 backend
• Validates virtual environment and .env file prerequisites
• Launches backend and frontend services in separate command windows
• Clears existing processes on ports 8000 and 5173
• Displays service URLs and instructions for stopping services

run-hf.bat


25. apps/api-inference-yolo/Dockerfile ⚙️ Configuration changes +45/-0

Docker configuration for Ultralytics SAM3 backend

• Docker configuration for Ultralytics SAM3 backend with NVIDIA CUDA 12.1 support
• Uses uv package manager for efficient dependency management with Python 3.12
• Installs system dependencies and configures CUDA environment variables
• Copies application code and exposes port 8000 for FastAPI service
• Runs application via uv run to ensure correct Python environment

apps/api-inference-yolo/Dockerfile


26. apps/api-inference-yolo/pyproject.toml ⚙️ Configuration changes +38/-0

Python project configuration for Ultralytics SAM3 API

• Python project configuration for SAM3 YOLO API with FastAPI framework
• Specifies dependencies including fastapi, ultralytics, torch, and image processing libraries
• Configures development tools with ruff linter targeting Python 3.12
• Sets code style rules with 120 character line length

apps/api-inference-yolo/pyproject.toml


27. apps/api-inference-yolo/.env.example ⚙️ Configuration changes +28/-0

Environment configuration template for SAM3 backend

• Environment configuration template for SAM3 YOLO backend
• Defines HuggingFace token requirement and SAM3 model settings
• Specifies API limits for image size, batch size, and dimensions
• Configures visualization format and CORS allowed origins
• Includes application host, port, and logging configuration

apps/api-inference-yolo/.env.example


28. apps/api-inference-yolo/package.json ⚙️ Configuration changes +11/-0

Node.js package metadata for Python backend

• Node.js package metadata for SAM3 API inference backend
• Defines placeholder scripts indicating Python backend usage
• Includes cleanup script for removing Python cache directories

apps/api-inference-yolo/package.json


29. apps/web/tsconfig.app.json ⚙️ Configuration changes +1/-0

TypeScript configuration deprecation handling

• Adds ignoreDeprecations compiler option set to 6.0 for TypeScript configuration
• Suppresses TypeScript 6.0 deprecation warnings in the application build

apps/web/tsconfig.app.json


30. apps/api-inference-yolo/src/app/exceptions/__init__.py Additional files +0/-0

...

apps/api-inference-yolo/src/app/exceptions/__init__.py


31. apps/api-inference-yolo/src/app/integrations/sam3/__init__.py Additional files +0/-0

...

apps/api-inference-yolo/src/app/integrations/sam3/__init__.py


32. apps/api-inference-yolo/src/app/middleware/__init__.py Additional files +0/-0

...

apps/api-inference-yolo/src/app/middleware/__init__.py


33. apps/api-inference-yolo/src/app/services/__init__.py Additional files +0/-0

...

apps/api-inference-yolo/src/app/services/__init__.py





qodo-code-review bot commented Feb 7, 2026

Code Review by Qodo

🐞 Bugs (7) 📘 Rule violations (6) 📎 Requirement gaps (0)



Action required

1. lifespan() yields twice 📘 Rule violation ⛯ Reliability
Description
• The FastAPI lifespan() context manager yields twice, which can break startup/shutdown flow and make startup state unreliable.
• Route handlers assume sam3_inference exists on request.state, but the lifespan code does not explicitly attach it there, risking a runtime AttributeError and 500s instead of graceful degradation.
• This violates the requirement to handle failure points and edge cases explicitly, especially around dependency initialization.
Code

apps/api-inference-yolo/src/app/main.py[R41-52]

+        yield {
+            "sam3_inference": sam3_inference,
+        }
+
+        logger.info("Application startup complete")
+
+    except Exception as e:
+        logger.error(f"Failed to initialize application: {e}")
+        raise
+
+    yield
+
Evidence
Compliance requires robust handling of edge cases and failure points. The lifespan implementation
yields twice, and the router accesses request.state.sam3_inference without a defensive check,
which can fail at runtime if the state is not present as expected.

Rule 3: Generic: Robust Error Handling and Edge Case Management
apps/api-inference-yolo/src/app/main.py[41-52]
apps/api-inference-yolo/src/app/routers/sam3.py[54-57]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`lifespan()` currently yields twice and the app’s `sam3_inference` instance is accessed via `request.state` without a guaranteed assignment path. This can lead to startup/shutdown issues and runtime `AttributeError` in request handlers.
## Issue Context
The SAM3 model is intended to be loaded once at startup and reused. Handlers should either access `request.app.state.sam3_inference` (if set during lifespan) or a dependency injection pattern should be used.
## Fix Focus Areas
- apps/api-inference-yolo/src/app/main.py[17-57]
- apps/api-inference-yolo/src/app/routers/sam3.py[54-57]
- apps/api-inference-yolo/src/app/routers/sam3.py[136-138]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
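A minimal sketch of the single-yield pattern the fix calls for, using contextlib.asynccontextmanager (the shape FastAPI's lifespan parameter accepts); load_model is a hypothetical stand-in for constructing the SAM3 inference object:

```python
from contextlib import asynccontextmanager
from types import SimpleNamespace


def load_model():
    # Hypothetical stand-in for building the SAM3 inference instance.
    return object()


@asynccontextmanager
async def lifespan(app):
    # Startup: load the model once and attach it where handlers expect it.
    app.state.sam3_inference = load_model()
    try:
        yield  # exactly one yield; the app serves requests while suspended here
    finally:
        # Shutdown: release the reference.
        app.state.sam3_inference = None
```

Handlers then read request.app.state.sam3_inference, which is guaranteed to be set for the lifetime of the single yield.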


2. Logs not JSON-structured 📘 Rule violation ⛨ Security
Description
• Backend logging is configured with a plain-text format string rather than structured (e.g., JSON) logs.
• This reduces audit/debug usefulness and does not meet the requirement for structured logging outputs.
• The current setup also makes it harder to consistently control or strip sensitive fields because logs are not structured.
Code

apps/api-inference-yolo/src/app/helpers/logger.py[R18-22]

+    logging.basicConfig(
+        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
+        stream=sys.stdout,
+        level=getattr(logging, settings.LOG_LEVEL.upper()),
+    )
Evidence
The compliance rule requires structured logs for auditing. The logger is configured with a simple
string formatter, not JSON/structured output.

Rule 5: Generic: Secure Logging Practices
apps/api-inference-yolo/src/app/helpers/logger.py[18-22]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Logging output is currently unstructured (plain text). Compliance requires structured logs (e.g., JSON) while avoiding sensitive data exposure.
## Issue Context
The settings already include `LOG_JSON_FORMAT`, but it is not used in the logger configuration.
## Fix Focus Areas
- apps/api-inference-yolo/src/app/helpers/logger.py[1-27]
- apps/api-inference-yolo/src/app/config.py[33-36]

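A stdlib-only sketch of what structured output could look like here (the payload fields are illustrative; per the issue context, the PR's unused LOG_JSON_FORMAT setting would toggle a formatter like this):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log record (field set is illustrative)."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": self.formatTime(record),
            "logger": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        }
        return json.dumps(payload)
```

Attached via handler.setFormatter(JsonFormatter()), this keeps every record machine-parseable, which also makes it easier to redact sensitive fields consistently.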


3. detail=str(e) exposed 📘 Rule violation ⛨ Security
Description
• API handlers return HTTPException(..., detail=str(e)) for ValueError, directly exposing internal exception messages to clients.
• Exception strings can unintentionally leak implementation details (e.g., file paths, library messages) that should remain internal.
• This violates the requirement that user-facing errors remain generic while detailed information is kept in internal logs.
Code

apps/api-inference-yolo/src/app/routers/sam3.py[R78-83]

+    except ValueError as e:
+        logger.error(f"Validation error in text inference: {e}")
+        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
+    except Exception as e:
+        logger.error(f"Error in text inference: {e}")
+        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail="Inference failed")
Evidence
Secure error handling requires generic user-facing errors. The router sends raw exception messages
back to the client via detail=str(e), which can leak internal details.

Rule 4: Generic: Secure Error Handling
apps/api-inference-yolo/src/app/routers/sam3.py[78-83]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Handlers currently expose raw exception strings to the client through `detail=str(e)`. This can leak internal details.
## Issue Context
You already log the exception server-side. Client responses should be generic (or constrained to a safe, curated message) while logs keep full detail.
## Fix Focus Areas
- apps/api-inference-yolo/src/app/routers/sam3.py[78-83]
- apps/api-inference-yolo/src/app/routers/sam3.py[160-166]
- apps/api-inference-yolo/src/app/main.py[119-145]

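One way to keep client-facing errors generic while preserving detail server-side (the mapping and messages below are illustrative, not the PR's):

```python
import logging

logger = logging.getLogger("api")

# Curated, client-safe messages keyed by exception class; anything else is generic.
_SAFE = {
    ValueError: "Invalid request payload",
}


def client_detail(exc: Exception) -> str:
    """Log the full exception server-side, return only a safe message."""
    logger.error("inference error: %r", exc)  # full detail stays in the logs
    return _SAFE.get(type(exc), "Inference failed")
```

The handler would then raise HTTPException(status_code=..., detail=client_detail(e)) instead of detail=str(e), so raw paths and library messages never reach the client.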


4. bounding_boxes lacks validation 📘 Rule violation ⛨ Security
Description
• bounding_boxes is parsed from user input JSON and then indexed as b[0]..b[3] without validating item length/type, so malformed input can cause IndexError/type errors.
• This turns a client input problem into a server 500 path, rather than a controlled 400/422 with safe messaging.
• This violates the requirement for security-first input validation and explicit handling of null/empty/boundary cases.
Code

apps/api-inference-yolo/src/app/routers/sam3.py[R125-135]

+    try:
+        # Parse bounding boxes JSON
+        try:
+            boxes_data = json.loads(bounding_boxes)
+        except json.JSONDecodeError:
+            raise ValueError("Invalid JSON format for bounding_boxes")
+
+        # Extract boxes and labels
+        boxes = [[b[0], b[1], b[2], b[3]] for b in boxes_data]
+        labels = [b[4] if len(b) > 4 else 1 for b in boxes_data]  # Default to positive
+
Evidence
The compliance rules require validation/sanitization of external inputs and explicit handling of
edge cases. The bbox parsing assumes a fixed structure and indexes list elements without validating
shape or types.

Rule 6: Generic: Security-First Input Validation and Data Handling
Rule 3: Generic: Robust Error Handling and Edge Case Management
apps/api-inference-yolo/src/app/routers/sam3.py[125-135]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The API trusts `bounding_boxes` JSON structure and indexes into elements without validating lengths/types. Malformed input can trigger server exceptions and 500 responses.
## Issue Context
This endpoint is externally facing (multipart form). It should strictly validate and reject invalid bbox payloads with a controlled error response.
## Fix Focus Areas
- apps/api-inference-yolo/src/app/routers/sam3.py[125-166]
- apps/api-inference-yolo/src/app/schemas/sam3.py[6-14]

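A sketch of strict payload validation that raises ValueError (the endpoint's controlled 400 path) instead of letting IndexError/TypeError surface as a 500; the [x1, y1, x2, y2, optional label] shape follows the diff above, and the function name is hypothetical:

```python
def parse_bounding_boxes(boxes_data):
    """Validate a parsed bbox payload: each item is [x1, y1, x2, y2(, label)].

    Raises ValueError on any malformed item so the router can map it to a 400.
    """
    if not isinstance(boxes_data, list) or not boxes_data:
        raise ValueError("bounding_boxes must be a non-empty list")
    boxes, labels = [], []
    for i, b in enumerate(boxes_data):
        if not isinstance(b, (list, tuple)) or len(b) not in (4, 5):
            raise ValueError(f"box {i} must have 4 coordinates and an optional label")
        coords = b[:4]
        if not all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in coords):
            raise ValueError(f"box {i} coordinates must be numbers")
        x1, y1, x2, y2 = coords
        if x2 <= x1 or y2 <= y1:
            raise ValueError(f"box {i} must satisfy x1 < x2 and y1 < y2")
        boxes.append([x1, y1, x2, y2])
        labels.append(b[4] if len(b) == 5 else 1)  # default: positive exemplar
    return boxes, labels
```

Alternatively, the existing Pydantic schemas could validate the parsed JSON directly, which yields 422 responses with field-level errors for free.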


5. Ruff F401 unused imports 📘 Rule violation ✓ Correctness
Description
• The backend code introduces unused imports (e.g., `cv2`, `FastAPIResponse`, and a local `import os`), which will trigger Ruff `F401` violations under the configured lint selection.
• This breaks the requirement that backend changes are Ruff lint-clean.
Code

apps/api-inference-yolo/src/app/integrations/sam3/inference.py[R3-27]

+import io
+import time
+import base64
+
+import cv2
+import numpy as np
+import torch
+from fastapi import UploadFile
+from PIL import Image
+from ultralytics.models.sam import SAM3SemanticPredictor
+
+from app.config import settings
+from app.helpers.logger import logger
+from app.integrations.sam3.mask_utils import masks_to_polygon_data
+from app.integrations.sam3.visualizer import Sam3Visualizer
+
+
+class SAM3Inference:
+    """SAM3 inference implementation using Ultralytics."""
+
+    def __init__(self):
+        """Initialize SAM3 model configuration."""
+        import os
+        
+        # Support both local file paths and model names
Evidence
The compliance rule requires Ruff linting to pass. The diff shows multiple unused imports that Ruff
will flag as F401 given the project's Ruff configuration (select includes F).

CLAUDE.md
apps/api-inference-yolo/src/app/integrations/sam3/inference.py[3-8]
apps/api-inference-yolo/src/app/integrations/sam3/inference.py[23-26]
apps/api-inference-yolo/src/app/routers/sam3.py[5-7]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Backend files include unused imports that will fail Ruff linting (F401).
## Issue Context
`pyproject.toml` enables Ruff linting with `select = ["E", "F", "I", "N", "W", "UP"]`, so unused imports are not allowed.
## Fix Focus Areas
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[1-30]
- apps/api-inference-yolo/src/app/routers/sam3.py[1-12]



6. Docker CMD wrong path 🐞 Bug ⛯ Reliability
Description
• The api-inference-yolo image copies code under /code/src but runs uv run app/main.py, which does not exist at that path.
• This will cause the backend container to fail immediately on startup, blocking the recommended backend option.
Code

apps/api-inference-yolo/Dockerfile[R35-45]

+# Copy application code
+COPY src/ ./src/
+
+EXPOSE 8000
+
+# Set Python path
+ENV PYTHONPATH=/code/src
+
+# 7. Run application
+# We use 'uv run' which ensures the correct python environment is used
+CMD ["uv", "run", "app/main.py"]
Evidence
The Dockerfile copies only src/ into /code/src, sets PYTHONPATH=/code/src, but the command
points to app/main.py under /code/app/main.py (not present).

apps/api-inference-yolo/Dockerfile[35-45]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The `apps/api-inference-yolo` Docker image will not start because the `CMD` points at `app/main.py`, but the file is located at `src/app/main.py` inside the container.
### Issue Context
The Dockerfile copies `src/` to `/code/src` and sets `PYTHONPATH=/code/src`, so Python imports should use `app.*`.
### Fix Focus Areas
- apps/api-inference-yolo/Dockerfile[35-45]

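Assuming the layout described in the evidence (code at /code/src/app/main.py, PYTHONPATH=/code/src), one plausible correction, unverified against the actual image, is:

```dockerfile
# Code lives at /code/src/app/main.py and PYTHONPATH=/code/src,
# so run it as a module rather than as a non-existent /code/app/main.py:
CMD ["uv", "run", "python", "-m", "app.main"]
# alternatively, point at the real file path:
# CMD ["uv", "run", "src/app/main.py"]
```

The module form has the advantage that imports resolve the same way in the container as they do under PYTHONPATH locally.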


7. Predictor shared mutable state 🐞 Bug ⛯ Reliability
Description
• A single SAM3SemanticPredictor instance is stored globally and mutated per request (args.conf, set_image).
• Concurrent requests can overwrite each other's threshold/image, producing incorrect masks/boxes and non-deterministic behavior.
Code

apps/api-inference-yolo/src/app/integrations/sam3/inference.py[R146-156]

+        # Update predictor confidence if specified
+        if conf_threshold is not None and hasattr(self.predictor, 'args'):
+            self.predictor.args.conf = conf_threshold
+            logger.info(f"Set confidence threshold to {conf_threshold}")
+        
+        # Set image (like predictor.set_image() in test script)
+        self.predictor.set_image(image_np)
+        
+        # Run prediction (like predictor(text=[...]) in test script)
+        results = self.predictor(**kwargs)
+        
Evidence
The predictor is created once on load and stored as self.predictor, then _run_inference mutates
shared predictor settings and image before calling self.predictor(**kwargs). The app lifecycle
loads one SAM3Inference instance, implying shared usage.

apps/api-inference-yolo/src/app/integrations/sam3/inference.py[96-115]
apps/api-inference-yolo/src/app/integrations/sam3/inference.py[137-156]
apps/api-inference-yolo/src/app/main.py[34-43]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A shared Ultralytics predictor is mutated per request (`args.conf`, `set_image`), which can cause cross-request contamination under concurrency.
### Issue Context
The predictor is loaded once in app lifespan and reused across requests.
### Fix Focus Areas
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[96-116]
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[137-156]
- apps/api-inference-yolo/src/app/main.py[34-43]

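The simplest remediation shape is to serialize access to the shared predictor; a hedged sketch with a hypothetical wrapper class (per-request predictor instances or a worker pool are alternatives if lock contention matters):

```python
import threading


class SAM3InferenceSafe:
    """Sketch: serialize access to a shared, stateful predictor with a lock."""

    def __init__(self, predictor):
        self._predictor = predictor
        self._lock = threading.Lock()

    def run(self, image, conf_threshold=None, **kwargs):
        # Mutating args.conf and calling set_image() are only safe
        # for a shared predictor while holding the lock.
        with self._lock:
            if conf_threshold is not None:
                self._predictor.args.conf = conf_threshold
            self._predictor.set_image(image)
            return self._predictor(**kwargs)
```

Because set_image and the threshold mutation happen inside one critical section, two concurrent requests can no longer interleave and contaminate each other's state.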



Remediation recommended

8. Missing React hook deps 📘 Rule violation ✓ Correctness
Description
• The useEffect for auto-apply reads images, isLoading, and calls runBboxInference, but these are not included in the dependency array.
• With eslint-plugin-react-hooks enabled, this is likely to raise an exhaustive-deps warning/error and can also lead to stale state/closures at runtime.
Code

apps/web/src/components/BboxPromptPanel.tsx[R60-86]

+  // Auto-apply mode: Execute prompt when image changes
useEffect(() => {
-    if (promptMode !== 'single') {
-      setPromptMode('single')
+    if (promptMode !== 'auto-apply') return
+    if (!currentImage) return
+    if (!hasRunOnce) return
+    if (promptBboxes.length === 0) return
+    if (isLoading) return
+
+    // Don't re-process the same image
+    if (lastProcessedImageIdRef.current === currentImage.id) return
+
+    // Skip if image already has auto-generated annotations
+    const hasAutoAnnotations = images
+      .find(img => img.id === currentImage.id)
+      ?.annotations?.some((ann: any) => ann.isAutoGenerated)
+    if (hasAutoAnnotations) {
+      lastProcessedImageIdRef.current = currentImage.id
+      return
}
-  }, []) // Only run on mount
-  const handleSubmit = async (e: React.FormEvent) => {
-    e.preventDefault()
+    console.log('[AUTO-APPLY BBOX] Triggering for', currentImage.name)
+    runBboxInference().catch(err => {
+      console.error('[AUTO-APPLY BBOX] Error:', err)
+      setIsLoading(false)
+    })
+  }, [currentImage?.id, promptMode, hasRunOnce, promptBboxes.length])
Evidence
Frontend compliance requires ESLint to pass. The project enables eslint-plugin-react-hooks
recommended config, and the shown useEffect omits referenced values from its dependency list,
which exhaustive-deps flags.

CLAUDE.md
apps/web/eslint.config.js[8-17]
apps/web/src/components/BboxPromptPanel.tsx[61-86]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A React `useEffect` uses values/functions that are not included in its dependency array, which is flagged by `eslint-plugin-react-hooks/exhaustive-deps` and may cause stale closures.
### Issue Context
The project ESLint config enables `reactHooks.configs.flat.recommended`, so this rule is active.
### Fix Focus Areas
- apps/web/src/components/BboxPromptPanel.tsx[60-86]
- apps/web/eslint.config.js[8-17]



9. Bbox auto-apply duplicates 🐞 Bug ✓ Correctness
Description
• Bbox auto-apply attempts to detect existing AI annotations via images[].annotations, but ImageData has no annotations field in this codebase.
• The skip check will never trigger, so revisiting an already-auto-annotated image in auto-apply mode can create duplicate annotations and extra backend load.
Code

apps/web/src/components/BboxPromptPanel.tsx[R71-77]

+    // Skip if image already has auto-generated annotations
+    const hasAutoAnnotations = images
+      .find(img => img.id === currentImage.id)
+      ?.annotations?.some((ann: any) => ann.isAutoGenerated)
+    if (hasAutoAnnotations) {
+      lastProcessedImageIdRef.current = currentImage.id
+      return
Evidence
BboxPromptPanel checks images.find(...).annotations, but ImageData only contains metadata and
the blob. Annotations are stored separately (and in TextPromptPanel auto-apply they correctly check
currentAnnotations).

apps/web/src/components/BboxPromptPanel.tsx[60-78]
apps/web/src/types/annotations.ts[70-80]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Bbox auto-apply uses `images[].annotations` to skip already AI-annotated images, but `ImageData` does not include annotations, so the check never works and can generate duplicates.
### Issue Context
Annotations are stored separately and `TextPromptPanel` already receives `currentAnnotations` for the same purpose.
### Fix Focus Areas
- apps/web/src/components/BboxPromptPanel.tsx[60-85]
- apps/web/src/types/annotations.ts[70-80]



10. Batch toast uses stale state 🐞 Bug ✓ Correctness
Description
• Batch completion counts are computed from the batchProgress state captured by the async function, not from the final updated progress.
• The completion toast can report 0 succeeded, 0 failed (or otherwise incorrect numbers), reducing trust in batch mode results.
Code

apps/web/src/components/BboxPromptPanel.tsx[R323-325]

+    const successCount = batchProgress.filter(p => p.status === 'success').length
+    const errorCount = batchProgress.filter(p => p.status === 'error').length
+    toast.success(`Batch complete: ${successCount} succeeded, ${errorCount} failed`)
Evidence
The code updates progress via multiple setBatchProgress calls inside an async loop but then reads
batchProgress directly at the end of the same async function, which commonly reads a stale closure
value rather than the latest state.

apps/web/src/components/BboxPromptPanel.tsx[241-325]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Batch completion toast uses `batchProgress` from a stale closure, producing incorrect success/error counts.
### Issue Context
Progress is updated via `setBatchProgress` within an async for-loop.
### Fix Focus Areas
- apps/web/src/components/BboxPromptPanel.tsx[241-326]



11. Health check URL mismatch 🐞 Bug ⛯ Reliability
Description
• run-yolo.sh/run-hf.sh probe http://localhost:8000/health, but the backend health endpoint is under /api/v1/sam3/health.
• This produces false warnings and fails to actually validate backend readiness.
Code

run-yolo.sh[R56-60]

+# Check if backend is running
+if ! curl -s http://localhost:8000/health > /dev/null 2>&1; then
+    echo "⚠️  Backend may still be loading the model..."
+    echo "   This can take 30-60 seconds on first run"
+fi
Evidence
The scripts call /health, while the FastAPI router defines health at /api/v1/sam3/health.

run-yolo.sh[52-60]
run-hf.sh[53-56]
apps/api-inference-yolo/src/app/routers/sam3.py[249-262]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Startup scripts check the wrong health endpoint and always get a 404.
### Issue Context
Backend health endpoint is mounted under the router prefix `/api/v1/sam3`.
### Fix Focus Areas
- run-yolo.sh[52-60]
- run-hf.sh[53-56]
- apps/api-inference-yolo/src/app/routers/sam3.py[249-262]



12. BBox params ignored 🐞 Bug ✓ Correctness
Description
• The yolo backend accepts mask_threshold and box_labels but does not apply them during inference.
• This makes the API contract misleading (the UI mask threshold slider has no effect; negative box labels can't work).
Code

apps/api-inference-yolo/src/app/integrations/sam3/inference.py[R243-263]

+    async def inference_bbox(
+        self,
+        image_file: UploadFile,
+        bounding_boxes: list[list[int]],
+        box_labels: list[int],
+        threshold: float,
+        mask_threshold: float,
+        return_visualization: bool = False,
+    ) -> dict:
+        """Bounding box inference."""
+        start_time = time.perf_counter()
+        image_np = await self._load_image_from_upload(image_file)
+        
+        logger.info(f"Running bbox inference with {len(bounding_boxes)} box(es), threshold: {threshold}")
+        
+        result_obj, masks_tensor, boxes, scores, masks_poly = await self._run_inference(
+            image_np,
+            conf_threshold=threshold,  # Pass threshold to inference
+            bboxes=bounding_boxes
+        )
+        
Evidence
inference_bbox takes box_labels and mask_threshold but only forwards bounding_boxes to
_run_inference, and _run_inference only adjusts confidence via self.predictor.args.conf.

apps/api-inference-yolo/src/app/integrations/sam3/inference.py[243-263]
apps/api-inference-yolo/src/app/integrations/sam3/inference.py[137-152]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`mask_threshold` and `box_labels` are accepted by the yolo backend but are not used, making the API/UI misleading.
### Issue Context
HuggingFace backend uses both thresholds and bbox labels; yolo backend currently does not.
### Fix Focus Areas
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[137-156]
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[243-263]
- apps/api-inference-yolo/src/app/integrations/sam3/mask_utils.py[10-58]



13. CORS credentials wildcard 🐞 Bug ⛨ Security
Description
• Default CORS settings allow any origin ("*") while also enabling allow_credentials=True.
• If cookie/session auth is added later (or if browsers/middleware enforce strict CORS), this becomes insecure or breaks cross-origin requests in surprising ways.
Code

apps/api-inference-yolo/src/app/main.py[R70-77]

+# Add CORS middleware
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=settings.ALLOWED_ORIGINS,
+    allow_credentials=True,
+    allow_methods=settings.ALLOWED_METHODS,
+    allow_headers=settings.ALLOWED_HEADERS,
+)
Evidence
The backend enables credentialed CORS responses while defaulting allowed origins to *. This is
generally incompatible with credentialed browser requests and broadens cross-origin access
unnecessarily.

apps/api-inference-yolo/src/app/config.py[37-40]
apps/api-inference-yolo/src/app/main.py[70-77]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
CORS is configured with wildcard origins and `allow_credentials=True`, which is an unsafe/brittle combination.
### Issue Context
Even if the app is currently unauthenticated, this configuration can become a security footgun as features evolve.
### Fix Focus Areas
- apps/api-inference-yolo/src/app/config.py[37-40]
- apps/api-inference-yolo/src/app/main.py[70-77]
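A minimal sketch of the safer pattern (the `cors_settings` helper is hypothetical, not part of this PR): derive the middleware kwargs from an explicit origin list and never combine a wildcard with credentials:

```python
def cors_settings(origins: list[str]) -> dict:
    """Build CORSMiddleware kwargs, refusing the wildcard-plus-credentials combo."""
    wildcard = "*" in origins
    return {
        "allow_origins": origins,
        # Browsers reject `Access-Control-Allow-Origin: *` on credentialed
        # requests, so credentials are only enabled for explicit origins.
        "allow_credentials": not wildcard,
        "allow_methods": ["GET", "POST", "OPTIONS"],
        "allow_headers": ["Content-Type"],
    }
```

The returned dict can be splatted into `app.add_middleware(CORSMiddleware, **cors_settings([...]))`.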




Comment on lines +41 to +52
        yield {
            "sam3_inference": sam3_inference,
        }

        logger.info("Application startup complete")

    except Exception as e:
        logger.error(f"Failed to initialize application: {e}")
        raise

    yield

Action required

1. lifespan() yields twice 📘 Rule violation ⛯ Reliability

• The FastAPI lifespan() context manager yields twice, which can break startup/shutdown flow and
  make startup state unreliable.
• Route handlers assume sam3_inference exists on request.state, but the lifespan code does not
  explicitly attach it there, risking runtime AttributeError and 500s instead of graceful
  degradation.
• This violates the requirement to handle failure points and edge cases explicitly, especially
  around dependency initialization.
Agent Prompt
## Issue description
`lifespan()` currently yields twice and the app’s `sam3_inference` instance is accessed via `request.state` without a guaranteed assignment path. This can lead to startup/shutdown issues and runtime `AttributeError` in request handlers.

## Issue Context
The SAM3 model is intended to be loaded once at startup and reused. Handlers should either access `request.app.state.sam3_inference` (if set during lifespan) or a dependency injection pattern should be used.

## Fix Focus Areas
- apps/api-inference-yolo/src/app/main.py[17-57]
- apps/api-inference-yolo/src/app/routers/sam3.py[54-57]
- apps/api-inference-yolo/src/app/routers/sam3.py[136-138]
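A sketch of the single-yield shape (here `load_model` is a hypothetical stand-in for the SAM3 loader, and a `SimpleNamespace` plays the FastAPI app so the pattern is testable without the framework):

```python
import asyncio
from contextlib import asynccontextmanager
from types import SimpleNamespace


def load_model():
    # Hypothetical stand-in for SAM3Inference() construction.
    return "sam3-predictor"


@asynccontextmanager
async def lifespan(app):
    # Initialize once, attach to app.state so handlers read a guaranteed
    # attribute, and yield exactly once; the finally block runs at shutdown.
    app.state.sam3_inference = load_model()
    try:
        yield
    finally:
        app.state.sam3_inference = None
```

Handlers would then read `request.app.state.sam3_inference` instead of relying on an ad-hoc `request.state` attribute.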


Comment on lines +78 to +83
    except ValueError as e:
        logger.error(f"Validation error in text inference: {e}")
        raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=str(e))
    except Exception as e:
        logger.error(f"Error in text inference: {e}")
        raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail="Inference failed")

Action required

3. detail=str(e) exposed 📘 Rule violation ⛨ Security

• API handlers return HTTPException(..., detail=str(e)) for ValueError, directly exposing
  internal exception messages to clients.
• Exception strings can unintentionally leak implementation details (e.g., file paths, library
  messages) that should remain internal.
• This violates the requirement that user-facing errors remain generic while detailed information is
  kept in internal logs.
Agent Prompt
## Issue description
Handlers currently expose raw exception strings to the client through `detail=str(e)`. This can leak internal details.

## Issue Context
You already log the exception server-side. Client responses should be generic (or constrained to a safe, curated message) while logs keep full detail.

## Fix Focus Areas
- apps/api-inference-yolo/src/app/routers/sam3.py[78-83]
- apps/api-inference-yolo/src/app/routers/sam3.py[160-166]
- apps/api-inference-yolo/src/app/main.py[119-145]
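One way to keep responses generic (a sketch; the `client_detail` helper and `SAFE_MESSAGES` map are hypothetical): translate known exception types into curated strings and leave the raw message in the server log:

```python
import logging

logger = logging.getLogger("sam3")

# Curated, client-safe messages; anything unlisted gets the generic fallback.
SAFE_MESSAGES = {
    "ValueError": "Invalid request parameters",
}


def client_detail(exc: Exception) -> str:
    """Log the full exception, return only a safe detail string for the client."""
    logger.error("Inference error: %r", exc)
    return SAFE_MESSAGES.get(type(exc).__name__, "Inference failed")
```

The router would then raise `HTTPException(status_code=400, detail=client_detail(e))` without ever interpolating `str(e)` into the response.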


Comment on lines +125 to +135
    try:
        # Parse bounding boxes JSON
        try:
            boxes_data = json.loads(bounding_boxes)
        except json.JSONDecodeError:
            raise ValueError("Invalid JSON format for bounding_boxes")

        # Extract boxes and labels
        boxes = [[b[0], b[1], b[2], b[3]] for b in boxes_data]
        labels = [b[4] if len(b) > 4 else 1 for b in boxes_data]  # Default to positive

Action required

4. bounding_boxes lacks validation 📘 Rule violation ⛨ Security

• bounding_boxes is parsed from user input JSON and then indexed as b[0]..b[3] without
  validating item length/type, so malformed input can cause IndexError/type errors.
• This turns a client input problem into a server 500 path, rather than a controlled 400/422 with
  safe messaging.
• This violates the requirement for security-first input validation and explicit handling of
  null/empty/boundary cases.
Agent Prompt
## Issue description
The API trusts `bounding_boxes` JSON structure and indexes into elements without validating lengths/types. Malformed input can trigger server exceptions and 500 responses.

## Issue Context
This endpoint is externally facing (multipart form). It should strictly validate and reject invalid bbox payloads with a controlled error response.

## Fix Focus Areas
- apps/api-inference-yolo/src/app/routers/sam3.py[125-166]
- apps/api-inference-yolo/src/app/schemas/sam3.py[6-14]
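A sketch of strict parsing (the `parse_bounding_boxes` helper is hypothetical; a Pydantic schema would work equally well): validate shape and types before indexing, raising `ValueError` so the router can return a controlled 400:

```python
import json


def parse_bounding_boxes(raw: str) -> tuple[list[list[int]], list[int]]:
    """Parse and validate the bounding_boxes form field.

    Each item must be [x1, y1, x2, y2] or [x1, y1, x2, y2, label];
    anything else raises ValueError (mapped to 400/422 by the router).
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("Invalid JSON format for bounding_boxes")
    if not isinstance(data, list) or not data:
        raise ValueError("bounding_boxes must be a non-empty list")
    boxes, labels = [], []
    for item in data:
        if not isinstance(item, list) or len(item) not in (4, 5):
            raise ValueError("each box must be [x1, y1, x2, y2] or [x1, y1, x2, y2, label]")
        if not all(isinstance(v, (int, float)) for v in item):
            raise ValueError("box values must be numeric")
        boxes.append([int(v) for v in item[:4]])
        labels.append(int(item[4]) if len(item) == 5 else 1)  # default: positive
    return boxes, labels
```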


Comment on lines +3 to +27
import io
import time
import base64

import cv2
import numpy as np
import torch
from fastapi import UploadFile
from PIL import Image
from ultralytics.models.sam import SAM3SemanticPredictor

from app.config import settings
from app.helpers.logger import logger
from app.integrations.sam3.mask_utils import masks_to_polygon_data
from app.integrations.sam3.visualizer import Sam3Visualizer


class SAM3Inference:
    """SAM3 inference implementation using Ultralytics."""

    def __init__(self):
        """Initialize SAM3 model configuration."""
        import os

        # Support both local file paths and model names
Action required

5. Ruff f401 unused imports 📘 Rule violation ✓ Correctness

• The backend code introduces unused imports (e.g., `cv2`, `FastAPIResponse`, and a local
  `import os`), which will trigger Ruff `F401` violations under the configured lint selection.
• This breaks the requirement that backend changes are Ruff lint-clean.
Agent Prompt
## Issue description
Backend files include unused imports that will fail Ruff linting (F401).

## Issue Context
`pyproject.toml` enables Ruff linting with `select = ["E", "F", "I", "N", "W", "UP"]`, so unused imports are not allowed.

## Fix Focus Areas
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[1-30]
- apps/api-inference-yolo/src/app/routers/sam3.py[1-12]


Comment on lines +35 to +45
# Copy application code
COPY src/ ./src/

EXPOSE 8000

# Set Python path
ENV PYTHONPATH=/code/src

# 7. Run application
# We use 'uv run' which ensures the correct python environment is used
CMD ["uv", "run", "app/main.py"]

Action required

6. Docker cmd wrong path 🐞 Bug ⛯ Reliability

• The api-inference-yolo image copies code under /code/src but runs uv run app/main.py, which
  does not exist at that path.
• This will cause the backend container to fail immediately on startup, blocking the recommended
  backend option.
Agent Prompt
### Issue description
The `apps/api-inference-yolo` Docker image will not start because the `CMD` points at `app/main.py`, but the file is located at `src/app/main.py` inside the container.

### Issue Context
The Dockerfile copies `src/` to `/code/src` and sets `PYTHONPATH=/code/src`, so Python imports should use `app.*`.

### Fix Focus Areas
- apps/api-inference-yolo/Dockerfile[35-45]


Comment on lines +146 to +156
# Update predictor confidence if specified
if conf_threshold is not None and hasattr(self.predictor, 'args'):
self.predictor.args.conf = conf_threshold
logger.info(f"Set confidence threshold to {conf_threshold}")

# Set image (like predictor.set_image() in test script)
self.predictor.set_image(image_np)

# Run prediction (like predictor(text=[...]) in test script)
results = self.predictor(**kwargs)

Action required

7. Predictor shared mutable state 🐞 Bug ⛯ Reliability

• A single SAM3SemanticPredictor instance is stored globally and mutated per request (args.conf,
  set_image).
• Concurrent requests can overwrite each other’s threshold/image, producing incorrect masks/boxes
  and non-deterministic behavior.
Agent Prompt
### Issue description
A shared Ultralytics predictor is mutated per request (`args.conf`, `set_image`), which can cause cross-request contamination under concurrency.

### Issue Context
The predictor is loaded once in app lifespan and reused across requests.

### Fix Focus Areas
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[96-116]
- apps/api-inference-yolo/src/app/integrations/sam3/inference.py[137-156]
- apps/api-inference-yolo/src/app/main.py[34-43]


@Rajkisan Rajkisan force-pushed the feature/ultralytics-sam3-enhancements branch 2 times, most recently from 0cc6350 to ab8050e Compare February 11, 2026 10:39
…tence per label

- Install missing react-router-dom package for routing support
- Implement per-label text prompt persistence in localStorage
- Auto-load saved prompts when switching labels
- Save prompts after successful inference runs
- Prompts persist across image navigation within session
@Rajkisan Rajkisan force-pushed the feature/ultralytics-sam3-enhancements branch from b4974f4 to e7b63e7 Compare February 11, 2026 10:45
@Rajkisan Rajkisan force-pushed the feature/ultralytics-sam3-enhancements branch from ce0c2ab to ed8ce79 Compare February 12, 2026 09:15