13 changes: 13 additions & 0 deletions Makefile
@@ -0,0 +1,13 @@
.PHONY: install chat dev console

install:
	uv sync && uv run pre-commit install

chat:
	uv run chat

console:
	uv run textual console -x SYSTEM -x EVENT -x DEBUG -x INFO

dev:
	uv run textual run --dev -c chat
20 changes: 18 additions & 2 deletions README.md
@@ -15,8 +15,8 @@ This tool is for those who wish for slightly more control over their MCP servers
This app uses [uv](https://github.com/astral-sh/uv) for package management so first install that. Then:

- `git clone https://github.com/damassi/agent-chat-cli-python.git`
- `uv sync`
- `uv run chat`
- `make install`
- `make chat`

Additional MCP servers are configured in `agent-chat-cli.config.yaml`, and prompts are added within the `prompts` folder.

@@ -27,3 +27,19 @@ Additional MCP servers are configured in `agent-chat-cli.config.yaml` and prompt
- Typechecking is via [MyPy](https://github.com/python/mypy):
- `uv run mypy src`
- Linting and formatting is via [Ruff](https://docs.astral.sh/ruff/)

Textual has an integrated logging console that can be booted separately from the app to receive its logs.

In one terminal pane, boot the console:

```bash
make console
```

> Note: this command intentionally excludes the more verbose log groups. See the Makefile to adjust which groups are filtered out.

Then, in a second terminal pane, start the Textual dev server:

```bash
make dev
```
2 changes: 1 addition & 1 deletion agent-chat-cli.config.yaml
@@ -4,7 +4,7 @@
system_prompt: system.md

# Model to use (e.g., sonnet, opus, haiku)
model: sonnet
model: haiku

# Enable streaming responses
include_partial_messages: true
3 changes: 3 additions & 0 deletions src/agent_chat_cli/app.py
@@ -1,4 +1,5 @@
import asyncio

from textual.app import App, ComposeResult
from textual.containers import VerticalScroll
from textual.binding import Binding
@@ -9,10 +10,12 @@
from agent_chat_cli.components.user_input import UserInput
from agent_chat_cli.utils import AgentLoop
from agent_chat_cli.utils.message_bus import MessageBus
from agent_chat_cli.utils.logger import setup_logging

from dotenv import load_dotenv

load_dotenv()
setup_logging()


class AgentChatCLIApp(App):
137 changes: 137 additions & 0 deletions src/agent_chat_cli/docs/architecture.md
@@ -0,0 +1,137 @@
# Architecture

## Overview

Agent Chat CLI is a Python TUI application built with Textual that provides an interactive chat interface for Claude AI with MCP (Model Context Protocol) server support.

## Core Components

### App Layer (`app.py`)
Main Textual application that initializes and coordinates all components.

### Components Layer
Textual widgets responsible for UI rendering:
- **ChatHistory**: Container that displays message widgets
- **Message widgets**: SystemMessage, UserMessage, AgentMessage, ToolMessage
- **UserInput**: Handles user text input and submission
- **ThinkingIndicator**: Shows when the agent is processing
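
A minimal Textual sketch of how such a layout can compose — the widget names below are illustrative stand-ins, not the project's actual components:

```python
# Illustrative only: a scrollable history container plus an input box,
# mirroring the ChatHistory / UserInput split described above.
from textual.app import App, ComposeResult
from textual.containers import VerticalScroll
from textual.widgets import Input, Static


class MiniChatApp(App):
    def compose(self) -> ComposeResult:
        yield VerticalScroll(id="container")          # stands in for ChatHistory
        yield Input(placeholder="Type a message...")  # stands in for UserInput

    def on_input_submitted(self, event: Input.Submitted) -> None:
        # The real app forwards input to the agent loop; here we simply echo it.
        history = self.query_one("#container", VerticalScroll)
        history.mount(Static(f"> {event.value}"))


if __name__ == "__main__":
    MiniChatApp().run()
```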

### Utils Layer

#### Agent Loop (`agent_loop.py`)
Manages the conversation loop with the Claude SDK:
- Maintains an async queue of user queries
- Handles streaming responses
- Parses SDK messages into structured AgentMessage objects
- Emits AgentMessageType events (STREAM_EVENT, ASSISTANT, RESULT)
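
A toy version of this pattern, with a canned reply standing in for the Claude SDK stream (illustrative only — the real implementation is `utils/agent_loop.py`):

```python
import asyncio
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Awaitable, Callable


class AgentMessageType(Enum):
    STREAM_EVENT = auto()
    RESULT = auto()


@dataclass
class AgentMessage:
    type: AgentMessageType
    data: Any


class ToyAgentLoop:
    """Consumes queued queries and emits typed messages to an async callback."""

    def __init__(self, on_message: Callable[[AgentMessage], Awaitable[None]]) -> None:
        self.on_message = on_message
        self.query_queue: asyncio.Queue[str] = asyncio.Queue()

    async def query(self, text: str) -> None:
        await self.query_queue.put(text)

    async def start(self) -> None:
        while True:
            prompt = await self.query_queue.get()
            # A real loop would stream chunks from the SDK client here.
            await self.on_message(
                AgentMessage(AgentMessageType.STREAM_EVENT, {"text": f"echo: {prompt}"})
            )
            await self.on_message(AgentMessage(AgentMessageType.RESULT, None))
```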

#### Message Bus (`message_bus.py`)
Routes agent messages to appropriate UI components:
- Handles streaming text updates
- Mounts tool use messages
- Controls thinking indicator state
- Manages scroll-to-bottom behavior

#### Config (`config.py`)
Loads and validates YAML configuration:
- Filters disabled MCP servers
- Loads prompts from files
- Expands environment variables
- Combines system prompt with MCP server prompts
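
A rough sketch of these steps (assumes PyYAML; the function and key handling are illustrative, not the project's actual `config.py`):

```python
import os
from pathlib import Path

import yaml  # assumes PyYAML is installed


def load_config(path: str = "agent-chat-cli.config.yaml") -> dict:
    config = yaml.safe_load(Path(path).read_text())

    # Expand ${VAR} references anywhere in the config (e.g. MCP server env vars).
    def expand(value):
        if isinstance(value, str):
            return os.path.expandvars(value)
        if isinstance(value, dict):
            return {key: expand(val) for key, val in value.items()}
        if isinstance(value, list):
            return [expand(item) for item in value]
        return value

    config = expand(config)

    # Drop MCP servers whose `enabled` flag is false.
    servers = config.get("mcp_servers", {})
    config["mcp_servers"] = {
        name: server for name, server in servers.items() if server.get("enabled", True)
    }

    # If system_prompt names a file in the prompts folder, inline its contents.
    prompt_file = Path("prompts") / str(config.get("system_prompt", ""))
    if prompt_file.is_file():
        config["system_prompt"] = prompt_file.read_text()

    return config
```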

## Data Flow

```
User Input
UserInput.on_input_submitted
MessagePosted event → ChatHistory (immediate UI update)
AgentLoop.query (added to queue)
Claude SDK (streaming response)
AgentLoop._handle_message
AgentMessage (typed message) → MessageBus.handle_agent_message
Match on AgentMessageType:
- STREAM_EVENT → Update streaming message widget
- ASSISTANT → Mount tool use widgets
- RESULT → Reset thinking indicator
```

## Key Types

### Enums (`utils/enums.py`)

**AgentMessageType**: Agent communication events
- ASSISTANT: Assistant message with content blocks
- STREAM_EVENT: Streaming text chunk
- RESULT: Response complete
- INIT, SYSTEM: Initialization and system events

**ContentType**: Content block types
- TEXT: Text content
- TOOL_USE: Tool call
- CONTENT_BLOCK_DELTA: SDK streaming event type
- TEXT_DELTA: SDK text delta type

**MessageType** (`components/messages.py`): UI message types
- SYSTEM, USER, AGENT, TOOL

### Data Classes

**AgentMessage** (`utils/agent_loop.py`): Structured message from agent loop
```python
@dataclass
class AgentMessage:
    type: AgentMessageType
    data: Any
```

**Message** (`components/messages.py`): UI message data
```python
@dataclass
class Message:
    type: MessageType
    content: str
    metadata: dict[str, Any] | None = None
```

## Configuration System

Configuration is loaded from `agent-chat-cli.config.yaml`:
- **system_prompt**: Base system prompt (supports file paths)
- **model**: Claude model to use
- **include_partial_messages**: Enable streaming
- **mcp_servers**: MCP server configurations (filtered by enabled flag)
- **agents**: Named agent configurations
- **disallowed_tools**: Tool filtering
- **permission_mode**: Permission handling mode

MCP server prompts are automatically appended to the system prompt.
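
One plausible shape for that combining step (field names mirror the bullets above; the per-server `prompt` key is an assumption for illustration, not confirmed code):

```python
def build_system_prompt(config: dict) -> str:
    # Start from the base prompt, then append each enabled server's prompt.
    parts = [config.get("system_prompt", "")]
    for name, server in config.get("mcp_servers", {}).items():
        server_prompt = server.get("prompt")  # assumed key, for illustration
        if server_prompt:
            parts.append(f"## {name}\n\n{server_prompt}")
    return "\n\n".join(parts)
```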

## Event Flow

### User Message Flow
1. User submits text → UserInput
2. MessagePosted event → App
3. App → MessageBus.on_message_posted
4. MessageBus → ChatHistory.add_message
5. MessageBus → Scroll to bottom

### Agent Response Flow
1. AgentLoop receives SDK message
2. Parse into AgentMessage with AgentMessageType
3. MessageBus.handle_agent_message (match/case on type)
4. Update UI components based on type
5. Scroll to bottom

## Notes

- Two distinct message type enums exist for different purposes: `MessageType` for UI messages, `AgentMessageType` for agent events
- Message bus manages stateful streaming (tracks current_agent_message)
- Config loading combines multiple prompts into the final system_prompt
- Tool names follow the format `mcp__servername__toolname`
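
For example, splitting a (hypothetical) tool name recovers the server and tool parts:

```python
# Hypothetical tool name, used only to illustrate the naming convention.
full_name = "mcp__github__search_issues"
prefix, server, tool = full_name.split("__", 2)
assert (prefix, server, tool) == ("mcp", "github", "search_issues")
```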
23 changes: 19 additions & 4 deletions src/agent_chat_cli/utils/agent_loop.py
@@ -6,7 +6,12 @@
ClaudeAgentOptions,
ClaudeSDKClient,
)
from claude_agent_sdk.types import AssistantMessage, TextBlock, ToolUseBlock
from claude_agent_sdk.types import (
    AssistantMessage,
    SystemMessage,
    TextBlock,
    ToolUseBlock,
)

from agent_chat_cli.utils.config import load_config
from agent_chat_cli.utils.enums import AgentMessageType, ContentType
@@ -22,12 +27,16 @@ class AgentLoop:
    def __init__(
        self,
        on_message: Callable[[AgentMessage], Awaitable[None]],
        session_id: str | None = None,
    ) -> None:
        self.config = load_config()
        self.session_id = session_id

        config_dict = self.config.model_dump()
        if session_id:
            config_dict["resume"] = session_id

        self.client = ClaudeSDKClient(
            options=ClaudeAgentOptions(**self.config.model_dump())
        )
        self.client = ClaudeSDKClient(options=ClaudeAgentOptions(**config_dict))

        self.on_message = on_message
        self.query_queue: asyncio.Queue[str] = asyncio.Queue()
@@ -49,6 +58,12 @@ async def start(self) -> None:
await self.on_message(AgentMessage(type=AgentMessageType.RESULT, data=None))

    async def _handle_message(self, message: Any) -> None:
        if isinstance(message, SystemMessage):
            if message.subtype == AgentMessageType.INIT.value and message.data.get(
                "session_id"
            ):
                self.session_id = message.data["session_id"]

        if hasattr(message, "event"):
            event = message.event  # type: ignore[attr-defined]

20 changes: 20 additions & 0 deletions src/agent_chat_cli/utils/logger.py
@@ -0,0 +1,20 @@
import json
import logging
from typing import Any

from textual.logging import TextualHandler


def setup_logging():
    logging.basicConfig(
        level="NOTSET",
        handlers=[TextualHandler()],
    )


def log(message: str):
    logging.info(message)


def log_json(message: Any):
    logging.info(json.dumps(message, indent=2))
12 changes: 6 additions & 6 deletions src/agent_chat_cli/utils/message_bus.py
@@ -23,12 +23,6 @@ def __init__(self, app: "App") -> None:
        self.current_agent_message: AgentMessageWidget | None = None
        self.current_response_text = ""

    async def _scroll_to_bottom(self) -> None:
        """Scroll the container to the bottom after a slight pause."""
        await asyncio.sleep(0.1)
        container = self.app.query_one("#container")
        container.scroll_end(animate=False, immediate=True)

    async def handle_agent_message(self, message: AgentMessage) -> None:
        match message.type:
            case AgentMessageType.STREAM_EVENT:
@@ -38,6 +32,12 @@ async def handle_agent_message(self, message: AgentMessage) -> None:
            case AgentMessageType.RESULT:
                await self._handle_result()

    async def _scroll_to_bottom(self) -> None:
        """Scroll the container to the bottom after a slight pause."""
        await asyncio.sleep(0.1)
        container = self.app.query_one("#container")
        container.scroll_end(animate=False, immediate=True)

    async def _handle_stream_event(self, message: AgentMessage) -> None:
        text_chunk = message.data.get("text", "")
