17 changes: 15 additions & 2 deletions AGENTS.md
@@ -16,12 +16,25 @@ git clone https://github.com/cortexlinux/cortex.git
cd cortex
python3 -m venv venv
source venv/bin/activate
pip install -e .

# Configure API key
# Install Cortex with interactive Ollama setup
# Use python setup.py develop (NOT pip install -e .) for interactive prompts
python setup.py develop

# The setup will automatically:
# - Prompt if you want to install Ollama (y/n)
# - Install Ollama if you choose 'yes'
# - Let you select an AI model to download
# - Configure everything for first use

# Note: pip install -e . works but hides interactive prompts
# Use python setup.py develop for full interactive experience

# Optional: Configure API key for cloud providers (if not using Ollama)
echo 'ANTHROPIC_API_KEY=your-key-here' > .env

# Verify installation
cortex --help
cortex install nginx --dry-run
```

28 changes: 26 additions & 2 deletions CHANGELOG.md
@@ -8,22 +8,46 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]

### Added
- **🚀 Ollama Integration - Local LLM Support**
- Privacy-first local LLM inference via Ollama
- Zero-cost, offline-capable package management
- No API keys required for basic usage
- Auto-installation and setup during `pip install`
- Smart model selection (prefers code-focused models)
- Streaming response support
- Automatic fallback to Claude/OpenAI when local models unavailable
- New `OllamaProvider` class in `cortex/providers/ollama_provider.py`
- Setup script `scripts/setup_ollama.py` for automated installation
- Comprehensive documentation in `docs/OLLAMA_INTEGRATION.md`
- Example usage in `examples/ollama_demo.py`
- Test suite in `tests/test_ollama_integration.py`
- Updated LLM router to prioritize local models
- CLI command `cortex-setup-ollama` for manual setup

- Comprehensive code assessment (ASSESSMENT.md)
- Detailed improvement roadmap (ROADMAP.md)
- Enhanced contribution guidelines (CONTRIBUTING.md)
- Professional README with full documentation
- This CHANGELOG file

### Changed
- Updated README with proper installation instructions
- Standardized Python version requirement to 3.10+
- **LLM Router** now defaults to Ollama for all tasks (privacy-first)
- Updated routing priority: Ollama → Claude → Kimi K2 → OpenAI
- API keys now optional (only needed for cloud fallback)
- Updated README with Ollama quickstart and features
- Enhanced sample configuration with LLM provider settings
- Modified `setup.py` to trigger Ollama installation post-install
- Updated Python version requirement to 3.10+
- Improved documentation structure

### Fixed
- (Pending) Shell injection vulnerability in coordinator.py
- (Pending) CI/CD pipeline test directory path

### Security
- Enhanced privacy with 100% local LLM processing option
- Zero data transmission to cloud when using Ollama
- No API key exposure risk for basic usage
- (Pending) Added additional dangerous command patterns to sandbox

---
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -2,4 +2,5 @@ include README.md
include LICENSE
recursive-include LLM *.py
recursive-include cortex *.py
recursive-include scripts *.py
include LLM/requirements.txt
91 changes: 73 additions & 18 deletions README.md
@@ -63,13 +63,15 @@ cortex install "tools for video compression"

| Feature | Description |
|---------|-------------|
| **Natural Language** | Describe what you need in plain English |
| **Dry-Run Default** | Preview all commands before execution |
| **Sandboxed Execution** | Commands run in Firejail isolation |
| **Full Rollback** | Undo any installation with `cortex rollback` |
| **Audit Trail** | Complete history in `~/.cortex/history.db` |
| **Hardware-Aware** | Detects GPU, CPU, memory for optimized packages |
| **Multi-LLM Support** | Works with Claude, GPT-4, or local Ollama models |
| **🤖 Natural Language** | Describe what you need in plain English |
| **🔒 Privacy-First** | Local LLM support via Ollama - no API keys required |
| **📴 Offline Capable** | Works completely offline with local models |
| **🆓 Zero Cost** | Free local inference, optional cloud fallback |
| **🛡️ Sandboxed Execution** | Commands run in Firejail isolation |
| **⏮️ Full Rollback** | Undo any installation with `cortex rollback` |
| **📋 Audit Trail** | Complete history in `~/.cortex/history.db` |
| **🔧 Hardware-Aware** | Detects GPU, CPU, memory for optimized packages |
| **☁️ Multi-LLM Support** | Ollama (local), Claude, GPT-4, or Kimi K2 |

---

@@ -79,7 +81,7 @@

- **OS:** Ubuntu 22.04+ / Debian 12+
- **Python:** 3.10 or higher
- **API Key:** [Anthropic](https://console.anthropic.com) or [OpenAI](https://platform.openai.com)
- **API Key (Optional):** [Anthropic](https://console.anthropic.com) or [OpenAI](https://platform.openai.com) for cloud fallback

### Installation

@@ -92,24 +94,37 @@ cd cortex
python3 -m venv venv
source venv/bin/activate

# 3. Install Cortex
pip install -e .
# 3. Upgrade pip and tools
pip install --upgrade pip setuptools wheel

# 4. Configure API key
echo 'ANTHROPIC_API_KEY=your-key-here' > .env
# 4. Install Cortex with interactive setup
python setup.py develop

# During installation, you'll be prompted to install Ollama (local AI)
# Choose 'y' for offline/private AI, or 'n' to use cloud providers
```
Comment on lines +103 to +105
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain (verification commands run against the repository):

```bash
# Locate and examine the post-install hook
find . -name "*post_install*" -type f | head -20
cat -n cortex/_post_install.py

# Get the exact README lines under review and their surrounding context
sed -n '103,105p' README.md
sed -n '95,115p' README.md

# Understand how installation is wired up
find . -name "setup.py" -o -name "setup.cfg" -o -name "pyproject.toml" | head -10
cat setup.py

# Find Ollama-related prompts and the first-run wizard logic
rg -i "ollama.*prompt|prompt.*ollama|you'll be prompted" --type py --type md
grep -A 10 -B 5 "Install Ollama now" cortex/first_run_wizard.py

# Examine the setup script and its interactivity checks
wc -l scripts/setup_ollama.py
cat -n scripts/setup_ollama.py | head -100
sed -n '358,428p' scripts/setup_ollama.py
grep -n "def setup_ollama\|def _prompt" scripts/setup_ollama.py | head -20
grep -n "import.*wizard\|from.*wizard" scripts/setup_ollama.py
grep -n "isatty\|interactive\|TTY\|stdin" scripts/setup_ollama.py

# Look for related issues, PRs, or discussion files about the prompt
rg -i "suyashd|prompt.*doesnt|doesnt.*pop" --type md --type py --type txt
git log --all --oneline --grep="ollama\|prompt" | head -20
git log --all --oneline | grep -i "PR\|#\|ollama" | head -30
find . -name "*.md" -type f | xargs grep -l "Suyashd999\|ollama prompt" 2>/dev/null
```
Document the interactive TTY requirement for the Ollama setup prompt.

Lines 103-105 promise that "you'll be prompted to install Ollama" during `python setup.py develop`, but the prompt only appears in interactive (TTY) mode. In non-interactive shells (piped input, CI/CD, some terminal emulators), setup is silently skipped with no user interaction, creating a gap between documented expectations and actual behavior.

Update the documentation to clarify the TTY requirement, or provide explicit instructions for non-interactive setups (e.g., use the `cortex-setup-ollama` command, or set environment variables to skip or customize the Ollama installation).

🤖 Prompt for AI Agents
In README.md around lines 103 to 105, the doc claims "you'll be prompted to
install Ollama" during python setup but omits that the prompt only appears in an
interactive TTY; update the text to state the prompt requires an interactive
terminal and will be skipped in non-interactive environments (CI/piped shells),
and add a short note on alternatives: run the included cortex-setup-ollama
command for unattended installs or set environment variables/flags to skip or
preconfigure Ollama installation; keep the wording concise and add examples of
the two options (interactive vs non-interactive) so users know how to proceed in
CI.
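
To make the two paths concrete, here is a minimal Python sketch of the distinction the docs could describe. The `CI`, `GITHUB_ACTIONS`, and `CORTEX_SKIP_OLLAMA_SETUP` checks mirror `cortex/_post_install.py` as added in this PR; the `sys.stdin.isatty()` guard is an illustrative assumption, not the current implementation.

```python
# Sketch only: when the interactive Ollama prompt can actually be shown.
# The environment checks mirror cortex/_post_install.py from this PR; the
# isatty() guard is an illustrative addition for documentation purposes.
import os
import sys


def can_prompt_for_ollama() -> bool:
    """Return True only when an interactive Ollama prompt would be visible."""
    # CI or explicit opt-out: the existing hook already skips setup here.
    if os.getenv("CI") or os.getenv("GITHUB_ACTIONS"):
        return False
    if os.getenv("CORTEX_SKIP_OLLAMA_SETUP") == "1":
        return False
    # No attached terminal (piped input, non-interactive shells): the prompt
    # would never be seen, so point users at `cortex-setup-ollama` instead.
    return sys.stdin.isatty()


if __name__ == "__main__":
    if can_prompt_for_ollama():
        print("Interactive TTY detected: the Ollama install prompt will appear.")
    else:
        print("Non-interactive environment: run `cortex-setup-ollama` manually later.")
```

In CI, the same effect can be achieved by exporting `CORTEX_SKIP_OLLAMA_SETUP=1` before installing and running `cortex-setup-ollama` afterwards if a local model is needed.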


### Optional: Configure Cloud Providers

# 5. Verify installation
cortex --version
```bash
# Only needed if you skipped Ollama or want cloud fallback
echo 'ANTHROPIC_API_KEY=your-key-here' > .env
# OR
echo 'OPENAI_API_KEY=your-key-here' > .env
```

### First Run

```bash
# Preview what would be installed (safe, no changes made)
# Uses local Ollama by default - no API calls!
cortex install nginx --dry-run

# Actually install
cortex install nginx --execute

# Check Ollama status and installed models
ollama list
```

---
@@ -119,7 +134,7 @@
### Basic Commands

```bash
# Install with natural language
# Install with natural language (uses local LLM)
cortex install "web server for static sites" --dry-run
cortex install "image editing software like photoshop" --execute

@@ -131,6 +146,11 @@ cortex rollback <installation-id>

# Check system preferences
cortex check-pref

# Manage local LLM models
ollama list # Show available models
ollama pull llama3:8b # Download a model
cortex-setup-ollama # Re-run Ollama setup
```

### Command Reference
@@ -143,6 +163,7 @@ cortex check-pref
| `cortex history` | View all past installations |
| `cortex rollback <id>` | Undo a specific installation |
| `cortex check-pref` | Display current preferences |
| `cortex-setup-ollama` | Setup/reinstall Ollama integration |
| `cortex --version` | Show version information |
| `cortex --help` | Display help message |

@@ -157,6 +178,39 @@ Cortex stores configuration in `~/.cortex/`:
└── audit.log # Detailed audit trail
```

### Local LLM Support (Ollama)

**Privacy-First by Default**: Cortex uses local LLMs via Ollama for zero-cost, offline-capable operation.

**Benefits:**
- ✅ **100% Private**: All processing happens locally
- ✅ **Completely Offline**: Works without internet after setup
- ✅ **Zero Cost**: No API fees or subscriptions
- ✅ **No API Keys**: Get started immediately

**Recommended Models:**
- `phi3:mini` (1.9GB) - Lightweight, default
- `llama3:8b` (4.7GB) - Balanced performance
- `codellama:13b` (9GB) - Code-optimized
- `deepseek-coder-v2:16b` (10GB+) - Best for system tasks

**Manage Models:**
```bash
ollama list # Show installed models
ollama pull llama3:8b # Download a model
ollama rm phi3:mini # Remove a model
```

**Cloud Fallback:**
If local models are unavailable, Cortex automatically falls back to cloud providers (if configured):
```bash
# Optional: Set cloud API keys for fallback
export ANTHROPIC_API_KEY=your-claude-key
export OPENAI_API_KEY=your-openai-key
```
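
For illustration, a minimal sketch of that priority-ordered fallback; the provider names and the `complete()` callables are placeholders rather than the actual cortex router API:

```python
# Sketch only: priority-ordered fallback from local Ollama to cloud providers.
# Provider objects and their complete() signature are illustrative placeholders.
from typing import Callable, Optional

CompleteFn = Callable[[str], Optional[str]]


def route(prompt: str, providers: list[tuple[str, CompleteFn]]) -> str:
    """Try providers in priority order and return the first successful response."""
    for name, complete in providers:
        try:
            response = complete(prompt)
            if response:
                return response
        except Exception:
            # Provider unavailable (Ollama not running, missing API key, etc.):
            # fall through to the next one in the list.
            continue
    raise RuntimeError("No LLM provider available")


# Priority order described above: Ollama (local) first, then cloud fallback.
# providers = [("ollama", ollama.complete), ("claude", claude.complete),
#              ("gpt-4", openai.complete)]
```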

📖 **[Full Ollama Documentation](docs/OLLAMA_INTEGRATION.md)**

---

## Architecture
@@ -176,11 +230,12 @@ Cortex stores configuration in `~/.cortex/`:
┌─────────────────────────────────────────────────────────────────┐
│ LLM Router │
Claude / GPT-4 / Ollama
Ollama (Local) → Claude → GPT-4 → Kimi K2
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Anthropic │ │ OpenAI │ │ Ollama │ │
│ │ Claude │ │ GPT-4 │ │ Local │ │
│ │ Ollama │ │ Anthropic │ │ OpenAI │ │
│ │ (Local) │ │ Claude │ │ GPT-4 │ │
│ │ PRIORITY │ │ Fallback 1 │ │ Fallback 2 │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────┘
51 changes: 51 additions & 0 deletions cortex/_post_install.py
@@ -0,0 +1,51 @@
#!/usr/bin/env python3
"""
Post-install hook for Cortex Linux.
Automatically runs after pip install to setup Ollama.
"""

import os
import sys


def run_setup():
"""Run Ollama setup after installation."""
Comment on lines +11 to +12
🛠️ Refactor suggestion | 🟠 Major

Add missing return type hint.

The function run_setup() is missing a return type hint. Per coding guidelines, type hints are required for all functions.

🔎 Proposed fix

```diff
-def run_setup():
+def run_setup() -> None:
     """Run Ollama setup after installation."""
```

As per coding guidelines, type hints are required in Python code.

🤖 Prompt for AI Agents
In cortex/_post_install.py around lines 11-12, the function signature for
run_setup() is missing a return type hint; update the signature to include an
explicit return type (e.g., -> None) so it follows the project's typing
guidelines, and ensure any related imports or type annotations are added if
required.

    # Skip if in CI or if explicitly disabled
    if (
        os.getenv("CI")
        or os.getenv("GITHUB_ACTIONS")
        or os.getenv("CORTEX_SKIP_OLLAMA_SETUP") == "1"
    ):
        return

    # Check if already ran setup (marker file in user's home)
    marker_file = os.path.expanduser("~/.cortex/.setup_done")
    if os.path.exists(marker_file):
        return

    print("\n" + "=" * 70)
    print("🚀 Running Cortex post-installation setup...")
    print("=" * 70 + "\n")

    try:
        # Import and run the setup function
        from scripts.setup_ollama import setup_ollama

        setup_ollama()

        # Create marker file to prevent running again
        os.makedirs(os.path.dirname(marker_file), exist_ok=True)
        with open(marker_file, "w") as f:
            f.write("Setup completed\n")

    except Exception as e:
        print(f"⚠️ Ollama setup encountered an issue: {e}")
        print("ℹ️ You can run it manually later with: cortex-setup-ollama")
    finally:
        print("\n" + "=" * 70)
        print("💡 TIP: To re-run setup anytime, execute: cortex-setup-ollama")
        print("=" * 70)


if __name__ == "__main__":
    run_setup()
8 changes: 4 additions & 4 deletions cortex/first_run_wizard.py
@@ -372,15 +372,15 @@ def _setup_ollama(self) -> StepResult:
return StepResult(success=True, data={"api_provider": "none"})

# Pull a small model
print("\nPulling llama3.2 model (this may take a few minutes)...")
print("\nPulling codellama:7b model (this may take a few minutes)...")
try:
subprocess.run(["ollama", "pull", "llama3.2"], check=True)
subprocess.run(["ollama", "pull", "codellama:7b"], check=True)
print("\n✓ Model ready!")
except subprocess.CalledProcessError:
print("\n⚠ Could not pull model - you can do this later with: ollama pull llama3.2")
print("\n⚠ Could not pull model - you can do this later with: ollama pull codellama:7b")

self.config["api_provider"] = "ollama"
self.config["ollama_model"] = "llama3.2"
self.config["ollama_model"] = "codellama:7b"
Comment on lines +375 to +383
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain (verification commands run against the repository):

```bash
#!/bin/bash
# Verify default model references across the codebase
rg -n "codellama:7b" --type=py --type=md
rg -n "phi3:mini" --type=py --type=md
find . -name "setup_ollama.py" -type f | head -5
find . -name "ollama_provider.py" -type f | head -5

# Inspect the provider's model lists and the setup script's default model
cat -n cortex/providers/ollama_provider.py | head -80
sed -n '115,160p' scripts/setup_ollama.py
```
Critical model inconsistency: first_run_wizard.py hardcodes codellama:7b, which is not in OllamaProvider's preferred or fallback models.

The first-run wizard unconditionally pulls codellama:7b (lines 375-383), but:

  1. OllamaProvider does not list codellama:7b in PREFERRED_MODELS or FALLBACK_MODELS—it lists phi3:mini, llama3:8b, and mistral:7b as fallbacks
  2. Documentation consistently references phi3:mini as the default (lightweight, 1.9GB)
  3. The setup script (setup_ollama.py) offers user choice with codellama:7b as default, contradicting the wizard's approach

This creates a critical integration issue: the wizard pulls a model the provider code doesn't recognize in its preference hierarchy, potentially causing unexpected behavior during model selection fallback logic.

Resolution: first_run_wizard.py should either pull from the provider's PREFERRED_MODELS list (starting with deepseek-coder-v2:16b, fallback to phi3:mini) or consistently use the documented phi3:mini default to align with OllamaProvider's expectations and documentation.

🤖 Prompt for AI Agents
In cortex/first_run_wizard.py around lines 375-383, the wizard currently
hardcodes pulling "codellama:7b" which conflicts with OllamaProvider's model
lists; change it to derive the model choice from OllamaProvider instead (or at
minimum use the documented lightweight default "phi3:mini"). Import or reference
OllamaProvider, pick model = OllamaProvider.PREFERRED_MODELS[0] if present else
OllamaProvider.FALLBACK_MODELS[0] (or default to "phi3:mini"), use that variable
in the subprocess.run pull command and in self.config["ollama_model"], and
update the user-facing print messages to reflect the chosen model string.
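
A minimal sketch of that resolution, assuming `OllamaProvider` exposes `PREFERRED_MODELS` and `FALLBACK_MODELS` class attributes as described above (the attribute names and the `phi3:mini` default are taken from this review, not verified against the provider):

```python
# Sketch only: derive the wizard's model from OllamaProvider instead of hardcoding it.
# PREFERRED_MODELS, FALLBACK_MODELS, and the phi3:mini default are assumptions
# taken from the review above.
import subprocess

from cortex.providers.ollama_provider import OllamaProvider


def pick_default_model() -> str:
    """Choose the provider's first preferred model, else its first fallback, else phi3:mini."""
    preferred = getattr(OllamaProvider, "PREFERRED_MODELS", None) or []
    fallback = getattr(OllamaProvider, "FALLBACK_MODELS", None) or []
    if preferred:
        return preferred[0]
    if fallback:
        return fallback[0]
    return "phi3:mini"  # documented lightweight default


def pull_default_model() -> str:
    """Pull the chosen model and return its name for the wizard's config."""
    model = pick_default_model()
    print(f"\nPulling {model} model (this may take a few minutes)...")
    try:
        subprocess.run(["ollama", "pull", model], check=True)
        print("\n✓ Model ready!")
    except subprocess.CalledProcessError:
        print(f"\n⚠ Could not pull model - you can do this later with: ollama pull {model}")
    return model
```

The wizard would then store the returned name in `self.config["ollama_model"]` and use it in its user-facing messages instead of a hardcoded string.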


return StepResult(success=True, data={"api_provider": "ollama"})

2 changes: 1 addition & 1 deletion cortex/kernel_features/hardware_detect.py
@@ -364,7 +364,7 @@ def recommend_models(total_vram_gb: float, system_ram_gb: float, has_npu: bool)
recommendations.extend(["llama3.1-8b", "mistral-7b", "qwen2.5-14b", "codellama-13b"])

if available_gb >= 8:
recommendations.extend(["llama3.2-3b", "phi-3-mini", "gemma-2b", "qwen2.5-7b-q4"])
recommendations.extend(["phi3:mini", "llama3:8b", "gemma:2b", "qwen2.5:7b"])

if available_gb >= 4:
recommendations.extend(["tinyllama-1.1b", "phi-2", "qwen2.5-1.5b"])