[FEATURE] Ollama Integration - Local LLM Support #357

@Suyashd999

Description

Summary

Adds local LLM support via Ollama for privacy-first, offline-capable package management.

Features

  • ✅ Auto-detect Ollama installation
  • ✅ Smart model selection (prefers code-focused models; see the detection sketch after this list)
  • ✅ Streaming responses
  • ✅ Fallback to Claude/OpenAI
  • ✅ Works completely offline
  • ✅ Zero data sent to cloud
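
For illustration, auto-detection and model preference could look roughly like the sketch below. It uses Ollama's actual local API (default port 11434; `GET /api/tags` lists installed models), but `detect_ollama`, `pick_model`, and the preference order are assumptions for this sketch, not necessarily this PR's implementation:

```python
# Minimal sketch, not the PR's actual code: probe Ollama's local API and
# prefer code-focused models. /api/tags is Ollama's real endpoint for
# listing installed models; everything else here is illustrative.
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default address

# Assumed preference order, code-focused model families first
PREFERRED = ("codellama", "deepseek-coder", "mistral", "llama3.2", "phi")

def detect_ollama(timeout: float = 2.0) -> list[str]:
    """Return installed model names, or [] if Ollama isn't reachable."""
    try:
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return []

def pick_model(installed: list[str]) -> str | None:
    """Prefer code-focused models; otherwise take whatever is installed."""
    for family in PREFERRED:
        for name in installed:
            if name.startswith(family):
                return name
    return installed[0] if installed else None
```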

Supported Models

  • CodeLlama (7B, 13B, 34B)
  • Llama 3.2
  • Mistral / Mixtral
  • DeepSeek Coder
  • Phi-2
  • And more...
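
Once pulled (e.g. `ollama pull codellama:7b`), any of these can be driven through Ollama's streaming API. Below is a minimal sketch of consuming the newline-delimited JSON stream from `POST /api/generate`; the endpoint and response fields are Ollama's, while `stream_generate` itself is a hypothetical helper, not code from this PR:

```python
# Sketch of consuming Ollama's streaming /api/generate endpoint, which
# emits one JSON object per line with partial text in "response" and a
# final object with "done": true. stream_generate is illustrative only.
import json
import urllib.request

def stream_generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": True}).encode(),
        headers={"Content-Type": "application/json"},
    )
    parts = []
    with urllib.request.urlopen(req) as resp:
        for line in resp:  # one JSON chunk per line
            if not line.strip():
                continue
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)
            parts.append(chunk.get("response", ""))
            if chunk.get("done"):
                break
    return "".join(parts)

# Usage: stream_generate("codellama:7b", "Write a one-line shell command to list open ports")
```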

Files Added

  • cortex/providers/ollama_integration.py - Core integration
  • tests/test_ollama_integration.py - Test suite
  • docs/README_OLLAMA.md - Documentation
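
The provider routing with Claude/OpenAI fallback lives in the core integration module. A hedged sketch of how such routing typically works follows; the `route` helper and `query_*` callables are placeholders, not names taken from this PR:

```python
# Sketch of ordered provider routing with fallback: try the local Ollama
# provider first, then cloud providers. All names here are placeholders;
# the real logic lives in cortex/providers/ollama_integration.py.
from collections.abc import Callable

Provider = tuple[str, Callable[[str], str]]

def route(prompt: str, providers: list[Provider]) -> str:
    """Ask each provider in order; raise only if every one fails."""
    errors = []
    for name, ask in providers:
        try:
            return ask(prompt)
        except Exception as exc:  # daemon down, model missing, API error...
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Usage with hypothetical backends:
# route(prompt, [("ollama", query_ollama),
#                ("claude", query_claude),
#                ("openai", query_openai)])
```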

Testing

pytest tests/test_ollama_integration.py -v
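
For a sense of what the suite might cover, here is a hedged sketch of two tests, reusing the hypothetical detect_ollama / pick_model helpers from the detection sketch above; the real names in tests/test_ollama_integration.py may differ:

```python
# Illustrative tests only; actual names in tests/test_ollama_integration.py
# may differ. Assumes the detect_ollama / pick_model sketches are importable:
# from cortex.providers.ollama_integration import detect_ollama, pick_model
from unittest.mock import patch

def test_pick_model_prefers_code_focused_models():
    installed = ["llama3.2:latest", "codellama:7b"]
    assert pick_model(installed) == "codellama:7b"

def test_detect_ollama_returns_empty_when_offline():
    # Simulate Ollama not running: the HTTP probe raises, detection returns []
    with patch("urllib.request.urlopen", side_effect=OSError("connection refused")):
        assert detect_ollama() == []
```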

Bounty: $150 (+ $150 bonus after funding)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added local LLM support through Ollama integration with automatic model discovery and selection
    • Enabled offline and privacy-preserving AI capabilities with model management and configuration options
    • Introduced provider routing with automatic fallback support
  • Documentation

    • Added comprehensive integration guide covering setup, configuration, usage examples, troubleshooting, and architecture details
  • Tests

    • Added test coverage for LLM provider functionality and integration scenarios

Metadata

Labels

enhancement (New feature or request)
