
Conversation

@sujay-d07 (Collaborator) commented Dec 27, 2025

Related Issue

Closes #383

Summary

Removed the hardcoded llama3.2 model from the Ollama integration. Users can now configure any Ollama model via an environment variable or the config file.

Changes

Old Behaviour:

  • Ollama model was hardcoded to llama3.2 in ask.py and tests
  • Users couldn't use alternative models like mistral, phi3, llama3.1:8b, etc.
  • First-run wizard forced llama3.2 installation
  • Tests failed if llama3.2 wasn't installed

New Behaviour:

  • The Ollama model is now resolved in priority order (see the sketch after this list):
    • OLLAMA_MODEL environment variable
    • ollama_model field in ~/.cortex/config.json
    • built-in default llama3.2 if neither is set
  • First-run wizard offers 5 model choices (llama3.2, llama3.2:1b, mistral, phi3, custom)
  • Tests auto-detect available models or skip gracefully
  • Works with any installed Ollama model
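
A minimal sketch of this resolution order, for illustration only. The PR implements it as AskHandler._get_ollama_model() in cortex/ask.py; the standalone helper below is an approximation under those assumptions, not the actual code.

import json
import os
from pathlib import Path

DEFAULT_OLLAMA_MODEL = "llama3.2"

def resolve_ollama_model() -> str:
    """Resolve the Ollama model using the priority order listed above."""
    # 1. Environment variable wins.
    env_model = os.environ.get("OLLAMA_MODEL")
    if env_model:
        return env_model

    # 2. Fall back to the ollama_model field in ~/.cortex/config.json.
    config_file = Path.home() / ".cortex" / "config.json"
    try:
        config = json.loads(config_file.read_text())
        if config.get("ollama_model"):
            return config["ollama_model"]
    except (OSError, json.JSONDecodeError):
        # A missing or unreadable config should not break model resolution.
        pass

    # 3. Built-in default.
    return DEFAULT_OLLAMA_MODEL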

Summary by CodeRabbit

  • New Features

    • Ollama model selection is now dynamic and configurable via environment variable, config file, or an interactive choice in the first-run wizard (including a Custom option).
  • Bug Fixes / Improvements

    • Model selection now falls back gracefully to a built-in default; success and error messages reference the chosen model.
  • Tests

    • Unit and integration tests updated to validate env/config-driven model selection and dynamic model detection.
  • Chores

    • Installer-related test adjusted to reflect Ollama fallback behavior.


Copilot AI review requested due to automatic review settings December 27, 2025 13:14
@coderabbitai (Contributor) commented Dec 27, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Replaces the hardcoded Ollama model ("llama3.2") with dynamic resolution (env var OLLAMA_MODEL → ~/.cortex/config.json → fallback "llama3.2"), adds interactive model selection in the first-run wizard, and updates tests to use the environment/config-driven Ollama model selection.

Changes

Cohort / File(s): Summary

  • Model resolution (cortex/ask.py): Added AskHandler._get_ollama_model() that returns the OLLAMA_MODEL env var, or ~/.cortex/config.json's ollama_model, or falls back to "llama3.2". _default_model() now delegates to this helper for provider "ollama".
  • First-run wizard (cortex/first_run_wizard.py): Replaced the hardcoded pull of llama3.2 with an interactive model selection (preset options + Custom). Uses the selected model for ollama pull and saves it as ollama_model in config.
  • Unit tests, ask (tests/test_ask.py): Updated to set/restore OLLAMA_MODEL and assert that AskHandler(provider="ollama") reads the env var or falls back to config/default instead of always expecting "llama3.2".
  • Integration tests, ollama (tests/test_ollama_integration.py): Added get_available_ollama_model() and dynamic model selection for tests (uses OLLAMA_MODEL or the first available Ollama model); tests skip if none is available; removed hardcoded "llama3.2".
  • CLI tests (tests/test_cli.py): test_install_no_api_key patched to mock cortex.cli.CommandInterpreter (constructor patched) and updated to assert a successful install flow using the mock interpreter.
  • Misc formatting/strings (.github/scripts/cla_check.py, cortex/dependency_check.py, cortex/llm/interpreter.py): Non-functional formatting/argument-layout changes and a minor string-join formatting update in installation instructions; behavior unchanged.

Sequence Diagram(s)

(omitted — changes are localized to model selection and interactive setup and do not introduce a multi-component sequential flow requiring visualization)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

MVP

Suggested reviewers

  • mikejmorgan-ai
  • Suyashd999

Poem

🐰 I hopped through configs, sniffed the breeze,
Found env vars, files, and choices with ease.
The wizard asked nicely which model to pull,
No more one-size — now options are full.
hop hop

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 65.00%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
  • Out of Scope Changes check: ❓ Inconclusive. Minor formatting changes in cla_check.py and llm/interpreter.py appear tangential to the main objective of removing hardcoded Ollama models. Resolution: clarify whether the formatting-only changes in .github/scripts/cla_check.py and cortex/llm/interpreter.py should stay in this PR or move to a dedicated formatting PR.

✅ Passed checks (3 passed)

  • Title check: ✅ Passed. The title clearly and concisely summarizes the main change: removing hardcoded Ollama models and enabling support for any model.
  • Description check: ✅ Passed. The description is comprehensive, following the template with the related issue, a clear summary of changes, and a detailed before/after comparison.
  • Linked Issues check: ✅ Passed. All requirements from issue #383 are fulfilled: hardcoded models removed, environment variable and config file support added, first-run wizard updated with model choices, and tests refactored for dynamic model detection.
✨ Finishing touches
  • 📝 Generate docstrings


@coderabbitai (Contributor) left a comment

Actionable comments posted: 3

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f01549 and a2e81e1.

📒 Files selected for processing (5)
  • cortex/ask.py
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • cortex/ask.py
  • tests/test_ollama_integration.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧬 Code graph analysis (1)
tests/test_ollama_integration.py (1)
scripts/setup_ollama.py (1)
  • test_model (315-343)
🪛 GitHub Actions: CI
cortex/first_run_wizard.py

[error] 390-390: W293 Blank line contains whitespace

🪛 GitHub Check: lint
cortex/first_run_wizard.py

[failure] 390-390: Ruff (W293)
cortex/first_run_wizard.py:390:1: W293 Blank line contains whitespace

tests/test_ask.py

[failure] 267-267: Ruff (W293)
tests/test_ask.py:267:1: W293 Blank line contains whitespace


[failure] 261-261: Ruff (W293)
tests/test_ask.py:261:1: W293 Blank line contains whitespace


[failure] 256-256: Ruff (W293)
tests/test_ask.py:256:1: W293 Blank line contains whitespace


[failure] 253-253: Ruff (W293)
tests/test_ask.py:253:1: W293 Blank line contains whitespace

tests/test_cli.py

[failure] 68-68: Ruff (W293)
tests/test_cli.py:68:1: W293 Blank line contains whitespace

🪛 GitHub Check: Lint

Same W293 findings as the lint check above: cortex/first_run_wizard.py:390; tests/test_ask.py:253, 256, 261, 267; tests/test_cli.py:68.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Agent
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/first_run_wizard.py (1)

374-407: LGTM! User-driven model selection aligns with PR objectives.

The implementation successfully replaces the hardcoded "llama3.2" default with a user-friendly selection menu. The chosen model is properly stored in config["ollama_model"] for later use by the dynamic model resolution logic in cortex/ask.py.

tests/test_ollama_integration.py (3)

27-46: LGTM! Robust helper for dynamic model detection.

The get_available_ollama_model() helper elegantly solves the problem of tests assuming specific models are installed. It parses ollama list output and returns the first available model, allowing tests to work with any Ollama installation.
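
For illustration, such a helper could look roughly like the sketch below; the subprocess invocation, timeout value, and output parsing are assumptions about how tests/test_ollama_integration.py does this, not a copy of it.

import subprocess

def get_available_ollama_model() -> str | None:
    """Return the first model reported by `ollama list`, or None if none is installed."""
    try:
        result = subprocess.run(
            ["ollama", "list"], capture_output=True, text=True, timeout=10
        )
    except (OSError, subprocess.TimeoutExpired):
        return None
    if result.returncode != 0:
        return None
    # `ollama list` prints a header row, then one installed model per line
    # with the model name (e.g. "mistral:latest") in the first column.
    for line in result.stdout.strip().splitlines()[1:]:
        parts = line.split()
        if parts:
            return parts[0]
    return None

The tests can then prefer OLLAMA_MODEL when it is set, fall back to this helper, and skip when both come up empty.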


49-53: LGTM! Enhanced skip condition prevents false test failures.

The updated pytestmark now checks both Ollama installation and model availability, ensuring tests skip gracefully when requirements aren't met rather than failing.


95-111: LGTM! Test respects environment-driven model selection.

The test now uses OLLAMA_MODEL environment variable first, then falls back to get_available_ollama_model(), matching the priority established in cortex/ask.py. This ensures tests verify the actual production behavior.

cortex/ask.py (2)

180-202: LGTM! Clean implementation of dynamic Ollama model resolution.

The _get_ollama_model() method correctly implements the three-tier priority described in the PR objectives:

  1. OLLAMA_MODEL environment variable
  2. ollama_model field in ~/.cortex/config.json
  3. Fallback to "llama3.2"

The silent exception handling (lines 198-199) is appropriate for this fallback scenario, ensuring config read errors don't break the user experience.

Based on PR objectives and coding guidelines.


169-178: LGTM! Proper delegation to dynamic model resolver.

The change from hardcoded "llama3.2" to self._get_ollama_model() enables the flexible model selection that is the core objective of this PR.

Copilot AI (Contributor) left a comment

Pull request overview

This PR removes the hardcoded llama3.2 model from the Ollama integration, enabling users to configure any Ollama model via environment variables or configuration files. This resolves issue #383 where users were unable to specify alternative models like mistral, phi3, or llama3.1:8b.

Key Changes

  • Added configurable Ollama model selection with priority: OLLAMA_MODEL environment variable → ~/.cortex/config.json → default llama3.2
  • Enhanced first-run wizard with 5 model options (llama3.2, llama3.2:1b, mistral, phi3, custom)
  • Updated tests to auto-detect available Ollama models instead of assuming llama3.2

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 10 comments.

Summary per file

  • cortex/ask.py: Added _get_ollama_model() method to read the model from an environment variable or config file, with fallback to llama3.2
  • cortex/first_run_wizard.py: Updated the wizard to offer 5 model choices instead of forcing llama3.2 installation
  • tests/test_ask.py: Modified test to verify model selection from environment variable and config
  • tests/test_ollama_integration.py: Added get_available_ollama_model() helper and updated all tests to use detected models
  • tests/test_cli.py: Updated test expectation to reflect that the Ollama fallback allows install to succeed without an API key


@coderabbitai (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
cortex/first_run_wizard.py (1)

390-390: Fix trailing whitespace on line 390.

Line 390 contains trailing whitespace, causing pipeline failures.

tests/test_ask.py (1)

253-253: Fix trailing whitespace on blank lines.

Multiple blank lines contain trailing whitespace (lines 253, 256, 261, 267), causing linting failures. These were flagged in the previous review and need to be addressed.

As per pipeline failures and coding guidelines.

Also applies to: 256-256, 261-261, 267-267

🧹 Nitpick comments (2)
tests/test_ask.py (2)

252-252: Remove redundant import.

The os module is already imported at the top of the file (line 3), so this import statement is unnecessary.

🔎 Proposed fix
     def test_default_model_ollama(self):
         """Test default model for Ollama."""
         # Test with environment variable
-        import os

         # Save and clear any existing OLLAMA_MODEL
         original_model = os.environ.get("OLLAMA_MODEL")

As per PEP 8 coding guidelines.


268-272: Consider verifying the default fallback value.

The test correctly validates that a model is configured, but it doesn't verify the specific fallback default. According to the PR objectives, when neither OLLAMA_MODEL nor the config file specifies a model, it should default to "llama3.2".

Consider adding a test case that explicitly verifies this fallback behavior, possibly by temporarily moving/mocking the config file to ensure the default is tested in isolation.

💡 Example approach

You could add an additional test or expand this one to verify the default:

# Test with mocked config that returns None for ollama_model
with patch('cortex.ask.load_config', return_value={}):
    handler3 = AskHandler(api_key="test", provider="ollama")
    self.assertEqual(handler3.model, "llama3.2")

This would ensure the fallback default is properly tested per the PR objectives.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2e81e1 and d536865.

📒 Files selected for processing (4)
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/first_run_wizard.py
  • tests/test_ask.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (1)
cortex/first_run_wizard.py (1)

374-409: Good implementation of dynamic model selection.

The model selection menu provides a good user experience with multiple preset options and custom model input. The implementation correctly:

  • Offers five sensible model choices with size information
  • Allows custom model names
  • Uses the selected model throughout (pull command, error messages)
  • Stores the selection in config["ollama_model"] for dynamic resolution

This aligns well with the PR objectives to remove hardcoded "llama3.2" references.
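
A rough sketch of what the selection step might look like is below. The prompt wording, the _prompt helper signature, and the .get() fallback for unrecognized input are assumptions for illustration; the actual wizard code may differ.

def _select_ollama_model(self) -> str:
    """Ask the user which Ollama model to pull during first-run setup."""
    model_choices = {
        "1": "llama3.2",
        "2": "llama3.2:1b",
        "3": "mistral",
        "4": "phi3",
    }
    print("Which Ollama model would you like to use?")
    for key, name in model_choices.items():
        # The real wizard also shows approximate download sizes next to each option.
        print(f"  {key}) {name}")
    print("  5) Custom (enter any model name)")

    choice = self._prompt("\nEnter choice [1]: ", default="1")
    if choice == "5":
        return self._prompt("Enter model name: ", default="llama3.2")
    # Fall back to the default for unrecognized input, avoiding the
    # KeyError flagged elsewhere in this review.
    return model_choices.get(choice, "llama3.2")

The chosen name then feeds the ollama pull command and is saved to config["ollama_model"], where the resolution logic in cortex/ask.py picks it up later.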

@jaysurse (Collaborator) commented

@Suyashd999 I have verified the changes made by this PR; they fulfill the requirements of issue #383.

@Suyashd999 (Collaborator) commented

@sujay-d07 please fix the conflicts

sujay-d07 and others added 9 commits December 30, 2025 12:22
…lines in various test files and the first_run_wizard module.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
github-actions bot commented Dec 30, 2025

CLA Verification Failed

The following contributors have not signed the Contributor License Agreement:

  • Copilot (175728472+Copilot@users.noreply.github.com)

How to Sign

  1. Read the CLA document
  2. Open a CLA signature request
  3. A maintainer will add you to the signers list
  4. Comment recheck on this PR to re-run verification

Verified Signers


This check runs automatically. Maintainers can update .github/cla-signers.json to add signers.

@coderabbitai (Contributor) left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/test_cli.py (1)

62-82: Critical: Unresolved merge conflict markers.

The file contains Git merge conflict markers (<<<<<<< HEAD, =======, >>>>>>>), making it syntactically invalid Python. This is causing all pipeline failures and preventing tests from running.

You need to manually resolve this conflict by:

  1. Reviewing both versions of test_install_no_api_key (lines 63-70 vs 72-81)
  2. Deciding which implementation to keep or merging them
  3. Removing all conflict markers (<<<<<<<, =======, >>>>>>>)

Based on the PR objectives, the new version (lines 72-81) that mocks CommandInterpreter and expects success with Ollama appears to be the intended implementation.

🔎 Recommended resolution

Remove the HEAD version and conflict markers, keeping only the new implementation:

     @patch.dict(os.environ, {}, clear=True)
-<<<<<<< HEAD
-    def test_install_no_api_key(self):
-        # When no API key is set, the CLI falls back to Ollama.
-        # If Ollama is running, this should succeed. If not, it should fail.
-        # We'll mock Ollama to be unavailable to test the failure case.
-        with patch("cortex.llm.interpreter.CommandInterpreter.parse") as mock_parse:
-            mock_parse.side_effect = RuntimeError("Ollama not available")
-            result = self.cli.install("docker")
-            self.assertEqual(result, 1)
-=======
     @patch("cortex.cli.CommandInterpreter")
     def test_install_no_api_key(self, mock_interpreter_class):
         # Should work with Ollama (no API key needed)
         mock_interpreter = Mock()
         mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
         mock_interpreter_class.return_value = mock_interpreter

         result = self.cli.install("docker")
         # Should succeed with Ollama as fallback provider
         self.assertEqual(result, 0)
->>>>>>> a2e81e1 (Removed the hardcoded Ollama Models and now works with any Ollama Model)
♻️ Duplicate comments (1)
cortex/first_run_wizard.py (1)

389-394: Critical: KeyError on invalid model choice.

Line 394 will raise KeyError if the user enters an invalid choice (e.g., "6", "abc", or any value not in "1"-"5"). The code only checks for "5" at line 391, but doesn't validate "1"-"4" before the dictionary lookup.

🔎 Proposed fix
         choice = self._prompt("\nEnter choice [1]: ", default="1")

         if choice == "5":
             model_name = self._prompt("Enter model name: ", default="llama3.2")
         else:
-            model_name = model_choices[choice]
+            model_name = model_choices.get(choice, "llama3.2")

Alternatively, add input validation:

         choice = self._prompt("\nEnter choice [1]: ", default="1")
+        
+        # Validate choice
+        while choice not in model_choices and choice != "5":
+            print("Invalid choice. Please enter a number between 1 and 5.")
+            choice = self._prompt("\nEnter choice [1]: ", default="1")

         if choice == "5":
             model_name = self._prompt("Enter model name: ", default="llama3.2")
         else:
             model_name = model_choices[choice]
🧹 Nitpick comments (1)
tests/test_ask.py (1)

245-257: Consider using @patch.dict for cleaner test isolation.

The test manually saves and restores the OLLAMA_MODEL environment variable. While this works, using @patch.dict(os.environ, ...) provides automatic cleanup even if the test fails and is more consistent with other tests in the file (e.g., test_install_no_api_key in test_cli.py).

🔎 Proposed refactor
-    def test_default_model_ollama(self):
+    @patch.dict(os.environ, {"OLLAMA_MODEL": "test-model"})
+    def test_default_model_ollama(self):
         """Test default model for Ollama."""
-        # Test with environment variable
-
-        # Save and clear any existing OLLAMA_MODEL
-        original_model = os.environ.get("OLLAMA_MODEL")
-
         # Test with custom env variable
-        os.environ["OLLAMA_MODEL"] = "test-model"
         handler = AskHandler(api_key="test", provider="ollama")
         self.assertEqual(handler.model, "test-model")
-
-        # Clean up
-        if original_model is not None:
-            os.environ["OLLAMA_MODEL"] = original_model
-        else:
-            os.environ.pop("OLLAMA_MODEL", None)

Note: You'll need to test the default behavior separately, potentially in a new test method without the env var set.
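
For example, a follow-up test along these lines could pin down the fallback. This assumes os, tempfile, Path, and patch are already imported in tests/test_ask.py, and it patches Path.home as recommended later in this review; it is a sketch, not the project's test code.

@patch.dict(os.environ, {}, clear=True)
def test_default_model_ollama_fallback(self):
    """Without OLLAMA_MODEL or a config file, the built-in default should apply."""
    with (
        tempfile.TemporaryDirectory() as tmpdir,
        # Point the home directory at an empty temp dir so no real config is read.
        patch("pathlib.Path.home", return_value=Path(tmpdir)),
    ):
        handler = AskHandler(api_key="test", provider="ollama")
        self.assertEqual(handler.model, "llama3.2")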

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d536865 and 751edb8.

📒 Files selected for processing (5)
  • cortex/ask.py
  • cortex/first_run_wizard.py
  • tests/test_ask.py
  • tests/test_cli.py
  • tests/test_ollama_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • cortex/ask.py
  • tests/test_ollama_integration.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • tests/test_ask.py
  • cortex/first_run_wizard.py
  • tests/test_cli.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_ask.py
  • tests/test_cli.py
🧬 Code graph analysis (1)
tests/test_cli.py (1)
cortex/cli.py (1)
  • install (320-566)
🪛 GitHub Actions: Cortex Automation
tests/test_cli.py

[error] 62-62: IndentationError: unexpected unindent. Possible merge conflict markers (e.g., '<<<<<<< HEAD') in the file.

🪛 GitHub Check: lint
tests/test_cli.py

[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:7: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:5: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:3: invalid-syntax: Expected a statement


[failure] 62-62: Ruff (invalid-syntax)
tests/test_cli.py:62:1: invalid-syntax: Expected class, function definition or async function definition after decorator


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:7: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:5: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:3: invalid-syntax: Expected a statement


[failure] 71-71: Ruff (invalid-syntax)
tests/test_cli.py:71:1: invalid-syntax: Expected a statement

🪛 GitHub Check: Lint

Same invalid-syntax findings as the lint check above: tests/test_cli.py:62 and 71.

Comment on lines +261 to +264
with (
tempfile.TemporaryDirectory() as tmpdir,
patch("os.path.expanduser", return_value=tmpdir),
):

⚠️ Potential issue | 🔴 Critical

Critical: Wrong patch target breaks test isolation.

Line 263 patches os.path.expanduser, but the implementation in cortex/ask.py uses Path.home() to locate the config file:

config_file = Path.home() / ".cortex" / "config.json"

Patching os.path.expanduser won't affect Path.home(), so the test will still read from the real ~/.cortex/config.json instead of the temporary directory, making the test non-deterministic and environment-dependent.

🔎 Proposed fix
+        from pathlib import Path
+        
         # Test deterministic default behavior when no env var or config file exists.
         # Point the home directory to a temporary location without ~/.cortex/config.json
         with (
             tempfile.TemporaryDirectory() as tmpdir,
-            patch("os.path.expanduser", return_value=tmpdir),
+            patch("pathlib.Path.home", return_value=Path(tmpdir)),
         ):
             handler2 = AskHandler(api_key="test", provider="ollama")
             # When no env var and no config file exist, AskHandler should use its built-in default.
             self.assertEqual(handler2.model, "llama3.2")
🤖 Prompt for AI Agents
In tests/test_ask.py around lines 261 to 264, the test currently patches
os.path.expanduser but cortex/ask.py uses Path.home(), so the test still reads
the real ~/.cortex/config.json; change the patch to target pathlib.Path.home
(e.g. patch("pathlib.Path.home", return_value=Path(tmpdir)) or
patch.object(Path, "home", return_value=Path(tmpdir))) and ensure tmpdir is
converted to a Path before returning so the code under test resolves the config
file inside the temporary directory.

…pdate dependency check formatting; enhance CLI tests for Ollama fallback behavior
@coderabbitai (Contributor) left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
.github/scripts/cla_check.py (1)

1-389: Inconsistency: File changes unrelated to PR objectives.

This PR is titled "Removed the hardcoded Ollama Models" and the objectives describe changes to Ollama model configuration, environment variable resolution, and test updates. However, the file under review (.github/scripts/cla_check.py) is a CLA enforcement script with only formatting and type hint improvements—completely unrelated to Ollama.

Best practice: separate unrelated changes into distinct PRs to maintain clear scope, simplify reviews, and keep git history clean.

🧹 Nitpick comments (1)
tests/test_cli.py (1)

62-71: Test correctly validates Ollama fallback behavior.

The test properly mocks CommandInterpreter and verifies that installation succeeds with Ollama when no API key is available. The mocking pattern is consistent with other similar tests in this file.

💡 Optional: Consider adding assertions to verify provider selection

To make the test more explicit about Ollama fallback behavior, you could add assertions to verify the provider and API key:

@patch.dict(os.environ, {}, clear=True)
@patch("cortex.cli.CommandInterpreter")
def test_install_no_api_key(self, mock_interpreter_class):
    # Should work with Ollama (no API key needed)
    mock_interpreter = Mock()
    mock_interpreter.parse.return_value = ["apt update", "apt install docker"]
    mock_interpreter_class.return_value = mock_interpreter
    
    result = self.cli.install("docker")
    # Should succeed with Ollama as fallback provider
    self.assertEqual(result, 0)
    
    # Verify Ollama was used as the provider
    mock_interpreter_class.assert_called_once()
    call_kwargs = mock_interpreter_class.call_args[1]
    self.assertEqual(call_kwargs.get("provider"), "ollama")
    self.assertEqual(call_kwargs.get("api_key"), "ollama-local")

This would make the Ollama fallback behavior more explicit in the test verification.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 751edb8 and 0d058db.

📒 Files selected for processing (4)
  • .github/scripts/cla_check.py
  • cortex/dependency_check.py
  • cortex/llm/interpreter.py
  • tests/test_cli.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Follow PEP 8 style guide
Type hints required in Python code
Docstrings required for all public APIs

Files:

  • cortex/llm/interpreter.py
  • tests/test_cli.py
  • cortex/dependency_check.py
tests/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Maintain >80% test coverage for pull requests

Files:

  • tests/test_cli.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Build Package
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.10)
  • GitHub Check: test (3.12)
🔇 Additional comments (6)
cortex/llm/interpreter.py (1)

114-117: LGTM! Clean formatting improvement.

The multi-line argument formatting with trailing comma follows PEP 8 style guidelines and improves readability without any semantic changes.

cortex/dependency_check.py (1)

46-46: LGTM! Consistent formatting.

The comma-space separator for the package list improves readability in the error message.

.github/scripts/cla_check.py (4)

137-137: LGTM: Formatting improvements enhance readability.

The reformatting of API calls and JSON parameters to single lines improves consistency and readability without changing behavior.

Also applies to: 242-242, 260-260, 265-265, 291-291, 304-304, 312-312


339-348: LGTM: Multi-line bot list improves maintainability.

Reorganizing the bot patterns list to multi-line format enhances readability and makes future diffs cleaner when adding or removing bot patterns.


11-11: LGTM: Import grouping follows PEP 8.

The blank line correctly separates standard library imports from third-party imports, adhering to PEP 8 style guidelines.


89-89: No action needed. The project requires Python 3.10+ as the minimum supported version, which fully supports the union type syntax (str | None) used in the type hints. The from __future__ import annotations workaround is unnecessary.

@sujay-d07 (Collaborator, Author) commented

recheck

Development

Successfully merging this pull request may close this issue: [BUG] Remove hardcoded llama3.2 model, allow configurable Ollama model selection

3 participants