
feat: add MiniMax as first-class LLM provider (chat + embedding)#356

Open
octo-patch wants to merge 1 commit into volcengine:main from octo-patch:feature/add-minimax-provider

@octo-patch

Summary

Add MiniMax (https://www.minimaxi.com/) as a fully supported LLM provider alongside OpenAI and Doubao, covering both chat completion (via the OpenAI-compatible API) and embedding generation (via the MiniMax native API).

Changes

Backend (opencontext/llm/llm_client.py):

  • Add MINIMAX enum to LLMProvider
  • Implement _minimax_embedding() / _minimax_embedding_async() for MiniMax native embedding API (embo-01 model, non-OpenAI-compatible format using texts/type/vectors)
  • Fix _request_embedding() to avoid NameError on undefined response variable in the MiniMax code path
  • Fix _request_embedding_async() to route MiniMax to native API instead of falling through to incompatible OpenAI SDK
  • Add MiniMax embedding validation in validate()
  • Add MiniMax error codes (invalid_api_key, insufficient_balance) to _extract_error_summary()
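The native request/response handling described above can be sketched as follows. The payload keys (texts/type) and response key (vectors) follow the format named in this PR, but the helper names are illustrative stand-ins, not the exact code in llm_client.py:

```python
# Illustrative sketch of the non-OpenAI-compatible MiniMax embedding format.
# Helper names are hypothetical; only the texts/type/vectors keys come from
# the PR description.

def build_minimax_embedding_payload(texts: list[str], purpose: str = "db") -> dict:
    """Build the native request body for the embo-01 model."""
    return {"model": "embo-01", "texts": texts, "type": purpose}

def parse_minimax_embedding_response(body: dict) -> list[list[float]]:
    """Extract vectors, failing loudly on error responses.

    Mirrors the NameError fix: raise instead of falling through with an
    unbound response variable when the API returns an error body.
    """
    if "vectors" not in body:
        raise RuntimeError(f"MiniMax embedding error: {body}")
    return body["vectors"]
```

Keeping payload construction and response parsing as pure functions also makes the sync and async paths share the same format logic.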

Frontend (settings/constants.tsx, settings.tsx):

  • Add MiniMax to ModelTypeList with M2.7 / M2.7-highspeed models
  • Add embo-01 embedding model and api.minimax.io base URL presets
  • Add MiniMax provider icon (minimax.svg)
  • Wire up API key link, base URL routing, and form rendering
  • Set the MiniMax default model in form initialValues

Documentation (README.md, README_zh.md):

  • List MiniMax as supported provider in Quick Start and Backend Architecture sections
  • Add minimax option in config.yaml examples
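The config.yaml addition might look like the sketch below; the key names are assumed from the provider pattern described here and may differ from the repository's actual schema:

```yaml
# Hypothetical config.yaml fragment -- key names are illustrative
llm:
  provider: minimax              # alongside openai / doubao
  api_key: ${MINIMAX_API_KEY}
  base_url: https://api.minimax.io
  chat_model: MiniMax-M2.7
  embedding_model: embo-01
```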

Tests (22 unit + 3 integration):

  • Provider enum values (TestLLMProviderEnum); client init, chat completion, streaming, thinking mode (TestMiniMaxChatClient)
  • Embedding: native API format, auth headers, error handling, dimension truncation (TestMiniMaxEmbeddingClient)
  • Validation: chat and embedding success/failure paths (TestMiniMaxValidation)
  • Error handling: MiniMax-specific error code extraction (TestMiniMaxErrorHandling)
  • Integration tests with the real MiniMax API, requiring MINIMAX_API_KEY (TestMiniMaxIntegration)
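The error-handling tests above exercise MiniMax-specific error extraction; a minimal sketch of that mapping is shown below. The code-to-hint wording is illustrative, and the real logic lives in _extract_error_summary() in llm_client.py:

```python
# Illustrative mapping for the two MiniMax error codes named in this PR.
# The function name and hint text are assumptions, not the PR's exact code.
MINIMAX_ERROR_HINTS = {
    "invalid_api_key": "the MiniMax API key is invalid or expired",
    "insufficient_balance": "the MiniMax account balance is exhausted",
}

def summarize_minimax_error(code: str, message: str = "") -> str:
    """Return a human-readable summary for a MiniMax error code."""
    hint = MINIMAX_ERROR_HINTS.get(code)
    if hint is not None:
        return f"MiniMax error ({code}): {hint}"
    # Unknown codes fall back to the raw API message.
    return f"MiniMax error ({code}): {message or 'unknown error'}"
```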

Test Plan

  • All 22 unit tests pass
  • 2/3 integration tests pass (embedding test may hit MiniMax rate limits)
  • Frontend builds successfully with new provider
  • Manual verification with MiniMax API key in the Settings UI

MiniMax Models

| Model | Context | Use Case |
| --- | --- | --- |
| MiniMax-M2.7 | 1M tokens | Chat / VLM |
| MiniMax-M2.7-highspeed | 204K tokens | Fast chat |
| embo-01 | 1536 dims | Embedding |
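embo-01 returns 1536-dimensional vectors; the dimension-truncation behavior covered by the unit tests could be sketched as below. The helper name is hypothetical:

```python
def truncate_embedding(vector: list[float], target_dim: int) -> list[float]:
    """Truncate a full embo-01 vector (1536 dims) to a smaller store dimension.

    Hypothetical helper name; vectors already at or below target_dim pass
    through unchanged rather than being padded.
    """
    if target_dim >= len(vector):
        return vector
    return vector[:target_dim]
```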

