- Introduced Xiaomi MiMo as a new provider in providerModels.js and providers.js.
- Updated model alias mapping in model.js to include Xiaomi MiMo.
- Enhanced validation route to support Xiaomi MiMo API endpoints.
- Added Xiaomi MiMo to APIKEY_PROVIDERS with relevant details.
This update expands the set of supported API key providers.
- Introduced Cloudflare AI as a new provider with specific configurations in providerModels.js and providers.js.
- Updated DefaultExecutor to handle account ID resolution for Cloudflare AI connections.
- Enhanced AddApiKeyModal and EditConnectionModal to include account ID input for Cloudflare AI.
- Implemented validation for Cloudflare AI API key connections in testUtils.js and route.js.
- Updated UI components to reflect changes in provider management and connection handling.
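The account ID resolved above feeds into the Cloudflare AI endpoint path. A minimal sketch of the URL construction (helper name is hypothetical; the URL shape follows Cloudflare's public Workers AI REST API):

```javascript
// Build a Cloudflare Workers AI run endpoint from a resolved account ID.
// Path shape per Cloudflare's API: /client/v4/accounts/{account_id}/ai/run/{model}
function buildCloudflareAiUrl(accountId, model) {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/${model}`;
}
```

Requests to this URL carry the API key as a `Authorization: Bearer` header, which is what the key validation in testUtils.js and route.js exercises.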
- Added new image models for GPT 5.2, 5.3, and 5.4, including capabilities for text-to-image and editing.
- Updated embedding handling to include optional dimensions in requests.
- Introduced support for custom embedding providers, allowing dynamic fetching and validation of custom nodes.
- Improved image generation handling with Codex integration, including progress tracking and error handling.
- Enhanced UI components to support adding custom embeddings and displaying their status.
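The optional-dimensions change can be sketched as a small request builder (helper name is hypothetical; `dimensions` is the field name used by OpenAI-compatible embeddings APIs):

```javascript
// Thread an optional `dimensions` value into an embeddings request body.
function buildEmbeddingRequest(model, input, dimensions) {
  const body = { model, input };
  // Only include the field when a valid positive integer is supplied,
  // so providers that reject unknown fields keep working.
  if (Number.isInteger(dimensions) && dimensions > 0) {
    body.dimensions = dimensions;
  }
  return body;
}
```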
Add Volcengine Ark as a first-class API key provider with official model presets, endpoint configuration, API key validation, model discovery, connection testing, provider logo, and runtime alias mapping for `ark/*` model IDs.
Made-with: Cursor
Co-authored-by: kingsy <kingsylin@vip.qq.com>
- Introduced OpenCode Go provider with relevant configurations.
- Enhanced model management by allowing users to add and delete custom models.
- Updated UI components to support model selection for image types.
- Adjusted sidebar visibility to include image media kinds.
* fix: add multi-model support for Factory Droid CLI tool (closes #521)
* Add Claude Opus 4.7 to cc and cl provider lists
Research confirmed the GA release on April 16, 2026. Adding to:
- cc (Claude Code): claude-opus-4-7
- cl (Cline): anthropic/claude-opus-4.7
Refs: TICKET-20260416-ENG-O4.7-001
* fix: add Blackbox AI as a supported provider (closes #599)
- Integrated Google TTS languages from a separate module for better maintainability.
- Updated local device voice fetching to support both macOS and Windows, improving cross-platform compatibility.
- Enhanced dashboard route protection by adding dynamic settings for login requirements and tunnel access.
- Introduced UI elements for managing security settings related to API key requirements and dashboard access via tunnel.
- Added default TTS response example in the media provider page for better user guidance.
- Updated constants to reflect changes in TTS provider configurations.
This commit improves the overall user experience and security of the TTS features.
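The cross-platform voice fetching above typically shells out to `say -v ?` on macOS (and to `System.Speech` via PowerShell on Windows) and parses the output. A sketch of the macOS parser, assuming the standard `say -v ?` line format (`Name  locale  # sample`):

```javascript
// Parse `say -v ?` output lines such as:
//   "Alex                en_US    # Most people recognize me by my voice."
function parseSayVoices(output) {
  return output
    .split("\n")
    .filter(Boolean)
    .map((line) => {
      // Name (lazy), 2+ spaces, locale like en_US or en-scotland, "#", sample text.
      const m = line.match(/^(.+?)\s{2,}([a-z]{2}[_-][A-Za-z]+)\s+#\s*(.*)$/);
      return m ? { name: m[1].trim(), locale: m[2], sample: m[3] } : null;
    })
    .filter(Boolean);
}
```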
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models
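The `_TSIG_` round-trip described above can be sketched as follows (function names are illustrative, not the repo's exact exports):

```javascript
// Encode a Gemini thoughtSignature into an OpenAI-style tool_call.id and
// decode it back on the next request, using the _TSIG_ delimiter + base64url.
const DELIM = "_TSIG_";

function encodeToolCallId(toolCallId, thoughtSignature) {
  if (!thoughtSignature) return toolCallId;
  const b64url = Buffer.from(thoughtSignature, "utf8").toString("base64url");
  return `${toolCallId}${DELIM}${b64url}`;
}

function decodeToolCallId(encodedId) {
  const idx = encodedId.indexOf(DELIM);
  if (idx === -1) return { toolCallId: encodedId, thoughtSignature: null };
  const sig = Buffer.from(encodedId.slice(idx + DELIM.length), "base64url").toString("utf8");
  return { toolCallId: encodedId.slice(0, idx), thoughtSignature: sig };
}
```

Stashing the signature inside `tool_call.id` survives clients that echo tool-call IDs verbatim but drop unknown fields, which is what makes multi-turn thinking work across the translation layer.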
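The in-process layer of the two-layer lock can be sketched as a promise-chain mutex; the second layer is the file lock (e.g. proper-lockfile with retries raised to 15) guarding other processes. The class shape below is illustrative, not the repo's exact implementation:

```javascript
// Serialize async critical sections within one process by chaining them
// onto a single promise tail; concurrent callers queue in arrival order.
class LocalMutex {
  constructor() {
    this.tail = Promise.resolve();
  }
  // Run fn after all previously queued work; return fn's result.
  runExclusive(fn) {
    const result = this.tail.then(() => fn());
    // Keep the chain alive even if fn rejects.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

Without this layer, two awaited writes from the same process can both reach the lockfile concurrently and trip ELOCKED; with it, only cross-process contention ever reaches the file lock.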
Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450
Made-with: Cursor
Add MiniMax-M2.7 to provider models and pricing config alongside
existing M2.5. M2.7 is the latest reasoning model with 204K context.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Quan <quanle96@outlook.com>
PR: https://github.com/decolua/9router/pull/298
Thanks to @kwanLeeFrmVi for the original implementation. Here is a summary
of changes made during review integration:
- Replaced google-auth-library with jose (already a project dependency)
for SA JSON -> OAuth2 Bearer token minting (RS256 JWT assertion flow)
- Moved auth logic (parseSaJson, refreshVertexToken, token cache) from
executor into open-sse/services/tokenRefresh.js to match project pattern
- Fixed executor to use proxyAwareFetch instead of raw fetch (proxy support)
- Simplified buildUrl: use global aiplatform.googleapis.com endpoint for
both vertex (Gemini) and vertex-partner; removed region/modelFamily fields
- Added auto-detection of GCP project_id from raw API key via probe request
(vertex-partner only, cached per key)
- Added vertex/vertex-partner cases to /api/providers/validate/route.js
- Updated model lists based on live testing:
- vertex: gemini-3.1-pro-preview, gemini-3.1-flash-lite-preview,
gemini-3-flash-preview, gemini-2.5-flash (removed gemini-2.5-pro: 404)
- vertex-partner: deepseek-v3.2, qwen3-next-80b (instruct+thinking),
glm-5 (removed Mistral/Llama: not enabled in test project)
- gemini provider: added gemini-3.1-pro-preview, gemini-3.1-flash-lite-preview
- Removed bun.lock (project uses npm/package-lock.json)
- Removed region and modelFamily UI fields (global endpoint, auto-detect)
- Kiro token auto-refresh on AccessDeniedException (from commit 2)
Made-with: Cursor
- Added new provider models: DeepSeek 3.1, DeepSeek 3.2, and Qwen3 Coder Next.
- Implemented UI changes to support round-robin strategy with sticky limits in the provider detail page.
- Improved logging to display connection names instead of IDs for better clarity.
- Add buildQwenBaseUrl function to construct URLs for Qwen resources.
- Update buildProviderUrl to support Qwen model requests.
- Enhance token refresh logic to include provider-specific data for Qwen.
- Refactor CLI Tools page to exclude MITM tools and streamline model retrieval.
- Introduce new components for MITM server management.
- Update API routes to handle Qwen-specific resource URLs and improve error handling.
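A possible shape for `buildQwenBaseUrl`, assuming (as the token-refresh change suggests) that the Qwen OAuth response carries a tenant-specific `resource_url` in the connection's provider data; the field name, fallback endpoint, and normalization are all assumptions for illustration:

```javascript
// Hypothetical default; the real fallback lives in provider configuration.
const DEFAULT_QWEN_BASE = "https://portal.qwen.ai/v1";

// Resolve the base URL for a Qwen connection: prefer the resource_url
// returned by token refresh, normalizing scheme and /v1 suffix.
function buildQwenBaseUrl(connection) {
  const resource = connection?.providerData?.resource_url;
  if (!resource) return DEFAULT_QWEN_BASE;
  const withScheme = resource.startsWith("http") ? resource : `https://${resource}`;
  return withScheme.endsWith("/v1") ? withScheme : `${withScheme}/v1`;
}
```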
- Added new provider models: Claude 4.6 Opus Max, Claude 4.6 Sonnet Medium Thinking, Kimi K2.5, Gemini 3.1 Pro Preview, Gemini 3 Flash Preview, GPT 5.2, and GPT 5.3 Codex.
Rename alicloud to alicode to clearly indicate Aliyun's Coding Plan service.
- Provider ID: alicode (short for Aliyun Coding)
- Model format: alicode/qwen3.5-plus
- Simplified mapping: no more bidirectional aliases
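The simplified one-way mapping amounts to stripping the provider prefix, with no reverse alias table to maintain (helper name is hypothetical):

```javascript
// Resolve an alicode/* model ID to the upstream model name; non-alicode
// IDs return null so other providers' resolvers can handle them.
function resolveAlicodeModel(modelId) {
  const prefix = "alicode/";
  return modelId.startsWith(prefix) ? modelId.slice(prefix.length) : null;
}
```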
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add support for Alibaba Cloud Bailian Coding Plan, a coding-focused AI service
that provides fixed monthly pricing for multiple models.
Changes:
- Add alicloud provider with OpenAI-compatible API endpoint
- Support 8 models: qwen3.5-plus, kimi-k2.5, glm-5, MiniMax-M2.5,
qwen3-max, qwen3-coder-next, qwen3-coder-plus, glm-4.7
- Use "ali" as provider alias (ali/model format)
- Add API key validation and connection testing
- Add frontend provider definition with "ALi" text icon
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Cherry-picked from decolua/9router PR #183.
Note: open-sse changes are included but need further review due to extensive modifications.
Co-authored-by: Cursor <cursoragent@cursor.com>